LookupNS(elasticsearch-logging):
-Result: ([]*net.NS)
-Error: <*>lookup elasticsearch-logging: no such host
-
-LookupTXT(elasticsearch-logging):
-Result: ([]string)
-Error: <*>lookup elasticsearch-logging: no such host
-
-LookupSRV("", "", elasticsearch-logging):
-cname: elasticsearch-logging.default.cluster.local.
-Result: ([]*net.SRV)[<*>{Target:(string)elasticsearch-logging.default.cluster.local. Port:(uint16)9200 Priority:(uint16)10 Weight:(uint16)100}]
-Error:
-
-LookupHost(elasticsearch-logging):
-Result: ([]string)[10.0.60.245]
-Error:
-
-LookupIP(elasticsearch-logging):
-Result: ([]net.IP)[10.0.60.245]
-Error:
-
-LookupMX(elasticsearch-logging):
-Result: ([]*net.MX)
-Error: <*>lookup elasticsearch-logging: no such host
-
-
-
-
-```
-
diff --git a/release-0.19.0/examples/explorer/explorer.go b/release-0.19.0/examples/explorer/explorer.go
deleted file mode 100644
index e10dfc925c9..00000000000
--- a/release-0.19.0/examples/explorer/explorer.go
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
-Copyright 2015 The Kubernetes Authors All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// A tiny web server for viewing the environment Kubernetes creates for your
-// containers. It exposes the filesystem and environment variables via an
-// HTTP server.
-package main
-
-import (
- "flag"
- "fmt"
- "log"
- "net"
- "net/http"
- "os"
-
- "github.com/davecgh/go-spew/spew"
-)
-
-var (
- port = flag.Int("port", 8080, "Port number to serve at.")
-)
-
-func main() {
- flag.Parse()
- hostname, err := os.Hostname()
- if err != nil {
- log.Fatalf("Error getting hostname: %v", err)
- }
-
- links := []struct {
- link, desc string
- }{
- {"/fs/", "Complete file system as seen by this container."},
- {"/vars/", "Environment variables as seen by this container."},
- {"/hostname/", "Hostname as seen by this container."},
- {"/dns?q=google.com", "Explore DNS records seen by this container."},
- {"/quit", "Cause this container to exit."},
- }
-
- http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
-		fmt.Fprintf(w, "<b> Kubernetes environment explorer </b><br/><br/>")
-		for _, v := range links {
-			fmt.Fprintf(w, `<a href="%v">%v: %v</a><br/>`, v.link, v.link, v.desc)
- }
- })
-
- http.Handle("/fs/", http.StripPrefix("/fs/", http.FileServer(http.Dir("/"))))
- http.HandleFunc("/vars/", func(w http.ResponseWriter, r *http.Request) {
- for _, v := range os.Environ() {
- fmt.Fprintf(w, "%v\n", v)
- }
- })
- http.HandleFunc("/hostname/", func(w http.ResponseWriter, r *http.Request) {
- fmt.Fprintf(w, hostname)
- })
- http.HandleFunc("/quit", func(w http.ResponseWriter, r *http.Request) {
- os.Exit(0)
- })
- http.HandleFunc("/dns", dns)
-
- go log.Fatal(http.ListenAndServe(fmt.Sprintf("0.0.0.0:%d", *port), nil))
-
- select {}
-}
-
-func dns(w http.ResponseWriter, r *http.Request) {
- q := r.URL.Query().Get("q")
- // Note that the below is NOT safe from input attacks, but that's OK
- // because this is just for debugging.
-	fmt.Fprintf(w, `<form action="/dns"><input name="q" type="text" value="%v"></input><button type="submit">Lookup</button></form><br/><br/><pre>`, q)
-	ns, nsErr := net.LookupNS(q)
-	spew.Fprintf(w, "LookupNS(%v):\nResult: %#v\nError: %v\n\n", q, ns, nsErr)
-	txt, txtErr := net.LookupTXT(q)
-	spew.Fprintf(w, "LookupTXT(%v):\nResult: %#v\nError: %v\n\n", q, txt, txtErr)
-	cname, srv, srvErr := net.LookupSRV("", "", q)
-	spew.Fprintf(w, "LookupSRV(%q, %q, %v):\ncname: %v\nResult: %#v\nError: %v\n\n", "", "", q, cname, srv, srvErr)
-	hosts, hostErr := net.LookupHost(q)
-	spew.Fprintf(w, "LookupHost(%v):\nResult: %#v\nError: %v\n\n", q, hosts, hostErr)
-	ips, ipErr := net.LookupIP(q)
-	spew.Fprintf(w, "LookupIP(%v):\nResult: %#v\nError: %v\n\n", q, ips, ipErr)
-	mx, mxErr := net.LookupMX(q)
-	spew.Fprintf(w, "LookupMX(%v):\nResult: %#v\nError: %v\n\n", q, mx, mxErr)
-	fmt.Fprintf(w, `</pre>`)
-}
diff --git a/release-0.19.0/examples/explorer/pod.json b/release-0.19.0/examples/explorer/pod.json
deleted file mode 100644
index 99e68332255..00000000000
--- a/release-0.19.0/examples/explorer/pod.json
+++ /dev/null
@@ -1,36 +0,0 @@
-{
- "kind": "Pod",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "explorer"
- },
- "spec": {
- "containers": [
- {
- "name": "explorer",
- "image": "gcr.io/google_containers/explorer:1.0",
- "args": [
- "-port=8080"
- ],
- "ports": [
- {
- "containerPort": 8080,
- "protocol": "TCP"
- }
- ],
- "volumeMounts": [
- {
- "name": "test-volume",
- "mountPath": "/mount/test-volume"
- }
- ]
- }
- ],
- "volumes": [
- {
- "name": "test-volume",
- "emptyDir": {}
- }
- ]
- }
-}
diff --git a/release-0.19.0/examples/glusterfs/README.md b/release-0.19.0/examples/glusterfs/README.md
deleted file mode 100644
index 47d758f46c1..00000000000
--- a/release-0.19.0/examples/glusterfs/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-## Glusterfs
-
-[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples show how to allow containers to use Glusterfs volumes.
-
-The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes.
-
-### Prerequisites
-
-Set up Glusterfs server cluster; install Glusterfs client package on the Kubernetes nodes. ([Guide](https://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers))
-
-### Create endpoints
-Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
-
-```
- "addresses": [
- {
- "IP": "10.240.106.152"
- }
- ],
- "ports": [
- {
- "port": 1,
- "protocol": "TCP"
- }
- ]
-
-```
-The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
-
-Create the endpoints,
-```shell
-$ kubectl create -f examples/glusterfs/glusterfs-endpoints.json
-```
-
-You can verify that the endpoints are successfully created by running
-```shell
-$ kubectl get endpoints
-NAME ENDPOINTS
-glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
-```
-
-### Create a pod
-
-The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.
-
-```js
-{
- "name": "glusterfsvol",
- "glusterfs": {
- "endpoints": "glusterfs-cluster",
- "path": "kube_vol",
- "readOnly": true
- }
-}
-```
-
-The parameters are explained as follows.
-
-- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. The *kubelet* is optimized to avoid mount storms; it will randomly pick one host from the endpoints to mount. If that host is unresponsive, the next Gluster host in the endpoints is automatically selected.
-- **path** is the Glusterfs volume name.
-- **readOnly** is a boolean that sets the mountpoint to read-only or read-write.
-
-Create a pod that has a container using Glusterfs volume,
-```shell
-$ kubectl create -f examples/glusterfs/glusterfs-pod.json
-```
-You can verify that the pod is running:
-
-```shell
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-glusterfs 10.244.2.13 kubernetes-minion-151f/23.236.54.97 Running About a minute
- glusterfs kubernetes/pause Running About a minute
-
-```
-
-You may ssh to the host and run 'mount' to see if the Glusterfs volume is mounted,
-```shell
-$ mount | grep kube_vol
-10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
-```
-
-You may also run `docker ps` on the host to see the actual container.
-
diff --git a/release-0.19.0/examples/glusterfs/glusterfs-endpoints.json b/release-0.19.0/examples/glusterfs/glusterfs-endpoints.json
deleted file mode 100644
index 4c5d649e14a..00000000000
--- a/release-0.19.0/examples/glusterfs/glusterfs-endpoints.json
+++ /dev/null
@@ -1,35 +0,0 @@
-{
- "kind": "Endpoints",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "glusterfs-cluster"
- },
- "subsets": [
- {
- "addresses": [
- {
- "IP": "10.240.106.152"
- }
- ],
- "ports": [
- {
- "port": 1,
- "protocol": "TCP"
- }
- ]
- },
- {
- "addresses": [
- {
- "IP": "10.240.79.157"
- }
- ],
- "ports": [
- {
- "port": 1,
- "protocol": "TCP"
- }
- ]
- }
- ]
-}
diff --git a/release-0.19.0/examples/glusterfs/glusterfs-pod.json b/release-0.19.0/examples/glusterfs/glusterfs-pod.json
deleted file mode 100644
index 664a35dc0fa..00000000000
--- a/release-0.19.0/examples/glusterfs/glusterfs-pod.json
+++ /dev/null
@@ -1,32 +0,0 @@
-{
- "apiVersion": "v1beta3",
- "id": "glusterfs",
- "kind": "Pod",
- "metadata": {
- "name": "glusterfs"
- },
- "spec": {
- "containers": [
- {
- "name": "glusterfs",
- "image": "kubernetes/pause",
- "volumeMounts": [
- {
- "mountPath": "/mnt/glusterfs",
- "name": "glusterfsvol"
- }
- ]
- }
- ],
- "volumes": [
- {
- "name": "glusterfsvol",
- "glusterfs": {
- "endpoints": "glusterfs-cluster",
- "path": "kube_vol",
- "readOnly": true
- }
- }
- ]
- }
-}
\ No newline at end of file
diff --git a/release-0.19.0/examples/guestbook-go/README.md b/release-0.19.0/examples/guestbook-go/README.md
deleted file mode 100644
index 1c1b5e1af9e..00000000000
--- a/release-0.19.0/examples/guestbook-go/README.md
+++ /dev/null
@@ -1,212 +0,0 @@
-## GuestBook example
-
-This example shows how to build a simple multi-tier web application using Kubernetes and Docker. It consists of a web frontend, a redis master for storage and a replicated set of redis slaves.
-
-### Step Zero: Prerequisites
-
-This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides):
-
-```shell
-$ cd kubernetes
-$ hack/dev-build-and-up.sh
-```
-
-### Step One: Turn up the redis master.
-
-Use the file `examples/guestbook-go/redis-master-controller.json` to create a [replication controller](../../docs/replication-controller.md) which manages a single [pod](../../docs/pods.md). The pod runs a redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so that the pod benefits from the self-healing mechanism in Kubernetes.
-
-Create the redis master replication controller in your Kubernetes cluster using the `kubectl` CLI:
-
-```shell
-$ kubectl create -f examples/guestbook-go/redis-master-controller.json
-```
-
-Once that's up you can list the replication controllers in the cluster:
-```shell
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-redis-master-controller redis-master gurpartap/redis name=redis,role=master 1
-```
-
-List the pods in the cluster to verify that the master is running. You'll see a single redis master pod. The output will also display the machine that the pod is running on once it gets placed (this may take up to thirty seconds).
-
-```shell
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-redis-master-y06lj 10.244.3.4 kubernetes-minion-bz1p/104.154.61.231 name=redis,role=master Running 8 seconds
- redis-master gurpartap/redis Running 3 seconds
-```
-
-If you ssh to that machine, you can run `docker ps` to see the actual pod:
-
-```shell
-me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-bz1p
-
-me@kubernetes-minion-3:~$ sudo docker ps
-CONTAINER ID IMAGE COMMAND CREATED STATUS
-d5c458dabe50 gurpartap/redis:latest "/usr/local/bin/redi 5 minutes ago Up 5 minutes
-```
-
-(Note that initial `docker pull` may take a few minutes, depending on network conditions.)
-
-### Step Two: Turn up the master service.
-A Kubernetes '[service](../../docs/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS. Services find the containers to load balance based on pod labels.
-
-The pod that you created in Step One has the labels `name=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service. Use the file `examples/guestbook-go/redis-master-service.json` to create the service with the `kubectl` CLI:
-
-```shell
-$ kubectl create -f examples/guestbook-go/redis-master-service.json
-
-$ kubectl get services
-NAME LABELS SELECTOR IP(S) PORT(S)
-redis-master name=redis,role=master name=redis,role=master 10.0.11.173 6379/TCP
-```
-
-This will cause all new pods to see the redis master apparently running on $REDIS_MASTER_SERVICE_HOST at port 6379, or running on 'redis-master:6379'. Once created, the service proxy on each node is configured to set up a proxy on the specified port (in this case port 6379).
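-
-For illustration only, here is a minimal Go sketch of how a client in another pod might locate the master through these environment variables, falling back to the DNS name. The variable names follow the standard Kubernetes service convention; this snippet is not part of the guestbook application:
-
-```go
-package main
-
-import (
-	"fmt"
-	"net"
-	"os"
-)
-
-func main() {
-	// Kubernetes injects REDIS_MASTER_SERVICE_HOST/PORT into pods created after the service.
-	addr := net.JoinHostPort(os.Getenv("REDIS_MASTER_SERVICE_HOST"), os.Getenv("REDIS_MASTER_SERVICE_PORT"))
-	if addr == ":" {
-		// Fall back to the cluster DNS name when the variables are absent.
-		addr = "redis-master:6379"
-	}
-	conn, err := net.Dial("tcp", addr)
-	if err != nil {
-		fmt.Println("redis master not reachable:", err)
-		return
-	}
-	defer conn.Close()
-	fmt.Println("connected to redis master at", addr)
-}
-```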
-
-### Step Three: Turn up the replicated slave pods.
-Although the redis master is a single pod, the redis read slaves are a 'replicated' pod. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod.
-
-Use the file `examples/guestbook-go/redis-slave-controller.json` to create the replication controller:
-
-```shell
-$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
-
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-redis-master redis-master gurpartap/redis name=redis,role=master 1
-redis-slave redis-slave gurpartap/redis name=redis,role=slave 2
-```
-
-The redis slave configures itself by looking for the redis-master service name:port pair. In particular, the redis slave is started with the following command:
-
-```shell
-redis-server --slaveof redis-master 6379
-```
-
-Once that's up you can list the pods in the cluster, to verify that the master and slaves are running:
-
-```shell
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-redis-master-y06lj 10.244.3.4 kubernetes-minion-bz1p/104.154.61.231 name=redis,role=master Running 5 minutes
- redis-master gurpartap/redis Running 5 minutes
-redis-slave-3psic 10.244.0.4 kubernetes-minion-mluf/104.197.10.10 name=redis,role=slave Running 38 seconds
- redis-slave gurpartap/redis Running 33 seconds
-redis-slave-qtigf 10.244.2.4 kubernetes-minion-rcgd/130.211.122.180 name=redis,role=slave Running 38 seconds
- redis-slave gurpartap/redis Running 36 seconds
-```
-
-You will see a single redis master pod and two redis slave pods.
-
-### Step Four: Create the redis slave service.
-
-Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the slave service provides transparent load balancing to clients. The service specification for the slaves is in `examples/guestbook-go/redis-slave-service.json`
-
-This time the selector for the service is `name=redis,role=slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself--as we've done here--to make it easy to locate them later.
-
-Now that you have created the service specification, create it in your cluster with the `kubectl` CLI:
-
-```shell
-$ kubectl create -f examples/guestbook-go/redis-slave-service.json
-
-$ kubectl get services
-NAME LABELS SELECTOR IP(S) PORT(S)
-redis-master name=redis,role=master name=redis,role=master 10.0.11.173 6379/TCP
-redis-slave name=redis,role=slave name=redis,role=slave 10.0.234.24 6379/TCP
-```
-
-### Step Five: Create the guestbook pod.
-
-This is a simple Go net/http ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. It exposes a simple JSON interface, and serves a jQuery-Ajax based UX. Like the redis read slaves it is a replicated service instantiated by a replication controller.
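-
-The actual server source ships in the `kubernetes/guestbook:v2` image; purely as a sketch of the read/write split described above (the handler path and helper below are made up, and the real server issues redis commands instead of reporting the target), the routing decision might look like:
-
-```go
-package main
-
-import (
-	"fmt"
-	"log"
-	"net/http"
-)
-
-// pickBackend chooses which redis service a request should be sent to:
-// writes go to the single master, reads go to the load-balanced slaves.
-func pickBackend(r *http.Request) string {
-	if r.Method == "POST" {
-		return "redis-master:6379"
-	}
-	return "redis-slave:6379"
-}
-
-func main() {
-	http.HandleFunc("/backend", func(w http.ResponseWriter, r *http.Request) {
-		// Report the chosen backend as JSON; the real guestbook would
-		// run the redis command against it instead.
-		fmt.Fprintf(w, `{"backend": %q}`, pickBackend(r))
-	})
-	log.Fatal(http.ListenAndServe(":3000", nil))
-}
-```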
-
-The pod is described in the file `examples/guestbook-go/guestbook-controller.json`. Using this file, you can turn up your guestbook with:
-
-```shell
-$ kubectl create -f examples/guestbook-go/guestbook-controller.json
-
-$ kubectl get replicationControllers
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-guestbook guestbook kubernetes/guestbook:v2 name=guestbook 3
-redis-master redis-master gurpartap/redis name=redis,role=master 1
-redis-slave redis-slave gurpartap/redis name=redis,role=slave 2
-```
-
-Once that's up (it may take ten to thirty seconds to create the pods) you can list the pods in the cluster, to verify that the master, slaves and guestbook frontends are running:
-
-```shell
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-guestbook-1xzms 10.244.1.6 kubernetes-minion-q6w5/23.236.54.97 name=guestbook Running 40 seconds
- guestbook kubernetes/guestbook:v2 Running 35 seconds
-guestbook-9ksu4 10.244.0.5 kubernetes-minion-mluf/104.197.10.10 name=guestbook Running 40 seconds
- guestbook kubernetes/guestbook:v2 Running 34 seconds
-guestbook-lycwm 10.244.1.7 kubernetes-minion-q6w5/23.236.54.97 name=guestbook Running 40 seconds
- guestbook kubernetes/guestbook:v2 Running 35 seconds
-redis-master-y06lj 10.244.3.4 kubernetes-minion-bz1p/104.154.61.231 name=redis,role=master Running 8 minutes
- redis-master gurpartap/redis Running 8 minutes
-redis-slave-3psic 10.244.0.4 kubernetes-minion-mluf/104.197.10.10 name=redis,role=slave Running 3 minutes
- redis-slave gurpartap/redis Running 3 minutes
-redis-slave-qtigf 10.244.2.4 kubernetes-minion-rcgd/130.211.122.180 name=redis,role=slave Running 3 minutes
- redis-slave gurpartap/redis Running 3 minutes
-```
-
-You will see a single redis master pod, two redis slaves, and three guestbook pods.
-
-### Step Six: Create the guestbook service.
-
-Just like the others, you want a service to group your guestbook pods. The service specification for the guestbook is in `examples/guestbook-go/guestbook-service.json`. There's a twist this time - because we want it to be externally visible, we set the `createExternalLoadBalancer` flag on the service.
-
-```shell
-$ kubectl create -f examples/guestbook-go/guestbook-service.json
-
-$ kubectl get services
-NAME LABELS SELECTOR IP(S) PORT(S)
-guestbook name=guestbook name=guestbook 10.0.114.109 3000/TCP
-redis-master name=redis,role=master name=redis,role=master 10.0.11.173 6379/TCP
-redis-slave name=redis,role=slave name=redis,role=slave 10.0.234.24 6379/TCP
-```
-
-To play with the service itself, find the external IP of the load balancer:
-
-```shell
-$ kubectl get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
-104.154.63.66$
-```
-and then visit port 3000 of that IP address e.g. `http://104.154.63.66:3000`.
-
-**NOTE:** You may need to open the firewall for port 3000 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
-
-```shell
-$ gcloud compute firewall-rules create --allow=tcp:3000 --target-tags=kubernetes-minion kubernetes-minion-3000
-```
-
-If you are running Kubernetes locally, you can just visit http://localhost:3000
-For details about limiting traffic to specific sources, see the [GCE firewall documentation][gce-firewall-docs].
-
-[cloud-console]: https://console.developer.google.com
-[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls
-
-### Step Seven: Cleanup
-
-You should delete the service, which will remove any associated resources that were created, e.g. load balancers, forwarding rules, and target pools. All the resources (replication controllers and services) can be deleted with a single command:
-```shell
-$ kubectl delete -f examples/guestbook-go
-guestbook-controller
-guestbook
-redis-master-controller
-redis-master
-redis-slave-controller
-redis-slave
-```
-
-To turn down a Kubernetes cluster:
-
-```shell
-$ cluster/kube-down.sh
-```
-
diff --git a/release-0.19.0/examples/guestbook-go/guestbook-controller.json b/release-0.19.0/examples/guestbook-go/guestbook-controller.json
deleted file mode 100644
index bcea604bd54..00000000000
--- a/release-0.19.0/examples/guestbook-go/guestbook-controller.json
+++ /dev/null
@@ -1,38 +0,0 @@
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"guestbook",
- "labels":{
- "name":"guestbook"
- }
- },
- "spec":{
- "replicas":3,
- "selector":{
- "name":"guestbook"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"guestbook"
- }
- },
- "spec":{
- "containers":[
- {
- "image":"kubernetes/guestbook:v2",
- "name":"guestbook",
- "ports":[
- {
- "name":"http-server",
- "containerPort":3000,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook-go/guestbook-service.json b/release-0.19.0/examples/guestbook-go/guestbook-service.json
deleted file mode 100644
index 3359efee25a..00000000000
--- a/release-0.19.0/examples/guestbook-go/guestbook-service.json
+++ /dev/null
@@ -1,23 +0,0 @@
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"guestbook",
- "labels":{
- "name":"guestbook"
- }
- },
- "spec":{
- "createExternalLoadBalancer": true,
- "ports": [
- {
- "port":3000,
- "targetPort":"http-server",
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"guestbook"
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook-go/redis-master-controller.json b/release-0.19.0/examples/guestbook-go/redis-master-controller.json
deleted file mode 100644
index 2ca918e7398..00000000000
--- a/release-0.19.0/examples/guestbook-go/redis-master-controller.json
+++ /dev/null
@@ -1,42 +0,0 @@
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "id":"redis-master",
- "metadata":{
- "name":"redis-master",
- "labels":{
- "name":"redis",
- "role":"master"
- }
- },
- "spec":{
- "replicas":1,
- "selector":{
- "name":"redis",
- "role":"master"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"redis",
- "role":"master"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"redis-master",
- "image":"gurpartap/redis",
- "ports":[
- {
- "name":"redis-server",
- "containerPort":6379,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook-go/redis-master-service.json b/release-0.19.0/examples/guestbook-go/redis-master-service.json
deleted file mode 100644
index 5aed7d9ff84..00000000000
--- a/release-0.19.0/examples/guestbook-go/redis-master-service.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-master",
- "labels":{
- "name":"redis",
- "role":"master"
- }
- },
- "spec":{
- "ports": [
- {
- "port":6379,
- "targetPort":"redis-server",
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"redis",
- "role":"master"
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook-go/redis-slave-controller.json b/release-0.19.0/examples/guestbook-go/redis-slave-controller.json
deleted file mode 100644
index 6fabb700889..00000000000
--- a/release-0.19.0/examples/guestbook-go/redis-slave-controller.json
+++ /dev/null
@@ -1,47 +0,0 @@
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "id":"redis-slave",
- "metadata":{
- "name":"redis-slave",
- "labels":{
- "name":"redis",
- "role":"slave"
- }
- },
- "spec":{
- "replicas":2,
- "selector":{
- "name":"redis",
- "role":"slave"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"redis",
- "role":"slave"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"redis-slave",
- "image":"gurpartap/redis",
- "command":[
- "sh",
- "-c",
- "redis-server /etc/redis/redis.conf --slaveof redis-master 6379"
- ],
- "ports":[
- {
- "name":"redis-server",
- "containerPort":6379,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook-go/redis-slave-service.json b/release-0.19.0/examples/guestbook-go/redis-slave-service.json
deleted file mode 100644
index 2eb1fb4ad04..00000000000
--- a/release-0.19.0/examples/guestbook-go/redis-slave-service.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-slave",
- "labels":{
- "name":"redis",
- "role":"slave"
- }
- },
- "spec":{
- "ports": [
- {
- "port":6379,
- "targetPort":"redis-server",
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"redis",
- "role":"slave"
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/README.md b/release-0.19.0/examples/guestbook/README.md
deleted file mode 100644
index 644465add99..00000000000
--- a/release-0.19.0/examples/guestbook/README.md
+++ /dev/null
@@ -1,549 +0,0 @@
-## GuestBook example
-
-This example shows how to build a simple, multi-tier web application using Kubernetes and Docker.
-
-The example consists of:
-- A web frontend
-- A redis master for storage and a replicated set of redis slaves
-
-The web frontend interacts with the redis master via JavaScript redis API calls.
-
-### Step Zero: Prerequisites
-
-This example requires a kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started.
-
-### Step One: Fire up the redis master
-
-Note: This redis-master is *not* highly available. Making it highly available would be a very interesting, but intricate exercise - redis doesn't actually support multi-master deployments at the time of this writing, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on.
-
-Use (or just create) the file `examples/guestbook/redis-master-controller.json` which describes a single [pod](../../docs/pods.md) running a redis key-value server in a container:
-
-Note that, although the redis server runs with just a single replica, we use a [replication controller](../../docs/replication-controller.md) to enforce that exactly one pod keeps running (e.g. in the event of the node going down, the replication controller will ensure that the redis master gets restarted on a healthy node). Such a restart could still result in data loss, since the master's data is not replicated.
-
-
-```js
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-master",
- "labels":{
- "name":"redis-master"
- }
- },
- "spec":{
- "replicas":1,
- "selector":{
- "name":"redis-master"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"redis-master"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"master",
- "image":"redis",
- "ports":[
- {
- "containerPort":6379,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
-```
-
-Now, create the redis pod in your Kubernetes cluster by running:
-
-```shell
-$ kubectl create -f examples/guestbook/redis-master-controller.json
-
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-redis-master master redis name=redis-master 1
-```
-
-Once that's up you can list the pods in the cluster, to verify that the master is running:
-
-```shell
-$ kubectl get pods
-```
-
-You'll see all kubernetes components, most importantly the redis master pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds):
-
-```shell
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
-redis-master-controller-gb50a 10.244.3.7 master redis kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-master Running
-```
-
-If you ssh to that machine, you can run `docker ps` to see the actual pod:
-
-```shell
-me@workstation$ gcloud compute ssh kubernetes-minion-7agi
-
-me@kubernetes-minion-7agi:~$ sudo docker ps
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-0ffef9649265 redis:latest "redis-server /etc/r About a minute ago Up About a minute k8s_redis-master.767aef46_redis-master-controller-gb50a.default.api_4530d7b3-ae5d-11e4-bf77-42010af0d719_579ee964
-```
-
-(Note that initial `docker pull` may take a few minutes, depending on network conditions. The pods will be reported as pending while the image is being downloaded.)
-
-### Step Two: Fire up the master service
-A Kubernetes '[service](../../docs/services.md)' is a named load balancer that proxies traffic to *one or more* containers. This is done using the *labels* metadata which we defined in the redis-master pod above. As mentioned, in redis there is only one master, but we nevertheless still want to create a service for it. Why? Because it gives us a deterministic way to route to the single master using an elastic IP.
-
-The services in a Kubernetes cluster are discoverable inside other containers via environment variables.
-
-Services find the containers to load balance based on pod labels.
-
-The pod that you created in Step One has the label `name=redis-master`. The selector field of the service determines *which pods will receive the traffic* sent to the service, and the port and targetPort information defines what port the service proxy will run at.
-
-Use the file `examples/guestbook/redis-master-service.json`:
-
-```js
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-master",
- "labels":{
- "name":"redis-master"
- }
- },
- "spec":{
- "ports": [
- {
- "port":6379,
- "targetPort":6379,
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"redis-master"
- }
- }
-}
-```
-
-to create the service by running:
-
-```shell
-$ kubectl create -f examples/guestbook/redis-master-service.json
-redis-master
-
-$ kubectl get services
-NAME LABELS SELECTOR IP PORT
-redis-master name=redis-master name=redis-master 10.0.246.242 6379
-```
-
-This will cause all pods to see the redis master apparently running on the service IP at port 6379. The traffic flow from slaves to master can be described in two steps:
-
-- A *redis slave* will connect to the "port" on the *redis master service*.
-- Traffic will be forwarded from the service "port" (on the service node) to the *targetPort* on the pod that the service routes to.
-
-Thus, once created, the service proxy on each minion is configured to set up a proxy on the specified port (in this case port 6379).
-
-### Step Three: Fire up the replicated slave pods
-Although the redis master is a single pod, the redis read slaves are a 'replicated' pod. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod. The replication controller will automatically launch new pods if the number of replicas falls (this is quite easy - and fun - to test, just kill the docker processes for your pods at will and watch them come back online on a new node shortly thereafter).
-
-Use the file `examples/guestbook/redis-slave-controller.json`, which looks like this:
-
-```js
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-slave",
- "labels":{
- "name":"redis-slave"
- }
- },
- "spec":{
- "replicas":2,
- "selector":{
- "name":"redis-slave"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"redis-slave"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"slave",
- "image":"kubernetes/redis-slave:v2",
- "ports":[
- {
- "containerPort":6379,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
-```
-
-to create the replication controller by running:
-
-```shell
-$ kubectl create -f examples/guestbook/redis-slave-controller.json
-redis-slave-controller
-
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-redis-master master redis name=redis-master 1
-redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2
-```
-
-Once that's up you can list the pods in the cluster, to verify that the master and slaves are running:
-
-```shell
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
-redis-master-controller-gb50a 10.244.3.7 master redis kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-master Running
-redis-slave-controller-182tv 10.244.3.6 slave kubernetes/redis-slave:v2 kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-slave Running
-redis-slave-controller-zwk1b 10.244.2.8 slave kubernetes/redis-slave:v2 kubernetes-minion-3vxa.c.hazel-mote-834.internal/104.154.54.6 name=redis-slave Running
-```
-
-You will see a single redis master pod and two redis slave pods.
-
-### Step Four: Create the redis slave service
-
-Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the slave service provides transparent load balancing to web app clients.
-
-The service specification for the slaves is in `examples/guestbook/redis-slave-service.json`:
-
-```js
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-slave",
- "labels":{
- "name":"redis-slave"
- }
- },
- "spec":{
- "ports": [
- {
- "port":6379,
- "targetPort":6379,
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"redis-slave"
- }
- }
-}
-```
-
-This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself as we've done here to make it easy to locate them with the `kubectl get services -l "label=value"` command.
-
-Now that you have created the service specification, create it in your cluster by running:
-
-```shell
-$ kubectl create -f examples/guestbook/redis-slave-service.json
-redis-slave
-
-$ kubectl get services
-NAME LABELS SELECTOR IP PORT
-redis-master name=redis-master name=redis-master 10.0.246.242 6379
-redis-slave name=redis-slave name=redis-slave 10.0.72.62 6379
-```
-
-### Step Five: Create the frontend pod
-
-This is a simple PHP server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. It exposes a simple AJAX interface, and serves an angular-based UX. Like the redis read slaves it is a replicated service instantiated by a replication controller.
-
-For reads, it can now leverage the load-balanced redis slaves, which can be highly replicated.
-
-The pod is described in the file `examples/guestbook/frontend-controller.json`:
-
-```js
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"frontend",
- "labels":{
- "name":"frontend"
- }
- },
- "spec":{
- "replicas":3,
- "selector":{
- "name":"frontend"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"frontend"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"php-redis",
- "image":"kubernetes/example-guestbook-php-redis:v2",
- "ports":[
- {
- "containerPort":80,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
-```
-
-Using this file, you can turn up your frontend with:
-
-```shell
-$ kubectl create -f examples/guestbook/frontend-controller.json
-frontend-controller
-
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
-redis-master master redis name=redis-master 1
-redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2
-```
-
-Once that's up (it may take ten to thirty seconds to create the pods) you can list the pods in the cluster, to verify that the master, slaves and frontends are running:
-
-```shell
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
-frontend-5m1zc 10.244.1.131 php-redis kubernetes/example-guestbook-php-redis:v2 kubernetes-minion-3vxa.c.hazel-mote-834.internal/146.148.71.71 app=frontend,name=frontend,uses=redis-slave,redis-master Running
-frontend-ckn42 10.244.2.134 php-redis kubernetes/example-guestbook-php-redis:v2 kubernetes-minion-by92.c.hazel-mote-834.internal/104.154.54.6 app=frontend,name=frontend,uses=redis-slave,redis-master Running
-frontend-v5drx 10.244.0.128 php-redis kubernetes/example-guestbook-php-redis:v2 kubernetes-minion-wilb.c.hazel-mote-834.internal/23.236.61.63 app=frontend,name=frontend,uses=redis-slave,redis-master Running
-redis-master-gb50a 10.244.3.7 master redis kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-master Running
-redis-slave-182tv 10.244.3.6 slave kubernetes/redis-slave:v2 kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-slave Running
-redis-slave-zwk1b 10.244.2.8 slave kubernetes/redis-slave:v2 kubernetes-minion-3vxa.c.hazel-mote-834.internal/104.154.54.6 name=redis-slave Running
-```
-
-You will see a single redis master pod, two redis slaves, and three frontend pods.
-
-The code for the PHP service looks like this:
-
-```php
-<?php
-
-set_include_path('.:/usr/share/php:/usr/share/pear:/vendor/predis');
-
-error_reporting(E_ALL);
-ini_set('display_errors', 1);
-
-require 'predis/autoload.php';
-
-if (isset($_GET['cmd']) === true) {
- header('Content-Type: application/json');
- if ($_GET['cmd'] == 'set') {
- $client = new Predis\Client([
- 'scheme' => 'tcp',
- 'host' => 'redis-master',
- 'port' => 6379,
- ]);
-
- $client->set($_GET['key'], $_GET['value']);
- print('{"message": "Updated"}');
- } else {
- $client = new Predis\Client([
- 'scheme' => 'tcp',
- 'host' => 'redis-slave',
- 'port' => 6379,
- ]);
-
- $value = $client->get($_GET['key']);
- print('{"data": "' . $value . '"}');
- }
-} else {
- phpinfo();
-} ?>
-```
-
-### Step Six: Create the guestbook service.
-
-Just like the others, you want a service to group your frontend pods.
-The service is described in the file `examples/guestbook/frontend-service.json`:
-
-```js
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"frontend",
- "labels":{
- "name":"frontend"
- }
- },
- "spec":{
- "ports": [
- {
- "port":80,
- "targetPort":80,
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"frontend"
- }
- }
-}
-```
-
-When `createExternalLoadBalancer` is specified (`"createExternalLoadBalancer": true`), it takes some time for an external IP to show up in `kubectl get services` output.
-There should eventually be an internal (10.x.x.x) and an external address assigned to the frontend service.
-If you are running a single-node local setup, or a single VM, you don't need `createExternalLoadBalancer`, nor do you need `publicIPs`.
-Read the *Accessing the guestbook site externally* section below for details and set 10.11.22.33 accordingly (for now, you can
-delete these parameters or leave them as they are - either way it won't hurt anything).
-
-```shell
-$ kubectl create -f examples/guestbook/frontend-service.json
-frontend
-
-$ kubectl get services
-NAME LABELS SELECTOR IP PORT
-frontend name=frontend name=frontend 10.0.93.211 8000
-redis-master name=redis-master name=redis-master 10.0.246.242 6379
-redis-slave name=redis-slave name=redis-slave 10.0.72.62 6379
-```
-
-### A few Google Container Engine specifics for playing around with the services.
-
-In GCE, `kubectl` automatically creates a forwarding rule for services with `createExternalLoadBalancer`.
-
-```shell
-$ gcloud compute forwarding-rules list
-NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
-frontend us-central1 130.211.188.51 TCP us-central1/targetPools/frontend
-```
-
-You can grab the external IP of the load balancer associated with that rule and visit `http://130.211.188.51:80`.
-
-In GCE, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
-
-```shell
-$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-minion kubernetes-minion-80
-```
-
-For GCE details about limiting traffic to specific sources, see the [GCE firewall documentation][gce-firewall-docs].
-
-[cloud-console]: https://console.developer.google.com
-[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls
-
-### Accessing the guestbook site externally
-
-The pods that we have set up are reachable through the frontend service, but you'll notice that 10.0.93.211 (the IP of the frontend service) is unavailable from outside of kubernetes.
-Of course, if you are running kubernetes minions locally, this isn't such a big problem - the port binding will allow you to reach the guestbook website at localhost:80... but the beloved **localhost** solution obviously doesn't work in any real world scenario.
-
-Unless you have access to the `createExternalLoadBalancer` feature (cloud provider specific), you will want to set up a **publicIP on a node**, so that the service can be accessed from outside of the internal kubernetes network. This is quite easy. You simply look at your list of kubelet IP addresses, and update the service file to include a `publicIPs` string, which is mapped to an IP address of any number of your existing kubelets. This will allow all your kubelets to act as external entry points to the service (translation: this will allow you to browse the guestbook site at your kubelet IP address from your browser).
-
-If you are more advanced in the ops arena, note that you can manually get the service IP by looking at the output of `kubectl get pods,services`, and modify your firewall using standard tools and services (firewalld, iptables, SELinux) which you are already familiar with.
-
-And of course, finally, if you are running Kubernetes locally, you can just visit http://localhost:80.
-
-### Step Seven: Cleanup
-
-If you are in a live kubernetes cluster, you can just kill the pods, using a script such as this (obviously, read through it and make sure you understand it before running it blindly, as it will kill several pods automatically for you).
-
-```shell
-### First, kill services and controllers.
-kubectl stop -f examples/guestbook/redis-master-controller.json
-kubectl stop -f examples/guestbook/redis-slave-controller.json
-kubectl stop -f examples/guestbook/frontend-controller.json
-kubectl delete -f examples/guestbook/redis-master-service.json
-kubectl delete -f examples/guestbook/redis-slave-service.json
-kubectl delete -f examples/guestbook/frontend-service.json
-```
-
-To completely tear down a Kubernetes cluster, if you ran this from source, you can use
-
-```shell
-$ cluster/kube-down.sh
-```
-
-### Troubleshooting
-
-The Guestbook example can fail for a variety of reasons, which makes it an effective test. Let's test the web app simply using *curl*, so we can see what's going on.
-
-Before we proceed, here are some setup idiosyncrasies that might cause the app to fail (or appear to fail, when you merely have a *cold start* issue):
-
-- running Kubernetes from HEAD, in which case there may be subtle bugs in the Kubernetes core component interactions.
-- running Kubernetes with security turned on, in such a way that containers are restricted from doing their job.
-- starting Kubernetes and not allowing enough time for all services and pods to come online before testing.
-
-
-
-To post a message (note that this call *overwrites* the messages field, so it will be reset to just one entry):
-
-```
-curl "localhost:8000/index.php?cmd=set&key=messages&value=jay_sais_hi"
-```
-
-And, to get messages afterwards...
-
-```
-curl "localhost:8000/index.php?cmd=get&key=messages"
-```
-
-1) When the *Web page hasn't come up yet*:
-
-When you go to localhost:8000, you might not see the page at all. Testing it with curl...
-```shell
- ==> default: curl: (56) Recv failure: Connection reset by peer
-```
-This means the web frontend isn't up yet. Specifically, the "reset by peer" message occurs because you are trying to access the *right port*, but *nothing is bound* to that port yet. Wait a while, possibly about 2 minutes or more, depending on your setup. Also, run a *watch* on `docker ps` to see if containers are cycling on and off or not starting.
-
-```shell
-$> watch -n 1 docker ps
-```
-
-If you run this on a node to which the frontend is assigned, you will eventually see the frontend container come up. At that point, this basic error will likely go away.
-
-2) *Temporarily, while waiting for the app to come up*, you might see a few of these:
-
-```shell
-==> default:
-==> default: Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Error while reading line from the server [tcp://10.254.168.69:6379]' in /vendor/predis/predis/lib/Predis/Connection/AbstractConnection.php:141
-```
-
-The fix: just go get some coffee. When you come back, there is a good chance the service endpoint will be up. If not, make sure it's running and that the redis master / slave docker logs show something like this:
-
-```shell
-$> docker logs 26af6bd5ac12
-...
-[9] 20 Feb 23:47:51.015 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
-[9] 20 Feb 23:47:51.015 * The server is now ready to accept connections on port 6379
-[9] 20 Feb 23:47:52.005 * Connecting to MASTER 10.254.168.69:6379
-[9] 20 Feb 23:47:52.005 * MASTER <-> SLAVE sync started
-```
-
-3) *When security issues cause redis writes to fail* you may have to run *docker logs* on the redis containers:
-
-```shell
-==> default: Fatal error: Uncaught exception 'Predis\ServerException' with message 'MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.' in /vendor/predis/predis/lib/Predis/Client.php:282"
-```
-The fix is to set up SELinux properly (don't just turn it off). Remember that you can also rebuild this entire app from scratch, using the Dockerfiles, and modify it while redeploying. Reach out on the mailing list if you need help doing so!
-
diff --git a/release-0.19.0/examples/guestbook/frontend-controller.json b/release-0.19.0/examples/guestbook/frontend-controller.json
deleted file mode 100644
index 8b8119b94cb..00000000000
--- a/release-0.19.0/examples/guestbook/frontend-controller.json
+++ /dev/null
@@ -1,37 +0,0 @@
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"frontend",
- "labels":{
- "name":"frontend"
- }
- },
- "spec":{
- "replicas":3,
- "selector":{
- "name":"frontend"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"frontend"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"php-redis",
- "image":"kubernetes/example-guestbook-php-redis:v2",
- "ports":[
- {
- "containerPort":80,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/frontend-service.json b/release-0.19.0/examples/guestbook/frontend-service.json
deleted file mode 100644
index 07e81f9942b..00000000000
--- a/release-0.19.0/examples/guestbook/frontend-service.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"frontend",
- "labels":{
- "name":"frontend"
- }
- },
- "spec":{
- "ports": [
- {
- "port":80,
- "targetPort":80,
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"frontend"
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/php-redis/Dockerfile b/release-0.19.0/examples/guestbook/php-redis/Dockerfile
deleted file mode 100644
index 3cf7c2cfa20..00000000000
--- a/release-0.19.0/examples/guestbook/php-redis/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM brendanburns/php
-
-ADD index.php /var/www/index.php
-ADD controllers.js /var/www/controllers.js
-ADD index.html /var/www/index.html
-
-CMD /run.sh
diff --git a/release-0.19.0/examples/guestbook/php-redis/controllers.js b/release-0.19.0/examples/guestbook/php-redis/controllers.js
deleted file mode 100644
index 1ea5bdce18f..00000000000
--- a/release-0.19.0/examples/guestbook/php-redis/controllers.js
+++ /dev/null
@@ -1,29 +0,0 @@
-var redisApp = angular.module('redis', ['ui.bootstrap']);
-
-/**
- * Constructor
- */
-function RedisController() {}
-
-RedisController.prototype.onRedis = function() {
- this.scope_.messages.push(this.scope_.msg);
- this.scope_.msg = "";
- var value = this.scope_.messages.join();
- this.http_.get("/index.php?cmd=set&key=messages&value=" + value)
- .success(angular.bind(this, function(data) {
- this.scope_.redisResponse = "Updated.";
- }));
-};
-
-redisApp.controller('RedisCtrl', function ($scope, $http, $location) {
- $scope.controller = new RedisController();
- $scope.controller.scope_ = $scope;
- $scope.controller.location_ = $location;
- $scope.controller.http_ = $http;
-
- $scope.controller.http_.get("/index.php?cmd=get&key=messages")
- .success(function(data) {
- console.log(data);
- $scope.messages = data.data.split(",");
- });
-});
diff --git a/release-0.19.0/examples/guestbook/php-redis/index.html b/release-0.19.0/examples/guestbook/php-redis/index.html
deleted file mode 100644
index 81328b4fcd8..00000000000
--- a/release-0.19.0/examples/guestbook/php-redis/index.html
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
- Guestbook
-
-
-
-
-
-
-
-
-Guestbook
-
-
-
- {{msg}}
-
-
-
-
-
diff --git a/release-0.19.0/examples/guestbook/php-redis/index.php b/release-0.19.0/examples/guestbook/php-redis/index.php
deleted file mode 100644
index 18bff077579..00000000000
--- a/release-0.19.0/examples/guestbook/php-redis/index.php
+++ /dev/null
@@ -1,33 +0,0 @@
-<?php
-
-set_include_path('.:/usr/share/php:/usr/share/pear:/vendor/predis');
-
-error_reporting(E_ALL);
-ini_set('display_errors', 1);
-
-require 'predis/autoload.php';
-
-if (isset($_GET['cmd']) === true) {
- header('Content-Type: application/json');
- if ($_GET['cmd'] == 'set') {
- $client = new Predis\Client([
- 'scheme' => 'tcp',
- 'host' => 'redis-master',
- 'port' => 6379,
- ]);
-
- $client->set($_GET['key'], $_GET['value']);
- print('{"message": "Updated"}');
- } else {
- $client = new Predis\Client([
- 'scheme' => 'tcp',
- 'host' => 'redis-slave',
- 'port' => 6379,
- ]);
-
- $value = $client->get($_GET['key']);
- print('{"data": "' . $value . '"}');
- }
-} else {
- phpinfo();
-} ?>
diff --git a/release-0.19.0/examples/guestbook/redis-master-controller.json b/release-0.19.0/examples/guestbook/redis-master-controller.json
deleted file mode 100644
index add8ba79904..00000000000
--- a/release-0.19.0/examples/guestbook/redis-master-controller.json
+++ /dev/null
@@ -1,37 +0,0 @@
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-master",
- "labels":{
- "name":"redis-master"
- }
- },
- "spec":{
- "replicas":1,
- "selector":{
- "name":"redis-master"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"redis-master"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"master",
- "image":"redis",
- "ports":[
- {
- "containerPort":6379,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/redis-master-service.json b/release-0.19.0/examples/guestbook/redis-master-service.json
deleted file mode 100644
index 101d9ea965c..00000000000
--- a/release-0.19.0/examples/guestbook/redis-master-service.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-master",
- "labels":{
- "name":"redis-master"
- }
- },
- "spec":{
- "ports": [
- {
- "port":6379,
- "targetPort":6379,
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"redis-master"
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/redis-slave-controller.json b/release-0.19.0/examples/guestbook/redis-slave-controller.json
deleted file mode 100644
index 4a668fe091b..00000000000
--- a/release-0.19.0/examples/guestbook/redis-slave-controller.json
+++ /dev/null
@@ -1,37 +0,0 @@
-{
- "kind":"ReplicationController",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-slave",
- "labels":{
- "name":"redis-slave"
- }
- },
- "spec":{
- "replicas":2,
- "selector":{
- "name":"redis-slave"
- },
- "template":{
- "metadata":{
- "labels":{
- "name":"redis-slave"
- }
- },
- "spec":{
- "containers":[
- {
- "name":"slave",
- "image":"kubernetes/redis-slave:v2",
- "ports":[
- {
- "containerPort":6379,
- "protocol":"TCP"
- }
- ]
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/redis-slave-service.json b/release-0.19.0/examples/guestbook/redis-slave-service.json
deleted file mode 100644
index 2b866b6f94a..00000000000
--- a/release-0.19.0/examples/guestbook/redis-slave-service.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
- "kind":"Service",
- "apiVersion":"v1beta3",
- "metadata":{
- "name":"redis-slave",
- "labels":{
- "name":"redis-slave"
- }
- },
- "spec":{
- "ports": [
- {
- "port":6379,
- "targetPort":6379,
- "protocol":"TCP"
- }
- ],
- "selector":{
- "name":"redis-slave"
- }
- }
-}
diff --git a/release-0.19.0/examples/guestbook/redis-slave/Dockerfile b/release-0.19.0/examples/guestbook/redis-slave/Dockerfile
deleted file mode 100644
index 8167438bbea..00000000000
--- a/release-0.19.0/examples/guestbook/redis-slave/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM redis
-
-ADD run.sh /run.sh
-
-RUN chmod a+x /run.sh
-
-CMD /run.sh
diff --git a/release-0.19.0/examples/guestbook/redis-slave/run.sh b/release-0.19.0/examples/guestbook/redis-slave/run.sh
deleted file mode 100755
index bf48f27c015..00000000000
--- a/release-0.19.0/examples/guestbook/redis-slave/run.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-
-# Copyright 2014 The Kubernetes Authors All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-redis-server --slaveof redis-master 6379
diff --git a/release-0.19.0/examples/hazelcast/Dockerfile b/release-0.19.0/examples/hazelcast/Dockerfile
deleted file mode 100644
index 55963290c1a..00000000000
--- a/release-0.19.0/examples/hazelcast/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM quay.io/pires/docker-jre:8u45-2
-
-MAINTAINER Paulo Pires
-
-EXPOSE 5701
-
-RUN \
- curl -Lskj https://github.com/pires/hazelcast-kubernetes-bootstrapper/releases/download/0.3.1/hazelcast-kubernetes-bootstrapper-0.3.1.jar \
- -o /bootstrapper.jar
-
-CMD java -jar /bootstrapper.jar
diff --git a/release-0.19.0/examples/hazelcast/README.md b/release-0.19.0/examples/hazelcast/README.md
deleted file mode 100644
index b8836d0b80a..00000000000
--- a/release-0.19.0/examples/hazelcast/README.md
+++ /dev/null
@@ -1,214 +0,0 @@
-## Cloud Native Deployments of Hazelcast using Kubernetes
-
-The following document describes the development of a _cloud native_ [Hazelcast](http://hazelcast.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Hazelcast ```bootstrapper``` is used to enable Hazelcast to dynamically discover Hazelcast nodes that have already joined the cluster.
-
-Any topology changes are communicated and handled by Hazelcast nodes themselves.
-
-This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_.
-
-### Prerequisites
-This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform.
-
-### A note for the impatient
-This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.
-
-### Sources
-
-Source is freely available at:
-* Hazelcast Discovery - https://github.com/pires/hazelcast-kubernetes-bootstrapper
-* Dockerfile - https://github.com/pires/hazelcast-kubernetes
-* Docker Trusted Build - https://registry.hub.docker.com/u/pires/hazelcast-k8s
-
-### Simple Single Pod Hazelcast Node
-In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
-
-In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition.
-
-
-### Adding a Hazelcast Service
-In Kubernetes a _[Service](../../docs/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.
-
-Here is the service description:
-```yaml
-apiVersion: v1beta3
-kind: Service
-metadata:
- labels:
- name: hazelcast
- name: hazelcast
-spec:
- ports:
- - port: 5701
- targetPort: 5701
- selector:
- name: hazelcast
-```
-
-The important thing to note here is the `selector`. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
-
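-As a quick, illustrative check (not part of the original walkthrough), you can run the same label query yourself once pods exist; assuming your `kubectl` supports the `-l` selector flag, it looks roughly like this:
-
-```sh
-# List only the pods carrying the label this Service selects on.
-$ kubectl get pods -l name=hazelcast
-```
-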
-Create this service as follows:
-```sh
-$ kubectl create -f hazelcast-service.yaml
-```
-
-### Adding replicated nodes
-The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.
-
-In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
-
-Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to get our first Hazelcast pod running.
-
-```yaml
-apiVersion: v1beta3
-kind: ReplicationController
-metadata:
- labels:
- name: hazelcast
- name: hazelcast
-spec:
- replicas: 1
- selector:
- name: hazelcast
- template:
- metadata:
- labels:
- name: hazelcast
- spec:
- containers:
- - resources:
- limits:
- cpu: 1
- image: quay.io/pires/hazelcast-kubernetes:0.3.1
- name: hazelcast
- env:
- - name: "DNS_DOMAIN"
- value: "cluster.local"
- ports:
- - containerPort: 5701
- name: hazelcast
-```
-
-There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.3.1`. This is a `busybox` installation with JRE 8. However, it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps a Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
-
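-As a rough sketch of what that discovery amounts to (hypothetical; the exact request the bootstrapper issues may differ, and authentication is omitted here), the controller effectively asks the API server for the endpoints behind the `hazelcast` service:
-
-```sh
-# Roughly what the bootstrapper does: read the service endpoints from the API server.
-$ curl -k https://kubernetes.default.cluster.local/api/v1beta3/namespaces/default/endpoints/hazelcast
-```
-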
-You may also note that we tell Kubernetes that the container exposes the `hazelcast` port. Finally, we tell the cluster manager that we need 1 CPU core.
-
-The bulk of the replication controller config is actually identical to a Hazelcast pod declaration; it simply gives the controller a recipe to use when creating new pods. The other parts are the `selector`, which contains the controller's selector query, and the `replicas` parameter, which specifies the desired number of replicas, in this case 1.
-
-Last but not least, we set the `DNS_DOMAIN` environment variable according to your Kubernetes cluster's DNS configuration.
-
-Create this controller:
-
-```sh
-$ kubectl create -f hazelcast-controller.yaml
-```
-
-After the controller successfully provisions the pod, you can query the service endpoints:
-```sh
-$ kubectl get endpoints hazelcast -o yaml
-apiVersion: v1beta3
-kind: Endpoints
-metadata:
- creationTimestamp: 2015-05-04T17:43:40Z
- labels:
- name: hazelcast
- name: hazelcast
- namespace: default
- resourceVersion: "120480"
- selfLink: /api/v1beta3/namespaces/default/endpoints/hazelcast
- uid: 19a22aa9-f285-11e4-b38f-42010af0bbf9
-subsets:
-- addresses:
- - IP: 10.245.2.68
- targetRef:
- kind: Pod
- name: hazelcast
- namespace: default
- resourceVersion: "120479"
- uid: d7238173-f283-11e4-b38f-42010af0bbf9
- ports:
- - port: 5701
- protocol: TCP
-```
-
-You can see that the _Service_ has found the pod created by the replication controller.
-
-Now it gets even more interesting.
-
-Let's scale our cluster to 2 pods:
-```sh
-$ kubectl scale rc hazelcast --replicas=2
-```
-
-Now if you list the pods in your cluster, you should see two Hazelcast pods:
-
-```sh
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-hazelcast-pkyzd 10.244.90.3 e2e-test-minion-vj7k/104.197.8.214 name=hazelcast Running 14 seconds
- hazelcast quay.io/pires/hazelcast-kubernetes:0.3.1 Running 2 seconds
-hazelcast-ulkws 10.244.66.2 e2e-test-minion-2x1f/146.148.62.37 name=hazelcast Running 7 seconds
- hazelcast quay.io/pires/hazelcast-kubernetes:0.3.1 Running 6 seconds
-```
-
-To prove that this all works, you can use the `log` command to examine the logs of one pod, for example:
-
-```sh
-$ kubectl log hazelcast-ulkws hazelcast
-2015-05-09 22:06:20.016 INFO 5 --- [ main] com.github.pires.hazelcast.Application : Starting Application v0.2-SNAPSHOT on hazelcast-enyli with PID 5 (/bootstrapper.jar started by root in /)
-2015-05-09 22:06:20.071 INFO 5 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@5424f110: startup date [Sat May 09 22:06:20 GMT 2015]; root of context hierarchy
-2015-05-09 22:06:21.511 INFO 5 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
-2015-05-09 22:06:21.549 INFO 5 --- [ main] c.g.p.h.HazelcastDiscoveryController : Asking k8s registry at https://kubernetes.default.cluster.local..
-2015-05-09 22:06:22.031 INFO 5 --- [ main] c.g.p.h.HazelcastDiscoveryController : Found 2 pods running Hazelcast.
-2015-05-09 22:06:22.176 INFO 5 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.4.2] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [10.244.90.3, 10.244.66.2]
-2015-05-09 22:06:22.177 INFO 5 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.4.2] Prefer IPv4 stack is true.
-2015-05-09 22:06:22.189 INFO 5 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.4.2] Picked Address[10.244.66.2]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
-2015-05-09 22:06:22.642 INFO 5 --- [ main] com.hazelcast.spi.OperationService : [10.244.66.2]:5701 [someGroup] [3.4.2] Backpressure is disabled
-2015-05-09 22:06:22.647 INFO 5 --- [ main] c.h.spi.impl.BasicOperationScheduler : [10.244.66.2]:5701 [someGroup] [3.4.2] Starting with 2 generic operation threads and 2 partition operation threads.
-2015-05-09 22:06:22.796 INFO 5 --- [ main] com.hazelcast.system : [10.244.66.2]:5701 [someGroup] [3.4.2] Hazelcast 3.4.2 (20150326 - f6349a4) starting at Address[10.244.66.2]:5701
-2015-05-09 22:06:22.798 INFO 5 --- [ main] com.hazelcast.system : [10.244.66.2]:5701 [someGroup] [3.4.2] Copyright (C) 2008-2014 Hazelcast.com
-2015-05-09 22:06:22.800 INFO 5 --- [ main] com.hazelcast.instance.Node : [10.244.66.2]:5701 [someGroup] [3.4.2] Creating TcpIpJoiner
-2015-05-09 22:06:22.801 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTING
-2015-05-09 22:06:23.108 INFO 5 --- [cached.thread-2] com.hazelcast.nio.tcp.SocketConnector : [10.244.66.2]:5701 [someGroup] [3.4.2] Connecting to /10.244.90.3:5701, timeout: 0, bind-any: true
-2015-05-09 22:06:23.182 INFO 5 --- [cached.thread-2] c.h.nio.tcp.TcpIpConnectionManager : [10.244.66.2]:5701 [someGroup] [3.4.2] Established socket connection between /10.244.66.2:48051 and 10.244.90.3/10.244.90.3:5701
-2015-05-09 22:06:29.158 INFO 5 --- [ration.thread-1] com.hazelcast.cluster.ClusterService : [10.244.66.2]:5701 [someGroup] [3.4.2]
-
-Members [2] {
- Member [10.244.90.3]:5701
- Member [10.244.66.2]:5701 this
-}
-
-2015-05-09 22:06:31.177 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTED
-```
-
-Now let's scale our cluster to 4 nodes:
-```sh
-$ kubectl scale rc hazelcast --replicas=4
-```
-
-Examine the status again by checking a node’s log and you should see the 4 members connected.
-
-### tl; dr;
-For those of you who are impatient, here is the summary of the commands we ran in this tutorial.
-
-```sh
-# create a service to track all hazelcast nodes
-kubectl create -f hazelcast-service.yaml
-
-# create a replication controller to replicate hazelcast nodes
-kubectl create -f hazelcast-controller.yaml
-
-# scale up to 2 nodes
-kubectl scale rc hazelcast --replicas=2
-
-# scale up to 4 nodes
-kubectl scale rc hazelcast --replicas=4
-```
-
-### Hazelcast Discovery Source
-
-See [here](https://github.com/pires/hazelcast-kubernetes-bootstrapper/blob/master/src/main/java/com/github/pires/hazelcast/HazelcastDiscoveryController.java)
-
diff --git a/release-0.19.0/examples/hazelcast/hazelcast-controller.yaml b/release-0.19.0/examples/hazelcast/hazelcast-controller.yaml
deleted file mode 100644
index 86496ef665f..00000000000
--- a/release-0.19.0/examples/hazelcast/hazelcast-controller.yaml
+++ /dev/null
@@ -1,27 +0,0 @@
-apiVersion: v1beta3
-kind: ReplicationController
-metadata:
- labels:
- name: hazelcast
- name: hazelcast
-spec:
- replicas: 1
- selector:
- name: hazelcast
- template:
- metadata:
- labels:
- name: hazelcast
- spec:
- containers:
- - resources:
- limits:
- cpu: 1
- image: quay.io/pires/hazelcast-kubernetes:0.3.1
- name: hazelcast
- env:
- - name: "DNS_DOMAIN"
- value: "cluster.local"
- ports:
- - containerPort: 5701
- name: hazelcast
diff --git a/release-0.19.0/examples/hazelcast/hazelcast-service.yaml b/release-0.19.0/examples/hazelcast/hazelcast-service.yaml
deleted file mode 100644
index 1ea5a121209..00000000000
--- a/release-0.19.0/examples/hazelcast/hazelcast-service.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-apiVersion: v1beta3
-kind: Service
-metadata:
- labels:
- name: hazelcast
- name: hazelcast
-spec:
- ports:
- - port: 5701
- targetPort: 5701
- selector:
- name: hazelcast
diff --git a/release-0.19.0/examples/iscsi/README.md b/release-0.19.0/examples/iscsi/README.md
deleted file mode 100644
index 97731de8849..00000000000
--- a/release-0.19.0/examples/iscsi/README.md
+++ /dev/null
@@ -1,65 +0,0 @@
-## Step 1. Setting up iSCSI target and iSCSI initiator
-**Setup A.** On Fedora 21 nodes
-
-If you use Fedora 21 on the Kubernetes node, first install the iSCSI initiator on the node:
-
- # yum -y install iscsi-initiator-utils
-
-
-then edit */etc/iscsi/initiatorname.iscsi* and */etc/iscsi/iscsid.conf* to match your iSCSI target configuration.
-
-I mostly followed these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2) to setup iSCSI initiator and these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi) to setup iSCSI target.
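-
-Before involving Kubernetes at all, you may want to sanity-check the initiator against your target. A rough sketch with open-iscsi (the portal and iqn below are the example values from *iscsi.json*; substitute your own):
-
-```console
-# discover targets exposed by the portal, then log in to the one you want
-iscsiadm -m discovery -t sendtargets -p 10.16.154.81:3260
-iscsiadm -m node -T iqn.2014-12.world.server:storage.target01 -p 10.16.154.81:3260 --login
-```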
-
-**Setup B.** On Ubuntu 12.04 and Debian 7 nodes on GCE
-
-GCE does not provide a preconfigured Fedora 21 image, so I set up the iSCSI target on a preconfigured Ubuntu 12.04 image, mostly following these [instructions](http://www.server-world.info/en/note?os=Ubuntu_12.04&p=iscsi). My Kubernetes cluster on GCE was running Debian 7 images, so I followed these [instructions](http://www.server-world.info/en/note?os=Debian_7.0&p=iscsi&f=2) to set up the iSCSI initiator.
-
-## Step 2. Creating the pod with iSCSI persistent storage
-Once you have installed the iSCSI initiator and a Kubernetes release that includes the iSCSI volume plugin, you can create a pod based on the example *iscsi.json*. In the pod JSON, you need to provide the *targetPortal* (the iSCSI target's **IP** address, plus the *port* if it is not the default 3260), the target's *iqn*, the *lun*, the *fsType* of the filesystem that has been created on the LUN, and the *readOnly* boolean.
-
-Once your pod definition is ready, create the pod from the Kubernetes master:
-
-```console
-kubectl create -f your_new_pod.json
-```
-
-Here is my command and output:
-
-```console
-# kubectl create -f examples/iscsi/iscsi.json
-# kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-iscsipd 10.244.3.14 kubernetes-minion-bz1p/104.154.61.231 Running About an hour
- iscsipd-rw kubernetes/pause Running About an hour
- iscsipd-ro kubernetes/pause Running About an hour
-```
-
-On the Kubernetes node, I got the following in the mount output:
-
-```console
-# mount |grep kub
-/dev/sdb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/iscsi/10.240.205.13:3260-iqn-iqn.2014-12.world.server:storage.target1-lun-0 type ext4 (ro,relatime,data=ordered)
-/dev/sdb on /var/lib/kubelet/pods/e36158ce-f8d8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
-/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/iscsi/iscsi/10.240.205.13:3260-iqn-iqn.2014-12.world.server:storage.target1-lun-1 type xfs (rw,relatime,attr2,inode64,noquota)
-/dev/sdc on /var/lib/kubelet/pods/e36158ce-f8d8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-rw type xfs (rw,relatime,attr2,inode64,noquota)
-```
-
-If you ssh to that machine, you can run `docker ps` to see the actual pod.
-```console
-# docker ps
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-cc051196e7af kubernetes/pause:latest "/pause" About an hour ago Up About an hour k8s_iscsipd-rw.ff2d2e9f_iscsipd_default_e36158ce-f8d8-11e4-9ae7-42010af01964_26f3a457
-8aa981443cf4 kubernetes/pause:latest "/pause" About an hour ago Up About an hour k8s_iscsipd-ro.d7752e8f_iscsipd_default_e36158ce-f8d8-11e4-9ae7-42010af01964_4939633d
-```
-
-Running *docker inspect*, I found that the containers mounted the host directory into their */mnt/iscsipd* directory.
-```console
-# docker inspect --format '{{index .Volumes "/mnt/iscsipd"}}' cc051196e7af
-/var/lib/kubelet/pods/75e0af2b-f8e8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-rw
-```
-
-
diff --git a/release-0.19.0/examples/iscsi/iscsi.json b/release-0.19.0/examples/iscsi/iscsi.json
deleted file mode 100644
index 439832b8049..00000000000
--- a/release-0.19.0/examples/iscsi/iscsi.json
+++ /dev/null
@@ -1,53 +0,0 @@
-{
- "apiVersion": "v1beta3",
- "kind": "Pod",
- "metadata": {
- "name": "iscsipd"
- },
- "spec": {
- "containers": [
- {
- "name": "iscsipd-ro",
- "image": "kubernetes/pause",
- "volumeMounts": [
- {
- "mountPath": "/mnt/iscsipd",
- "name": "iscsipd-ro"
- }
- ]
- },
- {
- "name": "iscsipd-rw",
- "image": "kubernetes/pause",
- "volumeMounts": [
- {
- "mountPath": "/mnt/iscsipd",
- "name": "iscsipd-rw"
- }
- ]
- }
- ],
- "volumes": [
- {
- "name": "iscsipd-ro",
- "iscsi": {
- "targetPortal": "10.16.154.81:3260",
- "iqn": "iqn.2014-12.world.server:storage.target01",
- "lun": 0,
- "fsType": "ext4",
- "readOnly": true
- }
- },
- {
- "name": "iscsipd-rw",
- "iscsi": {
- "targetPortal": "10.16.154.81:3260",
- "iqn": "iqn.2014-12.world.server:storage.target01",
- "lun": 1,
- "fsType": "xfs",
- "readOnly": false
- }
- }
- ]
- }
-}
diff --git a/release-0.19.0/examples/k8petstore/README.md b/release-0.19.0/examples/k8petstore/README.md
deleted file mode 100644
index 541cdc41b61..00000000000
--- a/release-0.19.0/examples/k8petstore/README.md
+++ /dev/null
@@ -1,117 +0,0 @@
-## Welcome to k8PetStore
-
-This is a follow up to the [Guestbook Example](../guestbook/README.md)'s [Go implementation](../guestbook-go/).
-
-- It leverages the same components (redis, Go REST API) as the guestbook application
-- It comes with visualizations for graphing what's happening in Redis transactions, along with command-line printouts of transaction throughput
-- It is hackable: you can build all images from the files in this repository (with the exception of the data generator, which comes from Apache Bigtop).
-- It generates massive load using a semantically rich, realistic transaction simulator for petstores
-
-This application will run a web server which returns Redis records for a petstore application.
-It is meant to simulate and test high load on Kubernetes or any other Docker-based system.
-
-If you are new to Kubernetes and you haven't run the guestbook yet, you might want to stop here and go back and run the guestbook app first.
-
-The guestbook tutorial will teach you a lot about the basics of Kubernetes, and we've tried not to be redundant here.
-
-## Architecture of this SOA
-
-A diagram of the overall architecture of this application can be seen in [k8petstore.dot](k8petstore.dot) (you can paste the contents into any graphviz viewer, including online ones such as http://sandbox.kidstrythisathome.com/erdos/).
-
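-If you have graphviz installed locally, you can also render it offline; a minimal sketch (assuming the `dot` binary is on your PATH):
-
-```console
-dot -Tpng k8petstore.dot -o k8petstore.png
-```
-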
-## Docker image dependencies
-
-Reading this section is optional, only if you want to rebuild everything from scratch.
-
-This project depends on three docker images which you can build for yourself and save under your own dockerhub user name.
-
-Since these images are already published under other parties like redis, jayunit100, and so on, you don't need to build the images to run the app.
-
-If you do want to build the images, you will need to build and push the images in this repository.
-
-For a list of those images, see the `build-push-containers.sh` shell script; it builds and pushes all the images for you. Just modify the dockerhub user name in it accordingly.
-
-## Get started with the WEBAPP
-
-The web app is written in Go, and borrowed from the original Guestbook example by Brendan Burns.
-
-We have extended it to do some error reporting, to persist JSON petstore transactions (not much different than guestbook entries), and to support additional REST calls, like LLEN, which returns the total number of transactions in the database.
-
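-For example, once the web server is up and reachable on port 3000, you can hit that endpoint directly; the same call is what the bundled test script uses:
-
-```console
-curl localhost:3000/llen
-```
-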
-To work on the app, just cd to the `dev` directory and follow the instructions. You can easily edit it on your local machine by installing redis and go. Then you can use the `Vagrantfile` in this top-level directory to launch a minimal version of the app in pure Docker containers.
-
-If that is all working, you can finally run `k8petstore.sh` in any kubernetes cluster, and run the app at scale.
-
-## Set up the data generator (optional)
-
-The web front end provides users an interface for watching pet store transactions in real time as they occur.
-
-To generate those transactions, you can use the bigpetstore data generator. Alternatively, you could just write a shell script which calls "curl localhost:3000/rpush/k8petstore/blahblahblah" over and over again :). But that's not nearly as fun, and it's not a good test of a real-world scenario where payloads scale and have lots of information content.
-
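-A throwaway version of that shell-script approach might look like the sketch below (the `/rpush/{key}/{value}` route comes from the Go web server in this directory; the payload here is just a dummy value):
-
-```console
-# push one dummy transaction per second at the REST API
-while true; do
-  curl "localhost:3000/rpush/k8petstore/dummy-transaction-$RANDOM"
-  sleep 1
-done
-```
-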
-Similarly, you can locally run and test the data generator code, which is Java-based; you can pull it down directly from Apache Bigtop.
-
-Directions for that are here: https://github.com/apache/bigtop/tree/master/bigtop-bigpetstore/bigpetstore-transaction-queue
-
-You will likely want to check out commit 2b2392bf135e9f1256bd0b930f05ae5aef8bbdcb, which is the exact commit that the current k8petstore was tested against.
-
-## Now what?
-
-Once you have done the above 3 steps, you have a working, locally runnable, from-source version of the k8petstore app. Now we can try to run it in Kubernetes.
-
-## Hacking, testing, benchmarking
-
-Once the app is running, you can go to publicIP:3000 (the `PUBLIC_IP` parameter in the script). In your browser, you should see a chart and the k8petstore title page, as well as an indicator of transaction throughput, and so on.
-
-You can modify the HTML pages, add new REST paths to the Go app, and so on.
-
-## Running in kubernetes
-
-Now that you are done hacking around on the app, you can run it in Kubernetes. To do this, you will want to rebuild the Docker images (most likely just the Go web-server image, since you are less likely to need to change the others). Then you will push those images to dockerhub.
-
-Now, how to run the entire application in kubernetes?
-
-To simplify running this application, we have a single file, k8petstore.sh, which writes out json files onto disk. This allows us to have dynamic parameters, without needing to worry about managing multiple json files.
-
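-After a run, you should find generated files along these lines in the working directory (names taken from the script itself):
-
-```console
-$ ls *.json
-bps-load-gen-rc.json  fe-rc.json  fe-s.json  rm-s.json  rm.json  rs-s.json  slave-rc.json
-```
-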
-You might want to change it to point to your customized Go image, if you chose to modify things, or to tweak parameters like the number of data generators (more generators will create more load on the redis master).
-
-So, to run this app in kubernetes, simply run [The all in one k8petstore.sh shell script](k8petstore.sh).
-
-Note that there are a few self-explanatory parameters to set at the top of it.
-
-Most importantly, set the public IP parameter, so that you can check out the web UI (at $PUBLIC_IP:3000), which will show a plot and readouts of transaction throughput.
-
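-For reference, the script takes its overrides positionally; a sketch of a full invocation (the values shown are simply the script's own defaults, not requirements):
-
-```console
-# args: kubectl-binary, version, public-ip, seconds, frontends, load-generators, redis-slaves, run-tests(1/0), namespace
-./k8petstore.sh kubectl r.2.8.19 10.1.4.89 1000 1 1 1 1 k8petstore
-```
-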
-## Future
-
-In the future, we plan to add Cassandra support. Redis is a fabulous in-memory data store, but it is not meant for truly available and resilient storage.
-
-Thus we plan to add another tier of queueing, which empties the Redis transactions into a Cassandra store that persists them.
-
-## Questions
-
-For questions on running this app, you can ask on the Google Containers group (google-containers@googlegroups.com), or in #google-containers on freenode IRC.
-
-For questions about bigpetstore and how the data is generated, ask on the Apache Bigtop mailing list.
-
-
diff --git a/release-0.19.0/examples/k8petstore/Vagrantfile b/release-0.19.0/examples/k8petstore/Vagrantfile
deleted file mode 100644
index a96af767b65..00000000000
--- a/release-0.19.0/examples/k8petstore/Vagrantfile
+++ /dev/null
@@ -1,37 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-require 'fileutils'
-
-#$fes = 1
-#$rslavess = 1
-
-Vagrant.configure("2") do |config|
-
- config.vm.define "rmaster" do |rm|
- rm.vm.provider "docker" do |d|
- d.vagrant_vagrantfile = "./dev/hosts/Vagrantfile"
- d.build_dir = "redis-master"
- d.name = "rmaster"
- d.create_args = ["--privileged=true", "-m", "1g"]
- #d.ports = [ "6379:6379" ]
- d.remains_running = true
- end
- end
-
- config.vm.define "frontend" do |fe|
- fe.vm.provider "docker" do |d|
- d.vagrant_vagrantfile = "./dev/hosts/Vagrantfile"
- d.build_dir = "web-server"
- d.name = "web-server"
- d.create_args = ["--privileged=true"]
- d.remains_running = true
- d.create_args = d.create_args << "--link" << "rmaster:rmaster"
- d.ports = ["3000:3000"]
- d.env = {"REDISMASTER_SERVICE_HOST"=>"rmaster","REDISMASTER_SERVICE_PORT"=>"6379"}
- end
- end
-
- ### Todo , add data generator.
-
-end
diff --git a/release-0.19.0/examples/k8petstore/bps-data-generator/README.md b/release-0.19.0/examples/k8petstore/bps-data-generator/README.md
deleted file mode 100644
index 09b18fc9748..00000000000
--- a/release-0.19.0/examples/k8petstore/bps-data-generator/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# How to generate the bps-data-generator container #
-
-This container is maintained as part of the Apache Bigtop project.
-
-To create it, simply
-
-`git clone https://github.com/apache/bigtop`
-
-and check out the exact version below (it will be updated periodically).
-
-`git checkout -b aNewBranch 2b2392bf135e9f1256bd0b930f05ae5aef8bbdcb`
-
-then cd to bigtop-bigpetstore/bigpetstore-transaction-queue and build from the Dockerfile, i.e.
-
-`docker build -t jayunit100/bps-transaction-queue .`.
-
-
diff --git a/release-0.19.0/examples/k8petstore/build-push-containers.sh b/release-0.19.0/examples/k8petstore/build-push-containers.sh
deleted file mode 100755
index 7733b6fdd48..00000000000
--- a/release-0.19.0/examples/k8petstore/build-push-containers.sh
+++ /dev/null
@@ -1,29 +0,0 @@
-#!/bin/bash
-
-# Copyright 2015 The Kubernetes Authors All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#K8PetStore version is tied to the redis version. We will add more info to version tag later.
-#Change the 'jayunit100' string below to your own dockerhub name and run this script.
-#It will build all the containers for this application and publish them to your dockerhub account
-version="r.2.8.19"
-docker build -t jayunit100/k8-petstore-redis:$version ./redis/
-docker build -t jayunit100/k8-petstore-redis-master:$version ./redis-master
-docker build -t jayunit100/k8-petstore-redis-slave:$version ./redis-slave
-docker build -t jayunit100/k8-petstore-web-server:$version ./web-server
-
-docker push jayunit100/k8-petstore-redis:$version
-docker push jayunit100/k8-petstore-redis-master:$version
-docker push jayunit100/k8-petstore-redis-slave:$version
-docker push jayunit100/k8-petstore-web-server:$version
diff --git a/release-0.19.0/examples/k8petstore/dev/README b/release-0.19.0/examples/k8petstore/dev/README
deleted file mode 100644
index 3b495ea7034..00000000000
--- a/release-0.19.0/examples/k8petstore/dev/README
+++ /dev/null
@@ -1,35 +0,0 @@
-### Local development
-
-1) Install Go
-
-2) Install Redis
-
-Now start a local redis instance
-
-```
-redis-server
-```
-
-And run the app
-
-```
-export GOPATH=~/Development/k8hacking/k8petstore/web-server/
-cd $GOPATH/src/main/
-## Now, you're in the local dir to run the app. Go get its dependencies.
-go get
-go run PetStoreBook.go
-```
-
-Once the app works the way you want it to, test it in the vagrant recipe below. This will guarantee that your local environment isn't doing something that breaks the containers at the versioning level.
-
-### Testing
-
-This folder can be used by anyone interested in building and developing the k8petstore application.
-
-This is for dev and test.
-
-`vagrant up` gets you a cluster with the app's core components running.
-
-You can rename Vagrantfile_atomic to Vagrantfile if you want to try to test in atomic instead.
-
-**Now you can run the code on the Kubernetes cluster with reasonable assurance that any problems you run into are not bugs in the code itself :)**
diff --git a/release-0.19.0/examples/k8petstore/dev/Vagrantfile b/release-0.19.0/examples/k8petstore/dev/Vagrantfile
deleted file mode 100755
index c4f19b2aa4d..00000000000
--- a/release-0.19.0/examples/k8petstore/dev/Vagrantfile
+++ /dev/null
@@ -1,44 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-require 'fileutils'
-
-#$fes = 1
-#$rslavess = 1
-
-Vagrant.configure("2") do |config|
-
- config.vm.define "rmaster" do |rm|
- rm.vm.provider "docker" do |d|
- d.vagrant_vagrantfile = "./hosts/Vagrantfile"
- d.build_dir = "../redis-master"
- d.name = "rmaster"
- d.create_args = ["--privileged=true"]
- #d.ports = [ "6379:6379" ]
- d.remains_running = true
- end
- end
-
- puts "sleep 20 to make sure container is up..."
- sleep(20)
- puts "resume"
-
- config.vm.define "frontend" do |fe|
- fe.vm.provider "docker" do |d|
- d.vagrant_vagrantfile = "./hosts/Vagrantfile"
- d.build_dir = "../web-server"
- d.name = "web-server"
- d.create_args = ["--privileged=true"]
- d.remains_running = true
- d.create_args = d.create_args << "--link" << "rmaster:rmaster"
- d.ports = ["3000:3000"]
- d.env = {"REDISMASTER_SERVICE_HOST"=>"rmaster","REDISMASTER_SERVICE_PORT"=>"6379"}
- end
- end
-
-
-
- ### Todo , add data generator.
-
-
-end
diff --git a/release-0.19.0/examples/k8petstore/dev/hosts/Vagrantfile b/release-0.19.0/examples/k8petstore/dev/hosts/Vagrantfile
deleted file mode 100644
index 72e86d72621..00000000000
--- a/release-0.19.0/examples/k8petstore/dev/hosts/Vagrantfile
+++ /dev/null
@@ -1,11 +0,0 @@
-VAGRANTFILE_API_VERSION = "2"
-
-Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
- config.vm.box = "jayunit100/centos7"
- config.vm.provision "docker"
- config.vm.provision "shell", inline: "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"
- config.vm.provision "shell", inline: "yum install -y git && service firewalld stop && service docker restart"
- config.vm.provision "shell", inline: "docker ps -a | awk '{print $1}' | xargs --no-run-if-empty docker rm -f || ls"
- config.vm.network :forwarded_port, guest: 3000, host: 3000
-
-end
diff --git a/release-0.19.0/examples/k8petstore/dev/test.sh b/release-0.19.0/examples/k8petstore/dev/test.sh
deleted file mode 100755
index 53d42a8c5b7..00000000000
--- a/release-0.19.0/examples/k8petstore/dev/test.sh
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/bin/bash
-
-# Copyright 2015 The Kubernetes Authors All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-## First set up the host VM. That ensures
-## we avoid vagrant race conditions.
-set -x
-
-cd hosts/
-echo "note: the VM must be running before you try this"
-echo "if not already running, cd to hosts and run vagrant up"
-vagrant provision
-#echo "removing containers"
-#vagrant ssh -c "sudo docker rm -f $(docker ps -a -q)"
-cd ..
-
-## Now spin up the docker containers
-## these will run in the ^ host vm above.
-
-vagrant up
-
-## Finally, curl the length, it should be 3 .
-
-x=`curl localhost:3000/llen`
-
-for i in `seq 1 100`; do
- if [ x$x == "x3" ]; then
- echo " passed $3 "
- exit 0
- else
- echo " FAIL"
- fi
-done
-
-exit 1 # if we get here the test obviously failed.
diff --git a/release-0.19.0/examples/k8petstore/k8petstore.dot b/release-0.19.0/examples/k8petstore/k8petstore.dot
deleted file mode 100644
index 539132fb3aa..00000000000
--- a/release-0.19.0/examples/k8petstore/k8petstore.dot
+++ /dev/null
@@ -1,9 +0,0 @@
- digraph k8petstore {
-
- USERS -> publicIP_proxy -> web_server;
- bps_data_generator -> web_server [arrowhead = crow, label = "http://$FRONTEND_SERVICE_HOST:3000/rpush/k8petstore/{name..address..,product=..."];
- external -> web_server [arrowhead = crow, label=" http://$FRONTEND_SERVICE_HOST/k8petstore/llen:3000"];
- web_server -> redis_master [label=" RESP : k8petstore, llen"];
- redis_master -> redis_slave [arrowhead = crow] [label="replication (one-way)"];
-}
-
diff --git a/release-0.19.0/examples/k8petstore/k8petstore.sh b/release-0.19.0/examples/k8petstore/k8petstore.sh
deleted file mode 100755
index 5a5393435cf..00000000000
--- a/release-0.19.0/examples/k8petstore/k8petstore.sh
+++ /dev/null
@@ -1,287 +0,0 @@
-#!/bin/bash
-
-# Copyright 2015 The Kubernetes Authors All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-echo "WRITING KUBE FILES , will overwrite the jsons, then testing pods. is kube clean ready to go?"
-
-
-#Args below can be overriden when calling from cmd line.
-#Just send all the args in order.
-#for dev/test you can use:
-#kubectl=$GOPATH/src/github.com/GoogleCloudPlatform/kubernetes/cluster/kubectl.sh"
-kubectl="kubectl"
-VERSION="r.2.8.19"
-PUBLIC_IP="10.1.4.89" # ip which we use to access the Web server.
-_SECONDS=1000 # number of seconds to measure throughput.
-FE="1" # amount of Web server
-LG="1" # amount of load generators
-SLAVE="1" # amount of redis slaves
-TEST="1" # 0 = Dont run tests, 1 = Do run tests.
-NS="k8petstore" # namespace
-
-kubectl="${1:-$kubectl}"
-VERSION="${2:-$VERSION}"
-PUBLIC_IP="${3:-$PUBLIC_IP}" # ip which we use to access the Web server.
-_SECONDS="${4:-$_SECONDS}" # number of seconds to measure throughput.
-FE="${5:-$FE}" # amount of Web server
-LG="${6:-$LG}" # amount of load generators
-SLAVE="${7:-$SLAVE}" # amount of redis slaves
-TEST="${8:-$TEST}" # 0 = Dont run tests, 1 = Do run tests.
-NS="${9:-$NS}" # namespace
-
-echo "Running w/ args: kubectl $kubectl version $VERSION ip $PUBLIC_IP sec $_SECONDS fe $FE lg $LG slave $SLAVE test $TEST NAMESPACE $NS"
-function create {
-
-cat << EOF > fe-rc.json
-{
- "kind": "ReplicationController",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "fectrl",
- "labels": {"name": "frontend"}
- },
- "spec": {
- "replicas": $FE,
- "selector": {"name": "frontend"},
- "template": {
- "metadata": {
- "labels": {
- "name": "frontend",
- "uses": "redis-master"
- }
- },
- "spec": {
- "containers": [{
- "name": "frontend-go-restapi",
- "image": "jayunit100/k8-petstore-web-server:$VERSION"
- }]
- }
- }
- }
-}
-EOF
-
-cat << EOF > bps-load-gen-rc.json
-{
- "kind": "ReplicationController",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "bpsloadgenrc",
- "labels": {"name": "bpsLoadGenController"}
- },
- "spec": {
- "replicas": $LG,
- "selector": {"name": "bps"},
- "template": {
- "metadata": {
- "labels": {
- "name": "bps",
- "uses": "frontend"
- }
- },
- "spec": {
- "containers": [{
- "name": "bps",
- "image": "jayunit100/bigpetstore-load-generator",
- "command": ["sh","-c","/opt/PetStoreLoadGenerator-1.0/bin/PetStoreLoadGenerator http://\$FRONTEND_SERVICE_HOST:3000/rpush/k8petstore/ 4 4 1000 123"]
- }]
- }
- }
- }
-}
-EOF
-
-cat << EOF > fe-s.json
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "frontend",
- "labels": {
- "name": "frontend"
- }
- },
- "spec": {
- "ports": [{
- "port": 3000
- }],
- "publicIPs":["$PUBLIC_IP","10.1.4.89"],
- "selector": {
- "name": "frontend"
- }
- }
-}
-EOF
-
-cat << EOF > rm.json
-{
- "kind": "Pod",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "redismaster",
- "labels": {
- "name": "redis-master"
- }
- },
- "spec": {
- "containers": [{
- "name": "master",
- "image": "jayunit100/k8-petstore-redis-master:$VERSION",
- "ports": [{
- "containerPort": 6379
- }]
- }]
- }
-}
-EOF
-
-cat << EOF > rm-s.json
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "redismaster",
- "labels": {
- "name": "redis-master"
- }
- },
- "spec": {
- "ports": [{
- "port": 6379
- }],
- "selector": {
- "name": "redis-master"
- }
- }
-}
-EOF
-
-cat << EOF > rs-s.json
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "redisslave",
- "labels": {
- "name": "redisslave"
- }
- },
- "spec": {
- "ports": [{
- "port": 6379
- }],
- "selector": {
- "name": "redisslave"
- }
- }
-}
-EOF
-
-cat << EOF > slave-rc.json
-{
- "kind": "ReplicationController",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "redissc",
- "labels": {"name": "redisslave"}
- },
- "spec": {
- "replicas": $SLAVE,
- "selector": {"name": "redisslave"},
- "template": {
- "metadata": {
- "labels": {
- "name": "redisslave",
- "uses": "redis-master"
- }
- },
- "spec": {
- "containers": [{
- "name": "slave",
- "image": "jayunit100/k8-petstore-redis-slave:$VERSION",
- "ports": [{"containerPort": 6379}]
- }]
- }
- }
- }
-}
-EOF
-$kubectl create -f rm.json --namespace=$NS
-$kubectl create -f rm-s.json --namespace=$NS
-sleep 3 # precaution to prevent fe from spinning up too soon.
-$kubectl create -f slave-rc.json --namespace=$NS
-$kubectl create -f rs-s.json --namespace=$NS
-sleep 3 # see above comment.
-$kubectl create -f fe-rc.json --namespace=$NS
-$kubectl create -f fe-s.json --namespace=$NS
-$kubectl create -f bps-load-gen-rc.json --namespace=$NS
-}
-
-function pollfor {
- pass_http=0
-
- ### Test HTTP Server comes up.
- for i in `seq 1 150`;
- do
- ### Just testing that the front end comes up. Not sure how to test total entries etc... (yet)
- echo "Trying curl ... $PUBLIC_IP:3000 , attempt $i . expect a few failures while pulling images... "
- curl "$PUBLIC_IP:3000" > result
- cat result
- cat result | grep -q "k8-bps"
- if [ $? -eq 0 ]; then
- echo "TEST PASSED after $i tries !"
- i=1000
- break
- else
- echo "the above RESULT didn't contain target string for trial $i"
- fi
- sleep 3
- done
-
- if [ $i -eq 1000 ]; then
- pass_http=1
- fi
-
-}
-
-function tests {
- pass_load=0
-
- ### Print statistics of db size, every second, until $SECONDS are up.
- for i in `seq 1 $_SECONDS`;
- do
- echo "curl : $PUBLIC_IP:3000 , $i of $_SECONDS"
- curr_cnt="`curl "$PUBLIC_IP:3000/llen"`"
- ### Write CSV File of # of trials / total transcations.
- echo "$i $curr_cnt" >> result
- echo "total transactions so far : $curr_cnt"
- sleep 1
- done
-}
-
-create
-
-pollfor
-
-if [[ $pass_http -eq 1 ]]; then
- echo "Passed..."
-else
- exit 1
-fi
-
-if [[ $TEST -eq 1 ]]; then
- echo "running polling tests now"
- tests
-fi
diff --git a/release-0.19.0/examples/k8petstore/redis-master/Dockerfile b/release-0.19.0/examples/k8petstore/redis-master/Dockerfile
deleted file mode 100644
index bd3a67ced04..00000000000
--- a/release-0.19.0/examples/k8petstore/redis-master/Dockerfile
+++ /dev/null
@@ -1,17 +0,0 @@
-#
-# Redis Dockerfile
-#
-# https://github.com/dockerfile/redis
-#
-
-# Pull base image.
-#
-# Just a stub.
-
-FROM jayunit100/redis:2.8.19
-
-ADD etc_redis_redis.conf /etc/redis/redis.conf
-
-CMD ["redis-server", "/etc/redis/redis.conf"]
-# Expose ports.
-EXPOSE 6379
diff --git a/release-0.19.0/examples/k8petstore/redis-master/etc_redis_redis.conf b/release-0.19.0/examples/k8petstore/redis-master/etc_redis_redis.conf
deleted file mode 100644
index 38b8c701e7a..00000000000
--- a/release-0.19.0/examples/k8petstore/redis-master/etc_redis_redis.conf
+++ /dev/null
@@ -1,46 +0,0 @@
-pidfile /var/run/redis.pid
-port 6379
-tcp-backlog 511
-timeout 0
-tcp-keepalive 0
-loglevel verbose
-syslog-enabled yes
-databases 1
-save 1 1
-save 900 1
-save 300 10
-save 60 10000
-stop-writes-on-bgsave-error yes
-rdbcompression no
-rdbchecksum yes
-dbfilename dump.rdb
-dir /data
-slave-serve-stale-data no
-slave-read-only yes
-repl-disable-tcp-nodelay no
-slave-priority 100
-maxmemory
-appendonly yes
-appendfilename "appendonly.aof"
-appendfsync everysec
-no-appendfsync-on-rewrite no
-auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 1
-aof-load-truncated yes
-lua-time-limit 5000
-slowlog-log-slower-than 10000
-slowlog-max-len 128
-latency-monitor-threshold 0
-notify-keyspace-events "KEg$lshzxeA"
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-set-max-intset-entries 512
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-hll-sparse-max-bytes 3000
-activerehashing yes
-client-output-buffer-limit normal 0 0 0
-client-output-buffer-limit slave 256mb 64mb 60
-client-output-buffer-limit pubsub 32mb 8mb 60
-hz 10
-aof-rewrite-incremental-fsync yes
diff --git a/release-0.19.0/examples/k8petstore/redis-slave/Dockerfile b/release-0.19.0/examples/k8petstore/redis-slave/Dockerfile
deleted file mode 100644
index 67952daf116..00000000000
--- a/release-0.19.0/examples/k8petstore/redis-slave/Dockerfile
+++ /dev/null
@@ -1,15 +0,0 @@
-#
-# Redis Dockerfile
-#
-# https://github.com/dockerfile/redis
-#
-
-# Pull base image.
-#
-# Just a stub.
-
-FROM jayunit100/redis:2.8.19
-
-ADD run.sh /run.sh
-
-CMD /run.sh
diff --git a/release-0.19.0/examples/k8petstore/redis-slave/etc_redis_redis.conf b/release-0.19.0/examples/k8petstore/redis-slave/etc_redis_redis.conf
deleted file mode 100644
index 38b8c701e7a..00000000000
--- a/release-0.19.0/examples/k8petstore/redis-slave/etc_redis_redis.conf
+++ /dev/null
@@ -1,46 +0,0 @@
-pidfile /var/run/redis.pid
-port 6379
-tcp-backlog 511
-timeout 0
-tcp-keepalive 0
-loglevel verbose
-syslog-enabled yes
-databases 1
-save 1 1
-save 900 1
-save 300 10
-save 60 10000
-stop-writes-on-bgsave-error yes
-rdbcompression no
-rdbchecksum yes
-dbfilename dump.rdb
-dir /data
-slave-serve-stale-data no
-slave-read-only yes
-repl-disable-tcp-nodelay no
-slave-priority 100
-maxmemory
-appendonly yes
-appendfilename "appendonly.aof"
-appendfsync everysec
-no-appendfsync-on-rewrite no
-auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 1
-aof-load-truncated yes
-lua-time-limit 5000
-slowlog-log-slower-than 10000
-slowlog-max-len 128
-latency-monitor-threshold 0
-notify-keyspace-events "KEg$lshzxeA"
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-set-max-intset-entries 512
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-hll-sparse-max-bytes 3000
-activerehashing yes
-client-output-buffer-limit normal 0 0 0
-client-output-buffer-limit slave 256mb 64mb 60
-client-output-buffer-limit pubsub 32mb 8mb 60
-hz 10
-aof-rewrite-incremental-fsync yes
diff --git a/release-0.19.0/examples/k8petstore/redis-slave/run.sh b/release-0.19.0/examples/k8petstore/redis-slave/run.sh
deleted file mode 100755
index d42c8f261fa..00000000000
--- a/release-0.19.0/examples/k8petstore/redis-slave/run.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-# Copyright 2014 The Kubernetes Authors All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-echo "Note, if you get errors below indicate kubernetes env injection could be faliing..."
-echo "env vars ="
-env
-echo "CHECKING ENVS BEFORE STARTUP........"
-if [ ! "$REDISMASTER_SERVICE_HOST" ]; then
- echo "Need to set REDIS_MASTER_SERVICE_HOST" && exit 1;
-fi
-if [ ! "$REDISMASTER_PORT" ]; then
- echo "Need to set REDIS_MASTER_PORT" && exit 1;
-fi
-
-echo "ENV Vars look good, starting !"
-
-redis-server --slaveof ${REDISMASTER_SERVICE_HOST:-$SERVICE_HOST} $REDISMASTER_SERVICE_PORT
diff --git a/release-0.19.0/examples/k8petstore/redis/Dockerfile b/release-0.19.0/examples/k8petstore/redis/Dockerfile
deleted file mode 100644
index 41ac9dcdd44..00000000000
--- a/release-0.19.0/examples/k8petstore/redis/Dockerfile
+++ /dev/null
@@ -1,45 +0,0 @@
-#
-# Redis Dockerfile
-#
-# https://github.com/dockerfile/redis
-#
-
-# Pull base image.
-FROM ubuntu
-
-# Install Redis.
-RUN \
- cd /tmp && \
- # Modify to stay at this version rather than always update.
-
- #################################################################
- ###################### REDIS INSTALL ############################
- wget http://download.redis.io/releases/redis-2.8.19.tar.gz && \
- tar xvzf redis-2.8.19.tar.gz && \
- cd redis-2.8.19 && \
- ################################################################
- ################################################################
- make && \
- make install && \
- cp -f src/redis-sentinel /usr/local/bin && \
- mkdir -p /etc/redis && \
- cp -f *.conf /etc/redis && \
- rm -rf /tmp/redis-stable* && \
- sed -i 's/^\(bind .*\)$/# \1/' /etc/redis/redis.conf && \
- sed -i 's/^\(daemonize .*\)$/# \1/' /etc/redis/redis.conf && \
- sed -i 's/^\(dir .*\)$/# \1\ndir \/data/' /etc/redis/redis.conf && \
- sed -i 's/^\(logfile .*\)$/# \1/' /etc/redis/redis.conf
-
-# Define mountable directories.
-VOLUME ["/data"]
-
-# Define working directory.
-WORKDIR /data
-
-ADD etc_redis_redis.conf /etc/redis/redis.conf
-
-# Print redis configs and start.
-# CMD "redis-server /etc/redis/redis.conf"
-
-# Expose ports.
-EXPOSE 6379
diff --git a/release-0.19.0/examples/k8petstore/redis/etc_redis_redis.conf b/release-0.19.0/examples/k8petstore/redis/etc_redis_redis.conf
deleted file mode 100644
index 38b8c701e7a..00000000000
--- a/release-0.19.0/examples/k8petstore/redis/etc_redis_redis.conf
+++ /dev/null
@@ -1,46 +0,0 @@
-pidfile /var/run/redis.pid
-port 6379
-tcp-backlog 511
-timeout 0
-tcp-keepalive 0
-loglevel verbose
-syslog-enabled yes
-databases 1
-save 1 1
-save 900 1
-save 300 10
-save 60 10000
-stop-writes-on-bgsave-error yes
-rdbcompression no
-rdbchecksum yes
-dbfilename dump.rdb
-dir /data
-slave-serve-stale-data no
-slave-read-only yes
-repl-disable-tcp-nodelay no
-slave-priority 100
-maxmemory
-appendonly yes
-appendfilename "appendonly.aof"
-appendfsync everysec
-no-appendfsync-on-rewrite no
-auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 1
-aof-load-truncated yes
-lua-time-limit 5000
-slowlog-log-slower-than 10000
-slowlog-max-len 128
-latency-monitor-threshold 0
-notify-keyspace-events "KEg$lshzxeA"
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-set-max-intset-entries 512
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-hll-sparse-max-bytes 3000
-activerehashing yes
-client-output-buffer-limit normal 0 0 0
-client-output-buffer-limit slave 256mb 64mb 60
-client-output-buffer-limit pubsub 32mb 8mb 60
-hz 10
-aof-rewrite-incremental-fsync yes
diff --git a/release-0.19.0/examples/k8petstore/web-server/Dockerfile b/release-0.19.0/examples/k8petstore/web-server/Dockerfile
deleted file mode 100644
index fe98d81ce26..00000000000
--- a/release-0.19.0/examples/k8petstore/web-server/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM google/golang:latest
-
-# Add source to gopath. This is de facto required for go apps.
-ADD ./src /gopath/src/
-ADD ./static /tmp/static
-ADD ./test.sh /opt/test.sh
-RUN chmod 777 /opt/test.sh
-# $GOPATH/[src/a/b/c]
-# go build a/b/c
-# go run main
-
-# So that we can easily run and install
-WORKDIR /gopath/src/
-
-# Install the code (the executables are in the main dir) This will get the deps also.
-RUN go get main
-#RUN go build main
-
-# Expected that you will override this in production kubernetes.
-ENV STATIC_FILES /tmp/static
-CMD /gopath/bin/main
diff --git a/release-0.19.0/examples/k8petstore/web-server/PetStoreBook.go b/release-0.19.0/examples/k8petstore/web-server/PetStoreBook.go
deleted file mode 100644
index 1c81cef9537..00000000000
--- a/release-0.19.0/examples/k8petstore/web-server/PetStoreBook.go
+++ /dev/null
@@ -1,204 +0,0 @@
-/*
-Copyright 2014 The Kubernetes Authors All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package main
-
-import (
- "encoding/json"
- "fmt"
- "net/http"
- "os"
- "strings"
-
- "github.com/codegangsta/negroni"
- "github.com/gorilla/mux"
- "github.com/xyproto/simpleredis"
-)
-
-//return the path to static assets (i.e. index.html)
-func pathToStaticContents() string {
- var static_content = os.Getenv("STATIC_FILES")
- // Take a wild guess. This will work in dev environment.
- if static_content == "" {
- println("*********** WARNING: DIDNT FIND ENV VAR 'STATIC_FILES', guessing your running in dev.")
- static_content = "../../static/"
- } else {
- println("=========== Read ENV 'STATIC_FILES', path to assets : " + static_content)
- }
-
- //Die if the static files are missing.
- _, err := os.Stat(static_content)
- if err != nil {
- println("*********** os.Stat failed on " + static_content + " This means no static files are available. Dying...")
- os.Exit(2)
- }
- return static_content
-}
-
-func main() {
-
- var connection = os.Getenv("REDISMASTER_SERVICE_HOST") + ":" + os.Getenv("REDISMASTER_SERVICE_PORT")
-
- if connection == ":" {
- print("WARNING ::: If in kube, this is a failure: Missing env variable REDISMASTER_SERVICE_HOST")
- print("WARNING ::: Attempting to connect redis localhost.")
- connection = "127.0.0.1:6379"
- } else {
- print("Found redis master host " + os.Getenv("REDISMASTER_SERVICE_PORT"))
- connection = os.Getenv("REDISMASTER_SERVICE_HOST") + ":" + os.Getenv("REDISMASTER_SERVICE_PORT")
- }
-
- println("Now connecting to : " + connection)
- /**
- * Create a connection pool. (The pool pointer will otherwise
- * not be of any use.) See https://gist.github.com/jayunit100/1d00e6d343056401ef00
- */
- pool = simpleredis.NewConnectionPoolHost(connection)
-
- println("Connection pool established : " + connection)
-
- defer pool.Close()
-
- r := mux.NewRouter()
-
- println("Router created ")
-
- /**
- * Define a REST path.
- * - The parameters (key) can be accessed via mux.Vars.
- * - The Methods (GET) will be bound to a handler function.
- */
- r.Path("/info").Methods("GET").HandlerFunc(InfoHandler)
- r.Path("/lrange/{key}").Methods("GET").HandlerFunc(ListRangeHandler)
- r.Path("/rpush/{key}/{value}").Methods("GET").HandlerFunc(ListPushHandler)
- r.Path("/llen").Methods("GET").HandlerFunc(LLENHandler)
-
- //for dev environment, the site is one level up...
-
- r.PathPrefix("/").Handler(http.FileServer(http.Dir(pathToStaticContents())))
-
- r.Path("/env").Methods("GET").HandlerFunc(EnvHandler)
-
- list := simpleredis.NewList(pool, "k8petstore")
- HandleError(nil, list.Add("jayunit100"))
- HandleError(nil, list.Add("tstclaire"))
- HandleError(nil, list.Add("rsquared"))
-
- // Verify that this is 3 on startup.
- infoL := HandleError(pool.Get(0).Do("LLEN", "k8petstore")).(int64)
- fmt.Printf("\n=========== Starting DB has %d elements \n", infoL)
- if infoL < 3 {
- print("Not enough entries in DB. something is wrong w/ redis querying")
- print(infoL)
- panic("Failed ... ")
- }
-
- println("=========== Now launching negroni...this might take a second...")
- n := negroni.Classic()
- n.UseHandler(r)
- n.Run(":3000")
- println("Done ! Web app is now running.")
-
-}
-
-/**
-* the Pool will be populated on startup,
-* it will be an instance of a connection pool.
-* Hence, we reference its address rather than copying.
- */
-var pool *simpleredis.ConnectionPool
-
-/**
-* REST
-* input: key
-*
-* Writes all members to JSON.
- */
-func ListRangeHandler(rw http.ResponseWriter, req *http.Request) {
- println("ListRangeHandler")
-
- key := mux.Vars(req)["key"]
-
- list := simpleredis.NewList(pool, key)
-
- //members := HandleError(list.GetAll()).([]string)
- members := HandleError(list.GetLastN(4)).([]string)
-
- print(members)
- membersJSON := HandleError(json.MarshalIndent(members, "", " ")).([]byte)
-
- print("RETURN MEMBERS = " + string(membersJSON))
- rw.Write(membersJSON)
-}
-
-func LLENHandler(rw http.ResponseWriter, req *http.Request) {
- println("=========== LLEN HANDLER")
-
- infoL := HandleError(pool.Get(0).Do("LLEN", "k8petstore")).(int64)
- fmt.Printf("=========== LLEN is %d ", infoL)
- lengthJSON := HandleError(json.MarshalIndent(infoL, "", " ")).([]byte)
- fmt.Printf("================ LLEN json is %s", infoL)
-
- print("RETURN LEN = " + string(lengthJSON))
- rw.Write(lengthJSON)
-
-}
-
-func ListPushHandler(rw http.ResponseWriter, req *http.Request) {
- println("ListPushHandler")
-
- /**
- * Expect a key and value as input.
- *
- */
- key := mux.Vars(req)["key"]
- value := mux.Vars(req)["value"]
-
- println("New list " + key + " " + value)
- list := simpleredis.NewList(pool, key)
- HandleError(nil, list.Add(value))
- ListRangeHandler(rw, req)
-}
-
-func InfoHandler(rw http.ResponseWriter, req *http.Request) {
- println("InfoHandler")
-
- info := HandleError(pool.Get(0).Do("INFO")).([]byte)
- rw.Write(info)
-}
-
-func EnvHandler(rw http.ResponseWriter, req *http.Request) {
- println("EnvHandler")
-
- environment := make(map[string]string)
- for _, item := range os.Environ() {
- splits := strings.Split(item, "=")
- key := splits[0]
- val := strings.Join(splits[1:], "=")
- environment[key] = val
- }
-
- envJSON := HandleError(json.MarshalIndent(environment, "", " ")).([]byte)
- rw.Write(envJSON)
-}
-
-func HandleError(result interface{}, err error) (r interface{}) {
- if err != nil {
- print("ERROR : " + err.Error())
- //panic(err)
- }
- return result
-}
diff --git a/release-0.19.0/examples/k8petstore/web-server/dump.rdb b/release-0.19.0/examples/k8petstore/web-server/dump.rdb
deleted file mode 100644
index d1028f16798..00000000000
Binary files a/release-0.19.0/examples/k8petstore/web-server/dump.rdb and /dev/null differ
diff --git a/release-0.19.0/examples/k8petstore/web-server/static/histogram.js b/release-0.19.0/examples/k8petstore/web-server/static/histogram.js
deleted file mode 100644
index c9f20203e35..00000000000
--- a/release-0.19.0/examples/k8petstore/web-server/static/histogram.js
+++ /dev/null
@@ -1,39 +0,0 @@
-//var data = [4, 8, 15, 16, 23, 42];
-
-function defaults(){
-
- Chart.defaults.global.animation = false;
-
-}
-
-function f(data2) {
-
- defaults();
-
- // Get context with jQuery - using jQuery's .get() method.
- var ctx = $("#myChart").get(0).getContext("2d");
- ctx.width = $(window).width()*1.5;
- ctx.height = $(window).height()*.5;
-
- // This will get the first returned node in the jQuery collection.
- var myNewChart = new Chart(ctx);
-
- var data = {
- labels: Array.apply(null, Array(data2.length)).map(function (_, i) {return i;}),
- datasets: [
- {
- label: "My First dataset",
- fillColor: "rgba(220,220,220,0.2)",
- strokeColor: "rgba(220,220,220,1)",
- pointColor: "rgba(220,220,220,1)",
- pointStrokeColor: "#fff",
- pointHighlightFill: "#fff",
- pointHighlightStroke: "rgba(220,220,220,1)",
- data: data2
- }
- ]
- };
-
- var myLineChart = new Chart(ctx).Line(data);
-}
-
diff --git a/release-0.19.0/examples/k8petstore/web-server/static/index.html b/release-0.19.0/examples/k8petstore/web-server/static/index.html
deleted file mode 100644
index b184ab0e782..00000000000
--- a/release-0.19.0/examples/k8petstore/web-server/static/index.html
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
- ((( - PRODUCTION -))) Guestbook
-
-
-
-
-
-
-
-
-Waiting for database connection...This will get overwritten...
");
- });
-
- }
-
- // colors = purple, blue, red, green, yellow
- var colors = ["#549", "#18d", "#d31", "#2a4", "#db1"];
- var randomColor = colors[Math.floor(5 * Math.random())];
- (
- function setElementsColor(color) {
- headerTitleElement.css("color", color);
- })
-
- (randomColor);
-
- hostAddressElement.append(document.URL);
-
- // Poll every second.
- (function fetchGuestbook() {
-
- // Get JSON by running the query, and append
- $.getJSON("lrange/k8petstore").done(updateEntries).always(
- function() {
- setTimeout(fetchGuestbook, 2000);
- });
- })();
-
- (function fetchLength(trial) {
- $.getJSON("llen").done(
- function a(llen1){
- updateEntryCount(llen1, trial)
- }).always(
- function() {
- // This function is run every 2 seconds.
- setTimeout(
- function(){
- trial+=1 ;
- fetchLength(trial);
- f();
- }, 5000);
- }
- )
- })(0);
-});
-
diff --git a/release-0.19.0/examples/k8petstore/web-server/static/style.css b/release-0.19.0/examples/k8petstore/web-server/static/style.css
deleted file mode 100644
index 36852934520..00000000000
--- a/release-0.19.0/examples/k8petstore/web-server/static/style.css
+++ /dev/null
@@ -1,69 +0,0 @@
-body, input {
- color: #123;
- font-family: "Gill Sans", sans-serif;
-}
-
-div {
- overflow: hidden;
- padding: 1em 0;
- position: relative;
- text-align: center;
-}
-
-h1, h2, p, input, a {
- font-weight: 300;
- margin: 0;
-}
-
-h1 {
- color: #BDB76B;
- font-size: 3.5em;
-}
-
-h2 {
- color: #999;
-}
-
-form {
- margin: 0 auto;
- max-width: 50em;
- text-align: center;
-}
-
-input {
- border: 0;
- border-radius: 1000px;
- box-shadow: inset 0 0 0 2px #BDB76B;
- display: inline;
- font-size: 1.5em;
- margin-bottom: 1em;
- outline: none;
- padding: .5em 5%;
- width: 55%;
-}
-
-form a {
- background: #BDB76B;
- border: 0;
- border-radius: 1000px;
- color: #FFF;
- font-size: 1.25em;
- font-weight: 400;
- padding: .75em 2em;
- text-decoration: none;
- text-transform: uppercase;
- white-space: normal;
-}
-
-p {
- font-size: 1.5em;
- line-height: 1.5;
-}
-.chart div {
- font: 10px sans-serif;
- background-color: steelblue;
- text-align: right;
- padding: 3px;
- margin: 1px;
- color: white;
-}
diff --git a/release-0.19.0/examples/k8petstore/web-server/test.sh b/release-0.19.0/examples/k8petstore/web-server/test.sh
deleted file mode 100644
index 7b8b0eacd10..00000000000
--- a/release-0.19.0/examples/k8petstore/web-server/test.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Copyright 2015 The Kubernetes Authors All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-echo "start test of frontend"
-curl localhost:3000/llen
-curl localhost:3000/llen
-curl localhost:3000/llen
-curl localhost:3000/llen
-curl localhost:3000/llen
-curl localhost:3000/llen
-x=`curl localhost:3000/llen`
-echo "done testing frontend result = $x"
diff --git a/release-0.19.0/examples/kubectl-container/.gitignore b/release-0.19.0/examples/kubectl-container/.gitignore
deleted file mode 100644
index 50a4a06fd1d..00000000000
--- a/release-0.19.0/examples/kubectl-container/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-kubectl
-.tag
diff --git a/release-0.19.0/examples/kubectl-container/Dockerfile b/release-0.19.0/examples/kubectl-container/Dockerfile
deleted file mode 100644
index d27d3573644..00000000000
--- a/release-0.19.0/examples/kubectl-container/Dockerfile
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright 2014 Google Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-FROM scratch
-MAINTAINER Daniel Smith
-ADD kubectl kubectl
-ENTRYPOINT ["/kubectl"]
diff --git a/release-0.19.0/examples/kubectl-container/Makefile b/release-0.19.0/examples/kubectl-container/Makefile
deleted file mode 100644
index b13b09d2ec4..00000000000
--- a/release-0.19.0/examples/kubectl-container/Makefile
+++ /dev/null
@@ -1,30 +0,0 @@
-# Use:
-#
-# `make kubectl` will build kubectl.
-# `make tag` will suggest a tag.
-# `make container` will build a container-- you must supply a tag.
-# `make push` will push the container-- you must supply a tag.
-
-kubectl:
- KUBE_STATIC_OVERRIDES="kubectl" ../../hack/build-go.sh cmd/kubectl; cp ../../_output/local/bin/linux/amd64/kubectl .
-
-.tag: kubectl
- ./kubectl version -c | grep -o 'GitVersion:"[^"]*"' | cut -f 2 -d '"' > .tag
-
-tag: .tag
- @echo "Suggest using TAG=$(shell cat .tag)"
- @echo "$$ make container TAG=$(shell cat .tag)"
- @echo "or"
- @echo "$$ make push TAG=$(shell cat .tag)"
-
-container:
- $(if $(TAG),,$(error TAG is not defined. Use 'make tag' to see a suggestion))
- docker build -t gcr.io/google_containers/kubectl:$(TAG) .
-
-push: container
- $(if $(TAG),,$(error TAG is not defined. Use 'make tag' to see a suggestion))
- gcloud preview docker push gcr.io/google_containers/kubectl:$(TAG)
-
-clean:
- rm -f kubectl
- rm -f .tag
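A typical session with the Makefile above might look like the following sketch (the tag value is illustrative; use whatever `make tag` suggests for your build):

```shell
$ make kubectl                                # build a static kubectl binary
$ make tag                                    # print a suggested image tag
Suggest using TAG=v0.18.0-120-gaeb4ac5
$ make container TAG=v0.18.0-120-gaeb4ac5     # build the image with that tag
$ make push TAG=v0.18.0-120-gaeb4ac5          # push it to gcr.io/google_containers
```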
diff --git a/release-0.19.0/examples/kubectl-container/README.md b/release-0.19.0/examples/kubectl-container/README.md
deleted file mode 100644
index 697d1a9699f..00000000000
--- a/release-0.19.0/examples/kubectl-container/README.md
+++ /dev/null
@@ -1,24 +0,0 @@
-This directory contains a Dockerfile and Makefile for packaging up kubectl into
-a container.
-
-It's not currently automated as part of a release process, so for the moment
-this is an example of what to do if you want to package kubectl into a
-container for use in your pod.
-
-In the future, we may release consistently versioned groups of containers when
-we cut a release, in which case the source of gcr.io/google_containers/kubectl
-would become that automated process.
-
-```pod.json``` is provided as an example of packaging kubectl as a sidecar
-container, and to help you verify that kubectl works correctly in
-this configuration.
-
-A possible reason why you would want to do this is to use ```kubectl proxy``` as
-a drop-in replacement for the old no-auth KUBERNETES_RO service. The other
-containers in your pod will find the proxy apparently serving on localhost.
-
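As a quick way to verify the setup described above — a sketch, assuming the proxy listens on port 8001 as configured in `pod.json` below — create the pod and note how the busybox container reaches the API on localhost:

```shell
$ kubectl create -f pod.json
$ kubectl get pods kubectl-tester
# From inside the pod, the bb container exercises the proxy roughly like this:
# wget -O - http://127.0.0.1:8001/api/v1beta3/pods/
```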
diff --git a/release-0.19.0/examples/kubectl-container/pod.json b/release-0.19.0/examples/kubectl-container/pod.json
deleted file mode 100644
index 756090862f2..00000000000
--- a/release-0.19.0/examples/kubectl-container/pod.json
+++ /dev/null
@@ -1,54 +0,0 @@
-{
- "kind": "Pod",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "kubectl-tester"
- },
- "spec": {
- "containers": [
- {
- "name": "bb",
- "image": "gcr.io/google_containers/busybox",
- "command": [
- "sh", "-c", "sleep 5; wget -O - ${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1beta3/pods/; sleep 10000"
- ],
- "ports": [
- {
- "containerPort": 8080,
- "protocol": "TCP"
- }
- ],
- "env": [
- {
- "name": "KUBERNETES_RO_SERVICE_HOST",
- "value": "127.0.0.1"
- },
- {
- "name": "KUBERNETES_RO_SERVICE_PORT",
- "value": "8001"
- }
- ],
- "volumeMounts": [
- {
- "name": "test-volume",
- "mountPath": "/mount/test-volume"
- }
- ]
- },
- {
- "name": "kubectl",
- "image": "gcr.io/google_containers/kubectl:v0.18.0-120-gaeb4ac55ad12b1-dirty",
- "imagePullPolicy": "Always",
- "args": [
- "proxy", "-p", "8001"
- ]
- }
- ],
- "volumes": [
- {
- "name": "test-volume",
- "emptyDir": {}
- }
- ]
- }
-}
diff --git a/release-0.19.0/examples/kubernetes-namespaces/README.md b/release-0.19.0/examples/kubernetes-namespaces/README.md
deleted file mode 100644
index 8d2bae92696..00000000000
--- a/release-0.19.0/examples/kubernetes-namespaces/README.md
+++ /dev/null
@@ -1,255 +0,0 @@
-## Kubernetes Namespaces
-
-Kubernetes _[namespaces](../../docs/namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster.
-
-They do this by providing the following:
-
-1. A scope for [Names](../../docs/identifiers.md).
-2. A mechanism to attach authorization and policy to a subsection of the cluster.
-
-Use of multiple namespaces is optional.
-
-This example demonstrates how to use Kubernetes namespaces to subdivide your cluster.
-
-### Step Zero: Prerequisites
-
-This example assumes the following:
-
-1. You have an [existing Kubernetes cluster](../../docs/getting-started-guides).
-2. You have a basic understanding of Kubernetes _[pods](../../docs/pods.md)_, _[services](../../docs/services.md)_, and _[replication controllers](../../docs/replication-controller.md)_.
-
-### Step One: Understand the default namespace
-
-By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods,
-services, and replication controllers used by the cluster.
-
-Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following:
-
-```shell
-$ kubectl get namespaces
-NAME LABELS
-default
-```
-
-### Step Two: Create new namespaces
-
-For this exercise, we will create two additional Kubernetes namespaces to hold our content.
-
-Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases.
-
-The development team would like to maintain a space in the cluster where they can view the list of pods, services, and replication controllers
-they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
-are relaxed to enable agile development.
-
-The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
-pods, services, and replication controllers that run the production site.
-
-One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
-
-Let's create two new namespaces to hold our work.
-
-Use the file [`examples/kubernetes-namespaces/namespace-dev.json`](namespace-dev.json) which describes a development namespace:
-
-```js
-{
- "kind": "Namespace",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "development",
- "labels": {
- "name": "development"
- }
- }
-}
-```
-
-Create the development namespace using kubectl.
-
-```shell
-$ kubectl create -f examples/kubernetes-namespaces/namespace-dev.json
-```
-
-And then let's create the production namespace using kubectl.
-
-```shell
-$ kubectl create -f examples/kubernetes-namespaces/namespace-prod.json
-```
-
-To be sure things are right, let's list all of the namespaces in our cluster.
-
-```shell
-$ kubectl get namespaces
-NAME LABELS STATUS
-default Active
-development name=development Active
-production name=production Active
-```
-
-
-### Step Three: Create pods in each namespace
-
-A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster.
-
-Users interacting with one namespace do not see the content in another namespace.
-
-To demonstrate this, let's spin up a simple replication controller and pod in the development namespace.
-
-We first check the current context with `kubectl config view`:
-
-```shell
-apiVersion: v1
-clusters:
-- cluster:
- certificate-authority-data: REDACTED
- server: https://130.211.122.180
- name: lithe-cocoa-92103_kubernetes
-contexts:
-- context:
- cluster: lithe-cocoa-92103_kubernetes
- user: lithe-cocoa-92103_kubernetes
- name: lithe-cocoa-92103_kubernetes
-current-context: lithe-cocoa-92103_kubernetes
-kind: Config
-preferences: {}
-users:
-- name: lithe-cocoa-92103_kubernetes
- user:
- client-certificate-data: REDACTED
- client-key-data: REDACTED
- token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
-- name: lithe-cocoa-92103_kubernetes-basic-auth
- user:
- password: h5M0FtUUIflBSdI7
- username: admin
-```
-
-The next step is to define a context for the kubectl client to work in each namespace. The values of the "cluster" and "user" fields are copied from the current context.
-
-```shell
-$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
-$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
-```
-
-The above commands created two request contexts that you can switch between, depending on which
-namespace you wish to work in.
-
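If you prefer not to switch contexts at all, note that most kubectl commands also accept a `--namespace` flag; a quick sketch using the namespaces created above:

```shell
$ kubectl get pods --namespace=development
$ kubectl get pods --namespace=production
```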
-Let's switch to operate in the development namespace.
-
-```shell
-$ kubectl config use-context dev
-```
-
-You can verify your current context by doing the following:
-
-```shell
-$ kubectl config view
-apiVersion: v1
-clusters:
-- cluster:
- certificate-authority-data: REDACTED
- server: https://130.211.122.180
- name: lithe-cocoa-92103_kubernetes
-contexts:
-- context:
- cluster: lithe-cocoa-92103_kubernetes
- namespace: development
- user: lithe-cocoa-92103_kubernetes
- name: dev
-- context:
- cluster: lithe-cocoa-92103_kubernetes
- user: lithe-cocoa-92103_kubernetes
- name: lithe-cocoa-92103_kubernetes
-- context:
- cluster: lithe-cocoa-92103_kubernetes
- namespace: production
- user: lithe-cocoa-92103_kubernetes
- name: prod
-current-context: dev
-kind: Config
-preferences: {}
-users:
-- name: lithe-cocoa-92103_kubernetes
- user:
- client-certificate-data: REDACTED
- client-key-data: REDACTED
- token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
-- name: lithe-cocoa-92103_kubernetes-basic-auth
- user:
- password: h5M0FtUUIflBSdI7
- username: admin
-```
-
-At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
-
-Let's create some content.
-
-```shell
-$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
-```
-
-We have just created a replication controller with a replica count of 2, running pods named snowflake with a basic container that simply serves the hostname.
-
-```shell
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-snowflake snowflake kubernetes/serve_hostname run=snowflake 2
-
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-snowflake-mbrfi 10.244.2.4 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour
- snowflake kubernetes/serve_hostname Running About an hour
-snowflake-p78ev 10.244.2.5 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour
- snowflake kubernetes/serve_hostname Running About an hour
-```
-
-This is great: developers are able to do what they want without having to worry about affecting content in the production namespace.
-
-Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
-
-```shell
-$ kubectl config use-context prod
-```
-
-The production namespace should be empty.
-
-```shell
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-```
-
-Production likes to run cattle, so let's create some cattle pods.
-
-```shell
-$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
-
-$ kubectl get rc
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-cattle cattle kubernetes/serve_hostname run=cattle 5
-
-$ kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-cattle-1kyvj 10.244.0.4 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
- cattle kubernetes/serve_hostname Running About an hour
-cattle-kobrk 10.244.1.4 kubernetes-minion-cfs6/104.154.61.231 run=cattle Running About an hour
- cattle kubernetes/serve_hostname Running About an hour
-cattle-l1v9t 10.244.0.5 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
- cattle kubernetes/serve_hostname Running About an hour
-cattle-ne2sj 10.244.3.7 kubernetes-minion-x8gx/104.154.47.83 run=cattle Running About an hour
- cattle kubernetes/serve_hostname Running About an hour
-cattle-qrk4x 10.244.0.6 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
- cattle kubernetes/serve_hostname
-```
-
-At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
-
-As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
-authorization rules for each namespace.
-
diff --git a/release-0.19.0/examples/kubernetes-namespaces/namespace-dev.json b/release-0.19.0/examples/kubernetes-namespaces/namespace-dev.json
deleted file mode 100644
index 2561e92a38f..00000000000
--- a/release-0.19.0/examples/kubernetes-namespaces/namespace-dev.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "kind": "Namespace",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "development",
- "labels": {
- "name": "development"
- }
- }
-}
diff --git a/release-0.19.0/examples/kubernetes-namespaces/namespace-prod.json b/release-0.19.0/examples/kubernetes-namespaces/namespace-prod.json
deleted file mode 100644
index 149183c94ab..00000000000
--- a/release-0.19.0/examples/kubernetes-namespaces/namespace-prod.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "kind": "Namespace",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "production",
- "labels": {
- "name": "production"
- }
- }
-}
diff --git a/release-0.19.0/examples/limitrange/README.md b/release-0.19.0/examples/limitrange/README.md
deleted file mode 100644
index ea330d924ad..00000000000
--- a/release-0.19.0/examples/limitrange/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Please refer to this [doc](https://github.com/GoogleCloudPlatform/kubernetes/blob/620af168920b773ade28e27211ad684903a1db21/docs/design/admission_control_limit_range.md#kubectl).
-
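In brief — a sketch, assuming the LimitRange admission controller is enabled in your cluster — the files in this directory are meant to be exercised like this:

```shell
$ kubectl create -f examples/limitrange/limit-range.json
$ kubectl create -f examples/limitrange/valid-pod.json     # fits within the configured limits
$ kubectl create -f examples/limitrange/invalid-pod.json   # should be rejected: below the 6Mi/250m minimums
```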
diff --git a/release-0.19.0/examples/limitrange/invalid-pod.json b/release-0.19.0/examples/limitrange/invalid-pod.json
deleted file mode 100644
index 3c622859f81..00000000000
--- a/release-0.19.0/examples/limitrange/invalid-pod.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
- "apiVersion":"v1beta3",
- "kind": "Pod",
- "metadata": {
- "name": "invalid-pod",
- "labels": {
- "name": "invalid-pod"
- }
- },
- "spec": {
- "containers": [{
- "name": "kubernetes-serve-hostname",
- "image": "gcr.io/google_containers/serve_hostname",
- "resources": {
- "limits": {
- "cpu": "10m",
- "memory": "5Mi"
- }
- }
- }]
- }
-}
diff --git a/release-0.19.0/examples/limitrange/limit-range.json b/release-0.19.0/examples/limitrange/limit-range.json
deleted file mode 100644
index c27e9f14fe1..00000000000
--- a/release-0.19.0/examples/limitrange/limit-range.json
+++ /dev/null
@@ -1,37 +0,0 @@
-{
- "apiVersion": "v1beta3",
- "kind": "LimitRange",
- "metadata": {
- "name": "limits"
- },
- "spec": {
- "limits": [
- {
- "type": "Pod",
- "max": {
- "memory": "1Gi",
- "cpu": "2"
- },
- "min": {
- "memory": "6Mi",
- "cpu": "250m"
- }
- },
- {
- "type": "Container",
- "max": {
- "memory": "1Gi",
- "cpu": "2"
- },
- "min": {
- "memory": "6Mi",
- "cpu": "250m"
- },
- "default": {
- "memory": "6Mi",
- "cpu": "250m"
- }
- }
- ]
- }
-}
diff --git a/release-0.19.0/examples/limitrange/valid-pod.json b/release-0.19.0/examples/limitrange/valid-pod.json
deleted file mode 100644
index 350a844d2ca..00000000000
--- a/release-0.19.0/examples/limitrange/valid-pod.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
- "apiVersion":"v1beta3",
- "kind": "Pod",
- "metadata": {
- "name": "valid-pod",
- "labels": {
- "name": "valid-pod"
- }
- },
- "spec": {
- "containers": [{
- "name": "kubernetes-serve-hostname",
- "image": "gcr.io/google_containers/serve_hostname",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "6Mi"
- }
- }
- }]
- }
-}
diff --git a/release-0.19.0/examples/liveness/README.md b/release-0.19.0/examples/liveness/README.md
deleted file mode 100644
index 16689ac0365..00000000000
--- a/release-0.19.0/examples/liveness/README.md
+++ /dev/null
@@ -1,82 +0,0 @@
-## Overview
-This example shows two types of pod health checks: HTTP checks and container execution checks.
-
-The [exec-liveness.yaml](./exec-liveness.yaml) demonstrates the container execution check.
-```
- livenessProbe:
- exec:
- command:
- - cat
- - /tmp/health
- initialDelaySeconds: 15
- timeoutSeconds: 1
-```
-The kubelet executes the command `cat /tmp/health` in the container and reports a failure if the command returns a non-zero exit code.
-
-Note that the container removes the `/tmp/health` file after 10 seconds,
-```
-echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
-```
-so when the kubelet executes the health check 15 seconds (defined by `initialDelaySeconds`) after the container starts, the check will fail.
-
-
-The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
-```
- livenessProbe:
- httpGet:
- path: /healthz
- port: 8080
- initialDelaySeconds: 15
- timeoutSeconds: 1
-```
-The kubelet sends an HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see that the server starts responding with error code 500 after 10 seconds, so the check fails.
-
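You can also poke the `/healthz` endpoint by hand — a sketch, assuming you substitute the pod IP reported by `kubectl get pods` (10.244.0.8 in the output below) and can reach pod IPs from wherever you run curl:

```shell
$ curl -i http://10.244.0.8:8080/healthz
# Within the first ~10 seconds the server answers 200 with "ok";
# after that it answers 500 with "error: <elapsed seconds>".
```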
-This [guide](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/k8s201.md#health-checking) has more information on health checks.
-
-## Get your hands dirty
-To show the health check is actually working, first create the pods:
-```
-# kubectl create -f exec-liveness.yaml
-# cluster/kubectl.sh create -f http-liveness.yaml
-```
-
-Check the status of the pods once they are created:
-```
-# kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-liveness-exec 10.244.3.7 kubernetes-minion-f08h/130.211.122.180 test=liveness Running 3 seconds
- liveness gcr.io/google_containers/busybox Running 2 seconds
-liveness-http 10.244.0.8 kubernetes-minion-0bks/104.197.10.10 test=liveness Running 3 seconds
- liveness gcr.io/google_containers/liveness Running 2 seconds
-```
-
-Check the status again half a minute later, and you will see the termination messages:
-```
-# kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-liveness-exec 10.244.3.7 kubernetes-minion-f08h/130.211.122.180 test=liveness Running 34 seconds
- liveness gcr.io/google_containers/busybox Running 3 seconds last termination: exit code 137
-liveness-http 10.244.0.8 kubernetes-minion-0bks/104.197.10.10 test=liveness Running 34 seconds
- liveness gcr.io/google_containers/liveness Running 13 seconds last termination: exit code 2
-```
-The termination messages indicate that the liveness probes have failed, and the containers have been killed and recreated.
-
-You can also see the container restart count being incremented by running `kubectl describe`.
-```
-# kubectl describe pods liveness-exec | grep "Restart Count"
-Restart Count: 8
-```
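To watch that count climb as the probe keeps failing, you could poll it in a loop (a throwaway sketch):

```shell
while true; do
  kubectl describe pods liveness-exec | grep "Restart Count"
  sleep 30
done
```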
-
-You will also see the killing and creating events at the bottom of the *kubectl describe* output:
-```
- Thu, 14 May 2015 15:23:25 -0700 Thu, 14 May 2015 15:23:25 -0700 1 {kubelet kubernetes-minion-0uzf} spec.containers{liveness} killing Killing 88c8b717d8b0940d52743c086b43c3fad0d725a36300b9b5f0ad3a1c8cef2d3e
- Thu, 14 May 2015 15:23:25 -0700 Thu, 14 May 2015 15:23:25 -0700 1 {kubelet kubernetes-minion-0uzf} spec.containers{liveness} created Created with docker id b254a9810073f9ee9075bb38ac29a4b063647176ad9eabd9184078ca98a60062
- Thu, 14 May 2015 15:23:25 -0700 Thu, 14 May 2015 15:23:25 -0700 1 {kubelet kubernetes-minion-0uzf} spec.containers{liveness} started Started with docker id b254a9810073f9ee9075bb38ac29a4b063647176ad9eabd9184078ca98a60062
- ...
-```
-
diff --git a/release-0.19.0/examples/liveness/exec-liveness.yaml b/release-0.19.0/examples/liveness/exec-liveness.yaml
deleted file mode 100644
index b72dac0f595..00000000000
--- a/release-0.19.0/examples/liveness/exec-liveness.yaml
+++ /dev/null
@@ -1,21 +0,0 @@
-apiVersion: v1beta3
-kind: Pod
-metadata:
- labels:
- test: liveness
- name: liveness-exec
-spec:
- containers:
- - args:
- - /bin/sh
- - -c
- - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
- image: gcr.io/google_containers/busybox
- livenessProbe:
- exec:
- command:
- - cat
- - /tmp/health
- initialDelaySeconds: 15
- timeoutSeconds: 1
- name: liveness
diff --git a/release-0.19.0/examples/liveness/http-liveness.yaml b/release-0.19.0/examples/liveness/http-liveness.yaml
deleted file mode 100644
index 36d3d70caf0..00000000000
--- a/release-0.19.0/examples/liveness/http-liveness.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-apiVersion: v1beta3
-kind: Pod
-metadata:
- labels:
- test: liveness
- name: liveness-http
-spec:
- containers:
- - args:
- - /server
- image: gcr.io/google_containers/liveness
- livenessProbe:
- httpGet:
- path: /healthz
- port: 8080
- initialDelaySeconds: 15
- timeoutSeconds: 1
- name: liveness
diff --git a/release-0.19.0/examples/liveness/image/Dockerfile b/release-0.19.0/examples/liveness/image/Dockerfile
deleted file mode 100644
index d057ecd309e..00000000000
--- a/release-0.19.0/examples/liveness/image/Dockerfile
+++ /dev/null
@@ -1,4 +0,0 @@
-FROM scratch
-
-ADD server /server
-
diff --git a/release-0.19.0/examples/liveness/image/Makefile b/release-0.19.0/examples/liveness/image/Makefile
deleted file mode 100644
index c123ac6df9d..00000000000
--- a/release-0.19.0/examples/liveness/image/Makefile
+++ /dev/null
@@ -1,13 +0,0 @@
-all: push
-
-server: server.go
- CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' ./server.go
-
-container: server
- docker build -t gcr.io/google_containers/liveness .
-
-push: container
- gcloud preview docker push gcr.io/google_containers/liveness
-
-clean:
- rm -f server
diff --git a/release-0.19.0/examples/liveness/image/server.go b/release-0.19.0/examples/liveness/image/server.go
deleted file mode 100644
index 26c337e767b..00000000000
--- a/release-0.19.0/examples/liveness/image/server.go
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
-Copyright 2014 The Kubernetes Authors All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-// A simple server that is alive for 10 seconds, then reports unhealthy for
-// the rest of its (hopefully) short existence.
-package main
-
-import (
- "fmt"
- "log"
- "net/http"
- "time"
-)
-
-func main() {
- started := time.Now()
- http.HandleFunc("/started", func(w http.ResponseWriter, r *http.Request) {
- w.WriteHeader(200)
- data := (time.Now().Sub(started)).String()
- w.Write([]byte(data))
- })
- http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
- duration := time.Now().Sub(started)
- if duration.Seconds() > 10 {
- w.WriteHeader(500)
- w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
- } else {
- w.WriteHeader(200)
- w.Write([]byte("ok"))
- }
- })
- log.Fatal(http.ListenAndServe(":8080", nil))
-}
diff --git a/release-0.19.0/examples/logging-demo/Makefile b/release-0.19.0/examples/logging-demo/Makefile
deleted file mode 100644
index c847f9d6b35..00000000000
--- a/release-0.19.0/examples/logging-demo/Makefile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Makefile for launching synthetic logging sources (any platform)
-# and for reporting the forwarding rules for the
-# Elasticsearch and Kibana pods for the GCE platform.
-
-
-.PHONY: up down logger-up logger-down logger10-up logger10-down get net
-
-KUBECTL=../../cluster/kubectl.sh
-
-up: logger-up logger10-up
-
-down: logger-down logger10-down
-
-
-logger-up:
- -${KUBECTL} create -f synthetic_0_25lps.yaml
-
-logger-down:
- -${KUBECTL} delete pods synthetic-logger-0.25lps-pod
-
-logger10-up:
- -${KUBECTL} create -f synthetic_10lps.yaml
-
-logger10-down:
- -${KUBECTL} delete pods synthetic-logger-10lps-pod
-
-get:
- ${KUBECTL} get pods
- ${KUBECTL} get replicationControllers
- ${KUBECTL} get services
-
-net:
- ${KUBECTL} get services elasticsearch-logging -o json
- ${KUBECTL} get services kibana-logging -o json
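A typical session with this Makefile — a short sketch; the README below shows the full output — looks like:

```shell
$ make up      # start both synthetic loggers
$ make get     # list pods, replication controllers, and services
$ make net     # dump the elasticsearch-logging and kibana-logging services as JSON
$ make down    # remove the synthetic loggers again
```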
diff --git a/release-0.19.0/examples/logging-demo/README.md b/release-0.19.0/examples/logging-demo/README.md
deleted file mode 100644
index 159eb353589..00000000000
--- a/release-0.19.0/examples/logging-demo/README.md
+++ /dev/null
@@ -1,248 +0,0 @@
-# Elasticsearch/Kibana Logging Demonstration
-This directory contains two [pod](../../docs/pods.md) specifications which can be used as synthetic
-logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
-describes a pod that just emits a log message once every 4 seconds:
-```
-# This pod specification creates an instance of a synthetic logger. The logger
-# is simply a program that writes out the hostname of the pod, a count which increments
-# by one on each iteration (to help notice missing log entries) and the date using
-# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
-# of 0.25 lines per second. The shellscript program is given directly to bash as -c argument
-# and could have been written out as:
-# i="0"
-# while true
-# do
-# echo -n "`hostname`: $i: "
-# date --rfc-3339 ns
-# sleep 4
-# i=$[$i+1]
-# done
-apiVersion: v1beta3
-kind: Pod
-metadata:
- labels:
- name: synth-logging-source
- name: synthetic-logger-0.25lps-pod
-spec:
- containers:
- - args:
- - bash
- - -c
- - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
- 4; i=$[$i+1]; done'
- image: ubuntu:14.04
- name: synth-lgr
-```
-
-The other YAML file [synthetic_10lps.yaml](synthetic_10lps.yaml) specifies a similar synthetic logger that emits 10 log messages every second. To run both synthetic loggers:
-```
-$ make up
-../../../kubectl.sh create -f synthetic_0_25lps.yaml
-Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_0_25lps.yaml
-synthetic-logger-0.25lps-pod
-../../../kubectl.sh create -f synthetic_10lps.yaml
-Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_10lps.yaml
-synthetic-logger-10lps-pod
-
-```
-
-Visiting the Kibana dashboard should make it clear that logs are being collected from the two synthetic loggers (a screenshot is included in this directory as [synth-logger.png](synth-logger.png)).
-
-
-You can report the running pods, [replication controllers](../../docs/replication-controller.md), and [services](../../docs/services.md) with another Makefile rule:
-```
-$ make get
-../../../kubectl.sh get pods
-Running: ../../../../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get pods
-POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
-elasticsearch-logging-f0smz 10.244.2.3 kubernetes-minion-ilqx/104.197.8.214 kubernetes.io/cluster-service=true,name=elasticsearch-logging Running 5 hours
- elasticsearch-logging gcr.io/google_containers/elasticsearch:1.0 Running 5 hours
-etcd-server-kubernetes-master kubernetes-master/ Running 5 hours
- etcd-container gcr.io/google_containers/etcd:2.0.9 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-7s1y 10.244.0.2 kubernetes-minion-7s1y/23.236.54.97 Running 5 hours
- fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-cfs6 10.244.1.2 kubernetes-minion-cfs6/104.154.61.231 Running 5 hours
- fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-ilqx 10.244.2.2 kubernetes-minion-ilqx/104.197.8.214 Running 5 hours
- fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-fluentd-elasticsearch-kubernetes-minion-x8gx 10.244.3.2 kubernetes-minion-x8gx/104.154.47.83 Running 5 hours
- fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours
-kibana-logging-cwe0b 10.244.1.3 kubernetes-minion-cfs6/104.154.61.231 kubernetes.io/cluster-service=true,name=kibana-logging Running 5 hours
- kibana-logging gcr.io/google_containers/kibana:1.2 Running 5 hours
-kube-apiserver-kubernetes-master kubernetes-master/ Running 5 hours
- kube-apiserver gcr.io/google_containers/kube-apiserver:f0c332fc2582927ec27d24965572d4b0 Running 5 hours
-kube-controller-manager-kubernetes-master kubernetes-master/ Running 5 hours
- kube-controller-manager gcr.io/google_containers/kube-controller-manager:6729154dfd4e2a19752bdf9ceff8464c Running 5 hours
-kube-dns-swd4n 10.244.3.5 kubernetes-minion-x8gx/104.154.47.83 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 5 hours
- kube2sky gcr.io/google_containers/kube2sky:1.2 Running 5 hours
- etcd quay.io/coreos/etcd:v2.0.3 Running 5 hours
- skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 5 hours
-kube-scheduler-kubernetes-master kubernetes-master/ Running 5 hours
- kube-scheduler gcr.io/google_containers/kube-scheduler:ec9d2092f754211cc5ab3a5162c05fc1 Running 5 hours
-monitoring-heapster-controller-zpjj1 10.244.3.3 kubernetes-minion-x8gx/104.154.47.83 kubernetes.io/cluster-service=true,name=heapster Running 5 hours
- heapster gcr.io/google_containers/heapster:v0.10.0 Running 5 hours
-monitoring-influx-grafana-controller-dqan4 10.244.3.4 kubernetes-minion-x8gx/104.154.47.83 kubernetes.io/cluster-service=true,name=influxGrafana Running 5 hours
- grafana gcr.io/google_containers/heapster_grafana:v0.6 Running 5 hours
- influxdb gcr.io/google_containers/heapster_influxdb:v0.3 Running 5 hours
-synthetic-logger-0.25lps-pod 10.244.0.7 kubernetes-minion-7s1y/23.236.54.97 name=synth-logging-source Running 19 minutes
- synth-lgr ubuntu:14.04 Running 19 minutes
-synthetic-logger-10lps-pod 10.244.3.14 kubernetes-minion-x8gx/104.154.47.83 name=synth-logging-source Running 19 minutes
- synth-lgr ubuntu:14.04 Running 19 minutes
-../../_output/local/bin/linux/amd64/kubectl get replicationControllers
-CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
-elasticsearch-logging elasticsearch-logging gcr.io/google_containers/elasticsearch:1.0 name=elasticsearch-logging 1
-kibana-logging kibana-logging gcr.io/google_containers/kibana:1.2 name=kibana-logging 1
-kube-dns etcd quay.io/coreos/etcd:v2.0.3 k8s-app=kube-dns 1
- kube2sky gcr.io/google_containers/kube2sky:1.2
- skydns gcr.io/google_containers/skydns:2015-03-11-001
-monitoring-heapster-controller heapster gcr.io/google_containers/heapster:v0.10.0 name=heapster 1
-monitoring-influx-grafana-controller influxdb gcr.io/google_containers/heapster_influxdb:v0.3 name=influxGrafana 1
- grafana gcr.io/google_containers/heapster_grafana:v0.6
-../../_output/local/bin/linux/amd64/kubectl get services
-NAME LABELS SELECTOR IP(S) PORT(S)
-elasticsearch-logging kubernetes.io/cluster-service=true,name=elasticsearch-logging name=elasticsearch-logging 10.0.251.221 9200/TCP
-kibana-logging kubernetes.io/cluster-service=true,name=kibana-logging name=kibana-logging 10.0.188.118 5601/TCP
-kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 10.0.0.10 53/UDP
-kubernetes component=apiserver,provider=kubernetes 10.0.0.2 443/TCP
-monitoring-grafana kubernetes.io/cluster-service=true,name=grafana name=influxGrafana 10.0.254.202 80/TCP
-monitoring-heapster kubernetes.io/cluster-service=true,name=heapster name=heapster 10.0.19.214 80/TCP
-monitoring-influxdb name=influxGrafana name=influxGrafana 10.0.198.71 80/TCP
-monitoring-influxdb-ui name=influxGrafana name=influxGrafana 10.0.109.66 80/TCP
-```
-
-The `net` rule in the Makefile will report information about the Elasticsearch and Kibana services including the public IP addresses of each service.
-```
-$ make net
-../../../kubectl.sh get services elasticsearch-logging -o json
-current-context: "lithe-cocoa-92103_kubernetes"
-Running: ../../_output/local/bin/linux/amd64/kubectl get services elasticsearch-logging -o json
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "elasticsearch-logging",
- "namespace": "default",
- "selfLink": "/api/v1beta3/namespaces/default/services/elasticsearch-logging",
- "uid": "9dc7290f-f358-11e4-a58e-42010af09a93",
- "resourceVersion": "28",
- "creationTimestamp": "2015-05-05T18:57:45Z",
- "labels": {
- "kubernetes.io/cluster-service": "true",
- "name": "elasticsearch-logging"
- }
- },
- "spec": {
- "ports": [
- {
- "name": "",
- "protocol": "TCP",
- "port": 9200,
- "targetPort": "es-port"
- }
- ],
- "selector": {
- "name": "elasticsearch-logging"
- },
- "portalIP": "10.0.251.221",
- "sessionAffinity": "None"
- },
- "status": {}
-}
-current-context: "lithe-cocoa-92103_kubernetes"
-Running: ../../_output/local/bin/linux/amd64/kubectl get services kibana-logging -o json
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "kibana-logging",
- "namespace": "default",
- "selfLink": "/api/v1beta3/namespaces/default/services/kibana-logging",
- "uid": "9dc6f856-f358-11e4-a58e-42010af09a93",
- "resourceVersion": "31",
- "creationTimestamp": "2015-05-05T18:57:45Z",
- "labels": {
- "kubernetes.io/cluster-service": "true",
- "name": "kibana-logging"
- }
- },
- "spec": {
- "ports": [
- {
- "name": "",
- "protocol": "TCP",
- "port": 5601,
- "targetPort": "kibana-port"
- }
- ],
- "selector": {
- "name": "kibana-logging"
- },
- "portalIP": "10.0.188.118",
- "sessionAffinity": "None"
- },
- "status": {}
-}
-```
-To find the URLs for accessing the Elasticsearch and Kibana viewers, run:
-```
-$ kubectl cluster-info
-Kubernetes master is running at https://130.211.122.180
-elasticsearch-logging is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging
-kibana-logging is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kibana-logging
-kube-dns is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kube-dns
-grafana is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/monitoring-grafana
-heapster is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/monitoring-heapster
-```
-
-To find the user name and password for accessing these URLs, run:
-```
-$ kubectl config view
-apiVersion: v1
-clusters:
-- cluster:
- certificate-authority-data: REDACTED
- server: https://130.211.122.180
- name: lithe-cocoa-92103_kubernetes
-contexts:
-- context:
- cluster: lithe-cocoa-92103_kubernetes
- user: lithe-cocoa-92103_kubernetes
- name: lithe-cocoa-92103_kubernetes
-current-context: lithe-cocoa-92103_kubernetes
-kind: Config
-preferences: {}
-users:
-- name: lithe-cocoa-92103_kubernetes
- user:
- client-certificate-data: REDACTED
- client-key-data: REDACTED
- token: 65rZW78y8HxmXXtSXuUw9DbP4FLjHi4b
-- name: lithe-cocoa-92103_kubernetes-basic-auth
- user:
- password: h5M0FtVXXflBSdI7
- username: admin
-```
-
-Access the Elasticsearch service at `https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging` with the user name 'admin' and password 'h5M0FtVXXflBSdI7', and you should see a response like this:
-```
-{
- "status" : 200,
- "name" : "Major Mapleleaf",
- "cluster_name" : "kubernetes_logging",
- "version" : {
- "number" : "1.4.4",
- "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
- "build_timestamp" : "2015-02-19T13:05:36Z",
- "build_snapshot" : false,
- "lucene_version" : "4.10.3"
- },
- "tagline" : "You Know, for Search"
-}
-```
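For command-line access, a curl invocation along these lines should return the same status document — a sketch, assuming the basic-auth credentials shown above and `-k` if your cluster presents a self-signed certificate:

```shell
$ curl -k -u admin:h5M0FtVXXflBSdI7 \
    https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/
```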
-Visiting the URL `https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kibana-logging` should show the Kibana viewer for the logging information stored in the Elasticsearch service.
-
diff --git a/release-0.19.0/examples/logging-demo/synth-logger.png b/release-0.19.0/examples/logging-demo/synth-logger.png
deleted file mode 100644
index bd19ea3ee41..00000000000
Binary files a/release-0.19.0/examples/logging-demo/synth-logger.png and /dev/null differ
diff --git a/release-0.19.0/examples/logging-demo/synthetic_0_25lps.yaml b/release-0.19.0/examples/logging-demo/synthetic_0_25lps.yaml
deleted file mode 100644
index 5ff01e52874..00000000000
--- a/release-0.19.0/examples/logging-demo/synthetic_0_25lps.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-# This pod specification creates an instance of a synthetic logger. The logger
-# is simply a program that writes out the hostname of the pod, a count which increments
-# by one on each iteration (to help notice missing log entries) and the date using
-# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
-# of 0.25 lines per second. The shellscript program is given directly to bash as -c argument
-# and could have been written out as:
-# i="0"
-# while true
-# do
-# echo -n "`hostname`: $i: "
-# date --rfc-3339 ns
-# sleep 4
-# i=$[$i+1]
-# done
-apiVersion: v1beta3
-kind: Pod
-metadata:
- labels:
- name: synth-logging-source
- name: synthetic-logger-0.25lps-pod
-spec:
- containers:
- - args:
- - bash
- - -c
- - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
- 4; i=$[$i+1]; done'
- image: ubuntu:14.04
- name: synth-lgr
diff --git a/release-0.19.0/examples/logging-demo/synthetic_10lps.yaml b/release-0.19.0/examples/logging-demo/synthetic_10lps.yaml
deleted file mode 100644
index 35f305d260f..00000000000
--- a/release-0.19.0/examples/logging-demo/synthetic_10lps.yaml
+++ /dev/null
@@ -1,30 +0,0 @@
-# This pod specification creates an instance of a synthetic logger. The logger
-# is simply a program that writes out the hostname of the pod, a count which increments
-# by one on each iteration (to help notice missing log entries) and the date using
-# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
-# of 10 lines per second. The shellscript program is given directly to bash as -c argument
-# and could have been written out as:
-# i="0"
-# while true
-# do
-# echo -n "`hostname`: $i: "
-# date --rfc-3339 ns
-# sleep 0.1
-# i=$[$i+1]
-# done
-apiVersion: v1beta3
-kind: Pod
-metadata:
- creationTimestamp: null
- labels:
- name: synth-logging-source
- name: synthetic-logger-10lps-pod
-spec:
- containers:
- - args:
- - bash
- - -c
- - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
- 0.1; i=$[$i+1]; done'
- image: ubuntu:14.04
- name: synth-lgr
diff --git a/release-0.19.0/examples/meteor/README.md b/release-0.19.0/examples/meteor/README.md
deleted file mode 100644
index 6641943bdfe..00000000000
--- a/release-0.19.0/examples/meteor/README.md
+++ /dev/null
@@ -1,171 +0,0 @@
-Meteor on Kubernetes
-=====================
-
-This example shows you how to package and run a
-[Meteor](https://www.meteor.com/) app on Kubernetes.
-
-Build a container for your Meteor app
--------------------------------------
-
-To be able to run your Meteor app on Kubernetes you need to build a
-Docker container for it first. To do that you need to install
-[Docker](https://www.docker.com). Once you have that, you need to add two
-files to your existing Meteor project: `Dockerfile` and
-`.dockerignore`.
-
-`Dockerfile` should contain the below lines. You should replace the
-`ROOT_URL` with the actual hostname of your app.
-```
-FROM chees/meteor-kubernetes
-ENV ROOT_URL http://myawesomeapp.com
-```
-
-The `.dockerignore` file should contain the below lines. This tells
-Docker to ignore the files in those directories when it's building
-your container.
-```
-.meteor/local
-packages/*/.build*
-```
-
-You can see an example meteor project already set up at:
-[meteor-gke-example](https://github.com/Q42/meteor-gke-example). Feel
-free to use this app for this example.
-
-> Note: The next step will not work if you have added mobile platforms
-> to your meteor project. Check with `meteor list-platforms`
-
-Now you can build your container by running this in
-your Meteor project directory:
-```
-docker build -t my-meteor .
-```
-
-Pushing to a registry
----------------------
-
-For the [Docker Hub](https://hub.docker.com/), tag your app image with
-your username and push to the Hub with the below commands. Replace
-`<username>` with your Hub username.
-```
-docker tag my-meteor <username>/my-meteor
-docker push <username>/my-meteor
-```
-
-For [Google Container
-Registry](https://cloud.google.com/tools/container-registry/), tag
-your app image with your project ID, and push to GCR. Replace
-`<project>` with your project ID.
-```
-docker tag my-meteor gcr.io/<project>/my-meteor
-gcloud preview docker push gcr.io/<project>/my-meteor
-```
-
-Running
--------
-
-Now that you have containerized your Meteor app it's time to set up
-your cluster. Edit [`meteor-controller.json`](meteor-controller.json) and make sure the `image`
-points to the container you just pushed to the Docker Hub or GCR.
-
-As you may know, Meteor uses MongoDB, and we'll need to provide it a
-persistent Kubernetes volume to store its data. See the [volumes
-documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md)
-for options. We're going to use Google Compute Engine persistent
-disks. Create the MongoDB disk by running:
-```
-gcloud compute disks create --size=200GB mongo-disk
-```
-
-You also need to format the disk before you can use it:
-```
-gcloud compute instances attach-disk --disk=mongo-disk --device-name temp-data kubernetes-master
-gcloud compute ssh kubernetes-master --command "sudo mkdir /mnt/tmp && sudo /usr/share/google/safe_format_and_mount /dev/disk/by-id/google-temp-data /mnt/tmp"
-gcloud compute instances detach-disk --disk mongo-disk kubernetes-master
-```
-
-Now you can start Mongo using that disk:
-```
-kubectl create -f mongo-pod.json
-kubectl create -f mongo-service.json
-```
-
-Wait until Mongo is started completely and then start up your Meteor app:
-```
-kubectl create -f meteor-controller.json
-kubectl create -f meteor-service.json
-```
-
-Note that [`meteor-service.json`](meteor-service.json) creates an external load balancer, so
-your app should be available through the IP of that load balancer once
-the Meteor pods are started. You can find the IP of your load balancer
-by running:
-```
-kubectl get services/meteor -o template -t "{{.spec.publicIPs}}"
-```
-
-You will have to open up port 80 if it's not open yet in your
-environment. On GCE, you may run the below command.
-```
-gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
-```
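Once port 80 is open and the pods are running, a quick check that the app responds might look like this (a sketch; substitute the load-balancer IP reported by the `kubectl get services/meteor` command above):

```shell
$ curl -I http://<load-balancer-ip>/
```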
-
-What is going on?
------------------
-
-Firstly, the `FROM chees/meteor-kubernetes` line in your `Dockerfile`
-specifies the base image for your Meteor app. The code for that image
-is located in the `dockerbase/` subdirectory. Open up the `Dockerfile`
-to get an insight into what happens during the `docker build` step. The
-image is based on the Node.js official image. It then installs Meteor
-and copies in your app's code. The last line specifies what happens
-when your app container is run.
-```
-ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
-```
-
-Here we can see the MongoDB host and port information being passed
-into the Meteor app. The `MONGO_SERVICE...` environment variables are
-set by Kubernetes, and point to the service named `mongo` specified in
-[`mongo-service.json`](mongo-service.json). See the [environment
-documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/container-environment.md)
-for more details.
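For illustration, the environment inside the Meteor container might contain entries along these lines (the host IP shown is hypothetical; the port comes from `mongo-service.json`):

```shell
$ env | grep MONGO_SERVICE
MONGO_SERVICE_HOST=10.0.0.42   # hypothetical portal IP assigned to the mongo service
MONGO_SERVICE_PORT=27017
```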
-
-As you may know, Meteor uses long-lasting connections and requires
-_sticky sessions_. With Kubernetes you can scale out your app easily
-with session affinity. The [`meteor-service.json`](meteor-service.json) file contains
-`"sessionAffinity": "ClientIP"`, which provides this for us. See the
-[service
-documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#virtual-ips-and-service-proxies)
-for more information.
-
-As mentioned above, the mongo container uses a volume which is mapped
-to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
-section specifies the volume:
-```
- "volumeMounts": [
- {
- "name": "mongo-disk",
- "mountPath": "/data/db"
- }
-```
-
-The name `mongo-disk` refers to the volume specified outside the
-container section:
-```
- "volumes": [
- {
- "name": "mongo-disk",
- "gcePersistentDisk": {
- "pdName": "mongo-disk",
- "fsType": "ext4"
- }
- }
- ],
-```
-
diff --git a/release-0.19.0/examples/meteor/dockerbase/Dockerfile b/release-0.19.0/examples/meteor/dockerbase/Dockerfile
deleted file mode 100644
index 8ce633c634b..00000000000
--- a/release-0.19.0/examples/meteor/dockerbase/Dockerfile
+++ /dev/null
@@ -1,18 +0,0 @@
-FROM node:0.10
-MAINTAINER Christiaan Hees
-
-ONBUILD WORKDIR /appsrc
-ONBUILD COPY . /appsrc
-
-ONBUILD RUN curl https://install.meteor.com/ | sh && \
- meteor build ../app --directory --architecture os.linux.x86_64 && \
- rm -rf /appsrc
-# TODO rm meteor so it doesn't take space in the image?
-
-ONBUILD WORKDIR /app/bundle
-
-ONBUILD RUN (cd programs/server && npm install)
-EXPOSE 8080
-CMD []
-ENV PORT 8080
-ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
diff --git a/release-0.19.0/examples/meteor/dockerbase/README.md b/release-0.19.0/examples/meteor/dockerbase/README.md
deleted file mode 100644
index a17b773e6ad..00000000000
--- a/release-0.19.0/examples/meteor/dockerbase/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Building the meteor-kubernetes base image
------------------------------------------
-
-As a normal user you don't need to do this since the image is already built and pushed to Docker Hub. You can just use it as a base image. See [this example](https://github.com/Q42/meteor-gke-example/blob/master/Dockerfile).
-
-To build and push the base meteor-kubernetes image:
-
- docker build -t chees/meteor-kubernetes .
- docker push chees/meteor-kubernetes
-
diff --git a/release-0.19.0/examples/meteor/meteor-controller.json b/release-0.19.0/examples/meteor/meteor-controller.json
deleted file mode 100644
index 2935126e03f..00000000000
--- a/release-0.19.0/examples/meteor/meteor-controller.json
+++ /dev/null
@@ -1,40 +0,0 @@
-{
- "kind": "ReplicationController",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "meteor-controller",
- "labels": {
- "name": "meteor"
- }
- },
- "spec": {
- "replicas": 2,
- "selector": {
- "name": "meteor"
- },
- "template": {
- "metadata": {
- "labels": {
- "name": "meteor"
- }
- },
- "spec": {
- "containers": [
- {
- "name": "meteor",
- "image": "chees/meteor-gke-example:latest",
- "ports": [
- {
- "name": "http-server",
- "hostPort": 80,
- "containerPort": 8080,
- "protocol": "TCP"
- }
- ],
- "resources": {}
- }
- ]
- }
- }
- }
-}
diff --git a/release-0.19.0/examples/meteor/meteor-service.json b/release-0.19.0/examples/meteor/meteor-service.json
deleted file mode 100644
index e04be7c13f8..00000000000
--- a/release-0.19.0/examples/meteor/meteor-service.json
+++ /dev/null
@@ -1,21 +0,0 @@
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "meteor"
- },
- "spec": {
- "ports": [
- {
- "protocol": "TCP",
- "port": 80,
- "targetPort": "http-server"
- }
- ],
- "selector": {
- "name": "meteor"
- },
- "createExternalLoadBalancer": true,
- "sessionAffinity": "ClientIP"
- }
-}
diff --git a/release-0.19.0/examples/meteor/mongo-pod.json b/release-0.19.0/examples/meteor/mongo-pod.json
deleted file mode 100644
index cd7deba68e8..00000000000
--- a/release-0.19.0/examples/meteor/mongo-pod.json
+++ /dev/null
@@ -1,42 +0,0 @@
-{
- "kind": "Pod",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "mongo",
- "labels": {
- "name": "mongo",
- "role": "mongo"
- }
- },
- "spec": {
- "volumes": [
- {
- "name": "mongo-disk",
- "gcePersistentDisk": {
- "pdName": "mongo-disk",
- "fsType": "ext4"
- }
- }
- ],
- "containers": [
- {
- "name": "mongo",
- "image": "mongo:latest",
- "ports": [
- {
- "name": "mongo",
- "containerPort": 27017,
- "protocol": "TCP"
- }
- ],
- "resources": {},
- "volumeMounts": [
- {
- "name": "mongo-disk",
- "mountPath": "/data/db"
- }
- ]
- }
- ]
- }
-}
diff --git a/release-0.19.0/examples/meteor/mongo-service.json b/release-0.19.0/examples/meteor/mongo-service.json
deleted file mode 100644
index 72e9ed46503..00000000000
--- a/release-0.19.0/examples/meteor/mongo-service.json
+++ /dev/null
@@ -1,23 +0,0 @@
-{
- "kind": "Service",
- "apiVersion": "v1beta3",
- "metadata": {
- "name": "mongo",
- "labels": {
- "name": "mongo"
- }
- },
- "spec": {
- "ports": [
- {
- "protocol": "TCP",
- "port": 27017,
- "targetPort": "mongo"
- }
- ],
- "selector": {
- "name": "mongo",
- "role": "mongo"
- }
- }
-}
diff --git a/release-0.19.0/examples/mysql-wordpress-pd/README.md b/release-0.19.0/examples/mysql-wordpress-pd/README.md
deleted file mode 100644
index 5362451f6a1..00000000000
--- a/release-0.19.0/examples/mysql-wordpress-pd/README.md
+++ /dev/null
@@ -1,314 +0,0 @@
-
-# Persistent Installation of MySQL and WordPress on Kubernetes
-
-This example describes how to run a persistent installation of [Wordpress](https://wordpress.org/) using the [volumes](/docs/volumes.md) feature of Kubernetes, and [Google Compute Engine](https://cloud.google.com/compute/docs/disks) [persistent disks](/docs/volumes.md#gcepersistentdisk).
-
-We'll use the [mysql](https://registry.hub.docker.com/_/mysql/) and [wordpress](https://registry.hub.docker.com/_/wordpress/) official [Docker](https://www.docker.com/) images for this installation. (The wordpress image includes an Apache server).
-
-We'll create two Kubernetes [pods](http://docs.k8s.io/pods.md) to run mysql and wordpress, both with associated persistent disks, then set up a Kubernetes [service](http://docs.k8s.io/services.md) to front each pod.
-
-This example demonstrates several useful things, including: how to set up and use persistent disks with Kubernetes pods; how to define Kubernetes services to leverage docker-links-compatible service environment variables; and how to use an external load balancer to expose the wordpress service externally while keeping it transparent to the user if the wordpress pod moves to a different cluster node.
-
-## Install gcloud and start up a Kubernetes cluster
-
-First, if you have not already done so, [create](https://cloud.google.com/compute/docs/quickstart) a [Google Cloud Platform](https://cloud.google.com/) project, and install the [gcloud SDK](https://cloud.google.com/sdk/).
-
-Then, set the gcloud default project name to point to the project you want to use for your Kubernetes cluster:
-
-```
-gcloud config set project