diff --git a/release-0.19.0/docs/.files_generated b/release-0.19.0/docs/.files_generated deleted file mode 100644 index ea5ef406c64..00000000000 --- a/release-0.19.0/docs/.files_generated +++ /dev/null @@ -1,28 +0,0 @@ -kubectl.md -kubectl_api-versions.md -kubectl_cluster-info.md -kubectl_config.md -kubectl_config_set-cluster.md -kubectl_config_set-context.md -kubectl_config_set-credentials.md -kubectl_config_set.md -kubectl_config_unset.md -kubectl_config_use-context.md -kubectl_config_view.md -kubectl_create.md -kubectl_delete.md -kubectl_describe.md -kubectl_exec.md -kubectl_expose.md -kubectl_get.md -kubectl_label.md -kubectl_logs.md -kubectl_namespace.md -kubectl_port-forward.md -kubectl_proxy.md -kubectl_rolling-update.md -kubectl_run.md -kubectl_scale.md -kubectl_stop.md -kubectl_update.md -kubectl_version.md diff --git a/release-0.19.0/docs/README.md b/release-0.19.0/docs/README.md deleted file mode 100644 index 137a5d1bd0f..00000000000 --- a/release-0.19.0/docs/README.md +++ /dev/null @@ -1,29 +0,0 @@ -# Kubernetes Documentation - -**Note** -This documentation is current for 0.19.0. - -Documentation for previous releases is available in their respective branches: - * [v0.18.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.18/docs) - * [v0.17.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.17/docs) - -* The [User's guide](user-guide.md) is for anyone who wants to run programs and services on an exisiting Kubernetes cluster. - -* The [Cluster Admin's guide](cluster-admin-guide.md) is for anyone setting up a Kubernetes cluster or administering it. - -* The [Developer guide](developer-guide.md) is for anyone wanting to write programs that access the kubernetes API, - write plugins or extensions, or modify the core code of kubernetes. - -* The [Kubectl Command Line Interface](kubectl.md) is a detailed reference on the `kubectl` CLI. - -* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/) is a detailed description of all fields found in core API objects. - -* An overview of the [Design of Kubernetes](design) - -* There are example files and walkthroughs in the [examples](../examples) folder. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/README.md?pixel)]() diff --git a/release-0.19.0/docs/accessing-the-cluster.md b/release-0.19.0/docs/accessing-the-cluster.md deleted file mode 100644 index 0811b04534f..00000000000 --- a/release-0.19.0/docs/accessing-the-cluster.md +++ /dev/null @@ -1,342 +0,0 @@ -# User Guide to Accessing the Cluster - * [Accessing the cluster API](#api) - * [Accessing services running on the cluster](#otherservices) - * [Requesting redirects](#redirect) - * [So many proxies](#somanyproxies) - -## Accessing the cluster API -### Accessing for the first time with kubectl -When accessing the Kubernetes API for the first time, we suggest using the -kubernetes CLI, `kubectl`. - -To access a cluster, you need to know the location of the cluster and have credentials -to access it. Typically, this is automatically set-up when you work through -though a [Getting started guide](../docs/getting-started-guide/README.md), -or someone else setup the cluster and provided you with credentials and a location. - -Check the location and credentials that kubectl knows about with this command: -``` -kubectl config view -``` -. 
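If `kubectl config view` comes back empty, kubectl does not yet know about any cluster. A minimal sketch of registering one by hand is shown below; the cluster name, context name, server address, and token are illustrative placeholders — substitute the values your cluster admin gave you (see [kubectl config](kubectl_config.md) for the full set of options):

```shell
# Record where the apiserver is and which CA signed its serving certificate.
kubectl config set-cluster my-cluster --server=https://1.2.3.4 --certificate-authority=/path/to/ca.crt

# Record the credentials you were given (a bearer token in this sketch).
kubectl config set-credentials my-user --token=PLACEHOLDER_TOKEN

# Tie the cluster and user together in a context, and make it the default.
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
```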
- -Many of the [examples](../examples/README.md) provide an introduction to using -kubectl and complete documentation is found in the [kubectl manual](../docs/kubectl.md). - -### Directly accessing the REST API -Kubectl handles locating and authenticating to the apiserver. -If you want to directly access the REST API with an http client like -curl or wget, or a browser, there are several ways to locate and authenticate: - - Run kubectl in proxy mode. - - Recommended approach. - - Uses stored apiserver location. - - Verifies identity of apiserver using self-signed cert. No MITM possible. - - Authenticates to apiserver. - - In future, may do intelligent client-side load-balancing and failover. - - Provide the location and credentials directly to the http client. - - Alternate approach. - - Works with some types of client code that are confused by using a proxy. - - Need to import a root cert into your browser to protect against MITM. - -#### Using kubectl proxy - -The following command runs kubectl in a mode where it acts as a reverse proxy. It handles -locating the apiserver and authenticating. -Run it like this: -``` -kubectl proxy --port=8080 & -``` -See [kubectl proxy](../docs/kubectl-proxy.md) for more details. - -Then you can explore the API with curl, wget, or a browser, like so: -``` -$ curl http://localhost:8080/api -{ - "versions": [ - "v1" - ] -} -``` -#### Without kubectl proxy -It is also possible to avoid using kubectl proxy by passing an authentication token -directly to the apiserver, like this: -``` -$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ") -$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ") -$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -{ - "versions": [ - "v1" - ] -} -``` - -The above example uses the `--insecure` flag. This leaves it subject to MITM -attacks. When kubectl accesses the cluster it uses a stored root certificate -and client certificates to access the server. (These are installed in the -`~/.kube` directory). Since cluster certificates are typically self-signed, it -make take special configuration to get your http client to use root -certificate. - -On some clusters, the apiserver does not require authentication; it may serve -on localhost, or be protected by a firewall. There is not a standard -for this. [Configuring Access to the API](../docs/accessing_the_api.md) -describes how a cluster admin can configure this. Such approaches may conflict -with future high-availability support. - -### Programmatic access to the API - -There are [client libraries](../docs/client-libraries.md) for accessing the API -from several languages. The Kubernetes project-supported -[Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client) -client library can use the same [kubeconfig file](../docs/kubeconfig-file.md) -as the kubectl CLI does to locate and authenticate to the apiserver. - -See documentation for other libraries for how they authenticate. - -### Accessing the API from a Pod - -When accessing the API from a pod, locating and authenticating -to the api server are somewhat different. - -The recommended way to locate the apiserver within the pod is with -the `kubernetes` DNS name, which resolves to a Service IP which in turn -will be routed to an apiserver. - -The recommended way to authenticate to the apiserver is with a -[service account](../docs/service_accounts.md). 
By default, a pod -is associated with a service account, and a credential (token) for that -service account is placed into the filetree of each container in that pod, -at `/var/run/secrets/kubernetes.io/serviceaccount`. - -From within a pod, the recommended ways to connect to the API are: - - run a kubectl proxy as one of the containers in the pod, or as a background - process within a container. This proxies the - kubernetes API to the localhost interface of the pod, so that other processes - in any container of the pod can access it. See this [example of using kubectl proxy - in a pod](../examples/kubectl-container/README.md). - - use the Go client library, and create a client using the `client.NewInContainer()` factory. - This handles locating and authenticating to the apiserver. - - -## Accessing services running on the cluster -The previous section was about connecting to the Kubernetes API server. This section is about -connecting to other services running on a Kubernetes cluster. In Kubernetes, the -[nodes](../docs/node.md), [pods](../docs/pods.md) and [services](services.md) all have -their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be -routable from a machine outside the cluster, such as your desktop machine. - -### Ways to connect -You have several options for connecting to nodes, pods and services from outside the cluster: - - Access services through public IPs. - - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](../docs/services.md) and - [kubectl expose](../docs/kubectl_expose.md) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, - or it may expose it to the internet. Think about whether the service being exposed is secure. - Does it do its own authentication? - - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, - place a unique label on the pod and create a new service which selects this label. - - In most cases, it should not be necessary for an application developer to directly access - nodes via their nodeIPs. - - Access services, nodes, or pods using the Proxy Verb. - - Does apiserver authentication and authorization prior to accessing the remote service. - Use this if the services are not secure enough to expose to the internet, or to gain - access to ports on the node IP, or for debugging. - - Proxies may cause problems for some web applications. - - Only works for HTTP/HTTPS. - - Described in [using the apiserver proxy](#apiserverproxy). - - Access from a node or pod in the cluster. - - Run a pod, and then connect to a shell in it using [kubectl exec](../docs/kubectl_exec.md). - Connect to other nodes, pods, and services from that shell. - - Some clusters may allow you to ssh to a node in the cluster. From there you may be able to - access cluster services. This is a non-standard method, and will work on some clusters but - not others. Browsers and other tools may or may not be installed. Cluster DNS may not work. - -### Discovering builtin services - -Typically, there are several services which are started on a cluster by default. 
Get a list of these -with the `kubectl cluster-info` command: -``` -$ kubectl cluster-info - - Kubernetes master is running at https://104.197.5.247 - elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging - kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kibana-logging - kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kube-dns - grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-grafana - heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-heapster -``` -This shows the proxy-verb URL for accessing each service. -For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached -at `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example: -`http://localhost:8080/api/v1/proxy/namespaces/default/services/elasticsearch-logging/`. -(See [above](#api) for how to pass credentials or use kubectl proxy.) - -#### Manually constructing apiserver proxy URLs -As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL: -`http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`* -##### Examples - * To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy` - - * To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true` - ``` - { - "cluster_name" : "kubernetes_logging", - "status" : "yellow", - "timed_out" : false, - "number_of_nodes" : 1, - "number_of_data_nodes" : 1, - "active_primary_shards" : 5, - "active_shards" : 5, - "relocating_shards" : 0, - "initializing_shards" : 0, - "unassigned_shards" : 5 - } - ``` - -#### Using web browsers to access services running on the cluster -You may be able to put an apiserver proxy URL into the address bar of a browser. However: - - Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. The apiserver can be configured to accept basic auth, - but your cluster may not be configured to accept it. - - Some web apps may not work, particularly those with client-side javascript that constructs URLs in a - way that is unaware of the proxy path prefix. - -## Requesting redirects -Use a `redirect` request so that the server returns an HTTP redirect response and identifies the specific node and service that -can handle the request. - -**Note**: Since the hostname or address that is returned is usually only accessible from inside the cluster, -sending `redirect` requests is useful only for code running inside the cluster. Also, keep in mind that any subsequent `redirect` requests to the same -server might return different results (because another node at that point in time can better serve the request). 
- -**Tip**: Use a redirect request to reduce calls to the proxy server by first obtaining the address of a node on the -cluster and then using that returned address for all subsequent requests. - -##### Example -To request a redirect and then verify the address that gets returned, let's run a query on `oban` (Google Compute Engine virtual machine). Note that `oban` is running in the same project and default network (Google Compute Engine) as the Kubernetes cluster. - -To request a redirect for the Elasticsearch service, we can run the following `curl` command: -``` -user@oban:~$ curl -L -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/ -{ - "status" : 200, - "name" : "Skin", - "cluster_name" : "kubernetes_logging", - "version" : { - "number" : "1.4.4", - "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512", - "build_timestamp" : "2015-02-19T13:05:36Z", - "build_snapshot" : false, - "lucene_version" : "4.10.3" - }, - "tagline" : "You Know, for Search" -} -``` -**Note**: We use the `-L` flag in the request so that `curl` follows the returned redirect address and retrieves the Elasticsearch service information. - -If we examine the actual redirect header (instead run the same `curl` command with `-v`), we see that the request to `https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/` is redirected to `http://10.244.2.7:9200`: -``` -user@oban:~$ curl -v -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/ -* About to connect() to 104.197.5.247 port 443 (#0) -* Trying 104.197.5.247... -* connected -* Connected to 104.197.5.247 (104.197.5.247) port 443 (#0) -* successfully set certificate verify locations: -* CAfile: none - CApath: /etc/ssl/certs -* SSLv3, TLS handshake, Client hello (1): -* SSLv3, TLS handshake, Server hello (2): -* SSLv3, TLS handshake, CERT (11): -* SSLv3, TLS handshake, Server key exchange (12): -* SSLv3, TLS handshake, Server finished (14): -* SSLv3, TLS handshake, Client key exchange (16): -* SSLv3, TLS change cipher, Client hello (1): -* SSLv3, TLS handshake, Finished (20): -* SSLv3, TLS change cipher, Client hello (1): -* SSLv3, TLS handshake, Finished (20): -* SSL connection using ECDHE-RSA-AES256-GCM-SHA384 -* Server certificate: -* subject: CN=kubernetes-master -* start date: 2015-03-04 19:40:24 GMT -* expire date: 2025-03-01 19:40:24 GMT -* issuer: CN=104.197.5.247@1425498024 -* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. 
-* Server auth using Basic with user 'admin' -> GET /api/v1/redirect/namespaces/default/services/elasticsearch-logging HTTP/1.1 -> Authorization: Basic YWRtaW46M210eTBWbDluTkZmd0xKeg== -> User-Agent: curl/7.26.0 -> Host: 104.197.5.247 -> Accept: */* -> -* additional stuff not fine transfer.c:1037: 0 0 -* HTTP 1.1 or later with persistent connection, pipelining supported -< HTTP/1.1 307 Temporary Redirect -< Server: nginx/1.2.1 -< Date: Thu, 05 Mar 2015 00:14:45 GMT -< Content-Type: text/plain; charset=utf-8 -< Content-Length: 0 -< Connection: keep-alive -< Location: http://10.244.2.7:9200 -< -* Connection #0 to host 104.197.5.247 left intact -* Closing connection #0 -* SSLv3, TLS alert, Client hello (1): -``` - -We can also run the `kubectl get pods` command to view a list of the pods on the cluster and verify that `http://10.244.2.7` is where the Elasticsearch service is running: -``` -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED -elasticsearch-logging-controller-gziey 10.244.2.7 elasticsearch-logging kubernetes/elasticsearch:1.0 kubernetes-minion-hqhv.c.kubernetes-user2.internal/104.154.33.252 kubernetes.io/cluster-service=true,name=elasticsearch-logging Running 5 hours -kibana-logging-controller-ls6k1 10.244.1.9 kibana-logging kubernetes/kibana:1.1 kubernetes-minion-h5kt.c.kubernetes-user2.internal/146.148.80.37 kubernetes.io/cluster-service=true,name=kibana-logging Running 5 hours -kube-dns-oh43e 10.244.1.10 etcd quay.io/coreos/etcd:v2.0.3 kubernetes-minion-h5kt.c.kubernetes-user2.internal/146.148.80.37 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 5 hours - kube2sky kubernetes/kube2sky:1.0 - skydns kubernetes/skydns:2014-12-23-001 -monitoring-heapster-controller-fplln 10.244.0.4 heapster kubernetes/heapster:v0.8 kubernetes-minion-2il2.c.kubernetes-user2.internal/130.211.155.16 kubernetes.io/cluster-service=true,name=heapster,uses=monitoring-influxdb Running 5 hours -monitoring-influx-grafana-controller-0133o 10.244.3.4 influxdb kubernetes/heapster_influxdb:v0.3 kubernetes-minion-kmin.c.kubernetes-user2.internal/130.211.173.22 kubernetes.io/cluster-service=true,name=influxGrafana Running 5 hours - grafana kubernetes/heapster_grafana:v0.4 -``` - -##So Many Proxies -There are several different proxies you may encounter when using kubernetes: - 1. The [kubectl proxy](#kubectlproxy): - - runs on a user's desktop or in a pod - - proxies from a localhost address to the kubernetes apiserver - - client to proxy uses HTTP - - proxy to apiserver uses HTTPS - - locates apiserver - - adds authentication headers - 1. The [apiserver proxy](#apiserverproxy): - - is a bastion built into the apiserver - - connects a user outside of the cluster to cluster IPs which otherwise might not be reachable - - runs in the apiserver processes - - client to proxy uses HTTPS (or http if apiserver so configured) - - proxy to target may use HTTP or HTTPS as chosen by proxy using available information - - can be used to reach a Node, Pod, or Service - - does load balancing when used to reach a Service - 1. The [kube proxy](../docs/services.md#ips-and-vips): - - runs on each node - - proxies UDP and TCP - - does not understand HTTP - - provides load balancing - - is just used to reach services - 1. A Proxy/Load-balancer in front of apiserver(s): - - existence and implementation varies from cluster to cluster (e.g. nginx) - - sits between all clients and one or more apiservers - - acts as load balancer if there are several apiservers. - 1. 
Cloud Load Balancers on external services: - - are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer) - - are created automatically when the kubernetes service has type `LoadBalancer` - - use UDP/TCP only - - implementation varies by cloud provider. - - - -Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin -will typically ensure that the latter types are setup correctly. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing-the-cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/accessing-the-cluster.md?pixel)]() diff --git a/release-0.19.0/docs/accessing_the_api.md b/release-0.19.0/docs/accessing_the_api.md deleted file mode 100644 index 910a012a562..00000000000 --- a/release-0.19.0/docs/accessing_the_api.md +++ /dev/null @@ -1,81 +0,0 @@ -# Configuring APIserver ports - -This document describes what ports the kubernetes apiserver -may serve on and how to reach them. The audience is -cluster administrators who want to customize their cluster -or understand the details. - -Most questions about accessing the cluster are covered -in [Accessing the cluster](../docs/accessing-the-cluster.md). - - -## Ports and IPs Served On -The Kubernetes API is served by the Kubernetes APIServer process. Typically, -there is one of these running on a single kubernetes-master node. - -By default the Kubernetes APIserver serves HTTP on 2 ports: - 1. Localhost Port - - serves HTTP - - default is port 8080, change with `-port` flag. - - defaults IP is localhost, change with `-address` flag. - - no authentication or authorization checks in HTTP - - protected by need to have host access - 2. Secure Port - - default is port 443, change with `-secure_port` - - default IP is first non-localhost network interface, change with `-public_address_override` - - serves HTTPS. Set cert with `-tls_cert_file` and key with `-tls_private_key_file`. - - uses token-file or client-certificate based [authentication](./authentication.md). - - uses policy-based [authorization](./authorization.md). - 3. Removed: ReadOnly Port - - For security reasons, this had to be removed. Use the service account feature instead. - -## Proxies and Firewall rules - -Additionally, in some configurations there is a proxy (nginx) running -on the same machine as the apiserver process. The proxy serves HTTPS protected -by Basic Auth on port 443, and proxies to the apiserver on localhost:8080. In -these configurations the secure port is typically set to 6443. - -A firewall rule is typically configured to allow external HTTPS access to port 443. - -The above are defaults and reflect how Kubernetes is deployed to GCE using -kube-up.sh. Other cloud providers may vary. - -## Use Cases vs IP:Ports - -There are three differently configured serving ports because there are a -variety of uses cases: - 1. Clients outside of a Kubernetes cluster, such as human running `kubectl` - on desktop machine. Currently, accesses the Localhost Port via a proxy (nginx) - running on the `kubernetes-master` machine. Proxy uses bearer token authentication. - 2. Processes running in Containers on Kubernetes that need to do read from - the apiserver. Currently, these can use a service account. - 3. Scheduler and Controller-manager processes, which need to do read-write - API operations. Currently, these have to run on the operations on the - apiserver. 
Currently, these have to run on the same host as the - apiserver and use the Localhost Port. In the future, these will be - switched to using service accounts to avoid the need to be co-located. - 4. Kubelets, which need to do read-write API operations and are necessarily - on different machines than the apiserver. Kubelet uses the Secure Port - to get their pods, to find the services that a pod can see, and to - write events. Credentials are distributed to kubelets at cluster - setup time. - -## Expected changes - - Policy will limit the actions kubelets can do via the authed port. - - Kubelets will change from token-based authentication to cert-based-auth. - - Scheduler and Controller-manager will use the Secure Port too. They - will then be able to run on different machines than the apiserver. - - A general mechanism will be provided for [giving credentials to - pods]( - https://github.com/GoogleCloudPlatform/kubernetes/issues/1907). - - Clients, like kubectl, will all support token-based auth, and the - Localhost will no longer be needed, and will not be the default. - However, the localhost port may continue to be an option for - installations that want to do their own auth proxy. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing_the_api.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/accessing_the_api.md?pixel)]() diff --git a/release-0.19.0/docs/admission_controllers.md b/release-0.19.0/docs/admission_controllers.md deleted file mode 100644 index 9d302c0fa79..00000000000 --- a/release-0.19.0/docs/admission_controllers.md +++ /dev/null @@ -1,112 +0,0 @@ -# Admission Controllers - -## What are they? - -An admission control plug-in is a piece of code that intercepts requests to the Kubernetes -API server prior to persistence of the object, but after the request is authenticated -and authorized. The plug-in code is in the API server process -and must be compiled into the binary in order to be used at this time. - -Each admission control plug-in is run in sequence before a request is accepted into the cluster. If -any of the plug-ins in the sequence reject the request, the entire request is rejected immediately -and an error is returned to the end-user. - -Admission control plug-ins may mutate the incoming object in some cases to apply system configured -defaults. In addition, admission control plug-ins may mutate related resources as part of request -processing to do things like increment quota usage. - -## Why do I need them? - -Many advanced features in Kubernetes require an admission control plug-in to be enabled in order -to properly support the feature. As a result, a Kubernetes API server that is not properly -configured with the right set of admission control plug-ins is an incomplete server and will not -support all the features you expect. - -## How do I turn on an admission control plug-in? - -The Kubernetes API server supports a flag, ```admission_control``` that takes a comma-delimited, -ordered list of admission control choices to invoke prior to modifying objects in the cluster. - -## What does each plug-in do? - -### AlwaysAdmit - -Use this plugin by itself to pass-through all requests. - -### AlwaysDeny - -Rejects all requests. Used for testing. - -### DenyExecOnPrivileged - -This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container. 
- -If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec -commands in those containers, we strongly encourage enabling this plug-in. - -### ServiceAccount - -This plug-in implements automation for [serviceAccounts]( service_accounts.md). -We strongly recommend using this plug-in if you intend to make use of Kubernetes ```ServiceAccount``` objects. - -### SecurityContextDeny - -This plug-in will deny any pod with a [SecurityContext](security_context.md) that defines options that were not available on the ```Container```. - -### ResourceQuota - -This plug-in will observe the incoming request and ensure that it does not violate any of the constraints -enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota``` -objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints. - -See the [resourceQuota design doc]( design/admission_control_resource_quota.md). - -It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is -so that quota is not prematurely incremented only for the request to be rejected later in admission control. - -### LimitRanger - -This plug-in will observe the incoming request and ensure that it does not violate any of the constraints -enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in -your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. - -See the [limitRange design doc]( design/admission_control_limit_range.md). - -### NamespaceExists - -This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` -and reject the request if the ```Namespace``` was not previously created. We strongly recommend running -this plug-in to ensure integrity of your data. - -### NamespaceAutoProvision (deprecated) - -This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` -and create a new ```Namespace``` if one did not already exist previously. - -We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```. - -### NamespaceLifecycle - -This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new content created in it. - -A ```Namespace``` deletion kicks off a sequence of operations that remove all content (pods, services, etc.) in that -namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in. - -Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will -be merged into a single plug-in that enforces the life-cycle of a ```Namespace``` in Kubernetes. - -## Is there a recommended set of plug-ins to use? - -Yes. 
- -For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters): - -```shell ---admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admission_controllers.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/admission_controllers.md?pixel)]() diff --git a/release-0.19.0/docs/annotations.md b/release-0.19.0/docs/annotations.md deleted file mode 100644 index 070fe1e22e0..00000000000 --- a/release-0.19.0/docs/annotations.md +++ /dev/null @@ -1,31 +0,0 @@ -# Annotations - -We have [labels](labels.md) for identifying metadata. - -It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. Such information would not be used for object selection and therefore doesn't belong in labels. - -Like labels, annotations are key-value maps. -``` -"annotations": { - "key1" : "value1", - "key2" : "value2" -} -``` - -Possible information that could be recorded in annotations: - -* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging -* build/release/image information (timestamps, release ids, git branch, PR numbers, image hashes, registry address, etc.) -* pointers to logging/monitoring/analytics/audit repos -* client library/tool information (e.g. for debugging purposes -- name, version, build info) -* other user and/or tool/system provenance info, such as URLs of related objects from other ecosystem components -* lightweight rollout tool metadata (config and/or checkpoints) -* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website - -Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/annotations.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/annotations.md?pixel)]() diff --git a/release-0.19.0/docs/api-conventions.md b/release-0.19.0/docs/api-conventions.md deleted file mode 100644 index 0ee4d0d442c..00000000000 --- a/release-0.19.0/docs/api-conventions.md +++ /dev/null @@ -1,593 +0,0 @@ -API Conventions -=============== - -Updated: 4/16/2015 - -The conventions of the [Kubernetes API](api.md) (and related APIs in the ecosystem) are intended to ease client development and ensure that configuration mechanisms can be implemented that work across a diverse set of use cases consistently. - -The general style of the Kubernetes API is RESTful - clients create, update, delete, or retrieve a description of an object via the standard HTTP verbs (POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return JSON. Kubernetes also exposes additional endpoints for non-standard verbs and allows alternative content types. 
All of the JSON accepted and returned by the server has a schema, identified by the "kind" and "apiVersion" fields. Where relevant HTTP header fields exist, they should mirror the content of JSON fields, but the information should not be represented only in the HTTP header. - -The following terms are defined: - -* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" kinds would have different attributes and properties) -* **Resource** a representation of a system entity, sent or retrieved as JSON via HTTP to the server. Resources are exposed via: - * Collections - a list of resources of the same type, which may be queryable - * Elements - an individual resource, addressable via a URL - -Each resource typically accepts and returns data of a single kind. A kind may be accepted or returned by multiple resources that reflect specific use cases. For instance, the kind "pod" is exposed as a "pods" resource that allows end users to create, update, and delete pods, while a separate "pod status" resource (that acts on "pod" kind) allows automated processes to update a subset of the fields in that resource. A "restart" resource might be exposed for a number of different resources to allow the same action to have different results for each object. - -Resource collections should be all lowercase and plural, whereas kinds are CamelCase and singular. - - -Types (Kinds) -------------- - -Kinds are grouped into three categories: - -1. **Objects** represent a persistent entity in the system. - - Creating an API object is a record of intent - once created, the system will work to ensure that resource exists. All API objects have common metadata. - - An object may have multiple resources that clients can use to perform specific actions than create, update, delete, or get. - - Examples: Pods, ReplicationControllers, Services, Namespaces, Nodes - -2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds. - - Lists have a limited set of common metadata. All lists use the "items" field to contain the array of objects they return. - - Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of the full list. Some objects may be singletons (the current user, the system defaults) and may not have lists. - - In addition, all lists that return objects with labels should support label filtering (see [labels.md](labels.md), and most lists should support filtering by fields. - - Examples: PodLists, ServiceLists, NodeLists - - TODO: Describe field filtering below or in a separate doc. - -3. **Simple** kinds are used for specific actions on objects and for non-persistent entities. - - Given their limited scope, they have the same set of limited common metadata as lists. - - The "size" action may accept a simple resource that has only a single field as input (the number of things). The "status" kind is returned when errors occur and is not persisted in the system. - - Examples: Binding, Status - -The standard REST verbs (defined below) MUST return singular JSON objects. Some API endpoints may deviate from the strict REST pattern and return resources that are not singular JSON objects, such as streams of JSON objects or unstructured text log data. - -The term "kind" is reserved for these "top-level" API types. The term "type" should be used for distinguishing sub-categories within objects or subobjects. 
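As a concrete sketch of the distinction between object and list kinds, compare what the server returns for a collection and for a single element. This assumes a `kubectl proxy` listening on localhost:8080 as described in [accessing-the-cluster.md](accessing-the-cluster.md); the pod name and the trimmed responses shown in comments are illustrative:

```shell
# GET on a collection returns a list kind: limited metadata plus an "items" array.
curl http://localhost:8080/api/v1/namespaces/default/pods
# => { "kind": "PodList", "apiVersion": "v1", "metadata": { "resourceVersion": "12345" }, "items": [ ... ] }

# GET on an element returns an object kind with the full common metadata.
curl http://localhost:8080/api/v1/namespaces/default/pods/my-pod
# => { "kind": "Pod", "apiVersion": "v1", "metadata": { "name": "my-pod", "namespace": "default", ... }, "spec": { ... }, "status": { ... } }
```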
- -### Resources - -All JSON objects returned by an API MUST have the following fields: - -* kind: a string that identifies the schema this object should have -* apiVersion: a string that identifies the version of the schema the object should have - -These fields are required for proper decoding of the object. They may be populated by the server by default from the specified URL path, but the client likely needs to know the values in order to construct the URL path. - -### Objects - -#### Metadata - -Every object kind MUST have the following metadata in a nested object field called "metadata": - -* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [namespaces.md](namespaces.md) for more. -* name: a string that uniquely identifies this object within the current namespace (see [identifiers.md](identifiers.md)). This value is used in the path when retrieving an individual object. -* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [identifiers.md](identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated - -Every object SHOULD have the following metadata in a nested object field called "metadata": - -* resourceVersion: a string that identifies the internal version of this object that can be used by clients to determine when objects have changed. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. (see [concurrency control](#concurrency-control-and-consistency), below, for more details) -* creationTimestamp: a string representing an RFC 3339 date of the date and time an object was created -* deletionTimestamp: a string representing an RFC 3339 date of the date and time after which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. -* labels: a map of string keys and values that can be used to organize and categorize objects (see [labels.md](labels.md)) -* annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object (see [annotations.md](annotations.md)) - -Labels are intended for organizational purposes by end users (select the pods that match this label query). Annotations enable third-party automation and tooling to decorate objects with additional metadata for their own use. - -#### Spec and Status - -By convention, the Kubernetes API makes a distinction between the specification of the desired state of an object (a nested object field called "spec") and the status of the object at the current time (a nested object field called "status"). The specification is a complete description of the desired state, including configuration settings provided by the user, [default values](#defaulting) expanded by the system, and properties initialized or otherwise changed after creation by other ecosystem components (e.g., schedulers, auto-scalers), and is persisted in stable storage with the API object. 
If the specification is deleted, the object will be purged from the system. The status summarizes the current state of the object in the system, and is usually persisted with the object by an automated processes but may be generated on the fly. At some cost and perhaps some temporary degradation in behavior, the status could be reconstructed by observation if it were lost. - -When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3. In other words, the system's behavior is *level-based* rather than *edge-based*. This enables robust behavior in the presence of missed intermediate state changes. - -The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. In order to facilitate level-based operation and expression of declarative configuration, fields in the specification should have declarative rather than imperative names and semantics -- they represent the desired state, not actions intended to yield the desired state. - -The PUT and POST verbs on objects will ignore the "status" values. A `/status` subresource is provided to enable system components to update statuses of resources they manage. - -Otherwise, PUT expects the whole object to be specified. Therefore, if a field is omitted it is assumed that the client wants to clear that field's value. The PUT verb does not accept partial updates. Modification of just part of an object may be achieved by GETting the resource, modifying part of the spec, labels, or annotations, and then PUTting it back. See [concurrency control](#concurrency-control-and-consistency), below, regarding read-modify-write consistency when using this pattern. Some objects may expose alternative resource representations that allow mutation of the status, or performing custom actions on the object. - -All objects that represent a physical resource whose state may vary from the user's desired intent SHOULD have a "spec" and a "status". Objects whose state cannot vary from the user's desired intent MAY have only "spec", and MAY rename "spec" to a more appropriate name. - -Objects that contain both spec and status should not contain additional top-level fields other than the standard metadata fields. - -##### Typical status properties - -* **phase**: The phase is a simple, high-level summary of the phase of the lifecycle of an object. The phase should progress monotonically. Typical phase values are `Pending` (not yet fully physically realized), `Running` or `Active` (fully realized and active, but not necessarily operating correctly), and `Terminated` (no longer active), but may vary slightly for different types of objects. New phase values should not be added to existing objects in the future. Like other status fields, it must be possible to ascertain the lifecycle phase by observation. Additional details regarding the current phase may be contained in other fields. -* **conditions**: Conditions represent orthogonal observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Condition status values may be `True`, `False`, or `Unknown`. 
Unlike the phase, conditions are not expected to be monotonic -- their values may change back and forth. A typical condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. Conditions may carry additional information, such as the last probe time or last transition time. - -TODO(@vishh): Reason and Message. - -Phases and conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects with behaviors associated with state transitions. The system is level-based and should assume an Open World. Additionally, new observations and details about these observations may be added over time. - -In order to preserve extensibility, in the future, we intend to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from observations. - -Note that historical information status (e.g., last transition time, failure counts) is only provided at best effort, and is not guaranteed to not be lost. - -Status information that may be large (especially unbounded in size, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](resources.md#usage-data), should be put into separate objects, with possibly a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data. - -#### References to related objects - -References to loosely coupled sets of objects, such as [pods](pods.md) overseen by a [replication controller](replication-controller.md), are usually best referred to using a [label selector](labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status. - -References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type. Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information. - -References in the status of the referee to the referrer may be permitted, when the references are one-to-one and do not need to be frequently updated, particularly in an edge-based manner. - -#### Lists of named subobjects preferred over maps - -Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields. - -For example: -```yaml -ports: - - name: www - containerPort: 80 -``` -vs. -```yaml -ports: - www: - containerPort: 80 -``` - -This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, and annotations), as opposed to sets of subobjects. - -#### Constants - -Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase, with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP". - -### Lists and Simple kinds - -Every list or simple kind SHOULD have the following metadata in a nested object field called "metadata": - -* resourceVersion: a string that identifies the common version of the objects returned by in a list. 
This value MUST be treated as opaque by clients and passed unmodified back to the server. A resource version is only valid within a single namespace on a single kind of resource. - -Every simple kind returned by the server, and any simple kind sent to the server that must support idempotency or optimistic concurrency should return this value. Since simple resources are often used as input to alternate actions that modify objects, the resource version of the simple resource should correspond to the resource version of the object. - - -Differing Representations ------------------------- - -An API may represent a single entity in different ways for different clients, or transform an object after certain transitions in the system occur. In these cases, one request object may have two representations available as different resources, or different kinds. - -An example is a Service, which represents the intent of the user to group a set of pods with common behavior on common ports. When Kubernetes detects a pod matches the service selector, the IP address and port of the pod are added to an Endpoints resource for that Service. The Endpoints resource exists only if the Service exists, but exposes only the IPs and ports of the selected pods. The full service is represented by two distinct resources - under the original Service resource the user created, as well as in the Endpoints resource. - -As another example, a "pod status" resource may accept a PUT with the "pod" kind, with different rules about what fields may be changed. - -Future versions of Kubernetes may allow alternative encodings of objects beyond JSON. - - -Verbs on Resources ------------------ - -API resources should use the traditional REST pattern: - -* GET /<resourceNamePlural> - Retrieve a list of type <resourceName>, e.g. GET /pods returns a list of Pods. -* POST /<resourceNamePlural> - Create a new resource from the JSON object provided by the client. -* GET /<resourceNamePlural>/<name> - Retrieves a single resource with the given name, e.g. GET /pods/first returns a Pod named 'first'. Should be constant time, and the resource should be bounded in size. -* DELETE /<resourceNamePlural>/<name> - Delete the single resource with the given name. DeleteOptions may specify gracePeriodSeconds, the optional duration in seconds before the object should be deleted. Individual kinds may declare fields which provide a default grace period, and different kinds may have differing kind-wide default grace periods. A user-provided grace period overrides a default grace period, including the zero grace period ("now"). -* PUT /<resourceNamePlural>/<name> - Update or create the resource with the given name with the JSON object provided by the client. -* PATCH /<resourceNamePlural>/<name> - Selectively modify the specified fields of the resource. See more information [below](#patch). - -Kubernetes by convention exposes additional verbs as new root endpoints with singular names. Examples: - -* GET /watch/<resourceNamePlural> - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time. -* GET /watch/<resourceNamePlural>/<name> - Receive a stream of JSON objects corresponding to changes made to the named resource of the given kind over time. - -These are verbs which change the fundamental type of data returned (watch returns a stream of JSON instead of a single JSON object). Support of additional verbs is not required for all object types. 
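As a hedged sketch of what these verbs look like on the wire, again assuming a `kubectl proxy` on localhost:8080 (the pod name and the local JSON file here are illustrative):

```shell
# Retrieve the collection, then a single element.
curl http://localhost:8080/api/v1/namespaces/default/pods
curl http://localhost:8080/api/v1/namespaces/default/pods/my-pod

# Create a new resource from a local JSON description, then replace it with PUT.
curl -X POST -H "Content-Type: application/json" --data @my-pod.json \
    http://localhost:8080/api/v1/namespaces/default/pods
curl -X PUT -H "Content-Type: application/json" --data @my-pod.json \
    http://localhost:8080/api/v1/namespaces/default/pods/my-pod

# Delete the element.
curl -X DELETE http://localhost:8080/api/v1/namespaces/default/pods/my-pod

# Watch the collection for changes; this streams JSON objects until interrupted.
curl http://localhost:8080/api/v1/watch/namespaces/default/pods
```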
- -Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [accessing-the-cluster.md](accessing-the-cluster.md). - -When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "scale" are best represented as REST sub-resources that are POSTed to. The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body. - -TODO: more documentation of Watch - -### PATCH operations - -The API supports three different PATCH operations, determined by their corresponding Content-Type header: - -* JSON Patch, `Content-Type: application/json-patch+json` - * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is a sequence of operations that are executed on the resource, e.g. `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use JSON Patch, see the RFC. -* Merge Patch, `Content-Type: application/merge-json-patch+json` - * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC. -* Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json` - * Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see below. - -#### Strategic Merge Patch - -In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod: - -```yaml -spec: - containers: - - name: nginx - image: nginx-1.0 -``` - -...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod. - -```yaml -PATCH /api/v1/namespaces/default/pods/pod-name -spec: - containers: - - name: log-tailer - image: log-tailer-1.0 -``` - -If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field. - -To solve this problem, Strategic Merge Patch uses metadata attached to the API objects to determine what lists should be merged and which ones should not. Currently the metadata is available as struct tags on the API objects themselves, but will become available to clients as Swagger annotations in the future. In the above example, the `patchStrategy` metadata for the `containers` field would be `merge` and the `patchMergeKey` would be `name`. - -Note: If the patch results in merging two lists of scalars, the scalars are first deduplicated and then merged. - -Strategic Merge Patch also supports special operations as listed below. 
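Before turning to those operations, here is a sketch of how the log-tailer patch above might be submitted over HTTP, again through a `kubectl proxy` on localhost:8080 (the pod name is the illustrative one from the example); the `Content-Type` header is what selects Strategic Merge Patch rather than one of the other two patch flavors:

```shell
# The body is the same partial Pod shown above; only the containers list is merged.
curl -X PATCH -H "Content-Type: application/strategic-merge-patch+json" \
    --data '{"spec":{"containers":[{"name":"log-tailer","image":"log-tailer-1.0"}]}}' \
    http://localhost:8080/api/v1/namespaces/default/pods/pod-name
```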
- -### List Operations - -To override the container list to be strictly replaced, regardless of the default: - -```yaml -containers: - - name: nginx - image: nginx-1.0 - - $patch: replace # any further $patch operations nested in this list will be ignored -``` - -To delete an element of a list that should be merged: - -```yaml -containers: - - name: nginx - image: nginx-1.0 - - $patch: delete - name: log-tailer # merge key and value goes here -``` - -### Map Operations - -To indicate that a map should not be merged and instead should be taken literally: - -```yaml -$patch: replace # recursive and applies to all fields of the map it's in -containers: -- name: nginx - image: nginx-1.0 -``` - -To delete a field of a map: - -```yaml -name: nginx -image: nginx-1.0 -labels: - live: null # set the value of the map key to null -``` - - -Idempotency ------------ - -All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [identifiers.md](identifiers.md) for details. - -Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it. A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed). The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified, and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout` indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). - -Defaulting ----------- - -Default resource values are API version-specific, and they are applied during -the conversion from API-versioned declarative configuration to internal objects -representing the desired state (`Spec`) of the resource. Subsequent GETs of the -resource will include the default values explicitly. - -Incorporating the default values into the `Spec` ensures that `Spec` depicts the -full desired state so that it is easier for the system to determine how to -achieve the state, and for the user to know what to anticipate. - -API version-specific default values are set by the API server. - -Late Initialization -------------------- -Late initialization is when resource fields are set by a system controller -after an object is created/updated. - -For example, the scheduler sets the pod.spec.nodeName field after the pod is created. - -Late-initializers should only make the following types of modifications: - - Setting previously unset fields - - Adding keys to maps - - Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in - go definition of type). -These conventions: - 1. allow a user (with sufficient privilege) to override any system-default behaviors by setting - the fields that would otherwise have been defaulted. - 1. enables updates from users to be merged with changes made during late initialization, using - strategic merge patch, as opposed to clobbering the change. - 1. 
allow the component which does the late-initialization to use strategic merge patch, which - facilitates composition and concurrency of such components. - -Although the apiserver Admission Control stage acts prior to object creation, -Admission Control plugins should follow the Late Initialization conventions -too, to allow their implementation to be later moved to a controller, or to client libraries. - -Concurrency Control and Consistency ------------------------------------ - -Kubernetes leverages the concept of *resource versions* to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed. When a record is about to be updated, it's version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409). - -The resourceVersion is changed by the server every time an object is modified. If resourceVersion is included with the PUT operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value. - -The resourceVersion is currently backed by [etcd's modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/). However, it's important to note that the application should *not* rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of resourceVersion in the future, such as to change it to a timestamp or per-object counter. - -The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests. However, we expect the implementation of resourceVersion to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system. - -In the case of a conflict, the correct client action at this point is to GET the resource again, apply the changes afresh, and try submitting again. This mechanism can be used to prevent races like the following: - -``` -Client #1 Client #2 -GET Foo GET Foo -Set Foo.Bar = "one" Set Foo.Baz = "two" -PUT Foo PUT Foo -``` - -When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost. - -On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo. - -resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching. - -"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources. 
This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent. This is currently the main reason that list operations (GET on a collection) return resourceVersion.
-
-
-Serialization Format
---------------------
-
-APIs may return alternative representations of any resource in response to an Accept header or under alternative endpoints, but the default serialization for input and output of API responses MUST be JSON.
-
-All dates should be serialized as RFC3339 strings.
-
-
-Units
------
-
-Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD.
-
-
-Selecting Fields
-----------------
-
-Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource. The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object.
-
-Examples:
-
-* Find the field "current" in the object "state" in the second item in the array "fields": `fields[1].state.current`
-
-TODO: Plugins, extensions, nested kinds, headers
-
-
-HTTP Status codes
------------------
-
-The server will respond with HTTP status codes that match the HTTP spec. See the section below for a breakdown of the types of status codes the server will send.
-
-The following HTTP status codes may be returned by the API.
-
-#### Success codes
-
-* `200 StatusOK`
-  * Indicates that the request completed successfully.
-* `201 StatusCreated`
-  * Indicates that the request to create kind completed successfully.
-* `204 StatusNoContent`
-  * Indicates that the request completed successfully, and the response contains no body.
-  * Returned in response to HTTP OPTIONS requests.
-
-#### Error codes
-* `307 StatusTemporaryRedirect`
-  * Indicates that the address for the requested resource has changed.
-  * Suggested client recovery behavior:
-    * Follow the redirect.
-* `400 StatusBadRequest`
-  * Indicates that the request is invalid.
-  * Suggested client recovery behavior:
-    * Do not retry. Fix the request.
-* `401 StatusUnauthorized`
-  * Indicates that the server can be reached and understood the request, but refuses to take any further action, because the client must provide authorization. If the client has provided authorization, the server is indicating the provided authorization is unsuitable or invalid.
-  * Suggested client recovery behavior:
-    * If the user has not supplied authorization information, prompt them for the appropriate credentials.
-    * If the user has supplied authorization information, inform them their credentials were rejected and optionally prompt them again.
-* `403 StatusForbidden`
-  * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client.
-  * Suggested client recovery behavior:
-    * Do not retry. Fix the request.
-* `404 StatusNotFound`
-  * Indicates that the requested resource does not exist.
-  * Suggested client recovery behavior:
-    * Do not retry. Fix the request.
-* `405 StatusMethodNotAllowed`
-  * Indicates that the action the client attempted to perform on the resource was not supported by the code.
-  * Suggested client recovery behavior:
-    * Do not retry. Fix the request.
-* `409 StatusConflict`
-  * Indicates that either the resource the client attempted to create already exists or the requested update operation cannot be completed due to a conflict.
-  * Suggested client recovery behavior:
-    * If creating a new resource:
-      * Either change the identifier and try again, or GET and compare the fields in the pre-existing object and issue a PUT/update to modify the existing object.
-    * If updating an existing resource:
-      * See `Conflict` from the `status` response section below on how to retrieve more information about the nature of the conflict.
-      * GET and compare the fields in the pre-existing object, merge changes (if still valid according to preconditions), and retry with the updated request (including `ResourceVersion`).
-* `422 StatusUnprocessableEntity`
-  * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request.
-  * Suggested client recovery behavior:
-    * Do not retry. Fix the request.
-* `429 StatusTooManyRequests`
-  * Indicates that either the client rate limit has been exceeded or the server has received more requests than it can process.
-  * Suggested client recovery behavior:
-    * Read the ```Retry-After``` HTTP header from the response, and wait at least that long before retrying.
-* `500 StatusInternalServerError`
-  * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this may be due to temporary server load or a transient communication issue with another server).
-  * Suggested client recovery behavior:
-    * Retry with exponential backoff.
-* `503 StatusServiceUnavailable`
-  * Indicates that a required service is unavailable.
-  * Suggested client recovery behavior:
-    * Retry with exponential backoff.
-* `504 StatusServerTimeout`
-  * Indicates that the request could not be completed within the given time. Clients can get this response ONLY when they specified a timeout param in the request.
-  * Suggested client recovery behavior:
-    * Increase the value of the timeout param and retry with exponential backoff.
-
-Response Status Kind
---------------------
-
-Kubernetes will always return the ```Status``` kind from any API endpoint when an error occurs.
-Clients SHOULD handle these types of objects when appropriate.
-
-A ```Status``` kind will be returned by the API in two cases:
- * When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code).
- * When an HTTP ```DELETE``` call is successful.
-
-The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.
- -**Example:** -``` -$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana - -> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1 -> User-Agent: curl/7.26.0 -> Host: 10.240.122.184 -> Accept: */* -> Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc -> - -< HTTP/1.1 404 Not Found -< Content-Type: application/json -< Date: Wed, 20 May 2015 18:10:42 GMT -< Content-Length: 232 -< -{ - "kind": "Status", - "apiVersion": "v1", - "metadata": {}, - "status": "Failure", - "message": "pods \"grafana\" not found", - "reason": "NotFound", - "details": { - "name": "grafana", - "kind": "pods" - }, - "code": 404 -} -``` - -```status``` field contains one of two possible values: -* `Success` -* `Failure` - -`message` may contain human-readable description of the error - -```reason``` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. - -```details``` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. - -Possible values for the ```reason``` and ```details``` fields: -* `BadRequest` - * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object. - * This is different than `status reason` `Invalid` above which indicates that the API call could possibly succeed, but the data was invalid. - * API calls that return BadRequest can never succeed. - * Http status code: `400 StatusBadRequest` -* `Unauthorized` - * Indicates that the server can be reached and understood the request, but refuses to take any further action without the client providing appropriate authorization. If the client has provided authorization, this error indicates the provided credentials are insufficient or invalid. - * Details (optional): - * `kind string` - * The kind attribute of the unauthorized resource (on some operations may differ from the requested resource). - * `name string` - * The identifier of the unauthorized resource. - * HTTP status code: `401 StatusUnauthorized` -* `Forbidden` - * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client. - * Details (optional): - * `kind string` - * The kind attribute of the forbidden resource (on some operations may differ from the requested resource). - * `name string` - * The identifier of the forbidden resource. - * HTTP status code: `403 StatusForbidden` -* `NotFound` - * Indicates that one or more resources required for this operation could not be found. - * Details (optional): - * `kind string` - * The kind attribute of the missing resource (on some operations may differ from the requested resource). - * `name string` - * The identifier of the missing resource. - * HTTP status code: `404 StatusNotFound` -* `AlreadyExists` - * Indicates that the resource you are creating already exists. - * Details (optional): - * `kind string` - * The kind attribute of the conflicting resource. - * `name string` - * The identifier of the conflicting resource. 
- * HTTP status code: `409 StatusConflict` -* `Conflict` - * Indicates that the requested update operation cannot be completed due to a conflict. The client may need to alter the request. Each resource may define custom details that indicate the nature of the conflict. - * HTTP status code: `409 StatusConflict` -* `Invalid` - * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request. - * Details (optional): - * `kind string` - * the kind attribute of the invalid resource - * `name string` - * the identifier of the invalid resource - * `causes` - * One or more `StatusCause` entries indicating the data in the provided resource that was invalid. The `reason`, `message`, and `field` attributes will be set. - * HTTP status code: `422 StatusUnprocessableEntity` -* `Timeout` - * Indicates that the request could not be completed within the given time. Clients may receive this response if the server has decided to rate limit the client, or if the server is overloaded and cannot process the request at this time. - * Http status code: `429 TooManyRequests` - * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default. -* `ServerTimeout` - * Indicates that the server can be reached and understood the request, but cannot complete the action in a reasonable time. This maybe due to temporary server load or a transient communication issue with another server. - * Details (optional): - * `kind string` - * The kind attribute of the resource being acted on. - * `name string` - * The operation that is being attempted. - * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default. - * Http status code: `504 StatusServerTimeout` -* `MethodNotAllowed` - * Indicates that that the action the client attempted to perform on the resource was not supported by the code. - * For instance, attempting to delete a resource that can only be created. - * API calls that return MethodNotAllowed can never succeed. - * Http status code: `405 StatusMethodNotAllowed` -* `InternalError` - * Indicates that an internal error occurred, it is unexpected and the outcome of the call is unknown. - * Details (optional): - * `causes` - * The original error. - * Http status code: `500 StatusInternalServerError` - -`code` may contain the suggested HTTP return code for this status. - - -Events ------- - -TODO: Document events (refer to another doc for details) - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-conventions.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/api-conventions.md?pixel)]() diff --git a/release-0.19.0/docs/api.md b/release-0.19.0/docs/api.md deleted file mode 100644 index b721b53f317..00000000000 --- a/release-0.19.0/docs/api.md +++ /dev/null @@ -1,74 +0,0 @@ -# The Kubernetes API - -Primary system and API concepts are documented in the [User guide](user-guide.md). - -Overall API conventions are described in the [API conventions doc](api-conventions.md). - -Complete API details are documented via [Swagger](http://swagger.io/). 
The Kubernetes apiserver (aka "master") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swagger-ui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/). - -Remote access to the API is discussed in the [access doc](accessing_the_api.md). - -The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](kubectl.md) command-line tool can be used to create, update, delete, and get API objects. - -Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources. - -Kubernetes itself is decomposed into multiple components, which interact through its API. - -## API changes - -In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy. - -What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md). - -## API versioning - -Fine-grain resource evolution alone makes it difficult to eliminate fields or restructure resource representations. Therefore, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true. - -Distinct API versions present more clear, consistent views of system resources and behavior than intermingled, independently evolved resources. They also provide a more straightforward mechanism for controlling access to end-of-lifed and/or experimental APIs. - -The [API and release versioning proposal](versioning.md) describes the current thinking on the API version evolution process. - -## v1beta1, v1beta2, and v1beta3 are deprecated; please move to v1 ASAP - -As of June 4, 2015, the Kubernetes v1 API has been enabled by default. The v1beta1 and v1beta2 APIs were deleted on June 1, 2015. v1beta3 is planned to be deleted on July 6, 2015. - -### v1 conversion tips (from v1beta3) - -We're working to convert all documentation and examples to v1. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec. - -Changes to services are the most significant difference between v1beta3 and v1. -* The `service.spec.portalIP` property is renamed to `service.spec.clusterIP`. -* The `service.spec.createExternalLoadBalancer` property is removed. Specify `service.spec.type: "LoadBalancer"` to create an external load balancer instead. 
-* The `service.spec.publicIPs` property is deprecated and now called `service.spec.deprecatedPublicIPs`. This property will be removed entirely when v1beta3 is removed. The vast majority of users of this field were using it to expose services on ports on the node. Those users should specify `service.spec.type: "NodePort"` instead. Read [External Services](services.md#external-services) for more info. If this is not sufficient for your use case, please file an issue or contact @thockin.
-
-Some other differences between v1beta3 and v1:
-* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](security_context.md).
-* The `pod.spec.host` property is renamed to `pod.spec.nodeName`.
-* The `endpoints.subsets[*].addresses.IP` property is renamed to `endpoints.subsets[*].addresses.ip`.
-* The `pod.status.containerStatuses[*].state.termination` and `pod.status.containerStatuses[*].lastState.termination` properties are renamed to `pod.status.containerStatuses[*].state.terminated` and `pod.status.containerStatuses[*].lastState.terminated` respectively.
-* The `pod.status.Condition` property is renamed to `pod.status.conditions`.
-* The `status.details.id` property is renamed to `status.details.name`.
-
-### v1beta3 conversion tips (from v1beta1/2)
-
-Some important differences between v1beta1/2 and v1beta3:
-* The resource `id` is now called `name`.
-* `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata`
-* `desiredState` is now called `spec`, and `currentState` is now called `status`
-* `/minions` has been moved to `/nodes`, and the resource has kind `Node`
-* The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}`. If you were not using a namespace before, use `default` here.
-* The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`.
-* To watch for changes to a resource, open an HTTP or Websocket connection to the collection query and provide the `?watch=true` query parameter along with the desired `resourceVersion` parameter to watch from.
-* The `labels` query parameter has been renamed to `label-selector`.
-* The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`.
-* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores).
-* Restart policy is represented simply as a string (e.g., `"Always"`) rather than as a nested map (`always{}`).
-* Pull policies changed from `PullAlways`, `PullNever`, and `PullIfNotPresent` to `Always`, `Never`, and `IfNotPresent`.
-* The volume `source` is inlined into `volume` rather than nested.
-* Host volumes have been changed from `hostDir` to `hostPath` to better reflect that they can be files or directories.
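As an illustrative sketch of the conversion workflow (the manifest file name is a placeholder; `kube-version-change` and `kubectl create --validate` are the tools referenced above and in [cluster_management.md](cluster_management.md#switching-your-config-files-to-a-new-api-version)):

```
# Build the version-conversion utility from the source tree, convert a
# v1beta3 manifest to v1, then check the result against the Swagger schema.
hack/build-go.sh cmd/kube-version-change
_output/local/go/bin/kube-version-change -i my-service.v1beta3.yaml -o my-service.v1.yaml
kubectl create --validate -f my-service.v1.yaml
```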
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/api.md?pixel)]()
diff --git a/release-0.19.0/docs/architecture.dia b/release-0.19.0/docs/architecture.dia
deleted file mode 100644
index 26e0eed22e6..00000000000
Binary files a/release-0.19.0/docs/architecture.dia and /dev/null differ
diff --git a/release-0.19.0/docs/architecture.png b/release-0.19.0/docs/architecture.png
deleted file mode 100644
index fa39039aaff..00000000000
Binary files a/release-0.19.0/docs/architecture.png and /dev/null differ
diff --git a/release-0.19.0/docs/architecture.svg b/release-0.19.0/docs/architecture.svg
deleted file mode 100644
index 825c0ace8fb..00000000000
--- a/release-0.19.0/docs/architecture.svg
+++ /dev/null
@@ -1,499 +0,0 @@
-[architecture.svg: cluster architecture diagram. It shows the master components (REST APIs for pods, services, and replication controllers; authentication and authorization; scheduler; replication controller; distributed watchable storage implemented via etcd), kubectl user commands entering through a firewall from the Internet, and multiple nodes each running kubelet, a proxy, Docker, cAdvisor, and pods of containers. SVG markup omitted.]
diff --git a/release-0.19.0/docs/authentication.md b/release-0.19.0/docs/authentication.md
deleted file mode 100644
index a1e8b91de64..00000000000
--- a/release-0.19.0/docs/authentication.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Authentication Plugins
-
-Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.
-
-Client certificate authentication is enabled by passing the `--client_ca_file=SOMEFILE`
-option to apiserver. The referenced file must contain one or more certificate authorities
-to use to validate client certificates presented to the apiserver. If a client certificate
-is presented and verified, the common name of the subject is used as the user name for the
-request.
-
-Token authentication is enabled by passing the `--token_auth_file=SOMEFILE` option
-to apiserver. Currently, tokens last indefinitely, and the token list cannot
-be changed without restarting apiserver. We plan in the future for tokens to
-be short-lived, and to be generated as needed rather than stored in a file.
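As a purely hypothetical sketch (the token value, user name, uid, file path, and server address are invented; the file format and request header are the ones described in the next paragraphs):

```
# One line of a token file; columns are: token, user name, user uid.
echo '31ada4fd-adec-460c-809a-9e56ceb75269,alice,alice-uid' > /srv/kubernetes/known_tokens.csv

# The apiserver would then be started with (all other flags omitted):
#   --token_auth_file=/srv/kubernetes/known_tokens.csv

# A client presents the token as a bearer token on each request.
curl https://your-apiserver:443/api \
  --header "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
  --cacert ca.crt
```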
- -The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...` -and is a csv file with 3 columns: token, user name, user uid. - -When using token authentication from an http client the apiserver expects an `Authorization` -header with a value of `Bearer SOMETOKEN`. - -Basic authentication is enabled by passing the `--basic_auth_file=SOMEFILE` -option to apiserver. Currently, the basic auth credentials last indefinitely, -and the password cannot be changed without restarting apiserver. Note that basic -authentication is currently supported for convenience while we finish making the -more secure modes described above easier to use. - -The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...` -and is a csv file with 3 columns: password, user name, user id. - -When using basic authentication from an http client the apiserver expects an `Authorization` header -with a value of `Basic BASE64ENCODEDUSER:PASSWORD`. - -## Plugin Development - -We plan for the Kubernetes API server to issue tokens -after the user has been (re)authenticated by a *bedrock* authentication -provider external to Kubernetes. We plan to make it easy to develop modules -that interface between kubernetes and a bedrock authentication provider (e.g. -github.com, google.com, enterprise directory, kerberos, etc.) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/authentication.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/authentication.md?pixel)]() diff --git a/release-0.19.0/docs/authorization.md b/release-0.19.0/docs/authorization.md deleted file mode 100644 index 80f42173f34..00000000000 --- a/release-0.19.0/docs/authorization.md +++ /dev/null @@ -1,109 +0,0 @@ -# Authorization Plugins - - -In Kubernetes, authorization happens as a separate step from authentication. -See the [authentication documentation](./authentication.md) for an -overview of authentication. - -Authorization applies to all HTTP accesses on the main apiserver port. (The -readonly port is not currently subject to authorization, but is planned to be -removed soon.) - -The authorization check for any request compares attributes of the context of -the request, (such as user, resource, and namespace) with access -policies. An API call must be allowed by some policy in order to proceed. - -The following implementations are available, and are selected by flag: - - `--authorization_mode=AlwaysDeny` - - `--authorization_mode=AlwaysAllow` - - `--authorization_mode=ABAC` - -`AlwaysDeny` blocks all requests (used in tests). -`AlwaysAllow` allows all requests; use if you don't need authorization. -`ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control. - -## ABAC Mode -### Request Attributes - -A request has 4 attributes that can be considered for authorization: - - user (the user-string which a user was authenticated as). - - whether the request is readonly (GETs are readonly) - - what resource is being accessed - - applies only to the API endpoints, such as - `/api/v1/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the - resource is the empty string. - - the namespace of the object being access, or the empty string if the - endpoint does not support namespaced objects. - -We anticipate adding more attributes to allow finer grained access control and -to assist in policy management. 
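To make the attribute model concrete, here is one hypothetical request and the attribute tuple that would be derived from it (the user and namespace are illustrative; the policy format that consumes these attributes is described next):

```
# A read-only request for pods in the "projectCaribou" namespace, issued by
# a client authenticated as "bob" (authentication details omitted here).
curl https://your-apiserver:443/api/v1/namespaces/projectCaribou/pods
# Derived attributes:
#   user:      bob
#   readonly:  true             (the request is a GET)
#   resource:  pods
#   namespace: projectCaribou
```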
- -### Policy File Format - -For mode `ABAC`, also specify `--authorization_policy_file=SOME_FILENAME`. - -The file format is [one JSON object per line](http://jsonlines.org/). There should be no enclosing list or map, just -one map per line. - -Each line is a "policy object". A policy object is a map with the following properties: - - `user`, type string; the user-string from `--token_auth_file` - - `readonly`, type boolean, when true, means that the policy only applies to GET - operations. - - `resource`, type string; a resource from an URL, such as `pods`. - - `namespace`, type string; a namespace string. - -An unset property is the same as a property set to the zero value for its type (e.g. empty string, 0, false). -However, unset should be preferred for readability. - -In the future, policies may be expressed in a JSON format, and managed via a REST -interface. - -### Authorization Algorithm - -A request has attributes which correspond to the properties of a policy object. - -When a request is received, the attributes are determined. Unknown attributes -are set to the zero value of its type (e.g. empty string, 0, false). - -An unset property will match any value of the corresponding -attribute. An unset attribute will match any value of the corresponding property. - -The tuple of attributes is checked for a match against every policy in the policy file. -If at least one line matches the request attributes, then the request is authorized (but may fail later validation). - -To permit any user to do something, write a policy with the user property unset. -To permit an action Policy with an unset namespace applies regardless of namespace. - -### Examples - 1. Alice can do anything: `{"user":"alice"}` - 2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}` - 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}` - 4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}` - -[Complete file example](../pkg/auth/authorizer/abac/example_policy_file.jsonl) - -## Plugin Development - -Other implementations can be developed fairly easily. -The APIserver calls the Authorizer interface: -```go -type Authorizer interface { - Authorize(a Attributes) error -} -``` -to determine whether or not to allow each API action. - -An authorization plugin is a module that implements this interface. -Authorization plugin code goes in `pkg/auth/authorization/$MODULENAME`. - -An authorization module can be completely implemented in go, or can call out -to a remote authorization service. Authorization modules can implement -their own caching to reduce the cost of repeated authorization calls with the -same or similar arguments. Developers should then consider the interaction between -caching and revocation of permissions. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/authorization.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/authorization.md?pixel)]() diff --git a/release-0.19.0/docs/availability.md b/release-0.19.0/docs/availability.md deleted file mode 100644 index 4ac0da8a061..00000000000 --- a/release-0.19.0/docs/availability.md +++ /dev/null @@ -1,136 +0,0 @@ -# Availability - -This document collects advice on reasoning about and provisioning for high-availability when using Kubernetes clusters. 
- -## Failure modes - -This is an incomplete list of things that could go wrong, and how to deal with them. - -Root causes: - - VM(s) shutdown - - network partition within cluster, or between cluster and users. - - crashes in Kubernetes software - - data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume). - - operator error misconfigures kubernetes software or application software. - -Specific scenarios: - - Apiserver VM shutdown or apiserver crashing - - Results - - unable to stop, update, or start new pods, services, replication controller - - existing pods and services should continue to work normally, unless they depend on the Kubernetes API - - Apiserver backing storage lost - - Results - - apiserver should fail to come up. - - kubelets will not be able to reach it but will continute to run the same pods and provide the same service proxying. - - manual recovery or recreation of apiserver state necessary before apiserver is restarted. - - Supporting services (node controller, replication controller manager, scheduler, etc) VM shutdown or crashes - - currently those are colocated with the apiserver, and their unavailability has similar consequences as apiserver - - in future, these will be replicated as well and may not be co-located - - they do not have own persistent state - - Node (thing that runs kubelet and kube-proxy and pods) shutdown - - Results - - pods on that Node stop running - - Kubelet software fault - - Results - - crashing kubelet cannot start new pods on the node - - kubelet might delete the pods or not - - node marked unhealthy - - replication controllers start new pods elsewhere - - Cluster operator error - - Results: - - loss of pods, services, etc - - lost of apiserver backing store - - users unable to read API - - etc - -Mitigations: -- Action: Use IaaS providers automatic VM restarting feature for IaaS VMs. - - Mitigates: Apiserver VM shutdown or apiserver crashing - - Mitigates: Supporting services VM shutdown or crashes - -- Action use IaaS providers reliable storage (e.g GCE PD or AWS EBS volume) for VMs with apiserver+etcd. - - Mitigates: Apiserver backing storage lost - -- Action: Use Replicated APIserver feature (when complete: feature is planned but not implemented) - - Mitigates: Apiserver VM shutdown or apiserver crashing - - Will tolerate one or more simultaneous apiserver failures. - - Mitigates: Apiserver backing storage lost - - Each apiserver has independent storage. Etcd will recover from loss of one member. Risk of total data loss greatly reduced. - -- Action: Snapshot apiserver PDs/EBS-volumes periodically - - Mitigates: Apiserver backing storage lost - - Mitigates: Some cases of operator error - - Mitigates: Some cases of kubernetes software fault - -- Action: use replication controller and services in front of pods - - Mitigates: Node shutdown - - Mitigates: Kubelet software fault - -- Action: applications (containers) designed to tolerate unexpected restarts - - Mitigates: Node shutdown - - Mitigates: Kubelet software fault - -- Action: Multiple independent clusters (and avoid making risky changes to all clusters at once) - - Mitigates: Everything listed above. - -## Choosing Multiple Kubernetes Clusters - -You may want to set up multiple kubernetes clusters, both to -have clusters in different regions to be nearer to your users; and to tolerate failures and/or invasive maintenance. 
-
-### Scope of a single cluster
-
-On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a
-[zone](https://cloud.google.com/compute/docs/zones) or [availability
-zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html).
-We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because:
-  - compared to having a single global Kubernetes cluster, there are fewer single-points of failure
-  - compared to a cluster that spans availability zones, it is easier to reason about the availability properties of a
-    single-zone cluster.
-  - when the Kubernetes developers are designing the system (e.g. making assumptions about latency, bandwidth, or
-    correlated failures) they are assuming all the machines are in a single data center, or otherwise closely connected.
-
-It is okay to have multiple clusters per availability zone, though on balance we think fewer is better.
-Reasons to prefer fewer clusters are:
-  - improved bin packing of Pods in some cases with more nodes in one cluster.
-  - reduced operational overhead (though the advantage is diminished as ops tooling and processes mature).
-  - reduced costs for per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage
-    of overall cluster cost for medium to large clusters).
-
-Reasons to have multiple clusters include:
-  - strict security policies requiring isolation of one class of work from another (but, see Partitioning Clusters
-    below).
-  - test clusters to canary new Kubernetes releases or other cluster software.
-
-### Selecting the right number of clusters
-The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
-By contrast, the number of nodes in a cluster and the number of pods in a service may change frequently according to
-load and growth.
-
-To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
-on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
-be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
-Call the number of regions to be in `R`.
-
-Second, decide how many clusters should be able to be unavailable at the same time, while still being available. Call
-the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.
-
-If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
-you need `R + U` clusters. If it is not (e.g. you want to ensure low latency for all users in the event of a
-cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
-
-Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
-you may need even more clusters. Our [roadmap](http://docs.k8s.io/roadmap.md)
-calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015.
- -## Working with multiple clusters - -When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those -service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer), so that -failures of a single cluster are not visible to end users. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/availability.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/availability.md?pixel)]() diff --git a/release-0.19.0/docs/cli-roadmap.md b/release-0.19.0/docs/cli-roadmap.md deleted file mode 100644 index 402eb0aecfe..00000000000 --- a/release-0.19.0/docs/cli-roadmap.md +++ /dev/null @@ -1,84 +0,0 @@ -# Kubernetes CLI/Configuration Roadmap - -See also issues with the following labels: -* [area/config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area%2Fconfig-deployment) -* [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2FCLI) -* [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2Fclient) - -1. Create services before other objects, or at least before objects that depend upon them. Namespace-relative DNS mitigates this some, but most users are still using service environment variables. [#1768](https://github.com/GoogleCloudPlatform/kubernetes/issues/1768) -1. Finish rolling update [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353) - 1. Friendly to auto-scaling [#2863](https://github.com/GoogleCloudPlatform/kubernetes/pull/2863#issuecomment-69701562) - 1. Rollback (make rolling-update reversible, and complete an in-progress rolling update by taking 2 replication controller names rather than always taking a file) - 1. Rollover (replace multiple replication controllers with one, such as to clean up an aborted partial rollout) - 1. Write a ReplicationController generator to derive the new ReplicationController from an old one (e.g., `--image-version=newversion`, which would apply a name suffix, update a label value, and apply an image tag) - 1. Use readiness [#620](https://github.com/GoogleCloudPlatform/kubernetes/issues/620) - 1. Perhaps factor this in a way that it can be shared with [Openshift’s deployment controller](https://github.com/GoogleCloudPlatform/kubernetes/issues/1743) - 1. Rolling update service as a plugin -1. Kind-based filtering on object streams -- only operate on the kinds of objects specified. This would make directory-based kubectl operations much more useful. Users should be able to instantiate the example applications using `kubectl create -f ...` -1. Improved pretty printing of endpoints, such as in the case that there are more than a few endpoints -1. Service address/port lookup command(s) -1. List supported resources -1. Swagger lookups [#3060](https://github.com/GoogleCloudPlatform/kubernetes/issues/3060) -1. --name, --name-suffix applied during creation and updates -1. --labels and opinionated label injection: --app=foo, --tier={fe,cache,be,db}, --uservice=redis, --env={dev,test,prod}, --stage={canary,final}, --track={hourly,daily,weekly}, --release=0.4.3c2. Exact ones TBD. We could allow arbitrary values -- the keys are important. The actual label keys would be (optionally?) namespaced with kubectl.kubernetes.io/, or perhaps the user’s namespace. -1. --annotations and opinionated annotation injection: --description, --revision -1. Imperative updates. 
We'll want to optionally make these safe(r) by supporting preconditions based on the current value and resourceVersion. - 1. annotation updates similar to label updates - 1. other custom commands for common imperative updates - 1. more user-friendly (but still generic) on-command-line json for patch -1. We also want to support the following flavors of more general updates: - 1. whichever we don’t support: - 1. safe update: update the full resource, guarded by resourceVersion precondition (and perhaps selected value-based preconditions) - 1. forced update: update the full resource, blowing away the previous Spec without preconditions; delete and re-create if necessary - 1. diff/dryrun: Compare new config with current Spec [#6284](https://github.com/GoogleCloudPlatform/kubernetes/issues/6284) - 1. submit/apply/reconcile/ensure/merge: Merge user-provided fields with current Spec. Keep track of user-provided fields using an annotation -- see [#1702](https://github.com/GoogleCloudPlatform/kubernetes/issues/1702). Delete all objects with deployment-specific labels. -1. --dry-run for all commands -1. Support full label selection syntax, including support for namespaces. -1. Wait on conditions [#1899](https://github.com/GoogleCloudPlatform/kubernetes/issues/1899) -1. Make kubectl scriptable: make output and exit code behavior consistent and useful for wrapping in workflows and piping back into kubectl and/or xargs (e.g., dump full URLs?, distinguish permanent and retry-able failure, identify objects that should be retried) - 1. Here's [an example](http://techoverflow.net/blog/2013/10/22/docker-remove-all-images-and-containers/) where multiple objects on the command line and an option to dump object names only (`-q`) would be useful in combination. [#5906](https://github.com/GoogleCloudPlatform/kubernetes/issues/5906) -1. Easy generation of clean configuration files from existing objects (including containers -- podex) -- remove readonly fields, status - 1. Export from one namespace, import into another is an important use case -1. Derive objects from other objects - 1. pod clone - 1. rc from pod - 1. --labels-from (services from pods or rcs) -1. Kind discovery (i.e., operate on objects of all kinds) [#5278](https://github.com/GoogleCloudPlatform/kubernetes/issues/5278) -1. A fairly general-purpose way to specify fields on the command line during creation and update, not just from a config file -1. Extensible API-based generator framework (i.e. invoke generators via an API/URL rather than building them into kubectl), so that complex client libraries don’t need to be rewritten in multiple languages, and so that the abstractions are available through all interfaces: API, CLI, UI, logs, ... [#5280](https://github.com/GoogleCloudPlatform/kubernetes/issues/5280) - 1. Need schema registry, and some way to invoke generator (e.g., using a container) - 1. Convert run command to API-based generator -1. Transformation framework - 1. More intelligent defaulting of fields (e.g., [#2643](https://github.com/GoogleCloudPlatform/kubernetes/issues/2643)) -1. Update preconditions based on the values of arbitrary object fields. -1. Deployment manager compatibility on GCP: [#3685](https://github.com/GoogleCloudPlatform/kubernetes/issues/3685) -1. Describe multiple objects, multiple kinds of objects [#5905](https://github.com/GoogleCloudPlatform/kubernetes/issues/5905) -1. 
Support yaml document separator [#5840](https://github.com/GoogleCloudPlatform/kubernetes/issues/5840) - -TODO: -* watch -* attach [#1521](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521) -* image/registry commands -* do any other server paths make sense? validate? generic curl functionality? -* template parameterization -* dynamic/runtime configuration - -Server-side support: - -1. Default selectors from labels [#1698](https://github.com/GoogleCloudPlatform/kubernetes/issues/1698#issuecomment-71048278) -1. Stop [#1535](https://github.com/GoogleCloudPlatform/kubernetes/issues/1535) -1. Deleted objects [#2789](https://github.com/GoogleCloudPlatform/kubernetes/issues/2789) -1. Clone [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170) -1. Resize [#1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629) -1. Useful /operations API: wait for finalization/reification -1. List supported resources [#2057](https://github.com/GoogleCloudPlatform/kubernetes/issues/2057) -1. Reverse label lookup [#1348](https://github.com/GoogleCloudPlatform/kubernetes/issues/1348) -1. Field selection [#1362](https://github.com/GoogleCloudPlatform/kubernetes/issues/1362) -1. Field filtering [#1459](https://github.com/GoogleCloudPlatform/kubernetes/issues/1459) -1. Operate on uids - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cli-roadmap.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/cli-roadmap.md?pixel)]() diff --git a/release-0.19.0/docs/client-libraries.md b/release-0.19.0/docs/client-libraries.md deleted file mode 100644 index d5e087801d9..00000000000 --- a/release-0.19.0/docs/client-libraries.md +++ /dev/null @@ -1,20 +0,0 @@ -## kubernetes API client libraries - -### Supported - * [Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client) - -### User Contributed -*Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team* - - * [Java](https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api) - * [Ruby1](https://github.com/Ch00k/kuber) - * [Ruby2](https://github.com/abonas/kubeclient) - * [PHP](https://github.com/devstub/kubernetes-api-php-client) - * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/client-libraries.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/client-libraries.md?pixel)]() diff --git a/release-0.19.0/docs/cluster-admin-guide.md b/release-0.19.0/docs/cluster-admin-guide.md deleted file mode 100644 index 388636c74a4..00000000000 --- a/release-0.19.0/docs/cluster-admin-guide.md +++ /dev/null @@ -1,80 +0,0 @@ -# Kubernetes Cluster Admin Guide - -The cluster admin guide is for anyone creating or administering a Kubernetes cluster. -It assumes some familiarity with concepts in the [User Guide](user-guide.md). - -## Planning a cluster - -There are many different examples of how to setup a kubernetes cluster. Many of them are listed in this -[matrix](getting-started-guides/README.md). We call each of the combinations in this matrix a *distro*. - -Before chosing a particular guide, here are some things to consider: - - Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both - models are supported, but some distros are better for one case or the other. 
- - Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting - one up yourself? - - Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We - recommend setting up multiple clusters rather than spanning distant locations. - - Will you be running Kubernetes on "bare metal" or virtual machines? Kubernetes supports both, via different distros. - - Do you just want to run a cluster, or do you expect to do active development of kubernetes project code? If the - latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but - offer is a greater variety of choices. - - Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of - Kubernetes. - - If you are configuring kubernetes on-premises, you will need to consider what [networking - model](networking.md) fits best. - - If you are designing for very [high-availability](availability.md), you may want multiple clusters in multiple zones. - -## Setting up a cluster - -Pick one of the Getting Started Guides from the [matrix](getting-started-guides/README.md) and follow it. -If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides. - -One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](ovs-networking.md)), which -uses OpenVSwitch to set up networking between pods across - Kubernetes nodes. - -If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes -project.](salt.md). - -## Upgrading a cluster -[Upgrading a cluster](cluster_management.md). - -## Managing nodes - -[Managing nodes](node.md). - -## Optional Cluster Services - -* **DNS Integration with SkyDNS** ([dns.md](dns.md)): - Resolving a DNS name directly to a Kubernetes service. - -* **Logging** with [Kibana](logging.md) - -## Multi-tenant support - -* **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different - projects, teams, or customers to share a kubernetes cluster. - -* **Resource Quota** ([resource_quota_admin.md](resource_quota_admin.md)) - -## Security - -* **Kubernetes Container Environment** ([container-environment.md](container-environment.md)): - Describes the environment for Kubelet managed containers on a Kubernetes - node. - -* **Securing access to the API Server** [accessing the api]( accessing_the_api.md) - -* **Authentication** [authentication]( authentication.md) - -* **Authorization** [authorization]( authorization.md) - -* **Admission Controllers** [admission_controllers]( admission_controllers.md) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster-admin-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/cluster-admin-guide.md?pixel)]() diff --git a/release-0.19.0/docs/cluster_management.md b/release-0.19.0/docs/cluster_management.md deleted file mode 100644 index 56470f0d978..00000000000 --- a/release-0.19.0/docs/cluster_management.md +++ /dev/null @@ -1,65 +0,0 @@ -# Cluster Management - -This doc is in progress. - -## Upgrading a cluster - -The `cluster/kube-push.sh` script will do a rudimentary update; it is a 1.0 roadmap item to have a robust live cluster update system. - -## Updgrading to a different API version - -There is a sequence of steps to upgrade to a new API version. - -1. 
Turn on the new api version -2. Upgrade the cluster's storage to use the new version. -3. Upgrade all config files. Identify users of the old api version endpoints. -4. Update existing objects in the storage to new version by running cluster/update-storage-objects.sh -3. Turn off the old version. - -### Turn on or off an API version for your cluster - -Specific API versions can be turned on or off by passing --runtime-config=api/ flag while bringing up the server. For example: to turn off v1 API, pass --runtime-config=api/v1=false. -runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively. For example, for turning off all api versions except v1, pass --runtime-config=api/all=false,api/v1=true. - -### Switching your cluster's storage API version - -KUBE_API_VERSIONS env var controls the API versions that are supported in the cluster. The first version in the list is used as the cluster's storage version. Hence, to set a specific version as the storage version, bring it to the front of list of versions in the value of KUBE_API_VERSIONS. - -### Switching your config files to a new API version - -You can use the kube-version-change utility to convert config files between different API versions. - -``` -$ hack/build-go.sh cmd/kube-version-change -$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml -``` - -### Maintenance on a Node - -If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is -brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer, -then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding -replication controller, then a new copy of the pod will be started on a different node. So, in the case where all -pods are replicated, upgrades can be done without special coordination. - -If you want more control over the upgrading process, you may use the following workflow: - 1. Mark the node to be rebooted as unschedulable: - `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'`. - This keeps new pods from landing on the node while you are trying to get them off. - 1. Get the pods off the machine, via any of the following strategies: - 1. wait for finite-duration pods to complete - 1. delete pods with `kubectl delete pods $PODNAME` - 1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod. - 1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it. - 1. Work on the node - 1. Make the node schedulable again: - `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'`. - If you deleted the node's VM instance and created a new one, then a new schedulable node resource will - be created automatically when you create a new VM instance (if you're using a cloud provider that supports - node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md). 
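Putting those steps together, a maintenance session might look like the following sketch (the node and pod names are placeholders, and it assumes the affected pods are either replicated or safe to delete):

```
# Keep new pods off the node while it is being worked on.
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'

# Move the workload off; pods owned by a replication controller are
# recreated on other schedulable nodes after deletion.
kubectl get pods
kubectl delete pods $PODNAME

# ...reboot, upgrade the kernel, repair hardware, etc...

# Let the scheduler place pods on the node again.
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
```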
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster_management.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/cluster_management.md?pixel)]() diff --git a/release-0.19.0/docs/container-environment.md b/release-0.19.0/docs/container-environment.md deleted file mode 100644 index 2e1cf3d2821..00000000000 --- a/release-0.19.0/docs/container-environment.md +++ /dev/null @@ -1,94 +0,0 @@ - -# Kubernetes Container Environment - -## Overview -This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode).  In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.  - -This cluster information makes it possible to build applications that are *cluster aware*.   -Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers.  Container hooks are somewhat analogous to operating system signals in a traditional process model.   However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster.  Containers that participate in this cluster lifecycle become *cluster native*.  - -Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](./images.md) and one or more [volumes](./volumes.md). - - -The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system. - -## Cluster Information -There are two types of information that are available within the container environment.  There is information about the container itself, and there is information about other objects in the system. - -### Container Information -Currently, the only information about the container that is available to the container is the Pod name for the pod in which the container is running.  This ID is set as the hostname of the container, and is accessible through all calls to access the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc).  Additionally, user-defined environment variables from the pod definition, are also available to the container, as are any environment variables specified statically in the Docker image. - -In the future, we anticipate expanding this information with richer information about the container.  Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server. - -### Cluster Information -Currently the list of all services that are running at the time when the container was created via the Kubernetes Cluster API are available to the container as environment variables.  The set of environment variables matches the syntax of Docker links. - -For a service named **foo** that maps to a container port named **bar**, the following variables are defined: - -```sh -FOO_SERVICE_HOST= -FOO_SERVICE_PORT= -``` - -Going forward, we expect that Services will have a dedicated IP address.  In that context, we will also surface services to the container via DNS.  
Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
-
-## Container Hooks
-*NB*: Container hooks are under active development; we anticipate adding additional hooks as the Kubernetes container management system evolves.
-
-Container hooks provide information to the container about events in its management lifecycle. For example, immediately after a container is started, it receives a *PostStart* hook. These hooks are broadcast *into* the container with information about the life-cycle of the container. They are different from the events provided by Docker and other systems, which are *output* from the container. Output events provide a log of what has already happened. Input hooks provide real-time notification about things that are happening, but no historical log.
-
-### Hook Details
-There are currently two container hooks that are surfaced to containers, and two proposed hooks:
-
-*PreStart* - **Proposed**
-
-This hook is sent immediately before a container is created. It notifies that the container will be created immediately after the call completes. No parameters are passed. *Note*: some event handlers (namely ‘exec’) are incompatible with this event.
-
-*PostStart*
-
-This hook is sent immediately after a container is created. It notifies the container that it has been created. No parameters are passed to the handler.
-
-*PostRestart* - **Proposed**
-
-This hook is called before the PostStart handler, when a container has been restarted rather than started for the first time. No parameters are passed to the handler.
-
-*PreStop*
-
-This hook is called immediately before a container is terminated. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent.
-
-A single parameter named `reason` is passed to the handler, containing the reason for termination. Currently the valid values for reason are:
-
-* ```Delete``` - indicating an API call to delete the pod containing this container.
-* ```Health``` - indicating that a health check of the container failed.
-* ```Dependency``` - indicating that a dependency for the container or the pod is missing, and thus the container needs to be restarted. Examples include the pod infra container crashing, or a persistent disk failing for a container that mounts PD.
-
-Eventually, user-specified reasons may be [added to the API](https://github.com/GoogleCloudPlatform/kubernetes/issues/137).
-
-
-### Hook Handler Execution
-When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod. If your hook handler blocks, no other management (including health checks) will occur until the hook handler completes. Blocking hook handlers do *not* affect management of other pods. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long-running commands make sense (e.g. saving state prior to container stop).
-
-For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs. The details of this parameter passing are handler-implementation dependent (see below). 
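To make the handler side concrete: a container that opts for the HTTP handler type (described under "Hook Handler Implementations" below) only needs to expose an endpoint that reads the hook parameters from query arguments. The following is a minimal, self-contained sketch; the port and path are purely illustrative, not anything Kubernetes mandates:

```go
package main

import (
	"log"
	"net/http"
)

// A minimal HTTP hook receiver. The management system passes hook parameters
// as query arguments, e.g. GET /hooks/pre-stop?reason=HEALTH.
func main() {
	http.HandleFunc("/hooks/pre-stop", func(w http.ResponseWriter, r *http.Request) {
		reason := r.URL.Query().Get("reason")
		log.Printf("PreStop hook received, reason=%q; flushing state before shutdown", reason)
		w.WriteHeader(http.StatusOK) // a non-5xx response tells the kubelet the hook succeeded
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```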
-
-### Hook delivery guarantees
-Hook delivery is "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop"), and it is up to the hook implementer to handle this
-correctly.
-
-We expect double delivery to be rare, but in some cases, if the ```kubelet``` restarts in the middle of sending a hook, the hook may be resent after the kubelet comes back up.
-
-Likewise, we only make a single delivery attempt. If (for example) an HTTP hook receiver is down and unable to take traffic, we do not make any attempt to resend.
-
-### Hook Handler Implementations
-Hook handlers are the way that hooks are surfaced to containers. Containers can select the type of hook handler they would like to implement. Kubernetes currently supports two different hook handler types:
-
- * Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroup and namespaces of the container. Resources consumed by the command are counted against the container. Commands which print "ok" to standard out (stdout) are treated as healthy; any other output is treated as a container failure (and will cause the kubelet to forcibly restart the container). Parameters are passed to the command as traditional Linux command-line flags (e.g. pre-stop.sh --reason=HEALTH).
-
- * HTTP - Executes an HTTP request against a specific endpoint on the container. HTTP error codes (5xx) and non-response/failure to connect are treated as container failures. Parameters are passed to the HTTP endpoint as query args (e.g. http://some.server.com/some/path?reason=HEALTH).
-
-[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/container-environment.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/container-environment.md?pixel)]() diff --git a/release-0.19.0/docs/containers.md b/release-0.19.0/docs/containers.md deleted file mode 100644 index c77ea80ca6e..00000000000 --- a/release-0.19.0/docs/containers.md +++ /dev/null @@ -1,95 +0,0 @@ -# Containers with Kubernetes
-
-## Containers and commands
-
-So far the Pods we've seen have all used the `image` field to indicate what process Kubernetes
-should run in a container. In this case, Kubernetes runs the image's default command. If we want
-to run a particular command or override the image's defaults, there are two additional fields that
-we can use:
-
-1. `Command`: Controls the actual command run by the image
-2. `Args`: Controls the arguments passed to the command
-
-### How docker handles command and arguments
-
-Docker images have metadata associated with them that is used to store information about the image.
-The image author may use this to define defaults for the command and arguments to run a container
-when the user does not supply values. Docker calls the fields for commands and arguments
-`Entrypoint` and `Cmd` respectively. The full details of this feature are too complicated to
-describe here, mostly because the docker API allows users to specify both of these
-fields as either a string array or a string, and there are subtle differences in how those cases are
-handled. We encourage the curious to check out [docker's documentation]() for this feature.
-
-Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args
-(docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are:
-
-1. 
If you do not supply a `Command` or `Args` for a container, the defaults defined by the image - will be used -2. If you supply a `Command` but no `Args` for a container, only the supplied `Command` will be - used; the image's default arguments are ignored -3. If you supply only `Args`, the image's default command will be used with the arguments you - supply -4. If you supply a `Command` **and** `Args`, the image's defaults will be ignored and the values - you supply will be used - -Here are examples for these rules in table format - -| Image `Entrypoint` | Image `Cmd` | Container `Command` | Container `Args` | Command Run | -|--------------------|------------------|---------------------|--------------------|------------------| -| `[/ep-1]` | `[foo bar]` | <not set> | <not set> | `[ep-1 foo bar]` | -| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | <not set> | `[ep-2]` | -| `[/ep-1]` | `[foo bar]` | <not set> | `[zoo boo]` | `[ep-1 zoo boo]` | -| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` | - - -## Capabilities - -By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. We can have fine grain control over the capabilities using cap-add and cap-drop.More details [here](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration). - -The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html) - -| Docker's capabilities | Linux capabilities | -| ---- | ---- | -| SETPCAP | CAP_SETPCAP | -| SYS_MODULE | CAP_SYS_MODULE | -| SYS_RAWIO | CAP_SYS_RAWIO | -| SYS_PACCT | CAP_SYS_PACCT | -| SYS_ADMIN | CAP_SYS_ADMIN | -| SYS_NICE | CAP_SYS_NICE | -| SYS_RESOURCE | CAP_SYS_RESOURCE | -| SYS_TIME | CAP_SYS_TIME | -| SYS_TTY_CONFIG | CAP_SYS_TTY_CONFIG | -| MKNOD | CAP_MKNOD | -| AUDIT_WRITE | CAP_AUDIT_WRITE | -| AUDIT_CONTROL | CAP_AUDIT_CONTROL | -| MAC_OVERRIDE | CAP_MAC_OVERRIDE | -| MAC_ADMIN | CAP_MAC_ADMIN | -| NET_ADMIN | CAP_NET_ADMIN | -| SYSLOG | CAP_SYSLOG | -| CHOWN | CAP_CHOWN | -| NET_RAW | CAP_NET_RAW | -| DAC_OVERRIDE | CAP_DAC_OVERRIDE | -| FOWNER | CAP_FOWNER | -| DAC_READ_SEARCH | CAP_DAC_READ_SEARCH | -| FSETID | CAP_FSETID | -| KILL | CAP_KILL | -| SETGID | CAP_SETGID | -| SETUID | CAP_SETUID | -| LINUX_IMMUTABLE | CAP_LINUX_IMMUTABLE | -| NET_BIND_SERVICE | CAP_NET_BIND_SERVICE | -| NET_BROADCAST | CAP_NET_BROADCAST | -| IPC_LOCK | CAP_IPC_LOCK | -| IPC_OWNER | CAP_IPC_OWNER | -| SYS_CHROOT | CAP_SYS_CHROOT | -| SYS_PTRACE | CAP_SYS_PTRACE | -| SYS_BOOT | CAP_SYS_BOOT | -| LEASE | CAP_LEASE | -| SETFCAP | CAP_SETFCAP | -| WAKE_ALARM | CAP_WAKE_ALARM | -| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND | - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/containers.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/containers.md?pixel)]() diff --git a/release-0.19.0/docs/design/README.md b/release-0.19.0/docs/design/README.md deleted file mode 100644 index befb6da3099..00000000000 --- a/release-0.19.0/docs/design/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# Kubernetes Design Overview - -Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. - -Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. 
We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers require active controllers, not just imperative orchestration. - -Kubernetes is primarily targeted at applications composed of multiple containers, such as elastic, distributed micro-services. It is also designed to facilitate migration of non-containerized application stacks to Kubernetes. It therefore includes abstractions for grouping containers in both loosely coupled and tightly coupled formations, and provides ways for containers to find and communicate with each other in relatively familiar ways. - -Kubernetes enables users to ask a cluster to run a set of containers. The system automatically chooses hosts to run those containers on. While Kubernetes's scheduler is currently very simple, we expect it to grow in sophistication over time. Scheduling is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary. - -Kubernetes is intended to run on a number of cloud providers, as well as on physical hosts. - -A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the availability doc](../availability.md) and [cluster federation proposal](../proposals/federation.md) for more details). - -Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner. - -For more about the Kubernetes architecture, see [architecture](architecture.md). 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/README.md?pixel)]() diff --git a/release-0.19.0/docs/design/access.md b/release-0.19.0/docs/design/access.md deleted file mode 100644 index 6bded9e2dc2..00000000000 --- a/release-0.19.0/docs/design/access.md +++ /dev/null @@ -1,254 +0,0 @@ -# K8s Identity and Access Management Sketch - -This document suggests a direction for identity and access management in the Kubernetes system. - - -## Background - -High level goals are: - - Have a plan for how identity, authentication, and authorization will fit in to the API. - - Have a plan for partitioning resources within a cluster between independent organizational units. - - Ease integration with existing enterprise and hosted scenarios. - -### Actors -Each of these can act as normal users or attackers. - - External Users: People who are accessing applications running on K8s (e.g. a web site served by webserver running in a container on K8s), but who do not have K8s API access. - - K8s Users : People who access the K8s API (e.g. create K8s API objects like Pods) - - K8s Project Admins: People who manage access for some K8s Users - - K8s Cluster Admins: People who control the machines, networks, or binaries that make up a K8s cluster. - - K8s Admin means K8s Cluster Admins and K8s Project Admins taken together. - -### Threats -Both intentional attacks and accidental use of privilege are concerns. - -For both cases it may be useful to think about these categories differently: - - Application Path - attack by sending network messages from the internet to the IP/port of any application running on K8s. May exploit weakness in application or misconfiguration of K8s. - - K8s API Path - attack by sending network messages to any K8s API endpoint. - - Insider Path - attack on K8s system components. Attacker may have privileged access to networks, machines or K8s software and data. Software errors in K8s system components and administrator error are some types of threat in this category. - -This document is primarily concerned with K8s API paths, and secondarily with Internal paths. The Application path also needs to be secure, but is not the focus of this document. - -### Assets to protect - -External User assets: - - Personal information like private messages, or images uploaded by External Users - - web server logs - -K8s User assets: - - External User assets of each K8s User - - things private to the K8s app, like: - - credentials for accessing other services (docker private repos, storage services, facebook, etc) - - SSL certificates for web servers - - proprietary data and code - -K8s Cluster assets: - - Assets of each K8s User - - Machine Certificates or secrets. - - The value of K8s cluster computing resources (cpu, memory, etc). - -This document is primarily about protecting K8s User assets and K8s cluster assets from other K8s Users and K8s Project and Cluster Admins. - -### Usage environments -Cluster in Small organization: - - K8s Admins may be the same people as K8s Users. - - few K8s Admins. - - prefer ease of use to fine-grained access control/precise accounting, etc. - - Product requirement that it be easy for potential K8s Cluster Admin to try out setting up a simple cluster. - -Cluster in Large organization: - - K8s Admins typically distinct people from K8s Users. May need to divide K8s Cluster Admin access by roles. 
- - K8s Users need to be protected from each other. - - Auditing of K8s User and K8s Admin actions important. - - flexible accurate usage accounting and resource controls important. - - Lots of automated access to APIs. - - Need to integrate with existing enterprise directory, authentication, accounting, auditing, and security policy infrastructure. - -Org-run cluster: - - organization that runs K8s master components is same as the org that runs apps on K8s. - - Nodes may be on-premises VMs or physical machines; Cloud VMs; or a mix. - -Hosted cluster: - - Offering K8s API as a service, or offering a Paas or Saas built on K8s - - May already offer web services, and need to integrate with existing customer account concept, and existing authentication, accounting, auditing, and security policy infrastructure. - - May want to leverage K8s User accounts and accounting to manage their User accounts (not a priority to support this use case.) - - Precise and accurate accounting of resources needed. Resource controls needed for hard limits (Users given limited slice of data) and soft limits (Users can grow up to some limit and then be expanded). - -K8s ecosystem services: - - There may be companies that want to offer their existing services (Build, CI, A/B-test, release automation, etc) for use with K8s. There should be some story for this case. - -Pods configs should be largely portable between Org-run and hosted configurations. - - -# Design -Related discussion: -- https://github.com/GoogleCloudPlatform/kubernetes/issues/442 -- https://github.com/GoogleCloudPlatform/kubernetes/issues/443 - -This doc describes two security profiles: - - Simple profile: like single-user mode. Make it easy to evaluate K8s without lots of configuring accounts and policies. Protects from unauthorized users, but does not partition authorized users. - - Enterprise profile: Provide mechanisms needed for large numbers of users. Defense in depth. Should integrate with existing enterprise security infrastructure. - -K8s distribution should include templates of config, and documentation, for simple and enterprise profiles. System should be flexible enough for knowledgeable users to create intermediate profiles, but K8s developers should only reason about those two Profiles, not a matrix. - -Features in this doc are divided into "Initial Feature", and "Improvements". Initial features would be candidates for version 1.00. - -## Identity -###userAccount -K8s will have a `userAccount` API object. -- `userAccount` has a UID which is immutable. This is used to associate users with objects and to record actions in audit logs. -- `userAccount` has a name which is a string and human readable and unique among userAccounts. It is used to refer to users in Policies, to ensure that the Policies are human readable. It can be changed only when there are no Policy objects or other objects which refer to that name. An email address is a suggested format for this field. -- `userAccount` is not related to the unix username of processes in Pods created by that userAccount. -- `userAccount` API objects can have labels - -The system may associate one or more Authentication Methods with a -`userAccount` (but they are not formally part of the userAccount object.) -In a simple deployment, the authentication method for a -user might be an authentication token which is verified by a K8s server. 
In a -more complex deployment, the authentication might be delegated to -another system which is trusted by the K8s API to authenticate users, but where -the authentication details are unknown to K8s. - -Initial Features: -- there is no superuser `userAccount` -- `userAccount` objects are statically populated in the K8s API store by reading a config file. Only a K8s Cluster Admin can do this. -- `userAccount` can have a default `namespace`. If API call does not specify a `namespace`, the default `namespace` for that caller is assumed. -- `userAccount` is global. A single human with access to multiple namespaces is recommended to only have one userAccount. - -Improvements: -- Make `userAccount` part of a separate API group from core K8s objects like `pod`. Facilitates plugging in alternate Access Management. - -Simple Profile: - - single `userAccount`, used by all K8s Users and Project Admins. One access token shared by all. - -Enterprise Profile: - - every human user has own `userAccount`. - - `userAccount`s have labels that indicate both membership in groups, and ability to act in certain roles. - - each service using the API has own `userAccount` too. (e.g. `scheduler`, `repcontroller`) - - automated jobs to denormalize the ldap group info into the local system list of users into the K8s userAccount file. - -###Unix accounts -A `userAccount` is not a Unix user account. The fact that a pod is started by a `userAccount` does not mean that the processes in that pod's containers run as a Unix user with a corresponding name or identity. - -Initially: -- The unix accounts available in a container, and used by the processes running in a container are those that are provided by the combination of the base operating system and the Docker manifest. -- Kubernetes doesn't enforce any relation between `userAccount` and unix accounts. - -Improvements: -- Kubelet allocates disjoint blocks of root-namespace uids for each container. This may provide some defense-in-depth against container escapes. (https://github.com/docker/docker/pull/4572) -- requires docker to integrate user namespace support, and deciding what getpwnam() does for these uids. -- any features that help users avoid use of privileged containers (https://github.com/GoogleCloudPlatform/kubernetes/issues/391) - -###Namespaces -K8s will have a have a `namespace` API object. It is similar to a Google Compute Engine `project`. It provides a namespace for objects created by a group of people co-operating together, preventing name collisions with non-cooperating groups. It also serves as a reference point for authorization policies. - -Namespaces are described in [namespace.md](namespaces.md). - -In the Enterprise Profile: - - a `userAccount` may have permission to access several `namespace`s. - -In the Simple Profile: - - There is a single `namespace` used by the single user. - -Namespaces versus userAccount vs Labels: -- `userAccount`s are intended for audit logging (both name and UID should be logged), and to define who has access to `namespace`s. -- `labels` (see [docs/labels.md](/docs/labels.md)) should be used to distinguish pods, users, and other objects that cooperate towards a common goal but are different in some way, such as version, or responsibilities. -- `namespace`s prevent name collisions between uncoordinated groups of people, and provide a place to attach common policies for co-operating groups of people. 
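To make the identity objects above a little more concrete, a `userAccount` along the lines sketched in this section might be expressed in Go roughly as follows. This is an illustration of the design only, not the actual API types:

```go
// UserAccount is a sketch of the userAccount object described above.
type UserAccount struct {
	// UID is immutable and is what audit logs and ownership records reference.
	UID string `json:"uid"`
	// Name is unique among userAccounts, human readable, and may be an email
	// address. It can only change while no Policy or other object refers to it.
	Name string `json:"name"`
	// Labels can record group membership and the roles a user may act in.
	Labels map[string]string `json:"labels,omitempty"`
	// DefaultNamespace is assumed when an API call does not specify a namespace.
	DefaultNamespace string `json:"defaultNamespace,omitempty"`
}
```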
- - -## Authentication - -Goals for K8s authentication: -- Include a built-in authentication system with no configuration required to use in single-user mode, and little configuration required to add several user accounts, and no https proxy required. -- Allow for authentication to be handled by a system external to Kubernetes, to allow integration with existing to enterprise authorization systems. The kubernetes namespace itself should avoid taking contributions of multiple authorization schemes. Instead, a trusted proxy in front of the apiserver can be used to authenticate users. - - For organizations whose security requirements only allow FIPS compliant implementations (e.g. apache) for authentication. - - So the proxy can terminate SSL, and isolate the CA-signed certificate from less trusted, higher-touch APIserver. - - For organizations that already have existing SaaS web services (e.g. storage, VMs) and want a common authentication portal. -- Avoid mixing authentication and authorization, so that authorization policies be centrally managed, and to allow changes in authentication methods without affecting authorization code. - -Initially: -- Tokens used to authenticate a user. -- Long lived tokens identify a particular `userAccount`. -- Administrator utility generates tokens at cluster setup. -- OAuth2.0 Bearer tokens protocol, http://tools.ietf.org/html/rfc6750 -- No scopes for tokens. Authorization happens in the API server -- Tokens dynamically generated by apiserver to identify pods which are making API calls. -- Tokens checked in a module of the APIserver. -- Authentication in apiserver can be disabled by flag, to allow testing without authorization enabled, and to allow use of an authenticating proxy. In this mode, a query parameter or header added by the proxy will identify the caller. - -Improvements: -- Refresh of tokens. -- SSH keys to access inside containers. - -To be considered for subsequent versions: -- Fuller use of OAuth (http://tools.ietf.org/html/rfc6749) -- Scoped tokens. -- Tokens that are bound to the channel between the client and the api server - - http://www.ietf.org/proceedings/90/slides/slides-90-uta-0.pdf - - http://www.browserauth.net - - -## Authorization - -K8s authorization should: -- Allow for a range of maturity levels, from single-user for those test driving the system, to integration with existing to enterprise authorization systems. -- Allow for centralized management of users and policies. In some organizations, this will mean that the definition of users and access policies needs to reside on a system other than k8s and encompass other web services (such as a storage service). -- Allow processes running in K8s Pods to take on identity, and to allow narrow scoping of permissions for those identities in order to limit damage from software faults. -- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Controllers, Services, and the identities and policies for those Pods and Controllers. -- Be separate as much as practical from Authentication, to allow Authentication methods to change over time and space, without impacting Authorization policies. - -K8s will implement a relatively simple -[Attribute-Based Access Control](http://en.wikipedia.org/wiki/Attribute_Based_Access_Control) model. -The model will be described in more detail in a forthcoming document. The model will -- Be less complex than XACML -- Be easily recognizable to those familiar with Amazon IAM Policies. 
-- Have a subset/aliases/defaults which allow it to be used in a way comfortable to those users more familiar with Role-Based Access Control. - -Authorization policy is set by creating a set of Policy objects. - -The API Server will be the Enforcement Point for Policy. For each API call that it receives, it will construct the Attributes needed to evaluate the policy (what user is making the call, what resource they are accessing, what they are trying to do that resource, etc) and pass those attributes to a Decision Point. The Decision Point code evaluates the Attributes against all the Policies and allows or denies the API call. The system will be modular enough that the Decision Point code can either be linked into the APIserver binary, or be another service that the apiserver calls for each Decision (with appropriate time-limited caching as needed for performance). - -Policy objects may be applicable only to a single namespace or to all namespaces; K8s Project Admins would be able to create those as needed. Other Policy objects may be applicable to all namespaces; a K8s Cluster Admin might create those in order to authorize a new type of controller to be used by all namespaces, or to make a K8s User into a K8s Project Admin.) - - -## Accounting - -The API should have a `quota` concept (see https://github.com/GoogleCloudPlatform/kubernetes/issues/442). A quota object relates a namespace (and optionally a label selector) to a maximum quantity of resources that may be used (see [resources.md](/docs/resources.md)). - -Initially: -- a `quota` object is immutable. -- for hosted K8s systems that do billing, Project is recommended level for billing accounts. -- Every object that consumes resources should have a `namespace` so that Resource usage stats are roll-up-able to `namespace`. -- K8s Cluster Admin sets quota objects by writing a config file. - -Improvements: -- allow one namespace to charge the quota for one or more other namespaces. This would be controlled by a policy which allows changing a billing_namespace= label on an object. -- allow quota to be set by namespace owners for (namespace x label) combinations (e.g. let "webserver" namespace use 100 cores, but to prevent accidents, don't allow "webserver" namespace and "instance=test" use more than 10 cores. -- tools to help write consistent quota config files based on number of nodes, historical namespace usages, QoS needs, etc. -- way for K8s Cluster Admin to incrementally adjust Quota objects. - -Simple profile: - - a single `namespace` with infinite resource limits. - -Enterprise profile: - - multiple namespaces each with their own limits. - -Issues: -- need for locking or "eventual consistency" when multiple apiserver goroutines are accessing the object store and handling pod creations. - - -## Audit Logging - -API actions can be logged. - -Initial implementation: -- All API calls logged to nginx logs. - -Improvements: -- API server does logging instead. -- Policies to drop logging for high rate trusted API calls, or by users performing audit or other sensitive functions. 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/access.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/access.md?pixel)]() diff --git a/release-0.19.0/docs/design/admission_control.md b/release-0.19.0/docs/design/admission_control.md deleted file mode 100644 index eee3c94a77c..00000000000 --- a/release-0.19.0/docs/design/admission_control.md +++ /dev/null @@ -1,85 +0,0 @@ -# Kubernetes Proposal - Admission Control - -**Related PR:** - -| Topic | Link | -| ----- | ---- | -| Separate validation from RESTStorage | https://github.com/GoogleCloudPlatform/kubernetes/issues/2977 | - -## Background - -High level goals: - -* Enable an easy-to-use mechanism to provide admission control to cluster -* Enable a provider to support multiple admission control strategies or author their own -* Ensure any rejected request can propagate errors back to the caller with why the request failed - -Authorization via policy is focused on answering if a user is authorized to perform an action. - -Admission Control is focused on if the system will accept an authorized action. - -Kubernetes may choose to dismiss an authorized action based on any number of admission control strategies. - -This proposal documents the basic design, and describes how any number of admission control plug-ins could be injected. - -Implementation of specific admission control strategies are handled in separate documents. - -## kube-apiserver - -The kube-apiserver takes the following OPTIONAL arguments to enable admission control - -| Option | Behavior | -| ------ | -------- | -| admission_control | Comma-delimited, ordered list of admission control choices to invoke prior to modifying or deleting an object. | -| admission_control_config_file | File with admission control configuration parameters to boot-strap plug-in. | - -An **AdmissionControl** plug-in is an implementation of the following interface: - -```go -package admission - -// Attributes is an interface used by a plug-in to make an admission decision on a individual request. -type Attributes interface { - GetNamespace() string - GetKind() string - GetOperation() string - GetObject() runtime.Object -} - -// Interface is an abstract, pluggable interface for Admission Control decisions. -type Interface interface { - // Admit makes an admission decision based on the request attributes - // An error is returned if it denies the request. - Admit(a Attributes) (err error) -} -``` - -A **plug-in** must be compiled with the binary, and is registered as an available option by providing a name, and implementation -of admission.Interface. - -```go -func init() { - admission.RegisterPlugin("AlwaysDeny", func(client client.Interface, config io.Reader) (admission.Interface, error) { return NewAlwaysDeny(), nil }) -} -``` - -Invocation of admission control is handled by the **APIServer** and not individual **RESTStorage** implementations. - -This design assumes that **Issue 297** is adopted, and as a consequence, the general framework of the APIServer request/response flow -will ensure the following: - -1. Incoming request -2. Authenticate user -3. Authorize user -4. If operation=create|update, then validate(object) -5. If operation=create|update|delete, then admission.Admit(requestAttributes) - a. invoke each admission.Interface object in sequence -6. Object is persisted - -If at any step, there is an error, the request is canceled. 
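As an illustration of how small a plug-in can be, the following hypothetical plug-in implements the interface above and rejects every delete request. In the style of the registration example above, package and import declarations are elided, and the exact operation string ("DELETE") is an assumption:

```go
// denyDeletes is a hypothetical plug-in that vetoes every delete operation,
// illustrating how step 5 of the flow above can reject an authorized request.
type denyDeletes struct{}

func (denyDeletes) Admit(a admission.Attributes) error {
	if a.GetOperation() == "DELETE" {
		return fmt.Errorf("deletes are not permitted in namespace %q", a.GetNamespace())
	}
	return nil
}

func init() {
	admission.RegisterPlugin("DenyDeletes", func(client client.Interface, config io.Reader) (admission.Interface, error) {
		return denyDeletes{}, nil
	})
}
```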
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/admission_control.md?pixel)]() diff --git a/release-0.19.0/docs/design/admission_control_limit_range.md b/release-0.19.0/docs/design/admission_control_limit_range.md deleted file mode 100644 index 9ed98e1c535..00000000000 --- a/release-0.19.0/docs/design/admission_control_limit_range.md +++ /dev/null @@ -1,138 +0,0 @@ -# Admission control plugin: LimitRanger - -## Background - -This document proposes a system for enforcing min/max limits per resource as part of admission control. - -## Model Changes - -A new resource, **LimitRange**, is introduced to enumerate min/max limits for a resource type scoped to a -Kubernetes namespace. - -```go -const ( - // Limit that applies to all pods in a namespace - LimitTypePod string = "Pod" - // Limit that applies to all containers in a namespace - LimitTypeContainer string = "Container" -) - -// LimitRangeItem defines a min/max usage limit for any resource that matches on kind -type LimitRangeItem struct { - // Type of resource that this limit applies to - Type string `json:"type,omitempty"` - // Max usage constraints on this kind by resource name - Max ResourceList `json:"max,omitempty"` - // Min usage constraints on this kind by resource name - Min ResourceList `json:"min,omitempty"` - // Default usage constraints on this kind by resource name - Default ResourceList `json:"default,omitempty"` -} - -// LimitRangeSpec defines a min/max usage limit for resources that match on kind -type LimitRangeSpec struct { - // Limits is the list of LimitRangeItem objects that are enforced - Limits []LimitRangeItem `json:"limits"` -} - -// LimitRange sets resource usage limits for each kind of resource in a Namespace -type LimitRange struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - // Spec defines the limits enforced - Spec LimitRangeSpec `json:"spec,omitempty"` -} - -// LimitRangeList is a list of LimitRange items. -type LimitRangeList struct { - TypeMeta `json:",inline"` - ListMeta `json:"metadata,omitempty"` - - // Items is a list of LimitRange objects - Items []LimitRange `json:"items"` -} -``` - -## AdmissionControl plugin: LimitRanger - -The **LimitRanger** plug-in introspects all incoming admission requests. - -It makes decisions by evaluating the incoming object against all defined **LimitRange** objects in the request context namespace. - -The following min/max limits are imposed: - -**Type: Container** - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Min/Max amount of cpu per container | -| memory | Min/Max amount of memory per container | - -**Type: Pod** - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Min/Max amount of cpu per pod | -| memory | Min/Max amount of memory per pod | - -If a resource specifies a default value, it may get applied on the incoming resource. For example, if a default -value is provided for container cpu, it is set on the incoming container if and only if the incoming container -does not specify a resource requirements limit field. - -If a resource specifies a min value, it may get applied on the incoming resource. For example, if a min -value is provided for container cpu, it is set on the incoming container if and only if the incoming container does -not specify a resource requirements requests field. 
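A rough sketch of that defaulting step, assuming `ResourceList` is a map from resource name to quantity as elsewhere in the API (this is an illustration, not the actual plug-in code):

```go
// applyDefaults fills in every resource for which the LimitRangeItem declares
// a default and the incoming container's limits leave unset.
func applyDefaults(item LimitRangeItem, limits ResourceList) ResourceList {
	if limits == nil {
		limits = ResourceList{}
	}
	for name, quantity := range item.Default {
		if _, set := limits[name]; !set {
			limits[name] = quantity // only values the user did not specify are defaulted
		}
	}
	return limits
}
```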
- -If the incoming object would cause a violation of the enumerated constraints, the request is denied with a set of -messages explaining what constraints were the source of the denial. - -If a constraint is not enumerated by a **LimitRange** it is not tracked. - -## kube-apiserver - -The server is updated to be aware of **LimitRange** objects. - -The constraints are only enforced if the kube-apiserver is started as follows: - -``` -$ kube-apiserver -admission_control=LimitRanger -``` - -## kubectl - -kubectl is modified to support the **LimitRange** resource. - -```kubectl describe``` provides a human-readable output of limits. - -For example, - -```shell -$ kubectl namespace myspace -$ kubectl create -f examples/limitrange/limit-range.json -$ kubectl get limits -NAME -limits -$ kubectl describe limits limits -Name: limits -Type Resource Min Max Default ----- -------- --- --- --- -Pod memory 1Mi 1Gi - -Pod cpu 250m 2 - -Container memory 1Mi 1Gi 1Mi -Container cpu 250m 250m 250m -``` - -## Future Enhancements: Define limits for a particular pod or container. - -In the current proposal, the **LimitRangeItem** matches purely on **LimitRangeItem.Type** - -It is expected we will want to define limits for particular pods or containers by name/uid and label/field selector. - -To make a **LimitRangeItem** more restrictive, we will intend to add these additional restrictions at a future point in time. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_limit_range.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/admission_control_limit_range.md?pixel)]() diff --git a/release-0.19.0/docs/design/admission_control_resource_quota.md b/release-0.19.0/docs/design/admission_control_resource_quota.md deleted file mode 100644 index 1ea19b75dee..00000000000 --- a/release-0.19.0/docs/design/admission_control_resource_quota.md +++ /dev/null @@ -1,159 +0,0 @@ -# Admission control plugin: ResourceQuota - -## Background - -This document proposes a system for enforcing hard resource usage limits per namespace as part of admission control. - -## Model Changes - -A new resource, **ResourceQuota**, is introduced to enumerate hard resource limits in a Kubernetes namespace. - -A new resource, **ResourceQuotaUsage**, is introduced to support atomic updates of a **ResourceQuota** status. 
- -```go -// The following identify resource constants for Kubernetes object types -const ( - // Pods, number - ResourcePods ResourceName = "pods" - // Services, number - ResourceServices ResourceName = "services" - // ReplicationControllers, number - ResourceReplicationControllers ResourceName = "replicationcontrollers" - // ResourceQuotas, number - ResourceQuotas ResourceName = "resourcequotas" -) - -// ResourceQuotaSpec defines the desired hard limits to enforce for Quota -type ResourceQuotaSpec struct { - // Hard is the set of desired hard limits for each named resource - Hard ResourceList `json:"hard,omitempty"` -} - -// ResourceQuotaStatus defines the enforced hard limits and observed use -type ResourceQuotaStatus struct { - // Hard is the set of enforced hard limits for each named resource - Hard ResourceList `json:"hard,omitempty"` - // Used is the current observed total usage of the resource in the namespace - Used ResourceList `json:"used,omitempty"` -} - -// ResourceQuota sets aggregate quota restrictions enforced per namespace -type ResourceQuota struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - // Spec defines the desired quota - Spec ResourceQuotaSpec `json:"spec,omitempty"` - - // Status defines the actual enforced quota and its current usage - Status ResourceQuotaStatus `json:"status,omitempty"` -} - -// ResourceQuotaUsage captures system observed quota status per namespace -// It is used to enforce atomic updates of a backing ResourceQuota.Status field in storage -type ResourceQuotaUsage struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - // Status defines the actual enforced quota and its current usage - Status ResourceQuotaStatus `json:"status,omitempty"` -} - -// ResourceQuotaList is a list of ResourceQuota items -type ResourceQuotaList struct { - TypeMeta `json:",inline"` - ListMeta `json:"metadata,omitempty"` - - // Items is a list of ResourceQuota objects - Items []ResourceQuota `json:"items"` -} - -``` - -## AdmissionControl plugin: ResourceQuota - -The **ResourceQuota** plug-in introspects all incoming admission requests. - -It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request -namespace. If acceptance of the resource would cause the total usage of a named resource to exceed its hard limit, the request is denied. - -The following resource limits are imposed as part of core Kubernetes at the namespace level: - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Total cpu usage | -| memory | Total memory usage | -| pods | Total number of pods | -| services | Total number of services | -| replicationcontrollers | Total number of replication controllers | -| resourcequotas | Total number of resource quotas | - -Any resource that is not part of core Kubernetes must follow the resource naming convention prescribed by Kubernetes. - -This means the resource must have a fully-qualified name (i.e. mycompany.org/shinynewresource) - -If the incoming request does not cause the total usage to exceed any of the enumerated hard resource limits, the plug-in will post a -**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read -**ResourceQuota.ResourceVersion**. This keeps incremental usage atomically consistent, but does introduce a bottleneck (intentionally) -into the system. 
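The core of that check can be sketched as follows. For readability, this standalone example models quantities as plain integers rather than the `ResourceList` types above; it illustrates the logic only, not the actual plug-in:

```go
// wouldExceed reports whether admitting delta more units of the named resource
// would push observed usage past the hard limit recorded in the quota status.
// Resources without an enumerated hard limit are not tracked.
func wouldExceed(hard, used map[string]int64, resource string, delta int64) bool {
	limit, tracked := hard[resource]
	if !tracked {
		return false
	}
	return used[resource]+delta > limit
}
```

For example, before posting updated usage for a new pod, the plug-in would perform the equivalent of `wouldExceed(hard, used, "pods", 1)` and deny the request if it returns true.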
- -To optimize system performance, it is encouraged that all resource quotas are tracked on the same **ResourceQuota** document. As a result, -its encouraged to actually impose a cap on the total number of individual quotas that are tracked in the **Namespace** to 1 by explicitly -capping it in **ResourceQuota** document. - -## kube-apiserver - -The server is updated to be aware of **ResourceQuota** objects. - -The quota is only enforced if the kube-apiserver is started as follows: - -``` -$ kube-apiserver -admission_control=ResourceQuota -``` - -## kube-controller-manager - -A new controller is defined that runs a synch loop to calculate quota usage across the namespace. - -**ResourceQuota** usage is only calculated if a namespace has a **ResourceQuota** object. - -If the observed usage is different than the recorded usage, the controller sends a **ResourceQuotaUsage** resource -to the server to atomically update. - -The synchronization loop frequency will control how quickly DELETE actions are recorded in the system and usage is ticked down. - -To optimize the synchronization loop, this controller will WATCH on Pod resources to track DELETE events, and in response, recalculate -usage. This is because a Pod deletion will have the most impact on observed cpu and memory usage in the system, and we anticipate -this being the resource most closely running at the prescribed quota limits. - -## kubectl - -kubectl is modified to support the **ResourceQuota** resource. - -```kubectl describe``` provides a human-readable output of quota. - -For example, - -``` -$ kubectl namespace myspace -$ kubectl create -f examples/resourcequota/resource-quota.json -$ kubectl get quota -NAME -quota -$ kubectl describe quota quota -Name: quota -Resource Used Hard --------- ---- ---- -cpu 0m 20 -memory 0 1Gi -pods 5 10 -replicationcontrollers 5 20 -resourcequotas 1 1 -services 3 5 -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_resource_quota.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/admission_control_resource_quota.md?pixel)]() diff --git a/release-0.19.0/docs/design/architecture.md b/release-0.19.0/docs/design/architecture.md deleted file mode 100644 index 92f73a72894..00000000000 --- a/release-0.19.0/docs/design/architecture.md +++ /dev/null @@ -1,50 +0,0 @@ -# Kubernetes architecture - -A running Kubernetes cluster contains node agents (kubelet) and master components (APIs, scheduler, etc), on top of a distributed storage solution. This diagram shows our desired eventual state, though we're still working on a few things, like making kubelet itself (all our components, really) run within containers, and making the scheduler 100% pluggable. - -![Architecture Diagram](../architecture.png?raw=true "Architecture overview") - -## The Kubernetes Node - -When looking at the architecture of the system, we'll break it down to services that run on the worker node and services that compose the cluster-level control plane. - -The Kubernetes node has the services necessary to run application containers and be managed from the master systems. - -Each node runs Docker, of course. Docker takes care of the details of downloading images and running containers. - -### Kubelet -The **Kubelet** manages [pods](../pods.md) and their containers, their images, their volumes, etc. 
- -### Kube-Proxy - -Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends. - -Service endpoints are currently found via [DNS](../dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy. - -## The Kubernetes Control Plane - -The Kubernetes control plane is split into a set of components. Currently they all run on a single _master_ node, but that is expected to change soon in order to support high-availability clusters. These components work together to provide a unified view of the cluster. - -### etcd - -All persistent master state is stored in an instance of `etcd`. This provides a great way to store configuration data reliably. With `watch` support, coordinating components can be notified very quickly of changes. - -### Kubernetes API Server - -The apiserver serves up the [Kubernetes API](../api.md). It is intended to be a CRUD-y server, with most/all business logic implemented in separate components or in plug-ins. It mainly processes REST operations, validates them, and updates the corresponding objects in `etcd` (and eventually other stores). - -### Scheduler - -The scheduler binds unscheduled pods to nodes via the `/binding` API. The scheduler is pluggable, and we expect to support multiple cluster schedulers and even user-provided schedulers in the future. - -### Kubernetes Controller Manager Server - -All other cluster-level functions are currently performed by the Controller Manager. For instance, `Endpoints` objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller. These could eventually be split into separate components to make them independently pluggable. - -The [`replicationcontroller`](../replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/architecture.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/architecture.md?pixel)]() diff --git a/release-0.19.0/docs/design/clustering.md b/release-0.19.0/docs/design/clustering.md deleted file mode 100644 index 43cb31645da..00000000000 --- a/release-0.19.0/docs/design/clustering.md +++ /dev/null @@ -1,66 +0,0 @@ -# Clustering in Kubernetes - - -## Overview -The term "clustering" refers to the process of having all members of the kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address. - -Once a cluster is established, the following is true: - -1. **Master -> Node** The master needs to know which nodes can take work and what their current status is wrt capacity. - 1. 
**Location** The master knows the name and location of all of the nodes in the cluster. - * For the purposes of this doc, location and name should be enough information so that the master can open a TCP connection to the Node. Most probably we will make this either an IP address or a DNS name. It is going to be important to be consistent here (master must be able to reach kubelet on that DNS name) so that we can verify certificates appropriately. - 2. **Target AuthN** A way to securely talk to the kubelet on that node. Currently we call out to the kubelet over HTTP. This should be over HTTPS and the master should know what CA to trust for that node. - 3. **Caller AuthN/Z** This would be the master verifying itself (and permissions) when calling the node. Currently, this is only used to collect statistics as authorization isn't critical. This may change in the future though. -2. **Node -> Master** The nodes currently talk to the master to know which pods have been assigned to them and to publish events. - 1. **Location** The nodes must know where the master is at. - 2. **Target AuthN** Since the master is assigning work to the nodes, it is critical that they verify whom they are talking to. - 3. **Caller AuthN/Z** The nodes publish events and so must be authenticated to the master. Ideally this authentication is specific to each node so that authorization can be narrowly scoped. The details of the work to run (including things like environment variables) might be considered sensitive and should be locked down also. - -**Note:** While the description here refers to a singular Master, in the future we should enable multiple Masters operating in an HA mode. While the "Master" is currently the combination of the API Server, Scheduler and Controller Manager, we will restrict ourselves to thinking about the main API and policy engine -- the API Server. - -## Current Implementation - -A central authority (generally the master) is responsible for determining the set of machines which are members of the cluster. Calls to create and remove worker nodes in the cluster are restricted to this single authority, and any other requests to add or remove worker nodes are rejected. (1.i). - -Communication from the master to nodes is currently over HTTP and is not secured or authenticated in any way. (1.ii, 1.iii). - -The location of the master is communicated out of band to the nodes. For GCE, this is done via Salt. Other cluster instructions/scripts use other methods. (2.i) - -Currently most communication from the node to the master is over HTTP. When it is done over HTTPS there is currently no verification of the cert of the master (2.ii). - -Currently, the node/kubelet is authenticated to the master via a token shared across all nodes. This token is distributed out of band (using Salt for GCE) and is optional. If it is not present then the kubelet is unable to publish events to the master. (2.iii) - -Our current mix of out of band communication doesn't meet all of our needs from a security point of view and is difficult to set up and configure. - -## Proposed Solution - -The proposed solution will provide a range of options for setting up and maintaining a secure Kubernetes cluster. We want to both allow for centrally controlled systems (leveraging pre-existing trust and configuration systems) or more ad-hoc automagic systems that are incredibly easy to set up. - -The building blocks of an easier solution: - -* **Move to TLS** We will move to using TLS for all intra-cluster communication. 
We will explicitly identify the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN. -* [optional] **API driven CA** Optionally, we will run a CA in the master that will mint certificates for the nodes/kubelets. There will be pluggable policies that will automatically approve certificate requests here as appropriate. - * **CA approval policy** This is a pluggable policy object that can automatically approve CA signing requests. Stock policies will include `always-reject`, `queue` and `insecure-always-approve`. With `queue` there would be an API for evaluating and accepting/rejecting requests. Cloud providers could implement a policy here that verifies other out of band information and automatically approves/rejects based on other external factors. -* **Scoped Kubelet Accounts** These accounts are per-minion and (optionally) give a minion permission to register itself. - * To start with, we'd have the kubelets generate a cert/account in the form of `kubelet:`. To start we would then hard code policy such that we give that particular account appropriate permissions. Over time, we can make the policy engine more generic. -* [optional] **Bootstrap API endpoint** This is a helper service hosted outside of the Kubernetes cluster that helps with initial discovery of the master. - -### Static Clustering - -In this sequence diagram there is an out of band admin entity that is creating all certificates and distributing them. It is also making sure that the kubelets know where to find the master. This provides for a lot of control but is more difficult to set up as lots of information must be communicated outside of Kubernetes. - -![Static Sequence Diagram](clustering/static.png) - -### Dynamic Clustering - -This diagram shows dynamic clustering using the bootstrap API endpoint. That API endpoint is used to both find the location of the master and communicate the root CA for the master. - -This flow has the admin manually approving the kubelet signing requests. This is the `queue` policy defined above. This manual intervention could be replaced by code that can verify the signing requests via other means. - -![Dynamic Sequence Diagram](clustering/dynamic.png) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/clustering.md?pixel)]() diff --git a/release-0.19.0/docs/design/clustering/.gitignore b/release-0.19.0/docs/design/clustering/.gitignore deleted file mode 100644 index 67bcd6cb58a..00000000000 --- a/release-0.19.0/docs/design/clustering/.gitignore +++ /dev/null @@ -1 +0,0 @@ -DroidSansMono.ttf diff --git a/release-0.19.0/docs/design/clustering/Dockerfile b/release-0.19.0/docs/design/clustering/Dockerfile deleted file mode 100644 index 3353419d843..00000000000 --- a/release-0.19.0/docs/design/clustering/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM debian:jessie - -RUN apt-get update -RUN apt-get -qy install python-seqdiag make curl - -WORKDIR /diagrams - -RUN curl -sLo DroidSansMono.ttf https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/DroidSansMono.ttf - -ADD . 
/diagrams - -CMD bash -c 'make >/dev/stderr && tar cf - *.png' \ No newline at end of file diff --git a/release-0.19.0/docs/design/clustering/Makefile b/release-0.19.0/docs/design/clustering/Makefile deleted file mode 100644 index f6aa53ed442..00000000000 --- a/release-0.19.0/docs/design/clustering/Makefile +++ /dev/null @@ -1,29 +0,0 @@ -FONT := DroidSansMono.ttf - -PNGS := $(patsubst %.seqdiag,%.png,$(wildcard *.seqdiag)) - -.PHONY: all -all: $(PNGS) - -.PHONY: watch -watch: - fswatch *.seqdiag | xargs -n 1 sh -c "make || true" - -$(FONT): - curl -sLo $@ https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/$(FONT) - -%.png: %.seqdiag $(FONT) - seqdiag --no-transparency -a -f '$(FONT)' $< - -# Build the stuff via a docker image -.PHONY: docker -docker: - docker build -t clustering-seqdiag . - docker run --rm clustering-seqdiag | tar xvf - - -docker-clean: - docker rmi clustering-seqdiag || true - docker images -q --filter "dangling=true" | xargs docker rmi - -fix-clock-skew: - boot2docker ssh sudo date -u -D "%Y%m%d%H%M.%S" --set "$(shell date -u +%Y%m%d%H%M.%S)" diff --git a/release-0.19.0/docs/design/clustering/README.md b/release-0.19.0/docs/design/clustering/README.md deleted file mode 100644 index a81b5660e99..00000000000 --- a/release-0.19.0/docs/design/clustering/README.md +++ /dev/null @@ -1,31 +0,0 @@ -This directory contains diagrams for the clustering design doc. - -This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with - -```bash -pip install seqdiag -``` - -Just call `make` to regenerate the diagrams. - -## Building with Docker -If you are on a Mac or your pip install is messed up, you can easily build with docker. - -``` -make docker -``` - -The first run will be slow but things should be fast after that. - -To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`. - -If you are using boot2docker and get warnings about clock skew (or if things aren't building for some reason) then you can fix that up with `make fix-clock-skew`. - -## Automatically rebuild on file changes - -If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`. 
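If `fswatch` is not already installed, it is typically available from the usual package managers; for example, on a Mac one plausible route (assuming Homebrew is present) is:

```bash
brew install fswatch
```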
- -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/clustering/README.md?pixel)]() diff --git a/release-0.19.0/docs/design/clustering/dynamic.png b/release-0.19.0/docs/design/clustering/dynamic.png deleted file mode 100644 index 92b40fee362..00000000000 Binary files a/release-0.19.0/docs/design/clustering/dynamic.png and /dev/null differ diff --git a/release-0.19.0/docs/design/clustering/dynamic.seqdiag b/release-0.19.0/docs/design/clustering/dynamic.seqdiag deleted file mode 100644 index 95bb395e886..00000000000 --- a/release-0.19.0/docs/design/clustering/dynamic.seqdiag +++ /dev/null @@ -1,24 +0,0 @@ -seqdiag { - activation = none; - - - user[label = "Admin User"]; - bootstrap[label = "Bootstrap API\nEndpoint"]; - master; - kubelet[stacked]; - - user -> bootstrap [label="createCluster", return="cluster ID"]; - user <-- bootstrap [label="returns\n- bootstrap-cluster-uri"]; - - user ->> master [label="start\n- bootstrap-cluster-uri"]; - master => bootstrap [label="setMaster\n- master-location\n- master-ca"]; - - user ->> kubelet [label="start\n- bootstrap-cluster-uri"]; - kubelet => bootstrap [label="get-master", return="returns\n- master-location\n- master-ca"]; - kubelet ->> master [label="signCert\n- unsigned-kubelet-cert", return="retuns\n- kubelet-cert"]; - user => master [label="getSignRequests"]; - user => master [label="approveSignRequests"]; - kubelet <<-- master [label="returns\n- kubelet-cert"]; - - kubelet => master [label="register\n- kubelet-location"] -} diff --git a/release-0.19.0/docs/design/clustering/static.png b/release-0.19.0/docs/design/clustering/static.png deleted file mode 100644 index bcdeca7e6f5..00000000000 Binary files a/release-0.19.0/docs/design/clustering/static.png and /dev/null differ diff --git a/release-0.19.0/docs/design/clustering/static.seqdiag b/release-0.19.0/docs/design/clustering/static.seqdiag deleted file mode 100644 index bdc54b764e2..00000000000 --- a/release-0.19.0/docs/design/clustering/static.seqdiag +++ /dev/null @@ -1,16 +0,0 @@ -seqdiag { - activation = none; - - admin[label = "Manual Admin"]; - ca[label = "Manual CA"] - master; - kubelet[stacked]; - - admin => ca [label="create\n- master-cert"]; - admin ->> master [label="start\n- ca-root\n- master-cert"]; - - admin => ca [label="create\n- kubelet-cert"]; - admin ->> kubelet [label="start\n- ca-root\n- kubelet-cert\n- master-location"]; - - kubelet => master [label="register\n- kubelet-location"]; -} diff --git a/release-0.19.0/docs/design/command_execution_port_forwarding.md b/release-0.19.0/docs/design/command_execution_port_forwarding.md deleted file mode 100644 index 68b71dc2bd3..00000000000 --- a/release-0.19.0/docs/design/command_execution_port_forwarding.md +++ /dev/null @@ -1,149 +0,0 @@ -# Container Command Execution & Port Forwarding in Kubernetes - -## Abstract - -This describes an approach for providing support for: - -- executing commands in containers, with stdin/stdout/stderr streams attached -- port forwarding to containers - -## Background - -There are several related issues/PRs: - -- [Support attach](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521) -- [Real container ssh](https://github.com/GoogleCloudPlatform/kubernetes/issues/1513) -- [Provide easy debug network access to services](https://github.com/GoogleCloudPlatform/kubernetes/issues/1863) -- [OpenShift container command execution 
proposal](https://github.com/openshift/origin/pull/576) - -## Motivation - -Users and administrators are accustomed to being able to access their systems -via SSH to run remote commands, get shell access, and do port forwarding. - -Supporting SSH to containers in Kubernetes is a difficult task. You must -specify a "user" and a hostname to make an SSH connection, and `sshd` requires -real users (resolvable by NSS and PAM). Because a container belongs to a pod, -and the pod belongs to a namespace, you need to specify namespace/pod/container -to uniquely identify the target container. Unfortunately, a -namespace/pod/container is not a real user as far as SSH is concerned. Also, -most Linux systems limit user names to 32 characters, which is unlikely to be -large enough to contain namespace/pod/container. We could devise some scheme to -map each namespace/pod/container to a 32-character user name, adding entries to -`/etc/passwd` (or LDAP, etc.) and keeping those entries fully in sync all the -time. Alternatively, we could write custom NSS and PAM modules that allow the -host to resolve a namespace/pod/container to a user without needing to keep -files or LDAP in sync. - -As an alternative to SSH, we are using a multiplexed streaming protocol that -runs on top of HTTP. There are no requirements about users being real users, -nor is there any limitation on user name length, as the protocol is under our -control. The only downside is that standard tooling that expects to use SSH -won't be able to work with this mechanism, unless adapters can be written. - -## Constraints and Assumptions - -- SSH support is not currently in scope -- CGroup confinement is ultimately desired, but implementing that support is not currently in scope -- SELinux confinement is ultimately desired, but implementing that support is not currently in scope - -## Use Cases - -- As a user of a Kubernetes cluster, I want to run arbitrary commands in a container, attaching my local stdin/stdout/stderr to the container -- As a user of a Kubernetes cluster, I want to be able to connect to local ports on my computer and have them forwarded to ports in the container - -## Process Flow - -### Remote Command Execution Flow -1. The client connects to the Kubernetes Master to initiate a remote command execution -request -2. The Master proxies the request to the Kubelet where the container lives -3. The Kubelet executes nsenter + the requested command and streams stdin/stdout/stderr back and forth between the client and the container - -### Port Forwarding Flow -1. The client connects to the Kubernetes Master to initiate a remote command execution -request -2. The Master proxies the request to the Kubelet where the container lives -3. The client listens on each specified local port, awaiting local connections -4. The client connects to one of the local listening ports -4. The client notifies the Kubelet of the new connection -5. The Kubelet executes nsenter + socat and streams data back and forth between the client and the port in the container - - -## Design Considerations - -### Streaming Protocol - -The current multiplexed streaming protocol used is SPDY. This is not the -long-term desire, however. As soon as there is viable support for HTTP/2 in Go, -we will switch to that. - -### Master as First Level Proxy - -Clients should not be allowed to communicate directly with the Kubelet for -security reasons. Therefore, the Master is currently the only suggested entry -point to be used for remote command execution and port forwarding. 
This is not -necessarily desirable, as it means that all remote command execution and port -forwarding traffic must travel through the Master, potentially impacting other -API requests. - -In the future, it might make more sense to retrieve an authorization token from -the Master, and then use that token to initiate a remote command execution or -port forwarding request with a load balanced proxy service dedicated to this -functionality. This would keep the streaming traffic out of the Master. - -### Kubelet as Backend Proxy - -The kubelet is currently responsible for handling remote command execution and -port forwarding requests. Just like with the Master described above, this means -that all remote command execution and port forwarding streaming traffic must -travel through the Kubelet, which could result in a degraded ability to service -other requests. - -In the future, it might make more sense to use a separate service on the node. - -Alternatively, we could possibly inject a process into the container that only -listens for a single request, expose that process's listening port on the node, -and then issue a redirect to the client such that it would connect to the first -level proxy, which would then proxy directly to the injected process's exposed -port. This would minimize the amount of proxying that takes place. - -### Scalability - -There are at least 2 different ways to execute a command in a container: -`docker exec` and `nsenter`. While `docker exec` might seem like an easier and -more obvious choice, it has some drawbacks. - -#### `docker exec` - -We could expose `docker exec` (i.e. have Docker listen on an exposed TCP port -on the node), but this would require proxying from the edge and securing the -Docker API. `docker exec` calls go through the Docker daemon, meaning that all -stdin/stdout/stderr traffic is proxied through the Daemon, adding an extra hop. -Additionally, you can't isolate 1 malicious `docker exec` call from normal -usage, meaning an attacker could initiate a denial of service or other attack -and take down the Docker daemon, or the node itself. - -We expect remote command execution and port forwarding requests to be long -running and/or high bandwidth operations, and routing all the streaming data -through the Docker daemon feels like a bottleneck we can avoid. - -#### `nsenter` - -The implementation currently uses `nsenter` to run commands in containers, -joining the appropriate container namespaces. `nsenter` runs directly on the -node and is not proxied through any single daemon process. - -### Security - -Authentication and authorization hasn't specifically been tested yet with this -functionality. We need to make sure that users are not allowed to execute -remote commands or do port forwarding to containers they aren't allowed to -access. - -Additional work is required to ensure that multiple command execution or port forwarding connections from different clients are not able to see each other's data. This can most likely be achieved via SELinux labeling and unique process contexts. 
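To make the port forwarding flow described above more concrete, the following is a minimal, hypothetical Go sketch of the client side: it listens on a local port and copies bytes between each local connection and the remote side. The plain TCP dial and the `masterAddr`/port values are placeholders for the authenticated, multiplexed SPDY stream a real client would open through the Master and Kubelet; this is illustrative only, not the actual Kubernetes client code.

```go
package main

import (
	"io"
	"log"
	"net"
)

// forwardLocalPort listens on a local port and forwards every connection it
// accepts. In the real flow, the remote side would be a new stream on the
// multiplexed connection through the Master/Kubelet to the container's port;
// a plain TCP dial is used here purely as a stand-in.
func forwardLocalPort(localPort, masterAddr string) error {
	ln, err := net.Listen("tcp", "127.0.0.1:"+localPort) // await local connections
	if err != nil {
		return err
	}
	for {
		local, err := ln.Accept() // a local client connected
		if err != nil {
			return err
		}
		go func(local net.Conn) {
			defer local.Close()
			remote, err := net.Dial("tcp", masterAddr) // stand-in for opening a forwarding stream
			if err != nil {
				log.Println("dial:", err)
				return
			}
			defer remote.Close()
			// Stream data back and forth between the local client and the
			// port in the container.
			go io.Copy(remote, local)
			io.Copy(local, remote)
		}(local)
	}
}

func main() {
	// The port and address below are illustrative placeholders.
	log.Fatal(forwardLocalPort("8080", "master.example.com:443"))
}
```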
- -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/command_execution_port_forwarding.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/command_execution_port_forwarding.md?pixel)]() diff --git a/release-0.19.0/docs/design/event_compression.md b/release-0.19.0/docs/design/event_compression.md deleted file mode 100644 index f769d890a90..00000000000 --- a/release-0.19.0/docs/design/event_compression.md +++ /dev/null @@ -1,84 +0,0 @@ -# Kubernetes Event Compression - -This document captures the design of event compression. - - -## Background - -Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate ```image_not_existing``` and ```container_is_waiting``` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)). - -## Proposal -Each binary that generates events (for example, ```kubelet```) should keep track of previously generated events so that it can collapse recurring events into a single event instead of creating a new instance for each new event. - -Event compression should be best effort (not guaranteed). Meaning, in the worst case, ```n``` identical (minus timestamp) events may still result in ```n``` event entries. - -## Design -Instead of a single Timestamp, each event object [contains](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/types.go#L1111) the following fields: - * ```FirstTimestamp util.Time``` - * The date/time of the first occurrence of the event. - * ```LastTimestamp util.Time``` - * The date/time of the most recent occurrence of the event. - * On first occurrence, this is equal to the FirstTimestamp. - * ```Count int``` - * The number of occurrences of this event between FirstTimestamp and LastTimestamp - * On first occurrence, this is 1. - -Each binary that generates events: - * Maintains a historical record of previously generated events: - * Implmented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go). - * The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event: - * ```event.Source.Component``` - * ```event.Source.Host``` - * ```event.InvolvedObject.Kind``` - * ```event.InvolvedObject.Namespace``` - * ```event.InvolvedObject.Name``` - * ```event.InvolvedObject.UID``` - * ```event.InvolvedObject.APIVersion``` - * ```event.Reason``` - * ```event.Message``` - * The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache. 
- * When an event is generated, the previously generated events cache is checked (see [```pkg/client/record/event.go```](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/record/event.go)). - * If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd: - * The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count. - * The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update). - * If the key for the new event does not match the key for any previously generated event (meaning none of the above fields match between the new event and any previously generated events), then the event is considered to be new/unique and a new event entry is created in etcd: - * The usual POST/create event API is called to create a new event entry in etcd. - * An entry for the event is also added to the previously generated events cache. - -## Issues/Risks - * Compression is not guaranteed, because each component keeps track of event history in memory - * An application restart causes event history to be cleared, meaning event history is not preserved across application restarts and compression will not occur across component restarts. - * Because an LRU cache is used to keep track of previously generated events, if too many unique events are generated, old events will be evicted from the cache, so events will only be compressed until they age out of the events cache, at which point any new instance of the event will cause a new entry to be created in etcd. - -## Example -Sample kubectl output -``` -FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE -Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet. -Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-1.c.saad-dev-vms.internal} Starting kubelet. -Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-3.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-3.c.saad-dev-vms.internal} Starting kubelet. -Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-2.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-2.c.saad-dev-vms.internal} Starting kubelet. 
-Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-influx-grafana-controller-0133o Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 elasticsearch-logging-controller-fplln Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 kibana-logging-controller-gziey Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 skydns-ls6k1 Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest" -Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal - -``` - -This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries. - -## Related Pull Requests/Issues - * Issue [#4073](https://github.com/GoogleCloudPlatform/kubernetes/issues/4073): Compress duplicate events - * PR [#4157](https://github.com/GoogleCloudPlatform/kubernetes/issues/4157): Add "Update Event" to Kubernetes API - * PR [#4206](https://github.com/GoogleCloudPlatform/kubernetes/issues/4206): Modify Event struct to allow compressing multiple recurring events in to a single event - * PR [#4306](https://github.com/GoogleCloudPlatform/kubernetes/issues/4306): Compress recurring events in to a single event to optimize etcd storage - * PR [#4444](https://github.com/GoogleCloudPlatform/kubernetes/pull/4444): Switch events history to use LRU cache instead of map - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/event_compression.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/event_compression.md?pixel)]() diff --git a/release-0.19.0/docs/design/expansion.md b/release-0.19.0/docs/design/expansion.md deleted file mode 100644 index 83ab9aeb523..00000000000 --- a/release-0.19.0/docs/design/expansion.md +++ /dev/null @@ -1,391 +0,0 @@ -# Variable expansion in pod command, args, and env - -## Abstract - -A proposal for the expansion of environment variables using a simple `$(var)` syntax. - -## Motivation - -It is extremely common for users to need to compose environment variables or pass arguments to -their commands using the values of environment variables. Kubernetes should provide a facility for -the 80% cases in order to decrease coupling and the use of workarounds. - -## Goals - -1. Define the syntax format -2. Define the scoping and ordering of substitutions -3. Define the behavior for unmatched variables -4. 
Define the behavior for unexpected/malformed input - -## Constraints and Assumptions - -* This design should describe the simplest possible syntax to accomplish the use-cases -* Expansion syntax will not support more complicated shell-like behaviors such as default values - (viz: `$(VARIABLE_NAME:"default")`), inline substitution, etc. - -## Use Cases - -1. As a user, I want to compose new environment variables for a container using a substitution - syntax to reference other variables in the container's environment and service environment - variables -1. As a user, I want to substitute environment variables into a container's command -1. As a user, I want to do the above without requiring the container's image to have a shell -1. As a user, I want to be able to specify a default value for a service variable which may - not exist -1. As a user, I want to see an event associated with the pod if an expansion fails (ie, references - variable names that cannot be expanded) - -### Use Case: Composition of environment variables - -Currently, containers are injected with docker-style environment variables for the services in -their pod's namespace. There are several variables for each service, but users routinely need -to compose URLs based on these variables because there is not a variable for the exact format -they need. Users should be able to build new environment variables with the exact format they need. -Eventually, it should also be possible to turn off the automatic injection of the docker-style -variables into pods and let the users consume the exact information they need via the downward API -and composition. - -#### Expanding expanded variables - -It should be possible to reference a variable which is itself the result of an expansion, if the -referenced variable is declared in the container's environment prior to the one referencing it. -Put another way -- a container's environment is expanded in order, and expanded variables are -available to subsequent expansions. - -### Use Case: Variable expansion in command - -Users frequently need to pass the values of environment variables to a container's command. -Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a -shell in the container's command and have the shell perform the substitution, or to write a wrapper -script that sets up the environment and runs the command. This has a number of drawbacks: - -1. Solutions that require a shell are unfriendly to images that do not contain a shell -2. Wrapper scripts make it harder to use images as base images -3. Wrapper scripts increase coupling to kubernetes - -Users should be able to do the 80% case of variable expansion in command without writing a wrapper -script or adding a shell invocation to their containers' commands. - -### Use Case: Images without shells - -The current workaround for variable expansion in a container's command requires the container's -image to have a shell. This is unfriendly to images that do not contain a shell (`scratch` images, -for example). Users should be able to perform the other use-cases in this design without regard to -the content of their images. - -### Use Case: See an event for incomplete expansions - -It is possible that a container with incorrect variable values or command line may continue to run -for a long period of time, and that the end-user would have no visual or obvious warning of the -incorrect configuration. 
If the kubelet creates an event when an expansion references a variable -that cannot be expanded, it will help users quickly detect problems with expansions. - -## Design Considerations - -### What features should be supported? - -In order to limit complexity, we want to provide the right amount of functionality so that the 80% -cases can be realized and nothing more. We felt that the essentials boiled down to: - -1. Ability to perform direct expansion of variables in a string -2. Ability to specify default values via a prioritized mapping function but without support for - defaults as a syntax-level feature - -### What should the syntax be? - -The exact syntax for variable expansion has a large impact on how users perceive and relate to the -feature. We considered implementing a very restrictive subset of the shell `${var}` syntax. This -syntax is an attractive option on some level, because many people are familiar with it. However, -this syntax also has a large number of lesser known features such as the ability to provide -default values for unset variables, perform inline substitution, etc. - -In the interest of preventing conflation of the expansion feature in Kubernetes with the shell -feature, we chose a different syntax similar to the one in Makefiles, `$(var)`. We also chose not -to support the bare `$var` format, since it is not required to implement the required use-cases. - -Nested references, ie, variable expansion within variable names, are not supported. - -#### How should unmatched references be treated? - -Ideally, it should be extremely clear when a variable reference couldn't be expanded. We decided -the best experience for unmatched variable references would be to have the entire reference, syntax -included, show up in the output. As an example, if the reference `$(VARIABLE_NAME)` cannot be -expanded, then `$(VARIABLE_NAME)` should be present in the output. - -#### Escaping the operator - -Although the `$(var)` syntax does overlap with the `$(command)` form of command substitution -supported by many shells, because unexpanded variables are present verbatim in the output, we -expect this will not present a problem to many users. If there is a collision between a variable -name and command substitution syntax, the syntax can be escaped with the form `$$(VARIABLE_NAME)`, -which will evaluate to `$(VARIABLE_NAME)` whether `VARIABLE_NAME` can be expanded or not. - -## Design - -This design encompasses the variable expansion syntax and specification and the changes needed to -incorporate the expansion feature into the container's environment and command. - -### Syntax and expansion mechanics - -This section describes the expansion syntax, evaluation of variable values, and how unexpected or -malformed inputs are handled. - -#### Syntax - -The inputs to the expansion feature are: - -1. A utf-8 string (the input string) which may contain variable references -2. A function (the mapping function) that maps the name of a variable to the variable's value, of - type `func(string) string` - -Variable references in the input string are indicated exclusively with the syntax -`$()`. The syntax tokens are: - -- `$`: the operator -- `(`: the reference opener -- `)`: the reference closer - -The operator has no meaning unless accompanied by the reference opener and closer tokens. The -operator can be escaped using `$$`. One literal `$` will be emitted for each `$$` in the input. - -The reference opener and closer characters have no meaning when not part of a variable reference. 
-If a variable reference is malformed, viz: `$(VARIABLE_NAME` without a closing expression, the -operator and expression opening characters are treated as ordinary characters without special -meanings. - -#### Scope and ordering of substitutions - -The scope in which variable references are expanded is defined by the mapping function. Within the -mapping function, any arbitrary strategy may be used to determine the value of a variable name. -The most basic implementation of a mapping function is to use a `map[string]string` to lookup the -value of a variable. - -In order to support default values for variables like service variables presented by the kubelet, -which may not be bound because the service that provides them does not yet exist, there should be a -mapping function that uses a list of `map[string]string` like: - -```go -func MakeMappingFunc(maps ...map[string]string) func(string) string { - return func(input string) string { - for _, context := range maps { - val, ok := context[input] - if ok { - return val - } - } - - return "" - } -} - -// elsewhere -containerEnv := map[string]string{ - "FOO": "BAR", - "ZOO": "ZAB", - "SERVICE2_HOST": "some-host", -} - -serviceEnv := map[string]string{ - "SERVICE_HOST": "another-host", - "SERVICE_PORT": "8083", -} - -// single-map variation -mapping := MakeMappingFunc(containerEnv) - -// default variables not found in serviceEnv -mappingWithDefaults := MakeMappingFunc(serviceEnv, containerEnv) -``` - -### Implementation changes - -The necessary changes to implement this functionality are: - -1. Add a new interface, `ObjectEventRecorder`, which is like the `EventRecorder` interface, but - scoped to a single object, and a function that returns an `ObjectEventRecorder` given an - `ObjectReference` and an `EventRecorder` -2. Introduce `third_party/golang/expansion` package that provides: - 1. An `Expand(string, func(string) string) string` function - 2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function -3. Make the kubelet expand environment correctly -4. Make the kubelet expand command correctly - -#### Event Recording - -In order to provide an event when an expansion references undefined variables, the mapping function -must be able to create an event. In order to facilitate this, we should create a new interface in -the `api/client/record` package which is similar to `EventRecorder`, but scoped to a single object: - -```go -// ObjectEventRecorder knows how to record events about a single object. -type ObjectEventRecorder interface { - // Event constructs an event from the given information and puts it in the queue for sending. - // 'reason' is the reason this event is generated. 'reason' should be short and unique; it will - // be used to automate handling of events, so imagine people writing switch statements to - // handle them. You want to make that easy. - // 'message' is intended to be human readable. - // - // The resulting event will be created in the same namespace as the reference object. - Event(reason, message string) - - // Eventf is just like Event, but with Sprintf for the message field. - Eventf(reason, messageFmt string, args ...interface{}) - - // PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field. 
- PastEventf(timestamp util.Time, reason, messageFmt string, args ...interface{}) -} -``` - -There should also be a function that can construct an `ObjectEventRecorder` from a `runtime.Object` -and an `EventRecorder`: - -```go -type objectRecorderImpl struct { - object runtime.Object - recorder EventRecorder -} - -func (r *objectRecorderImpl) Event(reason, message string) { - r.recorder.Event(r.object, reason, message) -} - -func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) ObjectEventRecorder { - return &objectRecorderImpl{object, recorder} -} -``` - -#### Expansion package - -The expansion package should provide two methods: - -```go -// MappingFuncFor returns a mapping function for use with Expand that -// implements the expansion semantics defined in the expansion spec; it -// returns the input string wrapped in the expansion syntax if no mapping -// for the input is found. If no expansion is found for a key, an event -// is raised on the given recorder. -func MappingFuncFor(recorder record.ObjectEventRecorder, context ...map[string]string) func(string) string { - // ... -} - -// Expand replaces variable references in the input string according to -// the expansion spec using the given mapping function to resolve the -// values of variables. -func Expand(input string, mapping func(string) string) string { - // ... -} -``` - -#### Kubelet changes - -The Kubelet should be made to correctly expand variables references in a container's environment, -command, and args. Changes will need to be made to: - -1. The `makeEnvironmentVariables` function in the kubelet; this is used by - `GenerateRunContainerOptions`, which is used by both the docker and rkt container runtimes -2. The docker manager `setEntrypointAndCommand` func has to be changed to perform variable - expansion -3. The rkt runtime should be made to support expansion in command and args when support for it is - implemented - -### Examples - -#### Inputs and outputs - -These examples are in the context of the mapping: - -| Name | Value | -|-------------|------------| -| `VAR_A` | `"A"` | -| `VAR_B` | `"B"` | -| `VAR_C` | `"C"` | -| `VAR_REF` | `$(VAR_A)` | -| `VAR_EMPTY` | `""` | - -No other variables are defined. 
- -| Input | Result | -|--------------------------------|----------------------------| -| `"$(VAR_A)"` | `"A"` | -| `"___$(VAR_B)___"` | `"___B___"` | -| `"___$(VAR_C)"` | `"___C"` | -| `"$(VAR_A)-$(VAR_A)"` | `"A-A"` | -| `"$(VAR_A)-1"` | `"A-1"` | -| `"$(VAR_A)_$(VAR_B)_$(VAR_C)"` | `"A_B_C"` | -| `"$$(VAR_B)_$(VAR_A)"` | `"$(VAR_B)_A"` | -| `"$$(VAR_A)_$$(VAR_B)"` | `"$(VAR_A)_$(VAR_B)"` | -| `"f000-$$VAR_A"` | `"f000-$VAR_A"` | -| `"foo\\$(VAR_C)bar"` | `"foo\Cbar"` | -| `"foo\\\\$(VAR_C)bar"` | `"foo\\Cbar"` | -| `"foo\\\\\\\\$(VAR_A)bar"` | `"foo\\\\Abar"` | -| `"$(VAR_A$(VAR_B))"` | `"$(VAR_A$(VAR_B))"` | -| `"$(VAR_A$(VAR_B)"` | `"$(VAR_A$(VAR_B)"` | -| `"$(VAR_REF)"` | `"$(VAR_A)"` | -| `"%%$(VAR_REF)--$(VAR_REF)%%"` | `"%%$(VAR_A)--$(VAR_A)%%"` | -| `"foo$(VAR_EMPTY)bar"` | `"foobar"` | -| `"foo$(VAR_Awhoops!"` | `"foo$(VAR_Awhoops!"` | -| `"f00__(VAR_A)__"` | `"f00__(VAR_A)__"` | -| `"$?_boo_$!"` | `"$?_boo_$!"` | -| `"$VAR_A"` | `"$VAR_A"` | -| `"$(VAR_DNE)"` | `"$(VAR_DNE)"` | -| `"$$$$$$(BIG_MONEY)"` | `"$$$(BIG_MONEY)"` | -| `"$$$$$$(VAR_A)"` | `"$$$(VAR_A)"` | -| `"$$$$$$$(GOOD_ODDS)"` | `"$$$$(GOOD_ODDS)"` | -| `"$$$$$$$(VAR_A)"` | `"$$$A"` | -| `"$VAR_A)"` | `"$VAR_A)"` | -| `"${VAR_A}"` | `"${VAR_A}"` | -| `"$(VAR_B)_______$(A"` | `"B_______$(A"` | -| `"$(VAR_C)_______$("` | `"C_______$("` | -| `"$(VAR_A)foobarzab$"` | `"Afoobarzab$"` | -| `"foo-\\$(VAR_A"` | `"foo-\$(VAR_A"` | -| `"--$($($($($--"` | `"--$($($($($--"` | -| `"$($($($($--foo$("` | `"$($($($($--foo$("` | -| `"foo0--$($($($("` | `"foo0--$($($($("` | -| `"$(foo$$var)` | `$(foo$$var)` | - -#### In a pod: building a URL - -Notice the `$(var)` syntax. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: expansion-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: PUBLIC_URL - value: "http://$(GITSERVER_SERVICE_HOST):$(GITSERVER_SERVICE_PORT)" - restartPolicy: Never -``` - -#### In a pod: building a URL using downward API - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: expansion-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: "metadata.namespace" - - name: PUBLIC_URL - value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)" - restartPolicy: Never -``` - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/expansion.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/expansion.md?pixel)]() diff --git a/release-0.19.0/docs/design/identifiers.md b/release-0.19.0/docs/design/identifiers.md deleted file mode 100644 index 1eaa301a80a..00000000000 --- a/release-0.19.0/docs/design/identifiers.md +++ /dev/null @@ -1,96 +0,0 @@ -# Identifiers and Names in Kubernetes - -A summarization of the goals and recommendations for identifiers in Kubernetes. Described in [GitHub issue #199](https://github.com/GoogleCloudPlatform/kubernetes/issues/199). - - -## Definitions - -UID -: A non-empty, opaque, system-generated value guaranteed to be unique in time and space; intended to distinguish between historical occurrences of similar entities. 
- -Name -: A non-empty string guaranteed to be unique within a given scope at a particular time; used in resource URLs; provided by clients at creation time and encouraged to be human friendly; intended to facilitate creation idempotence and space-uniqueness of singleton objects, distinguish distinct entities, and reference particular entities across operations. - -[rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) label (DNS_LABEL) -: An alphanumeric (a-z, and 0-9) string, with a maximum length of 63 characters, with the '-' character allowed anywhere except the first or last character, suitable for use as a hostname or segment in a domain name - -[rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) subdomain (DNS_SUBDOMAIN) -: One or more lowercase rfc1035/rfc1123 labels separated by '.' with a maximum length of 253 characters - -[rfc4122](http://www.ietf.org/rfc/rfc4122.txt) universally unique identifier (UUID) -: A 128 bit generated value that is extremely unlikely to collide across time and space and requires no central coordination - - -## Objectives for names and UIDs - -1. Uniquely identify (via a UID) an object across space and time - -2. Uniquely name (via a name) an object across space - -3. Provide human-friendly names in API operations and/or configuration files - -4. Allow idempotent creation of API resources (#148) and enforcement of space-uniqueness of singleton objects - -5. Allow DNS names to be automatically generated for some objects - - -## General design - -1. When an object is created via an API, a Name string (a DNS_SUBDOMAIN) must be specified. Name must be non-empty and unique within the apiserver. This enables idempotent and space-unique creation operations. Parts of the system (e.g. replication controller) may join strings (e.g. a base name and a random suffix) to create a unique Name. For situations where generating a name is impractical, some or all objects may support a param to auto-generate a name. Generating random names will defeat idempotency. - * Examples: "guestbook.user", "backend-x4eb1" - -2. When an object is created via an API, a Namespace string (a DNS_SUBDOMAIN? format TBD via #1114) may be specified. Depending on the API receiver, namespaces might be validated (e.g. apiserver might ensure that the namespace actually exists). If a namespace is not specified, one will be assigned by the API receiver. This assignment policy might vary across API receivers (e.g. apiserver might have a default, kubelet might generate something semi-random). - * Example: "api.k8s.example.com" - -3. Upon acceptance of an object via an API, the object is assigned a UID (a UUID). UID must be non-empty and unique across space and time. - * Example: "01234567-89ab-cdef-0123-456789abcdef" - - -## Case study: Scheduling a pod - -Pods can be placed onto a particular node in a number of ways. This case -study demonstrates how the above design can be applied to satisfy the -objectives. - -### A pod scheduled by a user through the apiserver - -1. A user submits a pod with Namespace="" and Name="guestbook" to the apiserver. - -2. The apiserver validates the input. - 1. A default Namespace is assigned. - 2. The pod name must be space-unique within the Namespace. - 3. Each container within the pod has a name which must be space-unique within the pod. - -3. The pod is accepted. - 1. A new UID is assigned. - -4. The pod is bound to a node. - 1. 
The kubelet on the node is passed the pod's UID, Namespace, and Name. - -5. Kubelet validates the input. - -6. Kubelet runs the pod. - 1. Each container is started up with enough metadata to distinguish the pod from whence it came. - 2. Each attempt to run a container is assigned a UID (a string) that is unique across time. - * This may correspond to Docker's container ID. - -### A pod placed by a config file on the node - -1. A config file is stored on the node, containing a pod with UID="", Namespace="", and Name="cadvisor". - -2. Kubelet validates the input. - 1. Since UID is not provided, kubelet generates one. - 2. Since Namespace is not provided, kubelet generates one. - 1. The generated namespace should be deterministic and cluster-unique for the source, such as a hash of the hostname and file path. - * E.g. Namespace="file-f4231812554558a718a01ca942782d81" - -3. Kubelet runs the pod. - 1. Each container is started up with enough metadata to distinguish the pod from whence it came. - 2. Each attempt to run a container is assigned a UID (a string) that is unique across time. - 1. This may correspond to Docker's container ID. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/identifiers.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/identifiers.md?pixel)]() diff --git a/release-0.19.0/docs/design/namespaces.md b/release-0.19.0/docs/design/namespaces.md deleted file mode 100644 index cf67b56acae..00000000000 --- a/release-0.19.0/docs/design/namespaces.md +++ /dev/null @@ -1,340 +0,0 @@ -# Namespaces - -## Abstract - -A Namespace is a mechanism to partition resources created by users into -a logically named group. - -## Motivation - -A single cluster should be able to satisfy the needs of multiple user communities. - -Each user community wants to be able to work in isolation from other communities. - -Each user community has its own: - -1. resources (pods, services, replication controllers, etc.) -2. policies (who can or cannot perform actions in their community) -3. constraints (this community is allowed this much quota, etc.) - -A cluster operator may create a Namespace for each unique user community. - -The Namespace provides a unique scope for: - -1. named resources (to avoid basic naming collisions) -2. delegated management authority to trusted users -3. ability to limit community resource consumption - -## Use cases - -1. As a cluster operator, I want to support multiple user communities on a single cluster. -2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users - in those communities. -3. As a cluster operator, I want to limit the amount of resources each community can consume in order - to limit the impact to other communities using the cluster. -4. As a cluster user, I want to interact with resources that are pertinent to my user community in - isolation of what other user communities are doing on the cluster. - -## Design - -### Data Model - -A *Namespace* defines a logically named group for multiple *Kind*s of resources. - -``` -type Namespace struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - Spec NamespaceSpec `json:"spec,omitempty"` - Status NamespaceStatus `json:"status,omitempty"` -} -``` - -A *Namespace* name is a DNS compatible label. - -A *Namespace* must exist prior to associating content with it. - -A *Namespace* must not be deleted if there is content associated with it. 
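As a small illustration of the data model above, a minimal Namespace object serialized for the v1 API might look like the following (the name `development` and the single default finalizer are examples only):

```
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development"
  },
  "spec": {
    "finalizers": ["kubernetes"]
  }
}
```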
- -To associate a resource with a *Namespace* the following conditions must be satisfied: - -1. The resource's *Kind* must be registered as having *RESTScopeNamespace* with the server -2. The resource's *TypeMeta.Namespace* field must have a value that references an existing *Namespace* - -The *Name* of a resource associated with a *Namespace* is unique to that *Kind* in that *Namespace*. - -It is intended to be used in resource URLs; provided by clients at creation time, and encouraged to be -human friendly; intended to facilitate idempotent creation, space-uniqueness of singleton objects, -distinguish distinct entities, and reference particular entities across operations. - -### Authorization - -A *Namespace* provides an authorization scope for accessing content associated with the *Namespace*. - -See [Authorization plugins](../authorization.md) - -### Limit Resource Consumption - -A *Namespace* provides a scope to limit resource consumption. - -A *LimitRange* defines min/max constraints on the amount of resources a single entity can consume in -a *Namespace*. - -See [Admission control: Limit Range](admission_control_limit_range.md) - -A *ResourceQuota* tracks aggregate usage of resources in the *Namespace* and allows cluster operators -to define *Hard* resource usage limits that a *Namespace* may consume. - -See [Admission control: Resource Quota](admission_control_resource_quota.md) - -### Finalizers - -Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects. - -``` -type FinalizerName string - -// These are internal finalizers to Kubernetes, must be qualified name unless defined here -const ( - FinalizerKubernetes FinalizerName = "kubernetes" -) - -// NamespaceSpec describes the attributes on a Namespace -type NamespaceSpec struct { - // Finalizers is an opaque list of values that must be empty to permanently remove object from storage - Finalizers []FinalizerName -} -``` - -A *FinalizerName* is a qualified name. - -The API Server enforces that a *Namespace* can only be deleted from storage if and only if -it's *Namespace.Spec.Finalizers* is empty. - -A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation. - -Each *Namespace* created has *kubernetes* as an item in its list of initial *Namespace.Spec.Finalizers* -set by default. - -### Phases - -A *Namespace* may exist in the following phases. - -``` -type NamespacePhase string -const( - NamespaceActive NamespacePhase = "Active" - NamespaceTerminating NamespaceTerminating = "Terminating" -) - -type NamespaceStatus struct { - ... - Phase NamespacePhase -} -``` - -A *Namespace* is in the **Active** phase if it does not have a *ObjectMeta.DeletionTimestamp*. - -A *Namespace* is in the **Terminating** phase if it has a *ObjectMeta.DeletionTimestamp*. - -**Active** - -Upon creation, a *Namespace* goes in the *Active* phase. This means that content may be associated with -a namespace, and all normal interactions with the namespace are allowed to occur in the cluster. - -If a DELETE request occurs for a *Namespace*, the *Namespace.ObjectMeta.DeletionTimestamp* is set -to the current server time. A *namespace controller* observes the change, and sets the *Namespace.Status.Phase* -to *Terminating*. - -**Terminating** - -A *namespace controller* watches for *Namespace* objects that have a *Namespace.ObjectMeta.DeletionTimestamp* -value set in order to know when to initiate graceful termination of the *Namespace* associated content that -are known to the cluster. 
- -The *namespace controller* enumerates each known resource type in that namespace and deletes it one by one. - -Admission control blocks creation of new resources in that namespace in order to prevent a race-condition -where the controller could believe all of a given resource type had been deleted from the namespace, -when in fact some other rogue client agent had created new objects. Using admission control in this -scenario allows each of registry implementations for the individual objects to not need to take into account Namespace life-cycle. - -Once all objects known to the *namespace controller* have been deleted, the *namespace controller* -executes a *finalize* operation on the namespace that removes the *kubernetes* value from -the *Namespace.Spec.Finalizers* list. - -If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and -whose *Namespace.Spec.Finalizers* list is empty, it will signal the server to permanently remove -the *Namespace* from storage by sending a final DELETE action to the API server. - -### REST API - -To interact with the Namespace API: - -| Action | HTTP Verb | Path | Description | -| ------ | --------- | ---- | ----------- | -| CREATE | POST | /api/{version}/namespaces | Create a namespace | -| LIST | GET | /api/{version}/namespaces | List all namespaces | -| UPDATE | PUT | /api/{version}/namespaces/{namespace} | Update namespace {namespace} | -| DELETE | DELETE | /api/{version}/namespaces/{namespace} | Delete namespace {namespace} | -| FINALIZE | POST | /api/{version}/namespaces/{namespace}/finalize | Finalize namespace {namespace} | -| WATCH | GET | /api/{version}/watch/namespaces | Watch all namespaces | - -This specification reserves the name *finalize* as a sub-resource to namespace. - -As a consequence, it is invalid to have a *resourceType* managed by a namespace whose kind is *finalize*. - -To interact with content associated with a Namespace: - -| Action | HTTP Verb | Path | Description | -| ---- | ---- | ---- | ---- | -| CREATE | POST | /api/{version}/namespaces/{namespace}/{resourceType}/ | Create instance of {resourceType} in namespace {namespace} | -| GET | GET | /api/{version}/namespaces/{namespace}/{resourceType}/{name} | Get instance of {resourceType} in namespace {namespace} with {name} | -| UPDATE | PUT | /api/{version}/namespaces/{namespace}/{resourceType}/{name} | Update instance of {resourceType} in namespace {namespace} with {name} | -| DELETE | DELETE | /api/{version}/namespaces/{namespace}/{resourceType}/{name} | Delete instance of {resourceType} in namespace {namespace} with {name} | -| LIST | GET | /api/{version}/namespaces/{namespace}/{resourceType} | List instances of {resourceType} in namespace {namespace} | -| WATCH | GET | /api/{version}/watch/namespaces/{namespace}/{resourceType} | Watch for changes to a {resourceType} in namespace {namespace} | -| WATCH | GET | /api/{version}/watch/{resourceType} | Watch for changes to a {resourceType} across all namespaces | -| LIST | GET | /api/{version}/list/{resourceType} | List instances of {resourceType} across all namespaces | - -The API server verifies the *Namespace* on resource creation matches the *{namespace}* on the path. - -The API server will associate a resource with a *Namespace* if not populated by the end-user based on the *Namespace* context -of the incoming request. If the *Namespace* of the resource being created, or updated does not match the *Namespace* on the request, -then the API server will reject the request. 
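To make the REST paths above concrete, a hypothetical shell session against a v1 apiserver might drive the namespace lifecycle as follows; it assumes the API is reachable on `localhost:8080` (for example via an authenticated proxy), and the hostname/port are placeholders only:

```bash
# Create a namespace.
curl -X POST -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"development"}}' \
  http://localhost:8080/api/v1/namespaces

# List all namespaces.
curl http://localhost:8080/api/v1/namespaces

# Delete the namespace; the namespace controller then begins graceful termination.
curl -X DELETE http://localhost:8080/api/v1/namespaces/development
```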
- -### Storage - -A namespace provides a unique identifier space and therefore must be in the storage path of a resource. - -In etcd, we want to continue to still support efficient WATCH across namespaces. - -Resources that persist content in etcd will have storage paths as follows: - -/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name} - -This enables consumers to WATCH /registry/{resourceType} for changes across namespace of a particular {resourceType}. - -### Kubelet - -The kubelet will register pod's it sources from a file or http source with a namespace associated with the -*cluster-id* - -### Example: OpenShift Origin managing a Kubernetes Namespace - -In this example, we demonstrate how the design allows for agents built on-top of -Kubernetes that manage their own set of resource types associated with a *Namespace* -to take part in Namespace termination. - -OpenShift creates a Namespace in Kubernetes - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - }, - "spec": { - "finalizers": ["openshift.com/origin", "kubernetes"], - }, - "status": { - "phase": "Active", - }, - "labels": { - "name": "development" - }, -} -``` - -OpenShift then goes and creates a set of resources (pods, services, etc) associated -with the "development" namespace. It also creates its own set of resources in its -own storage associated with the "development" namespace unknown to Kubernetes. - -User deletes the Namespace in Kubernetes, and Namespace now has following state: - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - "deletionTimestamp": "..." - }, - "spec": { - "finalizers": ["openshift.com/origin", "kubernetes"], - }, - "status": { - "phase": "Terminating", - }, - "labels": { - "name": "development" - }, -} -``` - -The Kubernetes *namespace controller* observes the namespace has a *deletionTimestamp* -and begins to terminate all of the content in the namespace that it knows about. Upon -success, it executes a *finalize* action that modifies the *Namespace* by -removing *kubernetes* from the list of finalizers: - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - "deletionTimestamp": "..." - }, - "spec": { - "finalizers": ["openshift.com/origin"], - }, - "status": { - "phase": "Terminating", - }, - "labels": { - "name": "development" - }, -} -``` - -OpenShift Origin has its own *namespace controller* that is observing cluster state, and -it observes the same namespace had a *deletionTimestamp* assigned to it. It too will go -and purge resources from its own storage that it manages associated with that namespace. -Upon completion, it executes a *finalize* action and removes the reference to "openshift.com/origin" -from the list of finalizers. - -This results in the following state: - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - "deletionTimestamp": "..." - }, - "spec": { - "finalizers": [], - }, - "status": { - "phase": "Terminating", - }, - "labels": { - "name": "development" - }, -} -``` - -At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace -has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all -content associated from that namespace has been purged. It performs a final DELETE action -to remove that Namespace from the storage. 
- -At this point, all content associated with that Namespace, and the Namespace itself are gone. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/namespaces.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/namespaces.md?pixel)]() diff --git a/release-0.19.0/docs/design/networking.md b/release-0.19.0/docs/design/networking.md deleted file mode 100644 index 2bbfba40dbd..00000000000 --- a/release-0.19.0/docs/design/networking.md +++ /dev/null @@ -1,114 +0,0 @@ -# Networking - -## Model and motivation - -Kubernetes deviates from the default Docker networking model. The goal is for each pod to have an IP in a flat shared networking namespace that has full communication with other physical computers and containers across the network. IP-per-pod creates a clean, backward-compatible model where pods can be treated much like VMs or physical hosts from the perspectives of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. - -OTOH, dynamic port allocation requires supporting both static ports (e.g., for externally accessible services) and dynamically allocated ports, requires partitioning centrally allocated and locally acquired dynamic ports, complicates scheduling (since ports are a scarce resource), is inconvenient for users, complicates application configuration, is plagued by port conflicts and reuse and exhaustion, requires non-standard approaches to naming (e.g., etcd rather than DNS), requires proxies and/or redirection for programs using standard naming/addressing mechanisms (e.g., web browsers), requires watching and cache invalidation for address/port changes for instances in addition to watching group membership changes, and obstructs container/pod migration (e.g., using CRIU). NAT introduces additional complexity by fragmenting the addressing space, which breaks self-registration mechanisms, among other problems. - -With the IP-per-pod model, all user containers within a pod behave as if they are on the same host with regard to networking. They can all reach each other’s ports on localhost. Ports which are published to the host interface are done so in the normal Docker way. All containers in all pods can talk to all other containers in all other pods by their 10-dot addresses. - -In addition to avoiding the aforementioned problems with dynamic port allocation, this approach reduces friction for applications moving from the world of uncontainerized apps on physical or virtual hosts to containers within pods. People running application stacks together on the same host have already figured out how to make ports not conflict (e.g., by configuring them through environment variables) and have arranged for clients to find them. - -The approach does reduce isolation between containers within a pod -- ports could conflict, and there couldn't be private ports across containers within a pod, but applications requiring their own port spaces could just run as separate pods and processes requiring private communication could run within the same container. Besides, the premise of pods is that containers within a pod share some resources (volumes, cpu, ram, etc.) and therefore expect and tolerate reduced isolation. Additionally, the user can control what containers belong to the same pod whereas, in general, they don't control what pods land together on a host. 
- -When any container calls SIOCGIFADDR, it sees the IP that any peer container would see them coming from -- each pod has its own IP address that other pods can know. By making IP addresses and ports the same within and outside the containers and pods, we create a NAT-less, flat address space. "ip addr show" should work as expected. This would enable all existing naming/discovery mechanisms to work out of the box, including self-registration mechanisms and applications that distribute IP addresses. (We should test that with etcd and perhaps one other option, such as Eureka (used by Acme Air) or Consul.) We should be optimizing for inter-pod network communication. Within a pod, containers are more likely to use communication through volumes (e.g., tmpfs) or IPC. - -This is different from the standard Docker model. In that mode, each container gets an IP in the 172-dot space and would only see that 172-dot address from SIOCGIFADDR. If these containers connect to another container the peer would see the connect coming from a different IP than the container itself knows. In short - you can never self-register anything from a container, because a container can not be reached on its private IP. - -An alternative we considered was an additional layer of addressing: pod-centric IP per container. Each container would have its own local IP address, visible only within that pod. This would perhaps make it easier for containerized applications to move from physical/virtual hosts to pods, but would be more complex to implement (e.g., requiring a bridge per pod, split-horizon/VP DNS) and to reason about, due to the additional layer of address translation, and would break self-registration and IP distribution mechanisms. - -## Current implementation - -For the Google Compute Engine cluster configuration scripts, [advanced routing](https://developers.google.com/compute/docs/networking#routing) is set up so that each VM has an extra 256 IP addresses that get routed to it. This is in addition to the 'main' IP address assigned to the VM that is NAT-ed for Internet access. The networking bridge (called `cbr0` to differentiate it from `docker0`) is set up outside of Docker proper and only does NAT for egress network traffic that isn't aimed at the virtual network. - -Ports mapped in from the 'main IP' (and hence the internet if the right firewall rules are set up) are proxied in user mode by Docker. In the future, this should be done with `iptables` by either the Kubelet or Docker: [Issue #15](https://github.com/GoogleCloudPlatform/kubernetes/issues/15). - -We start Docker with: - DOCKER_OPTS="--bridge cbr0 --iptables=false" - -We set up this bridge on each node with SaltStack, in [container_bridge.py](cluster/saltbase/salt/_states/container_bridge.py). - - cbr0: - container_bridge.ensure: - - cidr: {{ grains['cbr-cidr'] }} - ... - grains: - roles: - - kubernetes-pool - cbr-cidr: $MINION_IP_RANGE - -We make these addresses routable in GCE: - - gcloud compute routes add "${MINION_NAMES[$i]}" \ - --project "${PROJECT}" \ - --destination-range "${MINION_IP_RANGES[$i]}" \ - --network "${NETWORK}" \ - --next-hop-instance "${MINION_NAMES[$i]}" \ - --next-hop-instance-zone "${ZONE}" & - -The minion IP ranges are /24s in the 10-dot space. - -GCE itself does not know anything about these IPs, though. - -These are not externally routable, though, so containers that need to communicate with the outside world need to use host networking. 
To set up an external IP that forwards to the VM, it will only forward to the VM's primary IP (which is assigned to no pod). So we use docker's -p flag to map published ports to the main interface. This has the side effect of disallowing two pods from exposing the same port. (More discussion on this in [Issue #390](https://github.com/GoogleCloudPlatform/kubernetes/issues/390).) - -We create a container to use for the pod network namespace -- a single loopback device and a single veth device. All the user's containers get their network namespaces from this pod networking container. - -Docker allocates IP addresses from a bridge we create on each node, using its “container†networking mode. - -1. Create a normal (in the networking sense) container which uses a minimal image and runs a command that blocks forever. This is not a user-defined container, and gets a special well-known name. - - creates a new network namespace (netns) and loopback device - - creates a new pair of veth devices and binds them to the netns - - auto-assigns an IP from docker’s IP range - -2. Create the user containers and specify the name of the pod infra container as their “POD†argument. Docker finds the PID of the command running in the pod infra container and attaches to the netns and ipcns of that PID. - -### Other networking implementation examples -With the primary aim of providing IP-per-pod-model, other implementations exist to serve the purpose outside of GCE. - - [OpenVSwitch with GRE/VxLAN](../ovs-networking.md) - - [Flannel](https://github.com/coreos/flannel#flannel) - -## Challenges and future work - -### Docker API - -Right now, docker inspect doesn't show the networking configuration of the containers, since they derive it from another container. That information should be exposed somehow. - -### External IP assignment - -We want to be able to assign IP addresses externally from Docker ([Docker issue #6743](https://github.com/dotcloud/docker/issues/6743)) so that we don't need to statically allocate fixed-size IP ranges to each node, so that IP addresses can be made stable across pod infra container restarts ([Docker issue #2801](https://github.com/dotcloud/docker/issues/2801)), and to facilitate pod migration. Right now, if the pod infra container dies, all the user containers must be stopped and restarted because the netns of the pod infra container will change on restart, and any subsequent user container restart will join that new netns, thereby not being able to see its peers. Additionally, a change in IP address would encounter DNS caching/TTL problems. External IP assignment would also simplify DNS support (see below). - -### Naming, discovery, and load balancing - -In addition to enabling self-registration with 3rd-party discovery mechanisms, we'd like to setup DDNS automatically ([Issue #146](https://github.com/GoogleCloudPlatform/kubernetes/issues/146)). hostname, $HOSTNAME, etc. should return a name for the pod ([Issue #298](https://github.com/GoogleCloudPlatform/kubernetes/issues/298)), and gethostbyname should be able to resolve names of other pods. Probably we need to set up a DNS resolver to do the latter ([Docker issue #2267](https://github.com/dotcloud/docker/issues/2267)), so that we don't need to keep /etc/hosts files up to date dynamically. - -[Service](http://docs.k8s.io/services.md) endpoints are currently found through environment variables. 
Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_BAR) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service IP](http://docs.k8s.io/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service's IP in DNS, and for that to become the preferred resolution protocol. - -We'd also like to accommodate other load-balancing solutions (e.g., HAProxy), non-load-balanced services ([Issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260)), and other types of groups (worker pools, etc.). Providing the ability to Watch a label selector applied to pod addresses would enable efficient monitoring of group membership, which could be directly consumed or synced with a discovery mechanism. Event hooks ([Issue #140](https://github.com/GoogleCloudPlatform/kubernetes/issues/140)) for join/leave events would probably make this even easier. - -### External routability - -We want traffic between containers to use the pod IP addresses across nodes. Say we have Node A with a container IP space of 10.244.1.0/24 and Node B with a container IP space of 10.244.2.0/24. And we have Container A1 at 10.244.1.1 and Container B1 at 10.244.2.1. We want Container A1 to talk to Container B1 directly with no NAT. B1 should see the "source" in the IP packets of 10.244.1.1 -- not the "primary" host IP for Node A. That means that we want to turn off NAT for traffic between containers (and also between VMs and containers). - -We'd also like to make pods directly routable from the external internet. However, we can't yet support the extra container IPs that we've provisioned talking to the internet directly. So, we don't map external IPs to the container IPs. Instead, we solve that problem by having traffic that isn't to the internal network (! 10.0.0.0/8) get NATed through the primary host IP address so that it can get 1:1 NATed by the GCE networking when talking to the internet. Similarly, incoming traffic from the internet has to get NATed/proxied through the host IP. - -So we end up with 3 cases: - -1. Container -> Container or Container <-> VM. These should use 10. addresses directly and there should be no NAT. - -2. Container -> Internet. These have to get mapped to the primary host IP so that GCE knows how to egress that traffic. There is actually 2 layers of NAT here: Container IP -> Internal Host IP -> External Host IP. The first level happens in the guest with IP tables and the second happens as part of GCE networking. The first one (Container IP -> internal host IP) does dynamic port allocation while the second maps ports 1:1. - -3. Internet -> Container. This also has to go through the primary host IP and also has 2 levels of NAT, ideally. However, the path currently is a proxy with (External Host IP -> Internal Host IP -> Docker) -> (Docker -> Container IP). 
Once [issue #15](https://github.com/GoogleCloudPlatform/kubernetes/issues/15) is closed, it should be External Host IP -> Internal Host IP -> Container IP. But to get that second arrow we have to set up the port forwarding iptables rules per mapped port. - -Another approach could be to create a new host interface alias for each pod, if we had a way to route an external IP to it. This would eliminate the scheduling constraints resulting from using the host's IP address. - -### IPv6 - -IPv6 would be a nice option, also, but we can't depend on it yet. Docker support is in progress: [Docker issue #2974](https://github.com/dotcloud/docker/issues/2974), [Docker issue #6923](https://github.com/dotcloud/docker/issues/6923), [Docker issue #6975](https://github.com/dotcloud/docker/issues/6975). Additionally, direct ipv6 assignment to instances doesn't appear to be supported by major cloud providers (e.g., AWS EC2, GCE) yet. We'd happily take pull requests from people running Kubernetes on bare metal, though. :-) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/networking.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/networking.md?pixel)]() diff --git a/release-0.19.0/docs/design/persistent-storage.md b/release-0.19.0/docs/design/persistent-storage.md deleted file mode 100644 index aadd5b7391e..00000000000 --- a/release-0.19.0/docs/design/persistent-storage.md +++ /dev/null @@ -1,220 +0,0 @@ -# Persistent Storage - -This document proposes a model for managing persistent, cluster-scoped storage for applications requiring long lived data. - -### tl;dr - -Two new API kinds: - -A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. - -A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod. - -One new system component: - -`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage. - -One new volume: - -`PersistentVolumeClaimVolumeSource` references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A `PersistentVolumeClaimVolumeSource` is, essentially, a wrapper around another type of volume that is owned by someone else (the system). - -Kubernetes makes no guarantees at runtime that the underlying storage exists or is available. High availability is left to the storage provider. - -### Goals - -* Allow administrators to describe available storage -* Allow pod authors to discover and request persistent volumes to use with pods -* Enforce security through access control lists and securing storage to the same namespace as the pod volume -* Enforce quotas through admission control -* Enforce scheduler rules by resource counting -* Ensure developers can rely on storage being available without being closely bound to a particular disk, server, network, or storage device. - - -#### Describe available storage - -Cluster administrators use the API to manage *PersistentVolumes*. A custom store ```NewPersistentVolumeOrderedIndex``` will index volumes by access modes and sort by storage capacity. 
The ```PersistentVolumeClaimBinder``` watches for new claims for storage and binds them to an available volume by matching the volume's characteristics (AccessModes and storage size) to the user's request. - -PVs are system objects and, thus, have no namespace. - -Many means of dynamic provisioning will be eventually be implemented for various storage types. - - -##### PersistentVolume API - -| Action | HTTP Verb | Path | Description | -| ---- | ---- | ---- | ---- | -| CREATE | POST | /api/{version}/persistentvolumes/ | Create instance of PersistentVolume | -| GET | GET | /api/{version}persistentvolumes/{name} | Get instance of PersistentVolume with {name} | -| UPDATE | PUT | /api/{version}/persistentvolumes/{name} | Update instance of PersistentVolume with {name} | -| DELETE | DELETE | /api/{version}/persistentvolumes/{name} | Delete instance of PersistentVolume with {name} | -| LIST | GET | /api/{version}/persistentvolumes | List instances of PersistentVolume | -| WATCH | GET | /api/{version}/watch/persistentvolumes | Watch for changes to a PersistentVolume | - - -#### Request Storage - -Kubernetes users request persistent storage for their pod by creating a ```PersistentVolumeClaim```. Their request for storage is described by their requirements for resources and mount capabilities. - -Requests for volumes are bound to available volumes by the volume manager, if a suitable match is found. Requests for resources can go unfulfilled. - -Users attach their claim to their pod using a new ```PersistentVolumeClaimVolumeSource``` volume source. - - -##### PersistentVolumeClaim API - - -| Action | HTTP Verb | Path | Description | -| ---- | ---- | ---- | ---- | -| CREATE | POST | /api/{version}/namespaces/{ns}/persistentvolumeclaims/ | Create instance of PersistentVolumeClaim in namespace {ns} | -| GET | GET | /api/{version}/namespaces/{ns}/persistentvolumeclaims/{name} | Get instance of PersistentVolumeClaim in namespace {ns} with {name} | -| UPDATE | PUT | /api/{version}/namespaces/{ns}/persistentvolumeclaims/{name} | Update instance of PersistentVolumeClaim in namespace {ns} with {name} | -| DELETE | DELETE | /api/{version}/namespaces/{ns}/persistentvolumeclaims/{name} | Delete instance of PersistentVolumeClaim in namespace {ns} with {name} | -| LIST | GET | /api/{version}/namespaces/{ns}/persistentvolumeclaims | List instances of PersistentVolumeClaim in namespace {ns} | -| WATCH | GET | /api/{version}/watch/namespaces/{ns}/persistentvolumeclaims | Watch for changes to PersistentVolumeClaim in namespace {ns} | - - - -#### Scheduling constraints - -Scheduling constraints are to be handled similar to pod resource constraints. Pods will need to be annotated or decorated with the number of resources it requires on a node. Similarly, a node will need to list how many it has used or available. - -TBD - - -#### Events - -The implementation of persistent storage will not require events to communicate to the user the state of their claim. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an event to communicate the same unnecessary. - -Events that communicate the state of a mounted volume are left to the volume plugins. - - -### Example - -#### Admin provisions storage - -An administrator provisions storage by posting PVs to the API. Various way to automate this task can be scripted. Dynamic provisioning is a future feature that can maintain levels of PVs. 
- -``` -POST: - -kind: PersistentVolume -apiVersion: v1 -metadata: - name: pv0001 -spec: - capacity: - storage: 10 - persistentDisk: - pdName: "abc123" - fsType: "ext4" - --------------------------------------------------- - -kubectl get pv - -NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM -pv0001 map[] 10737418240 RWO Pending - - -``` - -#### Users request storage - -A user requests storage by posting a PVC to the API. Their request contains the AccessModes they wish their volume to have and the minimum size needed. - -The user must be within a namespace to create PVCs. - -``` - -POST: -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: myclaim-1 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 3 - --------------------------------------------------- - -kubectl get pvc - - -NAME LABELS STATUS VOLUME -myclaim-1 map[] pending - -``` - - -#### Matching and binding - - The ```PersistentVolumeClaimBinder``` attempts to find an available volume that most closely matches the user's request. If one exists, they are bound by putting a reference on the PV to the PVC. Requests can go unfulfilled if a suitable match is not found. - -``` - -kubectl get pv - -NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM -pv0001 map[] 10737418240 RWO Bound myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e - - -kubectl get pvc - -NAME LABELS STATUS VOLUME -myclaim-1 map[] Bound b16e91d6-c0ef-11e4-8be4-80e6500a981e - - -``` - -#### Claim usage - -The claim holder can use their claim as a volume. The ```PersistentVolumeClaimVolumeSource``` knows to fetch the PV backing the claim and mount its volume for a pod. - -The claim holder owns the claim and its data for as long as the claim exists. The pod using the claim can be deleted, but the claim remains in the user's namespace. It can be used again and again by many pods. - -``` -POST: - -kind: Pod -apiVersion: v1 -metadata: - name: mypod -spec: - containers: - - image: nginx - name: myfrontend - volumeMounts: - - mountPath: "/var/www/html" - name: mypd - volumes: - - name: mypd - source: - persistentVolumeClaim: - accessMode: ReadWriteOnce - claimRef: - name: myclaim-1 - -``` - -#### Releasing a claim and Recycling a volume - -When a claim holder is finished with their data, they can delete their claim. - -``` - -kubectl delete pvc myclaim-1 - -``` - -The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and change the PVs status to 'Released'. - -Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/persistent-storage.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/persistent-storage.md?pixel)]() diff --git a/release-0.19.0/docs/design/principles.md b/release-0.19.0/docs/design/principles.md deleted file mode 100644 index 8a596cd78bf..00000000000 --- a/release-0.19.0/docs/design/principles.md +++ /dev/null @@ -1,61 +0,0 @@ -# Design Principles - -Principles to follow when extending Kubernetes. - -## API - -See also the [API conventions](../api-conventions.md). - -* All APIs should be declarative. -* API objects should be complementary and composable, not opaque wrappers. -* The control plane should be transparent -- there are no hidden internal APIs. -* The cost of API operations should be proportional to the number of objects intentionally operated upon. 
Therefore, common filtered lookups must be indexed. Beware of patterns of multiple API calls that would incur quadratic behavior. -* Object status must be 100% reconstructable by observation. Any history kept must be just an optimization and not required for correct operation. -* Cluster-wide invariants are difficult to enforce correctly. Try not to add them. If you must have them, don't enforce them atomically in master components, that is contention-prone and doesn't provide a recovery path in the case of a bug allowing the invariant to be violated. Instead, provide a series of checks to reduce the probability of a violation, and make every component involved able to recover from an invariant violation. -* Low-level APIs should be designed for control by higher-level systems. Higher-level APIs should be intent-oriented (think SLOs) rather than implementation-oriented (think control knobs). - -## Control logic - -* Functionality must be *level-based*, meaning the system must operate correctly given the desired state and the current/observed state, regardless of how many intermediate state updates may have been missed. Edge-triggered behavior must be just an optimization. -* Assume an open world: continually verify assumptions and gracefully adapt to external events and/or actors. Example: we allow users to kill pods under control of a replication controller; it just replaces them. -* Do not define comprehensive state machines for objects with behaviors associated with state transitions and/or "assumed" states that cannot be ascertained by observation. -* Don't assume a component's decisions will not be overridden or rejected, nor for the component to always understand why. For example, etcd may reject writes. Kubelet may reject pods. The scheduler may not be able to schedule pods. Retry, but back off and/or make alternative decisions. -* Components should be self-healing. For example, if you must keep some state (e.g., cache) the content needs to be periodically refreshed, so that if an item does get erroneously stored or a deletion event is missed etc, it will be soon fixed, ideally on timescales that are shorter than what will attract attention from humans. -* Component behavior should degrade gracefully. Prioritize actions so that the most important activities can continue to function even when overloaded and/or in states of partial failure. - -## Architecture - -* Only the apiserver should communicate with etcd/store, and not other components (scheduler, kubelet, etc.). -* Compromising a single node shouldn't compromise the cluster. -* Components should continue to do what they were last told in the absence of new instructions (e.g., due to network partition or component outage). -* All components should keep all relevant state in memory all the time. The apiserver should write through to etcd/store, other components should write through to the apiserver, and they should watch for updates made by other clients. -* Watch is preferred over polling. - -## Extensibility - -TODO: pluggability - -## Bootstrapping - -* [Self-hosting](https://github.com/GoogleCloudPlatform/kubernetes/issues/246) of all components is a goal. -* Minimize the number of dependencies, particularly those required for steady-state operation. -* Stratify the dependencies that remain via principled layering. -* Break any circular dependencies by converting hard dependencies to soft dependencies. 
- * Also accept that data from other components from another source, such as local files, which can then be manually populated at bootstrap time and then continuously updated once those other components are available. - * State should be rediscoverable and/or reconstructable. - * Make it easy to run temporary, bootstrap instances of all components in order to create the runtime state needed to run the components in the steady state; use a lock (master election for distributed components, file lock for local components like Kubelet) to coordinate handoff. We call this technique "pivoting". - * Have a solution to restart dead components. For distributed components, replication works well. For local components such as Kubelet, a process manager or even a simple shell loop works. - -## Availability - -TODO - -## General principles - -* [Eric Raymond's 17 UNIX rules](https://en.wikipedia.org/wiki/Unix_philosophy#Eric_Raymond.E2.80.99s_17_Unix_Rules) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/principles.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/principles.md?pixel)]() diff --git a/release-0.19.0/docs/design/secrets.md b/release-0.19.0/docs/design/secrets.md deleted file mode 100644 index c339b181c17..00000000000 --- a/release-0.19.0/docs/design/secrets.md +++ /dev/null @@ -1,581 +0,0 @@ - -## Abstract - -A proposal for the distribution of secrets (passwords, keys, etc) to the Kubelet and to -containers inside Kubernetes using a custom volume type. - -## Motivation - -Secrets are needed in containers to access internal resources like the Kubernetes master or -external resources such as git repositories, databases, etc. Users may also want behaviors in the -kubelet that depend on secret data (credentials for image pull from a docker registry) associated -with pods. - -Goals of this design: - -1. Describe a secret resource -2. Define the various challenges attendant to managing secrets on the node -3. Define a mechanism for consuming secrets in containers without modification - -## Constraints and Assumptions - -* This design does not prescribe a method for storing secrets; storage of secrets should be - pluggable to accommodate different use-cases -* Encryption of secret data and node security are orthogonal concerns -* It is assumed that node and master are secure and that compromising their security could also - compromise secrets: - * If a node is compromised, the only secrets that could potentially be exposed should be the - secrets belonging to containers scheduled onto it - * If the master is compromised, all secrets in the cluster may be exposed -* Secret rotation is an orthogonal concern, but it should be facilitated by this proposal -* A user who can consume a secret in a container can know the value of the secret; secrets must - be provisioned judiciously - -## Use Cases - -1. As a user, I want to store secret artifacts for my applications and consume them securely in - containers, so that I can keep the configuration for my applications separate from the images - that use them: - 1. As a cluster operator, I want to allow a pod to access the Kubernetes master using a custom - `.kubeconfig` file, so that I can securely reach the master - 2. As a cluster operator, I want to allow a pod to access a Docker registry using credentials - from a `.dockercfg` file, so that containers can push images - 3. 
As a cluster operator, I want to allow a pod to access a git repository using SSH keys, - so that I can push and fetch to and from the repository -2. As a user, I want to allow containers to consume supplemental information about services such - as username and password which should be kept secret, so that I can share secrets about a - service amongst the containers in my application securely -3. As a user, I want to associate a pod with a `ServiceAccount` that consumes a secret and have - the kubelet implement some reserved behaviors based on the types of secrets the service account - consumes: - 1. Use credentials for a docker registry to pull the pod's docker image - 2. Present kubernetes auth token to the pod or transparently decorate traffic between the pod - and master service -4. As a user, I want to be able to indicate that a secret expires and for that secret's value to - be rotated once it expires, so that the system can help me follow good practices - -### Use-Case: Configuration artifacts - -Many configuration files contain secrets intermixed with other configuration information. For -example, a user's application may contain a properties file than contains database credentials, -SaaS API tokens, etc. Users should be able to consume configuration artifacts in their containers -and be able to control the path on the container's filesystems where the artifact will be -presented. - -### Use-Case: Metadata about services - -Most pieces of information about how to use a service are secrets. For example, a service that -provides a MySQL database needs to provide the username, password, and database name to consumers -so that they can authenticate and use the correct database. Containers in pods consuming the MySQL -service would also consume the secrets associated with the MySQL service. - -### Use-Case: Secrets associated with service accounts - -[Service Accounts](http://docs.k8s.io/design/service_accounts.md) are proposed as a -mechanism to decouple capabilities and security contexts from individual human users. A -`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is -associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and -other system components to take action based on the secret's type. - -#### Example: service account consumes auth token secret - -As an example, the service account proposal discusses service accounts consuming secrets which -contain kubernetes auth tokens. When a Kubelet starts a pod associated with a service account -which consumes this type of secret, the Kubelet may take a number of actions: - -1. Expose the secret in a `.kubernetes_auth` file in a well-known location in the container's - file system -2. Configure that node's `kube-proxy` to decorate HTTP requests from that pod to the - `kubernetes-master` service with the auth token, e. g. by adding a header to the request - (see the [LOAS Daemon](https://github.com/GoogleCloudPlatform/kubernetes/issues/2209) proposal) - -#### Example: service account consumes docker registry credentials - -Another example use case is where a pod is associated with a secret containing docker registry -credentials. The Kubelet could use these credentials for the docker pull to retrieve the image. - -### Use-Case: Secret expiry and rotation - -Rotation is considered a good practice for many types of secret data. 
It should be possible to -express that a secret has an expiry date; this would make it possible to implement a system -component that could regenerate expired secrets. As an example, consider a component that rotates -expired secrets. The rotator could periodically regenerate the values for expired secrets of -common types and update their expiry dates. - -## Deferral: Consuming secrets as environment variables - -Some images will expect to receive configuration items as environment variables instead of files. -We should consider what the best way to allow this is; there are a few different options: - -1. Force the user to adapt files into environment variables. Users can store secrets that need to - be presented as environment variables in a format that is easy to consume from a shell: - - $ cat /etc/secrets/my-secret.txt - export MY_SECRET_ENV=MY_SECRET_VALUE - - The user could `source` the file at `/etc/secrets/my-secret` prior to executing the command for - the image either inline in the command or in an init script, - -2. Give secrets an attribute that allows users to express the intent that the platform should - generate the above syntax in the file used to present a secret. The user could consume these - files in the same manner as the above option. - -3. Give secrets attributes that allow the user to express that the secret should be presented to - the container as an environment variable. The container's environment would contain the - desired values and the software in the container could use them without accommodation the - command or setup script. - -For our initial work, we will treat all secrets as files to narrow the problem space. There will -be a future proposal that handles exposing secrets as environment variables. - -## Flow analysis of secret data with respect to the API server - -There are two fundamentally different use-cases for access to secrets: - -1. CRUD operations on secrets by their owners -2. Read-only access to the secrets needed for a particular node by the kubelet - -### Use-Case: CRUD operations by owners - -In use cases for CRUD operations, the user experience for secrets should be no different than for -other API resources. - -#### Data store backing the REST API - -The data store backing the REST API should be pluggable because different cluster operators will -have different preferences for the central store of secret data. Some possibilities for storage: - -1. An etcd collection alongside the storage for other API resources -2. A collocated [HSM](http://en.wikipedia.org/wiki/Hardware_security_module) -3. An external datastore such as an external etcd, RDBMS, etc. - -#### Size limit for secrets - -There should be a size limit for secrets in order to: - -1. Prevent DOS attacks against the API server -2. Allow kubelet implementations that prevent secret data from touching the node's filesystem - -The size limit should satisfy the following conditions: - -1. Large enough to store common artifact types (encryption keypairs, certificates, small - configuration files) -2. Small enough to avoid large impact on node resource consumption (storage, RAM for tmpfs, etc) - -To begin discussion, we propose an initial value for this size limit of **1MB**. - -#### Other limitations on secrets - -Defining a policy for limitations on how a secret may be referenced by another API resource and how -constraints should be applied throughout the cluster is tricky due to the number of variables -involved: - -1. 
Should there be a maximum number of secrets a pod can reference via a volume? -2. Should there be a maximum number of secrets a service account can reference? -3. Should there be a total maximum number of secrets a pod can reference via its own spec and its - associated service account? -4. Should there be a total size limit on the amount of secret data consumed by a pod? -5. How will cluster operators want to be able to configure these limits? -6. How will these limits impact API server validations? -7. How will these limits affect scheduling? - -For now, we will not implement validations around these limits. Cluster operators will decide how -much node storage is allocated to secrets. It will be the operator's responsibility to ensure that -the allocated storage is sufficient for the workload scheduled onto a node. - -For now, kubelets will only attach secrets to api-sourced pods, and not file- or http-sourced -ones. Doing so would: - - confuse the secrets admission controller in the case of mirror pods. - - create an apiserver-liveness dependency -- avoiding this dependency is a main reason to use non-api-source pods. - -### Use-Case: Kubelet read of secrets for node - -The use-case where the kubelet reads secrets has several additional requirements: - -1. Kubelets should only be able to receive secret data which is required by pods scheduled onto - the kubelet's node -2. Kubelets should have read-only access to secret data -3. Secret data should not be transmitted over the wire insecurely -4. Kubelets must ensure pods do not have access to each other's secrets - -#### Read of secret data by the Kubelet - -The Kubelet should only be allowed to read secrets which are consumed by pods scheduled onto that -Kubelet's node and their associated service accounts. Authorization of the Kubelet to read this -data would be delegated to an authorization plugin and associated policy rule. - -#### Secret data on the node: data at rest - -Consideration must be given to whether secret data should be allowed to be at rest on the node: - -1. If secret data is not allowed to be at rest, the size of secret data becomes another draw on - the node's RAM - should it affect scheduling? -2. If secret data is allowed to be at rest, should it be encrypted? - 1. If so, how should be this be done? - 2. If not, what threats exist? What types of secret are appropriate to store this way? - -For the sake of limiting complexity, we propose that initially secret data should not be allowed -to be at rest on a node; secret data should be stored on a node-level tmpfs filesystem. This -filesystem can be subdivided into directories for use by the kubelet and by the volume plugin. - -#### Secret data on the node: resource consumption - -The Kubelet will be responsible for creating the per-node tmpfs file system for secret storage. -It is hard to make a prescriptive declaration about how much storage is appropriate to reserve for -secrets because different installations will vary widely in available resources, desired pod to -node density, overcommit policy, and other operation dimensions. That being the case, we propose -for simplicity that the amount of secret storage be controlled by a new parameter to the kubelet -with a default value of **64MB**. It is the cluster operator's responsibility to handle choosing -the right storage size for their installation and configuring their Kubelets correctly. 
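
As a rough illustration only (assuming Linux, root privileges on the node, and a hypothetical `secretStorageSize` parameter; the actual kubelet changes are described below), creating such a node-level tmpfs might look like:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// mountSecretTmpfs creates dir and mounts a size-limited tmpfs on it so
// that secret data handed to pods never reaches the node's disk.
func mountSecretTmpfs(dir string, sizeMB int) error {
	if err := os.MkdirAll(dir, 0700); err != nil {
		return err
	}
	opts := fmt.Sprintf("size=%dm", sizeMB)
	// Requires CAP_SYS_ADMIN; on most nodes this means running as root.
	return syscall.Mount("tmpfs", dir, "tmpfs", 0, opts)
}

func main() {
	// 64MB mirrors the proposed default for the kubelet parameter.
	if err := mountSecretTmpfs("/var/lib/kubelet/secrets", 64); err != nil {
		fmt.Fprintln(os.Stderr, "mount failed:", err)
		os.Exit(1)
	}
	fmt.Println("secret tmpfs mounted")
}
```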
- -Configuring each Kubelet is not the ideal story for operator experience; it is more intuitive that -the cluster-wide storage size be readable from a central configuration store like the one proposed -in [#1553](https://github.com/GoogleCloudPlatform/kubernetes/issues/1553). When such a store -exists, the Kubelet could be modified to read this configuration item from the store. - -When the Kubelet is modified to advertise node resources (as proposed in -[#4441](https://github.com/GoogleCloudPlatform/kubernetes/issues/4441)), the capacity calculation -for available memory should factor in the potential size of the node-level tmpfs in order to avoid -memory overcommit on the node. - -#### Secret data on the node: isolation - -Every pod will have a [security context](http://docs.k8s.io/design/security_context.md). -Secret data on the node should be isolated according to the security context of the container. The -Kubelet volume plugin API will be changed so that a volume plugin receives the security context of -a volume along with the volume spec. This will allow volume plugins to implement setting the -security context of volumes they manage. - -## Community work: - -Several proposals / upstream patches are notable as background for this proposal: - -1. [Docker vault proposal](https://github.com/docker/docker/issues/10310) -2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277) -3. [Kubernetes service account proposal](http://docs.k8s.io/design/service_accounts.md) -4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075) -5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697) - -## Proposed Design - -We propose a new `Secret` resource which is mounted into containers with a new volume type. Secret -volumes will be handled by a volume plugin that does the actual work of fetching the secret and -storing it. Secrets contain multiple pieces of data that are presented as different files within -the secret volume (example: SSH key pair). - -In order to remove the burden from the end user in specifying every file that a secret consists of, -it should be possible to mount all files provided by a secret with a single ```VolumeMount``` entry -in the container specification. - -### Secret API Resource - -A new resource for secrets will be added to the API: - -```go -type Secret struct { - TypeMeta - ObjectMeta - - // Data contains the secret data. Each key must be a valid DNS_SUBDOMAIN. - // The serialized form of the secret data is a base64 encoded string, - // representing the arbitrary (possibly non-string) data value here. - Data map[string][]byte `json:"data,omitempty"` - - // Used to facilitate programmatic handling of secret data. - Type SecretType `json:"type,omitempty"` -} - -type SecretType string - -const ( - SecretTypeOpaque SecretType = "Opaque" // Opaque (arbitrary data; default) - SecretTypeKubernetesAuthToken SecretType = "KubernetesAuth" // Kubernetes auth token - SecretTypeDockerRegistryAuth SecretType = "DockerRegistryAuth" // Docker registry auth - // FUTURE: other type values -) - -const MaxSecretSize = 1 * 1024 * 1024 -``` - -A Secret can declare a type in order to provide type information to system components that work -with secrets. The default type is `opaque`, which represents arbitrary user-owned data. - -Secrets are validated against `MaxSecretSize`. The keys in the `Data` field must be valid DNS -subdomains. 
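
A minimal sketch of that validation is shown below; it is illustrative only (the real checks live with the other API validations and use stricter DNS-subdomain helpers than this regular expression).

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

const MaxSecretSize = 1 * 1024 * 1024 // 1MB, as proposed above

// dnsSubdomain loosely matches lowercase DNS-subdomain-style keys; the real
// validation is stricter about label lengths and placement of dots/dashes.
var dnsSubdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9.]*[a-z0-9])?$`)

// validateSecretData rejects keys that are not DNS subdomains and data that
// exceeds the proposed size limit.
func validateSecretData(data map[string][]byte) error {
	total := 0
	for key, value := range data {
		if !dnsSubdomain.MatchString(key) {
			return fmt.Errorf("key %q is not a valid DNS subdomain", key)
		}
		total += len(value)
	}
	if total > MaxSecretSize {
		return errors.New("secret data exceeds MaxSecretSize")
	}
	return nil
}

func main() {
	data := map[string][]byte{"id-rsa": []byte("..."), "id-rsa.pub": []byte("...")}
	fmt.Println(validateSecretData(data)) // <nil>
}
```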
- -A new REST API and registry interface will be added to accompany the `Secret` resource. The -default implementation of the registry will store `Secret` information in etcd. Future registry -implementations could store the `TypeMeta` and `ObjectMeta` fields in etcd and store the secret -data in another data store entirely, or store the whole object in another data store. - -#### Other validations related to secrets - -Initially there will be no validations for the number of secrets a pod references, or the number of -secrets that can be associated with a service account. These may be added in the future as the -finer points of secrets and resource allocation are fleshed out. - -### Secret Volume Source - -A new `SecretSource` type of volume source will be added to the ```VolumeSource``` struct in the -API: - -```go -type VolumeSource struct { - // Other fields omitted - - // SecretSource represents a secret that should be presented in a volume - SecretSource *SecretSource `json:"secret"` -} - -type SecretSource struct { - Target ObjectReference -} -``` - -Secret volume sources are validated to ensure that the specified object reference actually points -to an object of type `Secret`. - -In the future, the `SecretSource` will be extended to allow: - -1. Fine-grained control over which pieces of secret data are exposed in the volume -2. The paths and filenames for how secret data are exposed - -### Secret Volume Plugin - -A new Kubelet volume plugin will be added to handle volumes with a secret source. This plugin will -require access to the API server to retrieve secret data and therefore the volume `Host` interface -will have to change to expose a client interface: - -```go -type Host interface { - // Other methods omitted - - // GetKubeClient returns a client interface - GetKubeClient() client.Interface -} -``` - -The secret volume plugin will be responsible for: - -1. Returning a `volume.Builder` implementation from `NewBuilder` that: - 1. Retrieves the secret data for the volume from the API server - 2. Places the secret data onto the container's filesystem - 3. Sets the correct security attributes for the volume based on the pod's `SecurityContext` -2. Returning a `volume.Cleaner` implementation from `NewClear` that cleans the volume from the - container's filesystem - -### Kubelet: Node-level secret storage - -The Kubelet must be modified to accept a new parameter for the secret storage size and to create -a tmpfs file system of that size to store secret data. Rough accounting of specific changes: - -1. The Kubelet should have a new field added called `secretStorageSize`; units are megabytes -2. `NewMainKubelet` should accept a value for secret storage size -3. The Kubelet server should have a new flag added for secret storage size -4. The Kubelet's `setupDataDirs` method should be changed to create the secret storage - -### Kubelet: New behaviors for secrets associated with service accounts - -For use-cases where the Kubelet's behavior is affected by the secrets associated with a pod's -`ServiceAccount`, the Kubelet will need to be changed. For example, if secrets of type -`docker-reg-auth` affect how the pod's images are pulled, the Kubelet will need to be changed -to accommodate this. Subsequent proposals can address this on a type-by-type basis. - -## Examples - -For clarity, let's examine some detailed examples of some common use-cases in terms of the -suggested changes. All of these examples are assumed to be created in a namespace called -`example`. 
- -### Use-Case: Pod with ssh keys - -To create a pod that uses an ssh key stored as a secret, we first need to create a secret: - -```json -{ - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "ssh-key-secret" - }, - "data": { - "id-rsa": "dmFsdWUtMg0KDQo=", - "id-rsa.pub": "dmFsdWUtMQ0K" - } -} -``` - -**Note:** The serialized JSON and YAML values of secret data are encoded as -base64 strings. Newlines are not valid within these strings and must be -omitted. - -Now we can create a pod which references the secret with the ssh key and consumes it in a volume: - -```json -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "secret-test-pod", - "labels": { - "name": "secret-test" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "ssh-key-secret" - } - } - ], - "containers": [ - { - "name": "ssh-test-container", - "image": "mySshImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } -} -``` - -When the container's command runs, the pieces of the key will be available in: - - /etc/secret-volume/id-rsa.pub - /etc/secret-volume/id-rsa - -The container is then free to use the secret data to establish an ssh connection. - -### Use-Case: Pods with pod / test credentials - -This example illustrates a pod which consumes a secret containing prod -credentials and another pod which consumes a secret with test environment -credentials. - -The secrets: - -```json -{ - "apiVersion": "v1", - "kind": "List", - "items": - [{ - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "prod-db-secret" - }, - "data": { - "password": "dmFsdWUtMg0KDQo=", - "username": "dmFsdWUtMQ0K" - } - }, - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "test-db-secret" - }, - "data": { - "password": "dmFsdWUtMg0KDQo=", - "username": "dmFsdWUtMQ0K" - } - }] -} -``` - -The pods: - -```json -{ - "apiVersion": "v1", - "kind": "List", - "items": - [{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "prod-db-client-pod", - "labels": { - "name": "prod-db-client" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "prod-db-secret" - } - } - ], - "containers": [ - { - "name": "db-client-container", - "image": "myClientImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } - }, - { - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "test-db-client-pod", - "labels": { - "name": "test-db-client" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "test-db-secret" - } - } - ], - "containers": [ - { - "name": "db-client-container", - "image": "myClientImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } - }] -} -``` - -The specs for the two pods differ only in the value of the object referred to by the secret volume -source. 
Both containers will have the following files present on their filesystems: - - /etc/secret-volume/username - /etc/secret-volume/password - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/secrets.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/secrets.md?pixel)]() diff --git a/release-0.19.0/docs/design/security.md b/release-0.19.0/docs/design/security.md deleted file mode 100644 index 352079cedbc..00000000000 --- a/release-0.19.0/docs/design/security.md +++ /dev/null @@ -1,123 +0,0 @@ -# Security in Kubernetes - -Kubernetes should define a reasonable set of security best practices that allows processes to be isolated from each other, from the cluster infrastructure, and which preserves important boundaries between those who manage the cluster, and those who use the cluster. - -While Kubernetes today is not primarily a multi-tenant system, the long term evolution of Kubernetes will increasingly rely on proper boundaries between users and administrators. The code running on the cluster must be appropriately isolated and secured to prevent malicious parties from affecting the entire cluster. - - -## High Level Goals - -1. Ensure a clear isolation between the container and the underlying host it runs on -2. Limit the ability of the container to negatively impact the infrastructure or other containers -3. [Principle of Least Privilege](http://en.wikipedia.org/wiki/Principle_of_least_privilege) - ensure components are only authorized to perform the actions they need, and limit the scope of a compromise by limiting the capabilities of individual components -4. Reduce the number of systems that have to be hardened and secured by defining clear boundaries between components -5. Allow users of the system to be cleanly separated from administrators -6. Allow administrative functions to be delegated to users where necessary -7. Allow applications to be run on the cluster that have "secret" data (keys, certs, passwords) which is properly abstracted from "public" data. - - -## Use cases - -### Roles: - -We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories: - -1. k8s admin - administers a kubernetes cluster and has access to the undelying components of the system -2. k8s project administrator - administrates the security of a small subset of the cluster -3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources - -Automated process users fall into the following categories: - -1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources indepedent of the human users attached to a project -2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles - - -### Description of roles: - -* Developers: - * write pod specs. - * making some of their own images, and using some "community" docker images - * know which pods need to talk to which other pods - * decide which pods should share files with other pods, and which should not. - * reason about application level security, such as containing the effects of a local-file-read exploit in a webserver pod. - * do not often reason about operating system or organizational security. 
- * are not necessarily comfortable reasoning about the security properties of a system at the level of detail of Linux Capabilities, SELinux, AppArmor, etc. - -* Project Admins: - * allocate identity and roles within a namespace - * reason about organizational security within a namespace - * don't give a developer permissions that are not needed for role. - * protect files on shared storage from unnecessary cross-team access - * are less focused about application security - -* Administrators: - * are less focused on application security. Focused on operating system security. - * protect the node from bad actors in containers, and properly-configured innocent containers from bad actors in other containers. - * comfortable reasoning about the security properties of a system at the level of detail of Linux Capabilities, SELinux, AppArmor, etc. - * decides who can use which Linux Capabilities, run privileged containers, use hostDir, etc. - * e.g. a team that manages Ceph or a mysql server might be trusted to have raw access to storage devices in some organizations, but teams that develop the applications at higher layers would not. - - -## Proposed Design - -A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to is limited by that *service account*. - - -1. The API should authenticate and authorize user actions [authn and authz](http://docs.k8s.io/design/access.md) -2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API. -3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd) -4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](http://docs.k8s.io/design/service_accounts.md) - 1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption - 2. If the user who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk - 3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action -5. When container processes run on the cluster, they should run in a [security context](http://docs.k8s.io/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions. - 1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID - 2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID - 3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions - 4. Project administrators should be able to run pods within a namespace under different security contexts, and developers must be able to specify which of the available security contexts they may use - 5. Developers should be able to run their own images or images from the community and expect those images to run correctly - 6. 
Developers may need to ensure their images work within higher security requirements specified by administrators - 7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met. - 8. When application developers want to share filesystem data via distributed filesystems, the Unix user IDs on those filesystems must be consistent across different container processes -6. Developers should be able to define [secrets](http://docs.k8s.io/design/secrets.md) that are automatically added to the containers when pods are run - 1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples: - 1. An SSH private key for git cloning remote data - 2. A client certificate for accessing a remote system - 3. A private key and certificate for a web server - 4. A .kubeconfig file with embedded cert / token data for accessing the Kubernetes master - 5. A .dockercfg file for pulling images from a protected registry - 2. Developers should be able to define the pod spec so that a secret lands in a specific location - 3. Project administrators should be able to limit developers within a namespace from viewing or modifying secrets (anyone who can launch an arbitrary pod can view secrets) - 4. Secrets are generally not copied from one namespace to another when a developer's application definitions are copied - - -### Related design discussion - -* Authorization and authentication http://docs.k8s.io/design/access.md -* Secret distribution via files https://github.com/GoogleCloudPlatform/kubernetes/pull/2030 -* Docker secrets https://github.com/docker/docker/pull/6697 -* Docker vault https://github.com/docker/docker/issues/10310 -* Service Accounts: http://docs.k8s.io/design/service_accounts.md -* Secret volumes https://github.com/GoogleCloudPlatform/kubernetes/4126 - -## Specific Design Points - -### TODO: authorization, authentication - -### Isolate the data store from the nodes and supporting infrastructure - -Access to the central data store (etcd) in Kubernetes allows an attacker to run arbitrary containers on hosts, to gain access to any protected information stored in either volumes or in pods (such as access tokens or shared secrets provided as environment variables), to intercept and redirect traffic from running services by inserting middlemen, or to simply delete the entire history of the cluster. - -As a general principle, access to the central data store should be restricted to the components that need full control over the system and which can apply appropriate authorization and authentication of change requests. In the future, etcd may offer granular access control, but that granularity will require an administrator to understand the schema of the data to properly apply security. An administrator must be able to properly secure Kubernetes at a policy level, rather than at an implementation level, and schema changes over time should not risk unintended security leaks. - -Both the Kubelet and Kube Proxy need information related to their specific roles - for the Kubelet, the set of pods it should be running, and for the Proxy, the set of services and endpoints to load balance. The Kubelet also needs to provide information about running pods and historical termination data. The access pattern for both Kubelet and Proxy to load their configuration is an efficient "wait for changes" request over HTTP. It should be possible to limit the Kubelet and Proxy to only access the information they need to perform their roles and no more.
- -The controller manager for Replication Controllers and other future controllers act on behalf of a user via delegation to perform automated maintenance on Kubernetes resources. Their ability to access or modify resource state should be strictly limited to their intended duties and they should be prevented from accessing information not pertinent to their role. For example, a replication controller needs only to create a copy of a known pod configuration, to determine the running state of an existing pod, or to delete an existing pod that it created - it does not need to know the contents or current state of a pod, nor have access to any data in the pods attached volumes. - -The Kubernetes pod scheduler is responsible for reading data from the pod to fit it onto a node in the cluster. At a minimum, it needs access to view the ID of a pod (to craft the binding), its current state, any resource information necessary to identify placement, and other data relevant to concerns like anti-affinity, zone or region preference, or custom logic. It does not need the ability to modify pods or see other resources, only to create bindings. It should not need the ability to delete bindings unless the scheduler takes control of relocating components on failed hosts (which could be implemented by a separate component that can delete bindings but not create them). The scheduler may need read access to user or project-container information to determine preferential location (underspecified at this time). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/security.md?pixel)]() diff --git a/release-0.19.0/docs/design/security_context.md b/release-0.19.0/docs/design/security_context.md deleted file mode 100644 index 19aff2bb2a2..00000000000 --- a/release-0.19.0/docs/design/security_context.md +++ /dev/null @@ -1,163 +0,0 @@ -# Security Contexts -## Abstract -A security context is a set of constraints that are applied to a container in order to achieve the following goals (from [security design](security.md)): - -1. Ensure a clear isolation between container and the underlying host it runs on -2. Limit the ability of the container to negatively impact the infrastructure or other containers - -## Background - -The problem of securing containers in Kubernetes has come up [before](https://github.com/GoogleCloudPlatform/kubernetes/issues/398) and the potential problems with container security are [well known](http://opensource.com/business/14/7/docker-security-selinux). Although it is not possible to completely isolate Docker containers from their hosts, new features like [user namespaces](https://github.com/docker/libcontainer/pull/304) make it possible to greatly reduce the attack surface. - -## Motivation - -### Container isolation - -In order to improve container isolation from host and other containers running on the host, containers should only be -granted the access they need to perform their work. To this end it should be possible to take advantage of Docker -features such as the ability to [add or remove capabilities](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration) and [assign MCS labels](https://docs.docker.com/reference/run/#security-configuration) -to the container process. 
- -Support for user namespaces has recently been [merged](https://github.com/docker/libcontainer/pull/304) into Docker's libcontainer project and should soon surface in Docker itself. It will make it possible to assign a range of unprivileged uids and gids from the host to each container, improving the isolation between host and container and between containers. - -### External integration with shared storage -In order to support external integration with shared storage, processes running in a Kubernetes cluster -should be able to be uniquely identified by their Unix UID, such that a chain of ownership can be established. -Processes in pods will need to have consistent UID/GID/SELinux category labels in order to access shared disks. - -## Constraints and Assumptions -* It is out of the scope of this document to prescribe a specific set - of constraints to isolate containers from their host. Different use cases need different - settings. -* The concept of a security context should not be tied to a particular security mechanism or platform - (ie. SELinux, AppArmor) -* Applying a different security context to a scope (namespace or pod) requires a solution such as the one proposed for - [service accounts](./service_accounts.md). - -## Use Cases - -In order of increasing complexity, following are example use cases that would -be addressed with security contexts: - -1. Kubernetes is used to run a single cloud application. In order to protect - nodes from containers: - * All containers run as a single non-root user - * Privileged containers are disabled - * All containers run with a particular MCS label - * Kernel capabilities like CHOWN and MKNOD are removed from containers - -2. Just like case #1, except that I have more than one application running on - the Kubernetes cluster. - * Each application is run in its own namespace to avoid name collisions - * For each application a different uid and MCS label is used - -3. Kubernetes is used as the base for a PAAS with - multiple projects, each project represented by a namespace. - * Each namespace is associated with a range of uids/gids on the node that - are mapped to uids/gids on containers using linux user namespaces. - * Certain pods in each namespace have special privileges to perform system - actions such as talking back to the server for deployment, run docker - builds, etc. - * External NFS storage is assigned to each namespace and permissions set - using the range of uids/gids assigned to that namespace. - -## Proposed Design - -### Overview -A *security context* consists of a set of constraints that determine how a container -is secured before getting created and run. A security context resides on the container and represents the runtime parameters that will -be used to create and run the container via container APIs. A *security context provider* is passed to the Kubelet so it can have a chance -to mutate Docker API calls in order to apply the security context. - -It is recommended that this design be implemented in two phases: - -1. Implement the security context provider extension point in the Kubelet - so that a default security context can be applied on container run and creation. -2. Implement a security context structure that is part of a service account. The - default context provider can then be used to apply a security context based - on the service account associated with the pod. - -### Security Context Provider - -The Kubelet will have an interface that points to a `SecurityContextProvider`. 
The `SecurityContextProvider` is invoked before creating and running a given container: - -```go -type SecurityContextProvider interface { - // ModifyContainerConfig is called before the Docker createContainer call. - // The security context provider can make changes to the Config with which - // the container is created. - // An error is returned if it's not possible to secure the container as - // requested with a security context. - ModifyContainerConfig(pod *api.Pod, container *api.Container, config *docker.Config) - - // ModifyHostConfig is called before the Docker runContainer call. - // The security context provider can make changes to the HostConfig, affecting - // security options, whether the container is privileged, volume binds, etc. - // An error is returned if it's not possible to secure the container as requested - // with a security context. - ModifyHostConfig(pod *api.Pod, container *api.Container, hostConfig *docker.HostConfig) -} -``` - -If the value of the SecurityContextProvider field on the Kubelet is nil, the kubelet will create and run the container as it does today. - -### Security Context - -A security context resides on the container and represents the runtime parameters that will -be used to create and run the container via container APIs. Following is an example of an initial implementation: - -```go -type Container struct { - ... other fields omitted ... - // Optional: SecurityContext defines the security options the pod should be run with - SecurityContext *SecurityContext -} - -// SecurityContext holds security configuration that will be applied to a container. SecurityContext -// contains duplication of some existing fields from the Container resource. These duplicate fields -// will be populated based on the Container configuration if they are not set. Defining them on -// both the Container AND the SecurityContext will result in an error. -type SecurityContext struct { - // Capabilities are the capabilities to add/drop when running the container - Capabilities *Capabilities - - // Run the container in privileged mode - Privileged *bool - - // SELinuxOptions are the labels to be applied to the container - // and volumes - SELinuxOptions *SELinuxOptions - - // RunAsUser is the UID to run the entrypoint of the container process. - RunAsUser *int64 -} - -// SELinuxOptions are the labels to be applied to the container. -type SELinuxOptions struct { - // SELinux user label - User string - - // SELinux role label - Role string - - // SELinux type label - Type string - - // SELinux level label. - Level string -} -``` -### Admission - -It is up to an admission plugin to determine if the security context is acceptable or not. At the -time of writing, the admission control plugin for security contexts will only allow a context that -defines only capabilities or privileged mode. Contexts that attempt to define a UID or SELinux options -will be denied by default. In the future the admission plugin will base this decision upon -configurable policies that reside within the [service account](https://github.com/GoogleCloudPlatform/kubernetes/pull/2297).
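To illustrate the policy described above, the following is a minimal sketch of such an admission check, assuming pared-down mirrors of the types defined earlier; the plugin registration, the exact error types, and the simplified `Capabilities` field are assumptions made for this example.

```go
package main

import (
	"errors"
	"fmt"
)

// Pared-down mirrors of the types above, reduced to what the check needs.
type SELinuxOptions struct{ User, Role, Type, Level string }

type SecurityContext struct {
	Capabilities   []string // simplified stand-in for *Capabilities
	Privileged     *bool
	SELinuxOptions *SELinuxOptions
	RunAsUser      *int64
}

// admitSecurityContext mirrors the policy described above: contexts that only
// add/drop capabilities or request privileged mode are allowed, while contexts
// that try to choose their own UID or SELinux options are rejected.
func admitSecurityContext(sc *SecurityContext) error {
	if sc == nil {
		return nil // nothing requested; the default context applies
	}
	if sc.RunAsUser != nil {
		return errors.New("security context may not set a UID")
	}
	if sc.SELinuxOptions != nil {
		return errors.New("security context may not set SELinux options")
	}
	return nil
}

func main() {
	uid := int64(0)
	fmt.Println(admitSecurityContext(&SecurityContext{RunAsUser: &uid}))
	fmt.Println(admitSecurityContext(&SecurityContext{Capabilities: []string{"NET_ADMIN"}}))
}
```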
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security_context.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/security_context.md?pixel)]() diff --git a/release-0.19.0/docs/design/service_accounts.md b/release-0.19.0/docs/design/service_accounts.md deleted file mode 100644 index 2c7f2fa2b31..00000000000 --- a/release-0.19.0/docs/design/service_accounts.md +++ /dev/null @@ -1,170 +0,0 @@ -# Service Accounts - -## Motivation - -Processes in Pods may need to call the Kubernetes API. For example: - - scheduler - - replication controller - - minion controller - - a map-reduce type framework which has a controller that then tries to make a dynamically determined number of workers and watch them - - continuous build and push system - - monitoring system - -They also may interact with services other than the Kubernetes API, such as: - - an image repository, such as docker -- both when the images are pulled to start the containers, and for writing - images in the case of pods that generate images. - - accessing other cloud services, such as blob storage, in the context of a large, integrated cloud offering (hosted - or private). - - accessing files in an NFS volume attached to the pod - -## Design Overview -A service account binds together several things: - - a *name*, understood by users, and perhaps by peripheral systems, for an identity - - a *principal* that can be authenticated and [authorized](../authorization.md) - - a [security context](./security_context.md), which defines the Linux Capabilities, User IDs, Group IDs, and other - capabilities and controls on interaction with the file system and OS. - - a set of [secrets](./secrets.md), which a container may use to - access various networked resources. - -## Design Discussion - -A new object Kind is added: -```go -type ServiceAccount struct { - TypeMeta `json:",inline" yaml:",inline"` - ObjectMeta `json:"metadata,omitempty" yaml:"metadata,omitempty"` - - username string - securityContext ObjectReference // (reference to a securityContext object) - secrets []ObjectReference // (references to secret objects) -} -``` - -The name ServiceAccount is chosen because it is widely used already (e.g. by Kerberos and LDAP) -to refer to this type of account. Note that it has no relation to kubernetes Service objects. - -The ServiceAccount object does not include any information that could not be defined separately: - - username can be defined however users are defined. - - securityContext and secrets are only referenced and are created using the REST API. - -The purpose of the serviceAccount object is twofold: - - to bind usernames to securityContexts and secrets, so that the username can be used to refer succinctly - in contexts where explicitly naming securityContexts and secrets would be inconvenient - - to provide an interface to simplify allocation of new securityContexts and secrets. -These features are explained later. - -### Names - -From the standpoint of the Kubernetes API, a `user` is any principal which can authenticate to the kubernetes API. -This includes a human running `kubectl` on her desktop and a container in a Pod on a Node making API calls. - -There is already a notion of a username in kubernetes, which is populated into a request context after authentication. -However, there is no API object representing a user.
While this may evolve, it is expected that in mature installations, -the canonical storage of user identifiers will be handled by a system external to kubernetes. - -Kubernetes does not dictate how to divide up the space of user identifier strings. User names can be -simple Unix-style short usernames, (e.g. `alice`), or may be qualified to allow for federated identity ( -`alice@example.com` vs `alice@example.org`.) Naming convention may distinguish service accounts from user -accounts (e.g. `alice@example.com` vs `build-service-account-a3b7f0@foo-namespace.service-accounts.example.com`), -but Kubernetes does not require this. - -Kubernetes also does not require that there be a distinction between human and Pod users. It will be possible -to setup a cluster where Alice the human talks to the kubernetes API as username `alice` and starts pods that -also talk to the API as user `alice` and write files to NFS as user `alice`. But, this is not recommended. - -Instead, it is recommended that Pods and Humans have distinct identities, and reference implementations will -make this distinction. - -The distinction is useful for a number of reasons: - - the requirements for humans and automated processes are different: - - Humans need a wide range of capabilities to do their daily activities. Automated processes often have more narrowly-defined activities. - - Humans may better tolerate the exceptional conditions created by expiration of a token. Remembering to handle - this in a program is more annoying. So, either long-lasting credentials or automated rotation of credentials is - needed. - - A Human typically keeps credentials on a machine that is not part of the cluster and so not subject to automatic - management. A VM with a role/service-account can have its credentials automatically managed. - - the identity of a Pod cannot in general be mapped to a single human. - - If policy allows, it may be created by one human, and then updated by another, and another, until its behavior cannot be attributed to a single human. - -**TODO**: consider getting rid of separate serviceAccount object and just rolling its parts into the SecurityContext or -Pod Object. - -The `secrets` field is a list of references to /secret objects that an process started as that service account should -have access to to be able to assert that role. - -The secrets are not inline with the serviceAccount object. This way, most or all users can have permission to `GET /serviceAccounts` so they can remind themselves -what serviceAccounts are available for use. - -Nothing will prevent creation of a serviceAccount with two secrets of type `SecretTypeKubernetesAuth`, or secrets of two -different types. Kubelet and client libraries will have some behavior, TBD, to handle the case of multiple secrets of a -given type (pick first or provide all and try each in order, etc). - -When a serviceAccount and a matching secret exist, then a `User.Info` for the serviceAccount and a `BearerToken` from the secret -are added to the map of tokens used by the authentication process in the apiserver, and similarly for other types. (We -might have some types that do not do anything on apiserver but just get pushed to the kubelet.) - -### Pods -The `PodSpec` is extended to have a `Pods.Spec.ServiceAccountUsername` field. If this is unset, then a -default value is chosen. If it is set, then the corresponding value of `Pods.Spec.SecurityContext` is set by the -Service Account Finalizer (see below). 
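As a rough illustration of the field described above (the JSON tag and the exact defaulting behavior shown in the comment are assumptions for this sketch, not the final API):

```go
type PodSpec struct {
	// ... other fields omitted ...

	// ServiceAccountUsername names the service account this pod runs as.
	// If unset, a default is chosen; if set, the Service Account Finalizer
	// fills in the corresponding SecurityContext and secrets (see below).
	ServiceAccountUsername string `json:"serviceAccountUsername,omitempty"`
}
```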
- -TBD: how policy limits which users can make pods with which service accounts. - -### Authorization -Kubernetes API Authorization Policies refer to users. Pods created with a `Pods.Spec.ServiceAccountUsername` typically -get a `Secret` which allows them to authenticate to the Kubernetes APIserver as a particular user. So any -policy that is desired can be applied to them. - -A higher level workflow is needed to coordinate creation of serviceAccounts, secrets and relevant policy objects. -Users are free to extend kubernetes to put this business logic wherever is convenient for them, though the -Service Account Finalizer is one place where this can happen (see below). - -### Kubelet - -The kubelet will treat as "not ready to run" (needing a finalizer to act on it) any Pod which has an empty -SecurityContext. - -The kubelet will set a default, restrictive, security context for any pods created from non-Apiserver config -sources (http, file). - -Kubelet watches apiserver for secrets which are needed by pods bound to it. - -**TODO**: how to only let kubelet see secrets it needs to know. - -### The service account finalizer - -There are several ways to use Pods with SecurityContexts and Secrets. - -One way is to explicitly specify the securityContext and all secrets of a Pod when the pod is initially created, -like this: - -**TODO**: example of pod with explicit refs. - -Another way is with the *Service Account Finalizer*, a plugin process which is optional, and which handles -business logic around service accounts. - -The Service Account Finalizer watches Pods, Namespaces, and ServiceAccount definitions. - -First, if it finds pods which have a `Pod.Spec.ServiceAccountUsername` but no `Pod.Spec.SecurityContext` set, -then it copies in the referenced securityContext and secrets references for the corresponding `serviceAccount`. - -Second, if ServiceAccount definitions change, it may take some actions. -**TODO**: decide what actions it takes when a serviceAccount definition changes. Does it stop pods, or just -allow someone to list ones that out out of spec? In general, people may want to customize this? - -Third, if a new namespace is created, it may create a new serviceAccount for that namespace. This may include -a new username (e.g. `NAMESPACE-default-service-account@serviceaccounts.$CLUSTERID.kubernetes.io`), a new -securityContext, a newly generated secret to authenticate that serviceAccount to the Kubernetes API, and default -policies for that service account. -**TODO**: more concrete example. What are typical default permissions for default service account (e.g. readonly access -to services in the same namespace and read-write access to events in that namespace?) - -Finally, it may provide an interface to automate creation of new serviceAccounts. In that case, the user may want -to GET serviceAccounts to see what has been created. 
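The finalizer's first responsibility above can be sketched roughly as follows; plain structs and a map stand in for the real API objects, watch machinery, and client, and the field names are simplified for illustration.

```go
// A simplified, illustrative sketch of the Service Account Finalizer's first
// responsibility described above.
package main

import "fmt"

type ServiceAccount struct {
	Username        string
	SecurityContext string   // reference to a securityContext object
	Secrets         []string // references to secret objects
}

type Pod struct {
	ServiceAccountUsername string
	SecurityContext        string
	Secrets                []string
}

// finalize copies the security context and secret references from the named
// service account into a pod that does not have them set yet.
func finalize(pod *Pod, accounts map[string]ServiceAccount) {
	if pod.ServiceAccountUsername == "" || pod.SecurityContext != "" {
		return // no account named, or already finalized
	}
	sa, ok := accounts[pod.ServiceAccountUsername]
	if !ok {
		return // unknown account; the pod stays "not ready to run"
	}
	pod.SecurityContext = sa.SecurityContext
	pod.Secrets = append(pod.Secrets, sa.Secrets...)
}

func main() {
	accounts := map[string]ServiceAccount{
		"build-bot": {Username: "build-bot", SecurityContext: "ctx-1", Secrets: []string{"secret-1"}},
	}
	pod := &Pod{ServiceAccountUsername: "build-bot"}
	finalize(pod, accounts)
	fmt.Printf("%+v\n", pod)
}
```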
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/service_accounts.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/service_accounts.md?pixel)]() diff --git a/release-0.19.0/docs/design/simple-rolling-update.md b/release-0.19.0/docs/design/simple-rolling-update.md deleted file mode 100644 index c5a10c59343..00000000000 --- a/release-0.19.0/docs/design/simple-rolling-update.md +++ /dev/null @@ -1,97 +0,0 @@ -## Simple rolling update -This is a lightweight design document for simple rolling update in ```kubectl``` - -Complete execution flow can be found [here](#execution-details). - -### Lightweight rollout -Assume that we have a current replication controller named ```foo``` and it is running image ```image:v1``` - -```kubectl rolling-update rc foo [foo-v2] --image=myimage:v2``` - -If the user doesn't specify a name for the 'next' controller, then the 'next' controller is renamed to -the name of the original controller. - -Obviously there is a race here, where if you kill the client between delete foo, and creating the new version of 'foo' you might be surprised about what is there, but I think that's ok. -See [Recovery](#recovery) below - -If the user does specify a name for the 'next' controller, then the 'next' controller is retained with its existing name, -and the old 'foo' controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` controllers. -The value of that label is the hash of the complete JSON representation of the```foo-next``` or```foo``` controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag. - -#### Recovery -If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out. -To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace: - * ```desired-replicas``` The desired number of replicas for this controller (either N or zero) - * ```update-partner``` A pointer to the replicaiton controller resource that is the other half of this update (syntax `````` the namespace is assumed to be identical to the namespace of this replication controller.) - -Recovery is achieved by issuing the same command again: - -``` -kubectl rolling-update rc foo [foo-v2] --image=myimage:v2 -``` - -Whenever the rolling update command executes, the kubectl client looks for replication controllers called ```foo``` and ```foo-next```, if they exist, an attempt is -made to roll ```foo``` to ```foo-next```. If ```foo-next``` does not exist, then it is created, and the rollout is a new rollout. If ```foo``` doesn't exist, then -it is assumed that the rollout is nearly completed, and ```foo-next``` is renamed to ```foo```. Details of the execution flow are given below. - - -### Aborting a rollout -Abort is assumed to want to reverse a rollout in progress. 
- -```kubectl rolling-update rc foo [foo-v2] --rollback``` - -This is really just semantic sugar for: - -```kubectl rolling-update rc foo-v2 foo``` - -With the added detail that it moves the ```desired-replicas``` annotation from ```foo-v2``` to ```foo``` - - -### Execution Details - -For the purposes of this example, assume that we are rolling from ```foo``` to ```foo-next``` where the only change is an image update from `v1` to `v2` - -If the user doesn't specify a ```foo-next``` name, then it is either discovered from the ```update-partner``` annotation on ```foo```. If that annotation doesn't exist, -then ```foo-next``` is synthesized using the pattern ```-``` - -#### Initialization - * If ```foo``` and ```foo-next``` do not exist: - * Exit, and indicate an error to the user, that the specified controller doesn't exist. - * If ```foo``` exists, but ```foo-next``` does not: - * Create ```foo-next``` populate it with the ```v2``` image, set ```desired-replicas``` to ```foo.Spec.Replicas``` - * Goto Rollout - * If ```foo-next``` exists, but ```foo``` does not: - * Assume that we are in the rename phase. - * Goto Rename - * If both ```foo``` and ```foo-next``` exist: - * Assume that we are in a partial rollout - * If ```foo-next``` is missing the ```desired-replicas``` annotation - * Populate the ```desired-replicas``` annotation to ```foo-next``` using the current size of ```foo``` - * Goto Rollout - -#### Rollout - * While size of ```foo-next``` < ```desired-replicas``` annotation on ```foo-next``` - * increase size of ```foo-next``` - * if size of ```foo``` > 0 - decrease size of ```foo``` - * Goto Rename - -#### Rename - * delete ```foo``` - * create ```foo``` that is identical to ```foo-next``` - * delete ```foo-next``` - -#### Abort - * If ```foo-next``` doesn't exist - * Exit and indicate to the user that they may want to simply do a new rollout with the old version - * If ```foo``` doesn't exist - * Exit and indicate not found to the user - * Otherwise, ```foo-next``` and ```foo``` both exist - * Set ```desired-replicas``` annotation on ```foo``` to match the annotation on ```foo-next``` - * Goto Rollout with ```foo``` and ```foo-next``` trading places. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/simple-rolling-update.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/design/simple-rolling-update.md?pixel)]() diff --git a/release-0.19.0/docs/devel/README.md b/release-0.19.0/docs/devel/README.md deleted file mode 100644 index 9d74004b0fb..00000000000 --- a/release-0.19.0/docs/devel/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# Developing Kubernetes - -Docs in this directory relate to developing Kubernetes. - -* **On Collaborative Development** ([collab.md](collab.md)): info on pull requests and code reviews. - -* **Development Guide** ([development.md](development.md)): Setting up your environment tests. - -* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. - Here's how to run your tests many times. - -* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized. - -* **Logging Conventions** ([logging.md](logging.md)]: Glog levels. - -* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed. 
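The Rollout and Rename steps above can be restated as a small sketch; the replication controller type, the in-memory scaling, and the handling of deletion are simplified stand-ins for the actual kubectl client code.

```go
package main

import "fmt"

// rc is a simplified stand-in for a replication controller; DesiredReplicas
// models the desired-replicas annotation.
type rc struct {
	Name            string
	Replicas        int
	DesiredReplicas int
}

// rollout grows foo-next one replica at a time while shrinking foo, as in the
// Rollout step above.
func rollout(foo, fooNext *rc) {
	for fooNext.Replicas < fooNext.DesiredReplicas {
		fooNext.Replicas++
		if foo.Replicas > 0 {
			foo.Replicas--
		}
	}
}

// rename replaces foo with a copy of foo-next, as in the Rename step above;
// in the real flow foo is deleted, recreated from foo-next, and foo-next is
// then deleted via the API.
func rename(foo, fooNext *rc) *rc {
	renamed := *fooNext
	renamed.Name = foo.Name
	return &renamed
}

func main() {
	foo := &rc{Name: "foo", Replicas: 3}
	fooNext := &rc{Name: "foo-next", DesiredReplicas: 3}
	rollout(foo, fooNext)
	foo = rename(foo, fooNext)
	fmt.Printf("%+v\n", foo)
}
```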
- -* **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version) - and how the version information gets embedded into the built binaries. - -* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/README.md?pixel)]() diff --git a/release-0.19.0/docs/devel/api_changes.md b/release-0.19.0/docs/devel/api_changes.md deleted file mode 100644 index a063e5795ec..00000000000 --- a/release-0.19.0/docs/devel/api_changes.md +++ /dev/null @@ -1,348 +0,0 @@ -# So you want to change the API? - -The Kubernetes API has two major components - the internal structures and -the versioned APIs. The versioned APIs are intended to be stable, while the -internal structures are implemented to best reflect the needs of the Kubernetes -code itself. - -What this means for API changes is that you have to be somewhat thoughtful in -how you approach changes, and that you have to touch a number of pieces to make -a complete change. This document aims to guide you through the process, though -not all API changes will need all of these steps. - -## Operational overview - -It is important to have a high level understanding of the API system used in -Kubernetes in order to navigate the rest of this document. - -As mentioned above, the internal representation of an API object is decoupled -from any one API version. This provides a lot of freedom to evolve the code, -but it requires robust infrastructure to convert between representations. There -are multiple steps in processing an API operation - even something as simple as -a GET involves a great deal of machinery. - -The conversion process is logically a "star" with the internal form at the -center. Every versioned API can be converted to the internal form (and -vice-versa), but versioned APIs do not convert to other versioned APIs directly. -This sounds like a heavy process, but in reality we do not intend to keep more -than a small number of versions alive at once. While all of the Kubernetes code -operates on the internal structures, they are always converted to a versioned -form before being written to storage (disk or etcd) or being sent over a wire. -Clients should consume and operate on the versioned APIs exclusively. - -To demonstrate the general process, here is a (hypothetical) example: - - 1. A user POSTs a `Pod` object to `/api/v7beta1/...` - 2. The JSON is unmarshalled into a `v7beta1.Pod` structure - 3. Default values are applied to the `v7beta1.Pod` - 4. The `v7beta1.Pod` is converted to an `api.Pod` structure - 5. The `api.Pod` is validated, and any errors are returned to the user - 6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable - version) - 7. The `v6.Pod` is marshalled into JSON and written to etcd - -Now that we have the `Pod` object stored, a user can GET that object in any -supported api version. For example: - - 1. A user GETs the `Pod` from `/api/v5/...` - 2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure - 3. Default values are applied to the `v6.Pod` - 4. The `v6.Pod` is converted to an `api.Pod` structure - 5. The `api.Pod` is converted to a `v5.Pod` structure - 6. 
The `v5.Pod` is marshalled into JSON and sent to the user - -The implication of this process is that API changes must be done carefully and -backward-compatibly. - -## On compatibility - -Before talking about how to make API changes, it is worthwhile to clarify what -we mean by API compatibility. An API change is considered backward-compatible -if it: - * adds new functionality that is not required for correct behavior - * does not change existing semantics - * does not change existing defaults - -Put another way: - -1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before - your change must work the same after your change. -2. Any API call that uses your change must not cause problems (e.g. crash or - degrade behavior) when issued against servers that do not include your change. -3. It must be possible to round-trip your change (convert to different API - versions and back) with no loss of information. - -If your change does not meet these criteria, it is not considered strictly -compatible. There are times when this might be OK, but mostly we want changes -that meet this definition. If you think you need to break compatibility, you -should talk to the Kubernetes team first. - -Let's consider some examples. In a hypothetical API (assume we're at version -v6), the `Frobber` struct looks something like this: - -```go -// API v6. -type Frobber struct { - Height int `json:"height"` - Param string `json:"param"` -} -``` - -You want to add a new `Width` field. It is generally safe to add new fields -without changing the API version, so you can simply change it to: - -```go -// Still API v6. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` -} -``` - -The onus is on you to define a sane default value for `Width` such that rule #1 -above is true - API calls and stored objects that used to work must continue to -work. - -For your next change you want to allow multiple `Param` values. You can not -simply change `Param string` to `Params []string` (without creating a whole new -API version) - that fails rules #1 and #2. You can instead do something like: - -```go -// Still API v6, but kind of clumsy. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` // the first param - ExtraParams []string `json:"params"` // additional params -} -``` - -Now you can satisfy the rules: API calls that provide the old style `Param` -will still work, while servers that don't understand `ExtraParams` can ignore -it. This is somewhat unsatisfying as an API, but it is strictly compatible. - -Part of the reason for versioning APIs and for using internal structs that are -distinct from any one version is to handle growth like this. The internal -representation can be implemented as: - -```go -// Internal, soon to be v7beta1. -type Frobber struct { - Height int - Width int - Params []string -} -``` - -The code that converts to/from versioned APIs can decode this into the somewhat -uglier (but compatible!) structures. Eventually, a new API version, let's call -it v7beta1, will be forked and it can use the clean internal structure. - -We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not -extend one versioned API without also extending the others. For example, an -API call might POST an object in API v7beta1 format, which uses the cleaner -`Params` field, but the API server might store that object in trusty old v6 -form (since v7beta1 is "beta"). 
When the user reads the object back in the -v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This -means that, even though it is ugly, a compatible change must be made to the v6 -API. - -As another interesting example, enumerated values provide a unique challenge. -Adding a new value to an enumerated set is *not* a compatible change. Clients -which assume they know how to handle all possible values of a given field will -not be able to handle the new values. However, removing a value from an -enumerated set *can* be a compatible change, if handled properly (treat the -removed value as deprecated but allowed). - -## Changing versioned APIs - -For most changes, you will probably find it easiest to change the versioned -APIs first. This forces you to think about how to make your change in a -compatible way. Rather than doing each step in every version, it's usually -easier to do each versioned API one at a time, or to do all of one version -before starting "all the rest". - -### Edit types.go - -The struct definitions for each API are in `pkg/api//types.go`. Edit -those files to reflect the change you want to make. Note that all non-inline -fields in versioned APIs must have description tags - these are used to generate -documentation. - -### Edit defaults.go - -If your change includes new fields for which you will need default values, you -need to add cases to `pkg/api//defaults.go`. Of course, since you -have added code, you have to add a test: `pkg/api//defaults_test.go`. - -Do use pointers to scalars when you need to distinguish between an unset value -and an automatic zero value. For example, -`PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` in the Go type -definition. A zero value means 0 seconds, and a nil value asks the system to -pick a default. - -Don't forget to run the tests! - -### Edit conversion.go - -Given that you have not yet changed the internal structs, this might feel -premature, and that's because it is. You don't yet have anything to convert to -or from. We will revisit this in the "internal" section. If you're doing this -all in a different order (i.e. you started with the internal structs), then you -should jump to that topic below. In the very rare case that you are making an -incompatible change you might or might not want to do this now, but you will -have to do more later. The files you want are -`pkg/api//conversion.go` and `pkg/api//conversion_test.go`. - -## Changing the internal structures - -Now it is time to change the internal structs so your versioned changes can be -used. - -### Edit types.go - -Similar to the versioned APIs, the definitions for the internal structs are in -`pkg/api/types.go`. Edit those files to reflect the change you want to make. -Keep in mind that the internal structs must be able to express *all* of the -versioned APIs. - -## Edit validation.go - -Most changes made to the internal structs need some form of input validation. -Validation is currently done on internal objects in -`pkg/api/validation/validation.go`. This validation is one of the first -opportunities we have to make a great user experience - good error messages and -thorough validation help ensure that users are giving you what you expect and, -when they don't, that they know why and how to fix it. Think hard about the -contents of `string` fields, the bounds of `int` fields and the -requiredness/optionalness of fields. - -Of course, code needs tests - `pkg/api/validation/validation_test.go`.
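For the hypothetical `Frobber` used earlier in this document, validation along these lines might look roughly like the sketch below; the specific rules are invented for illustration, and plain errors stand in for the structured field errors that `pkg/api/validation` actually returns.

```go
package main

import (
	"errors"
	"fmt"
)

// Frobber is the hypothetical object used earlier in this document.
type Frobber struct {
	Height int
	Width  int
	Params []string
}

// validateFrobber checks the bounds and requiredness of each field, in the
// spirit of pkg/api/validation.
func validateFrobber(f *Frobber) []error {
	var errs []error
	if f.Height <= 0 {
		errs = append(errs, errors.New("height must be greater than zero"))
	}
	if f.Width < 0 {
		errs = append(errs, errors.New("width must not be negative"))
	}
	if len(f.Params) == 0 {
		errs = append(errs, errors.New("at least one param is required"))
	}
	return errs
}

func main() {
	fmt.Println(validateFrobber(&Frobber{Height: 0}))
}
```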
- -## Edit version conversions - -At this point you have both the versioned API changes and the internal -structure changes done. If there are any notable differences - field names, -types, structural change in particular - you must add some logic to convert -versioned APIs to and from the internal representation. If you see errors from -the `serialization_test`, it may indicate the need for explicit conversions. - -The performance of conversions very heavily influences the performance of the apiserver. -Thus, we are auto-generating conversion functions that are much more efficient -than the generic ones (which are based on reflection and thus are highly -inefficient). - -The conversion code resides with each versioned API. There are two files: - - `pkg/api//conversion.go` containing manually written conversion - functions - - `pkg/api//conversion_generated.go` containing auto-generated - conversion functions - -Since the auto-generated conversion functions use the manually written ones, -the manually written ones should follow a defined naming convention, i.e. a function -converting type X in pkg a to type Y in pkg b should be named: -`convert_a_X_To_b_Y`. - -Also note that you can (and for efficiency reasons should) use auto-generated -conversion functions when writing your conversion functions. - -Once all the necessary manually written conversions are added, you need to -regenerate the auto-generated ones. To regenerate them: - - run -``` - $ hack/update-generated-conversions.sh -``` - -If running the above script is impossible due to compile errors, the easiest -workaround is to comment out the code causing errors and let the script -regenerate it. If the auto-generated conversion methods are not used by the -manually-written ones, it's fine to just remove the whole file and let the -generator create it from scratch. - -Unsurprisingly, adding manually written conversions also requires you to add tests to -`pkg/api//conversion_test.go`. - -## Update the fuzzer - -Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -objects and then convert them to and from the different API versions. This is -a great way of exposing places where you lost information or made bad -assumptions. If you have added any fields which need very careful formatting -(the test does not run validation) or if you have made assumptions such as -"this slice will always have at least 1 element", you may get an error or even -a panic from the `serialization_test`. If so, look at the diff it produces (or -the backtrace in case of a panic) and figure out what you forgot. Encode that -into the fuzzer's custom fuzz functions. Hint: if you added defaults for a field, -that field will need to have a custom fuzz function that ensures that the field is -fuzzed to a non-empty value. - -The fuzzer can be found in `pkg/api/testing/fuzzer.go`. - -## Update the semantic comparisons - -VERY VERY rarely is this needed, but when it hits, it hurts. In some rare -cases we end up with objects (e.g. resource quantities) that have morally -equivalent values with different bitwise representations (e.g. value 10 with a -base-2 formatter is the same as value 10 with a base-10 formatter). The only way -Go knows how to do deep-equality is through field-by-field bitwise comparisons. -This is a problem for us. - -The first thing you should do is try not to do that. If you really can't avoid -this, I'd like to introduce you to our semantic DeepEqual routine.
It supports -custom overrides for specific types - you can find that in `pkg/api/helpers.go`. - -There's one other time when you might have to touch this: unexported fields. -You see, while Go's `reflect` package is allowed to touch unexported fields, us -mere mortals are not - this includes semantic DeepEqual. Fortunately, most of -our API objects are "dumb structs" all the way down - all fields are exported -(start with a capital letter) and there are no unexported fields. But sometimes -you want to include an object in our API that does have unexported fields -somewhere in it (for example, `time.Time` has unexported fields). If this hits -you, you may have to touch the semantic DeepEqual customization functions. - -## Implement your change - -Now you have the API all changed - go implement whatever it is that you're -doing! - -## Write end-to-end tests - -This is, sadly, still sort of painful. Talk to us and we'll try to help you -figure out the best way to make sure your cool feature keeps working forever. - -## Examples and docs - -At last, your change is done, all unit tests pass, e2e passes, you're done, -right? Actually, no. You just changed the API. If you are touching an -existing facet of the API, you have to try *really* hard to make sure that -*all* the examples and docs are updated. There's no easy way to do this, due -in part to JSON and YAML silently dropping unknown fields. You're clever - -you'll figure it out. Put `grep` or `ack` to good use. - -If you added functionality, you should consider documenting it and/or writing -an example to illustrate your change. - -Make sure you update the swagger API spec by running: - -```shell -$ hack/update-swagger-spec.sh -``` - -The API spec changes should be in a commit separate from your other changes. - -## Incompatible API changes -If your change is going to be backward incompatible or might be a breaking change for API -consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before -the change gets in. If you are unsure, ask. Also make sure that the change gets documented in -`CHANGELOG.md` for the next release. - -## Adding new REST objects - -TODO(smarterclayton): write this. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/api_changes.md?pixel)]() diff --git a/release-0.19.0/docs/devel/coding-conventions.md b/release-0.19.0/docs/devel/coding-conventions.md deleted file mode 100644 index 5f58079d6cc..00000000000 --- a/release-0.19.0/docs/devel/coding-conventions.md +++ /dev/null @@ -1,13 +0,0 @@ -Coding style advice for contributors - - Bash - - https://google-styleguide.googlecode.com/svn/trunk/shell.xml - - Go - - https://github.com/golang/go/wiki/CodeReviewComments - - https://gist.github.com/lavalamp/4bd23295a9f32706a48f - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/coding-conventions.md?pixel)]() diff --git a/release-0.19.0/docs/devel/collab.md b/release-0.19.0/docs/devel/collab.md deleted file mode 100644 index 28173059cbd..00000000000 --- a/release-0.19.0/docs/devel/collab.md +++ /dev/null @@ -1,46 +0,0 @@ -# On Collaborative Development - -Kubernetes is open source, but many of the people working on it do so as their day job. 
In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. - -## Patches welcome - -First and foremost: as a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a PR. - -## Code reviews - -All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligately) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours. - -Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe). - -If a PR has gone 2 work days without an owner emerging, please poke the PR thread and ask for a reviewer to be assigned. - -Except for rare cases, such as trivial changes (e.g. typos, comments) or emergencies (e.g. broken builds), maintainers should not merge their own changes. - -Expect reviewers to request that you avoid [common go style mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. - -## Assigned reviews - -Maintainers can assign reviews to other maintainers, when appropriate. The assignee becomes the shepherd for that PR and is responsible for merging the PR once they are satisfied with it or else closing it. The assignee might request reviews from non-maintainers. - -## Merge hours - -Maintainers will do merges of appropriately reviewed-and-approved changes during their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h) Friday). PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met. Concretely this means that nobody should merge changes immediately before going to bed for the night. - -There may be discussion an even approvals granted outside of the above hours, but merges will generally be deferred. - -If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24 -hours before merging. Of course "complex" and "controversial" are left to the judgment of the people involved, but we trust that part of being a committer is the judgment required to evaluate such things honestly, and not be -motivated by your desire (or your cube-mate's desire) to get their code merged. Also see "Holds" below, any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review. 
- -PRs that are incorrectly judged to be merge-able, may be reverted and subject to re-review, if subsequent reviewers believe that they in fact are controversial or complex. - - -## Holds - -Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/collab.md?pixel)]() diff --git a/release-0.19.0/docs/devel/developer-guides/vagrant.md b/release-0.19.0/docs/devel/developer-guides/vagrant.md deleted file mode 100644 index 9e829903507..00000000000 --- a/release-0.19.0/docs/devel/developer-guides/vagrant.md +++ /dev/null @@ -1,341 +0,0 @@ -## Getting started with Vagrant - -Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). - -### Prerequisites -1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html -2. Install one of: - 1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads - 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) - 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) - 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) -3. Get or build a [binary release](/docs/getting-started-guides/binary_release.md) - -### Setup - -By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: - -```sh -cd kubernetes - -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. - -If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: - -```sh -export VAGRANT_DEFAULT_PROVIDER=parallels -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. - -By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd. 
- -To access the master or any minion: - -```sh -vagrant ssh master -vagrant ssh minion-1 -``` - -If you are running more than one minion, you can access the others by: - -```sh -vagrant ssh minion-2 -vagrant ssh minion-3 -``` - -To view the service status and/or logs on the kubernetes-master: -```sh -vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver - -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-controller-manager -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-controller-manager - -[vagrant@kubernetes-master ~] $ sudo systemctl status etcd -[vagrant@kubernetes-master ~] $ sudo systemctl status nginx -``` - -To view the services on any of the kubernetes-minion(s): -```sh -vagrant ssh minion-1 -[vagrant@kubernetes-minion-1] $ sudo systemctl status docker -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker -[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet -``` - -### Interacting with your Kubernetes cluster with Vagrant. - -With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. - -To push updates to new Kubernetes code after making source changes: -```sh -./cluster/kube-push.sh -``` - -To stop and then restart the cluster: -```sh -vagrant halt -./cluster/kube-up.sh -``` - -To destroy the cluster: -```sh -vagrant destroy -``` - -Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. - -You may need to build the binaries first, you can do this with ```make``` - -```sh -$ ./cluster/kubectl.sh get minions - -NAME LABELS -10.245.1.4 -10.245.1.5 -10.245.1.3 -``` - -### Interacting with your Kubernetes cluster with the `kube-*` scripts. - -Alternatively to using the vagrant commands, you can also use the `cluster/kube-*.sh` scripts to interact with the vagrant based provider just like any other hosting platform for kubernetes. - -All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately: - -```sh -export KUBERNETES_PROVIDER=vagrant -``` - -Bring up a vagrant cluster - -```sh -./cluster/kube-up.sh -``` - -Destroy the vagrant cluster - -```sh -./cluster/kube-down.sh -``` - -Update the vagrant cluster after you make changes (only works when building your own releases locally): - -```sh -./cluster/kube-push.sh -``` - -Interact with the cluster - -```sh -./cluster/kubectl.sh -``` - -### Authenticating with your master - -When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. - -```sh -cat ~/.kubernetes_vagrant_auth -{ "User": "vagrant", - "Password": "vagrant" - "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", - "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", - "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" -} -``` - -You should now be set to use the `cluster/kubectl.sh` script. For example try to list the minions that you have started with: - -```sh -./cluster/kubectl.sh get minions -``` - -### Running containers - -Your cluster is running, you can list the minions in your cluster: - -```sh -$ ./cluster/kubectl.sh get minions - -NAME LABELS -10.245.2.4 -10.245.2.3 -10.245.2.2 -``` - -Now start running some containers! 
- -You can now use any of the cluster/kube-*.sh commands to interact with your VM machines. -Before starting a container there will be no pods, services and replication controllers. - -``` -$ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS - -$ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT - -$ cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -``` - -Start a container running nginx with a replication controller and three replicas - -``` -$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 -``` - -When listing the pods, you will see that three containers have been started and are in Waiting state: - -``` -$ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting -``` - -You need to wait for the provisioning to complete, you can monitor the minions by doing: - -```sh -$ sudo salt '*minion-1' cmd.run 'docker images' -kubernetes-minion-1: - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - 96864a7d2df3 26 hours ago 204.4 MB - kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB -``` - -Once the docker image for nginx has been downloaded, the container will start and you can list it: - -```sh -$ sudo salt '*minion-1' cmd.run 'docker ps' -kubernetes-minion-1: - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f - fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b -``` - -Going back to listing the pods, services and replicationcontrollers, you now have: - -``` -$ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running - -$ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT - -$ cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -myNginx nginx name=my-nginx 3 -``` - -We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](/examples/guestbook/README.md) application to learn how to create a service. -You can already play with scaling the replicas with: - -```sh -$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 -$ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running -``` - -Congratulations! - -### Testing - -The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`: - -```sh -NUM_MINIONS=3 hack/e2e-test.sh -``` - -### Troubleshooting - -#### I keep downloading the same (large) box all the time! - -By default the Vagrantfile will download the box from S3. 
You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh` - -```sh -export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box -export KUBERNETES_BOX_URL=path_of_your_kuber_box -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -#### I just created the cluster, but I am getting authorization errors! - -You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact. - -```sh -rm ~/.kubernetes_vagrant_auth -``` - -After using kubectl.sh make sure that the correct credentials are set: - -```sh -cat ~/.kubernetes_vagrant_auth -{ - "User": "vagrant", - "Password": "vagrant" -} -``` - -#### I just created the cluster, but I do not see my container running! - -If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. - -#### I changed Kubernetes code, but it's not running! - -Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster. -It's very likely you see a build error due to an error in your source files! - -#### I have brought Vagrant up but the minions won't validate! - -Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). - -#### I want to change the number of minions! - -You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so: - -```sh -export NUM_MINIONS=1 -``` - -#### I want my VMs to have more memory! - -You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable. -Just set it to the number of megabytes you would like the machines to have. For example: - -```sh -export KUBERNETES_MEMORY=2048 -``` - -If you need more granular control, you can set the amount of memory for the master and minions independently. For example: - -```sh -export KUBERNETES_MASTER_MEMORY=1536 -export KUBERNETES_MINION_MEMORY=2048 -``` - -#### I ran vagrant suspend and nothing works! -```vagrant suspend``` seems to mess up the network. It's not supported at this time. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/release-0.19.0/docs/devel/development.md b/release-0.19.0/docs/devel/development.md deleted file mode 100644 index c04ae02153a..00000000000 --- a/release-0.19.0/docs/devel/development.md +++ /dev/null @@ -1,275 +0,0 @@ -# Development Guide - -# Releases and Official Builds - -Official releases are built in Docker containers. Details are [here](../../build/README.md). You can do simple builds and development with just a local Docker installation. 
If you want to build Go locally outside of Docker, please continue below. - -## Go development environment - -Kubernetes is written in the [Go](http://golang.org) programming language. If you haven't set up a Go development environment, please follow [these instructions](http://golang.org/doc/code.html) to install the go tool and set up a GOPATH. Ensure your version of Go is at least 1.3. - -## Clone kubernetes into GOPATH - -We highly recommend putting kubernetes' code into your GOPATH. For example, the following commands will download kubernetes' code under the current user's GOPATH (assuming there's only one directory in GOPATH): - -``` -$ echo $GOPATH -/home/user/goproj -$ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ -$ cd $GOPATH/src/github.com/GoogleCloudPlatform/ -$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git -``` - -The commands above will not work if there is more than one directory in ``$GOPATH``. - -If you plan to do development, read about the -[Kubernetes Github Flow](https://docs.google.com/presentation/d/1HVxKSnvlc2WJJq8b9KCYtact5ZRrzDzkWgKEfm0QO_o/pub?start=false&loop=false&delayms=3000), -and then clone your own fork of Kubernetes as described there. - -## godep and dependency management - -Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. It is not strictly required for building Kubernetes, but it is required when managing dependencies under the Godeps/ tree, and it is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. - -### Installing godep -There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: - -1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (Some of godep's dependencies use the mercurial -source control system.) Use ```apt-get install mercurial``` or ```yum install mercurial``` on Linux, or [brew.sh](http://brew.sh) on OS X, or download -directly from mercurial. - -2) Create a new GOPATH for your tools and install godep: -``` -export GOPATH=$HOME/go-tools -mkdir -p $GOPATH -go get github.com/tools/godep -``` - -3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: -``` -export GOPATH=$HOME/go-tools -export PATH=$PATH:$GOPATH/bin -``` - -### Using godep -Here's a quick walkthrough of one way to use godep to add or update a Kubernetes dependency in Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). - -1) Devote a directory to this endeavor: -``` -export KPATH=$HOME/code/kubernetes -mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -git clone https://path/to/your/fork . -# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. -``` - -2) Set up your GOPATH. -``` -# Option A: this will let your builds see packages that exist elsewhere on your system. -export GOPATH=$KPATH:$GOPATH -# Option B: This will *not* let your local builds see packages that exist elsewhere on your system. -export GOPATH=$KPATH -# Option B is recommended if you're going to mess with the dependencies. -``` - -3) Populate your new GOPATH. -``` -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -godep restore -``` - -4) Next, you can either add a new dependency or update an existing one.
-``` -# To add a new dependency, do: -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -go get path/to/dependency -# Change code in Kubernetes to use the dependency. -godep save ./... - -# To update an existing dependency, do: -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -go get -u path/to/dependency -# Change code in Kubernetes accordingly if necessary. -godep update path/to/dependency -``` - -5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is OK by re-restoring: ```godep restore``` - -It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. - -Please send dependency updates in separate commits within your PR, for easier reviewing. - -## Hooks - -Before committing any changes, please link/copy these hooks into your .git -directory. This will keep you from accidentally committing non-gofmt'd go code. - -``` -cd kubernetes/.git/hooks/ -ln -s ../../hooks/pre-commit . -``` - -## Unit tests - -``` -cd kubernetes -hack/test-go.sh -``` - -Alternatively, you could run: - -``` -cd kubernetes -godep go test ./... -``` - -If you only want to run unit tests in one package, you can run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in the kubelet package: - -``` -$ cd kubernetes # step into kubernetes' directory. -$ cd pkg/kubelet -$ godep go test -# some output from unit tests -PASS -ok github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet 0.317s -``` - -## Coverage - -Currently, collecting coverage is only supported for the Go unit tests. - -To run all unit tests and generate an HTML coverage report, run the following: - -``` -cd kubernetes -KUBE_COVER=y hack/test-go.sh -``` - -At the end of the run, the HTML report will be generated with the path printed to stdout. - -To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: -``` -cd kubernetes -KUBE_COVER=y hack/test-go.sh pkg/kubectl -``` - -Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. - -Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/GoogleCloudPlatform/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. - -## Integration tests - -You need [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) installed and in your ``$PATH``. -``` -cd kubernetes -hack/test-integration.sh -``` - -## End-to-End tests - -You can run an end-to-end test which will bring up a master and two minions, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce"). -``` -cd kubernetes -hack/e2e-test.sh -``` - -Pressing control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running you can force a cleanup with this command: -``` -go run hack/e2e.go --down -``` - -### Flag options -See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster; here is an overview: - -```sh -# Build binaries for testing -go run hack/e2e.go --build - -# Create a fresh cluster.
Deletes a cluster first, if it exists -go run hack/e2e.go --up - -# Create a fresh cluster at a specific release version. -go run hack/e2e.go --up --version=0.7.0 - -# Test if a cluster is up. -go run hack/e2e.go --isup - -# Push code to an existing cluster -go run hack/e2e.go --push - -# Push to an existing cluster, or bring up a cluster if it's down. -go run hack/e2e.go --pushup - -# Run all tests -go run hack/e2e.go --test - -# Run tests matching the regex "Pods.*env" -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" - -# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly: -hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env -``` - -### Combining flags -```sh -# Flags can be combined, and their actions will take place in this order: -# -build, -push|-up|-pushup, -test|-tests=..., -down -# e.g.: -go run hack/e2e.go -build -pushup -test -down - -# -v (verbose) can be added if you want streaming output instead of only -# seeing the output of failed commands. - -# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for -# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing -# kubectl output. -go run hack/e2e.go -v -ctl='get events' -go run hack/e2e.go -v -ctl='delete pod foobar' -``` - -## Conformance testing -End-to-end testing, as described above, is for [development -distributions](../../docs/devel/writing-a-getting-started-guide.md). A conformance test is used on -a [versioned distro](../../docs/devel/writing-a-getting-started-guide.md). - -The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not -require support for up/push/down and other operations. To run a conformance test, you need to know the -IP of the master for your cluster and the authorization arguments to use. The conformance test is -intended to run against a cluster at a specific binary release of Kubernetes. -See [conformance-test.sh](../../hack/conformance-test.sh). - -## Testing out flaky tests -[Instructions here](flaky-tests.md) - -## Keeping your development fork in sync - -One time after cloning your forked repo: - -``` -git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git -``` - -Then each time you want to sync to upstream: - -``` -git fetch upstream -git rebase upstream/master -``` - -If you have write access to the main repository, you should modify your git configuration so that -you can't accidentally push to upstream: - -``` -git remote set-url --push upstream no_push -``` - -## Regenerating the CLI documentation - -``` -hack/run-gendocs.sh -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/development.md?pixel)]() diff --git a/release-0.19.0/docs/devel/faster_reviews.md b/release-0.19.0/docs/devel/faster_reviews.md deleted file mode 100644 index cd6cb5ff445..00000000000 --- a/release-0.19.0/docs/devel/faster_reviews.md +++ /dev/null @@ -1,183 +0,0 @@ -# How to get faster PR reviews - -Most of what is written here is not at all specific to Kubernetes, but it bears -being written down in the hope that it will occasionally remind people of "best -practices" around code reviews. - -You've just had a brilliant idea on how to make Kubernetes better. Let's call -that idea "FeatureX". Feature X is not even that complicated. 
You have a -pretty good idea of how to implement it. You jump in and implement it, fixing a -bunch of stuff along the way. You send your PR - this is awesome! And it sits. -And sits. A week goes by and nobody reviews it. Finally someone offers a few -comments, which you fix up and wait for more review. And you wait. Another -week or two goes by. This is horrible. - -What went wrong? One particular problem that comes up frequently is this - your -PR is too big to review. You've touched 39 files and have 8657 insertions. -When your would-be reviewers pull up the diffs they run away - this PR is going -to take 4 hours to review and they don't have 4 hours right now. They'll get to it -later, just as soon as they have more free time (ha!). - -Let's talk about how to avoid this. - -## 1. Don't build a cathedral in one PR - -Are you sure FeatureX is something the Kubernetes team wants or will accept, or -that it is implemented to fit with other changes in flight? Are you willing to -bet a few days or weeks of work on it? If you have any doubt at all about the -usefulness of your feature or the design - make a proposal doc or a sketch PR -or both. Write or code up just enough to express the idea and the design and -why you made those choices, then get feedback on this. Now, when we ask you to -change a bunch of facets of the design, you don't have to re-write it all. - -## 2. Smaller diffs are exponentially better - -Small PRs get reviewed faster and are more likely to be correct than big ones. -Let's face it - attention wanes over time. If your PR takes 60 minutes to -review, I almost guarantee that the reviewer's eye for details is not as keen in -the last 30 minutes as it was in the first. This leads to multiple rounds of -review when one might have sufficed. In some cases the review is delayed in its -entirety by the need for a large contiguous block of time to sit and read your -code. - -Whenever possible, break up your PRs into multiple commits. Making a series of -discrete commits is a powerful way to express the evolution of an idea or the -different ideas that make up a single feature. There's a balance to be struck, -obviously. If your commits are too small they become more cumbersome to deal -with. Strive to group logically distinct ideas into commits. - -For example, if you found that FeatureX needed some "prefactoring" to fit in, -make a commit that JUST does that prefactoring. Then make a new commit for -FeatureX. Don't lump unrelated things together just because you didn't think -about prefactoring. If you need to, fork a new branch, do the prefactoring -there and send a PR for that. If you can explain why you are doing seemingly -no-op work ("it makes the FeatureX change easier, I promise") we'll probably be -OK with it. - -Obviously, a PR with 25 commits is still very cumbersome to review, so use -common sense. - -## 3. Multiple small PRs are often better than multiple commits - -If you can extract whole ideas from your PR and send those as PRs of their own, -you can avoid the painful problem of continually rebasing. Kubernetes is a -fast-moving codebase - lock in your changes ASAP, and make merges be someone -else's problem. - -Obviously, we want every PR to be useful on its own, so you'll have to use -common sense in deciding what can be a PR vs what should be a commit in a larger -PR. Rule of thumb - if this commit or set of commits is directly related to -FeatureX and nothing else, it should probably be part of the FeatureX PR. 
If -you can plausibly imagine someone finding value in this commit outside of -FeatureX, try it as a PR. - -Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs -than 10 unreviewable monoliths. - -## 4. Don't rename, reformat, comment, etc in the same PR - -Often, as you are implementing FeatureX, you find things that are just wrong. -Bad comments, poorly named functions, bad structure, weak type-safety. You -should absolutely fix those things (or at least file issues, please) - but not -in this PR. See the above points - break unrelated changes out into different -PRs or commits. Otherwise your diff will have WAY too many changes, and your -reviewer won't see the forest because of all the trees. - -## 5. Comments matter - -Read up on GoDoc - follow those general rules. If you're writing code and you -think there is any possible chance that someone might not understand why you did -something (or that you won't remember what you yourself did), comment it. If -you think there's something pretty obvious that we could follow up on, add a -TODO. Many code-review comments are about this exact issue. - -## 5. Tests are almost always required - -Nothing is more frustrating than doing a review, only to find that the tests are -inadequate or even entirely absent. Very few PRs can touch code and NOT touch -tests. If you don't know how to test FeatureX - ask! We'll be happy to help -you design things for easy testing or to suggest appropriate test cases. - -## 6. Look for opportunities to generify - -If you find yourself writing something that touches a lot of modules, think hard -about the dependencies you are introducing between packages. Can some of what -you're doing be made more generic and moved up and out of the FeatureX package? -Do you need to use a function or type from an otherwise unrelated package? If -so, promote! We have places specifically for hosting more generic code. - -Likewise if FeatureX is similar in form to FeatureW which was checked in last -month and it happens to exactly duplicate some tricky stuff from FeatureW, -consider prefactoring core logic out and using it in both FeatureW and FeatureX. -But do that in a different commit or PR, please. - -## 7. Fix feedback in a new commit - -Your reviewer has finally sent you some feedback on FeatureX. You make a bunch -of changes and ... what? You could patch those into your commits with git -"squash" or "fixup" logic. But that makes your changes hard to verify. Unless -your whole PR is pretty trivial, you should instead put your fixups into a new -commit and re-push. Your reviewer can then look at that commit on its own - so -much faster to review than starting over. - -We might still ask you to clean up your commits at the very end, for the sake -of a more readable history. - -## 8. KISS, YAGNI, MVP, etc - -Sometimes we need to remind each other of core tenets of software design - Keep -It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding -features "because we might need it later" is antithetical to software that -ships. Add the things you need NOW and (ideally) leave room for things you -might need later - but don't implement them now. - -## 9. Push back - -We understand that it is hard to imagine, but sometimes we make mistakes. It's -OK to push back on changes requested during a review. If you have a good reason -for doing something a certain way, you are absolutely allowed to debate the -merits of a requested change. You might be overruled, but you might also -prevail. 
We're mostly pretty reasonable people. Mostly. - -## 10. I'm still getting stalled - help?! - -So, you've done all that and you still aren't getting any PR love? Here's some -things you can do that might help kick a stalled process along: - - * Make sure that your PR has an assigned reviewer (assignee in GitHub). If - this is not the case, reply to the PR comment stream asking for one to be - assigned. - - * Ping the assignee (@username) on the PR comment stream asking for an - estimate of when they can get to it. - - * Ping the assignee by email (many of us have email addresses that are well - published or are the same as our GitHub handle @google.com or @redhat.com). - -If you think you have fixed all the issues in a round of review, and you haven't -heard back, you should ping the reviewer (assignee) on the comment stream with a -"please take another look" (PTAL) or similar comment indicating you are done and -you think it is ready for re-review. In fact, this is probably a good habit for -all PRs. - -One phenomenon of open-source projects (where anyone can comment on any issue) -is the dog-pile - your PR gets so many comments from so many people it becomes -hard to follow. In this situation you can ask the primary reviewer -(assignee) whether they want you to fork a new PR to clear out all the comments. -Remember: you don't HAVE to fix every issue raised by every person who feels -like commenting, but you should at least answer reasonable comments with an -explanation. - -## Final: Use common sense - -Obviously, none of these points are hard rules. There is no document that can -take the place of common sense and good taste. Use your best judgment, but put -a bit of thought into how your work can be made easier to review. If you do -these things your PRs will flow much more easily. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/faster_reviews.md?pixel)]() diff --git a/release-0.19.0/docs/devel/flaky-tests.md b/release-0.19.0/docs/devel/flaky-tests.md deleted file mode 100644 index a82d0c3b5de..00000000000 --- a/release-0.19.0/docs/devel/flaky-tests.md +++ /dev/null @@ -1,68 +0,0 @@ -# Hunting flaky tests in Kubernetes -Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. - -We have a goal of 99.9% flake free tests. This means that there is only one flake in one thousand runs of a test. - -Running a test 1000 times on your own machine can be tedious and time consuming. Fortunately, there is a better way to achieve this using Kubernetes. - -_Note: these instructions are mildly hacky for now, as we get run once semantics and logging they will get better_ - -There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix. 
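If you would like to inspect the test image before scheduling it on the cluster, you can pull it locally first. This step is optional and assumes Docker is available on your workstation:

```sh
# Optional: fetch the flake-testing image locally to confirm it is reachable.
docker pull brendanburns/flake
```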
- -Create a replication controller with the following config: -```yaml -apiVersion: v1 -kind: ReplicationController -metadata: - name: flakecontroller -spec: - replicas: 24 - template: - metadata: - labels: - name: flake - spec: - containers: - - name: flake - image: brendanburns/flake - env: - - name: TEST_PACKAGE - value: pkg/tools - - name: REPO_SPEC - value: https://github.com/GoogleCloudPlatform/kubernetes -``` -Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. - -``` -kubectl create -f controller.yaml -``` - -This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test. -You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. -You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes: - -```sh -echo "" > output.txt -for i in {1..4}; do - echo "Checking kubernetes-minion-${i}" - echo "kubernetes-minion-${i}:" >> output.txt - gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt -done -grep "Exited ([^0])" output.txt -``` - -Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running: - -```sh -kubectl stop replicationcontroller flakecontroller -``` - -If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller. - -Happy flake hunting! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/flaky-tests.md?pixel)]() diff --git a/release-0.19.0/docs/devel/issues.md b/release-0.19.0/docs/devel/issues.md deleted file mode 100644 index e65b5071fa7..00000000000 --- a/release-0.19.0/docs/devel/issues.md +++ /dev/null @@ -1,25 +0,0 @@ -GitHub Issues for the Kubernetes Project -======================================== - -A list quick overview of how we will review and prioritize incoming issues at https://github.com/GoogleCloudPlatform/kubernetes/issues - -Priorities ----------- - -We will use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet. - -Definitions ------------ -* P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it. 
-* P1 - must fix for earliest possible binary release (every two weeks) -* P2 - should be fixed in next major release version -* P3 - default priority for lower importance bugs that we still want to track and plan to fix at some point -* design - priority/design is for issues that are used to track design discussions -* support - priority/support is used for issues tracking user support requests -* untriaged - anything without a priority/X label will be considered untriaged - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/issues.md?pixel)]() diff --git a/release-0.19.0/docs/devel/logging.md b/release-0.19.0/docs/devel/logging.md deleted file mode 100644 index b389b9d352b..00000000000 --- a/release-0.19.0/docs/devel/logging.md +++ /dev/null @@ -1,32 +0,0 @@ -Logging Conventions -=================== - -The following are conventions for which glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally preferred to [log](http://golang.org/pkg/log/) for better runtime control. - -* glog.Errorf() - Always an error -* glog.Warningf() - Something unexpected, but probably not an error -* glog.Infof() has multiple levels: - * glog.V(0) - Generally useful for this to ALWAYS be visible to an operator - * Programmer errors - * Logging extra info about a panic - * CLI argument handling - * glog.V(1) - A reasonable default log level if you don't want verbosity. - * Information about config (listening on X, watching Y) - * Errors that repeat frequently that relate to conditions that can be corrected (pod detected as unhealthy) - * glog.V(2) - Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems. - * Logging HTTP requests and their exit code - * System state changing (killing pod) - * Controller state change events (starting pods) - * Scheduler log messages - * glog.V(3) - Extended information about changes - * More info about system state changes - * glog.V(4) - Debug level verbosity (for now) - * Logging in particularly thorny parts of code where you may want to come back later and check it - -As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/logging.md?pixel)]() diff --git a/release-0.19.0/docs/devel/profiling.md b/release-0.19.0/docs/devel/profiling.md deleted file mode 100644 index 33ed0279012..00000000000 --- a/release-0.19.0/docs/devel/profiling.md +++ /dev/null @@ -1,40 +0,0 @@ -# Profiling Kubernetes - -This document explains how to plug in the profiler and how to profile Kubernetes services. - -## Profiling library - -Go comes with the built-in 'net/http/pprof' profiling library and profiling web service. The service works by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary.
The output can be processed offline by the tool of choice, or used as an input to the handy 'go tool pprof', which can graphically represent the result. - -## Adding profiling to the APIserver - -TL;DR: Add the lines: -``` - m.mux.HandleFunc("/debug/pprof/", pprof.Index) - m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) - m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) -``` -to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package. - -In most use cases it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/master/server/server.go' more servers are created and started as separate goroutines. The one that usually serves external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding the profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do. - -## Connecting to the profiler -Even with the profiler running, I found it not entirely straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is to create an ssh tunnel from the kubernetes_master open unsecured port to some external server, and use this server as a proxy. To save everyone looking for the correct ssh flags, it is done by running: -``` - ssh kubernetes_master -L:localhost:8080 -``` -or the analogous command for your cloud provider. Afterwards you can e.g. run -``` -go tool pprof http://localhost:/debug/pprof/profile -``` -to get a 30-second CPU profile. - -## Contention profiling - -To enable contention profiling you need to add the line ```rt.SetBlockProfileRate(1)``` in addition to the ```m.mux.HandleFunc(...)``` lines added before (```rt``` stands for ```runtime``` in ```master.go```). This enables the 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/profiling.md?pixel)]() diff --git a/release-0.19.0/docs/devel/pull-requests.md b/release-0.19.0/docs/devel/pull-requests.md deleted file mode 100644 index bc8083b487f..00000000000 --- a/release-0.19.0/docs/devel/pull-requests.md +++ /dev/null @@ -1,22 +0,0 @@ -Pull Request Process -==================== - -An overview of how we will manage old or out-of-date pull requests. - -Process ------- - -We will close any pull requests older than two weeks. - -Exceptions can be made for PRs that have active review comments, or that are awaiting other dependent PRs. Closed pull requests are easy to recreate, and little work is lost by closing a pull request that subsequently needs to be reopened.
- -We want to limit the total number of PRs in flight to: -* Maintain a clean project -* Remove old PRs that would be difficult to rebase as the underlying code has changed over time -* Encourage code velocity - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/pull-requests.md?pixel)]() diff --git a/release-0.19.0/docs/devel/releasing.dot b/release-0.19.0/docs/devel/releasing.dot deleted file mode 100644 index fe8124c36da..00000000000 --- a/release-0.19.0/docs/devel/releasing.dot +++ /dev/null @@ -1,113 +0,0 @@ -// Build it with: -// $ dot -Tsvg releasing.dot >releasing.svg - -digraph tagged_release { - size = "5,5" - // Arrows go up. - rankdir = BT - subgraph left { - // Group the left nodes together. - ci012abc -> pr101 -> ci345cde -> pr102 - style = invis - } - subgraph right { - // Group the right nodes together. - version_commit -> dev_commit - style = invis - } - { // Align the version commit and the info about it. - rank = same - // Align them with pr101 - pr101 - version_commit - // release_info shows the change in the commit. - release_info - } - { // Align the dev commit and the info about it. - rank = same - // Align them with 345cde - ci345cde - dev_commit - dev_info - } - // Join the nodes from subgraph left. - pr99 -> ci012abc - pr102 -> pr100 - // Do the version node. - pr99 -> version_commit - dev_commit -> pr100 - tag -> version_commit - pr99 [ - label = "Merge PR #99" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - ci012abc [ - label = "012abc" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr101 [ - label = "Merge PR #101" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - ci345cde [ - label = "345cde" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr102 [ - label = "Merge PR #102" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - version_commit [ - label = "678fed" - shape = circle - fillcolor = "#ccffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - dev_commit [ - label = "456dcb" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr100 [ - label = "Merge PR #100" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - release_info [ - label = "pkg/version/base.go:\ngitVersion = \"v0.5\";" - shape = none - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - dev_info [ - label = "pkg/version/base.go:\ngitVersion = \"v0.5-dev\";" - shape = none - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - tag [ - label = "$ git tag -a v0.5" - fillcolor = "#ffcccc" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; -} - diff --git a/release-0.19.0/docs/devel/releasing.md b/release-0.19.0/docs/devel/releasing.md deleted file mode 
100644 index 4769f48ca48..00000000000 --- a/release-0.19.0/docs/devel/releasing.md +++ /dev/null @@ -1,171 +0,0 @@ -# Releasing Kubernetes - -This document explains how to create a Kubernetes release (as in version) and -how the version information gets embedded into the built binaries. - -## Origin of the Sources - -Kubernetes may be built from either a git tree (using `hack/build-go.sh`) or -from a tarball (using either `hack/build-go.sh` or `go install`) or directly by -the Go native build system (using `go get`). - -When building from git, we want to be able to insert specific information about -the build tree at build time. In particular, we want to use the output of `git -describe` to generate the version of Kubernetes and the status of the build -tree (add a `-dirty` prefix if the tree was modified.) - -When building from a tarball or using the Go build system, we will not have -access to the information about the git tree, but we still want to be able to -tell whether this build corresponds to an exact release (e.g. v0.3) or is -between releases (e.g. at some point in development between v0.3 and v0.4). - -## Version Number Format - -In order to account for these use cases, there are some specific formats that -may end up representing the Kubernetes version. Here are a few examples: - -- **v0.5**: This is official version 0.5 and this version will only be used - when building from a clean git tree at the v0.5 git tag, or from a tree - extracted from the tarball corresponding to that specific release. -- **v0.5-15-g0123abcd4567**: This is the `git describe` output and it indicates - that we are 15 commits past the v0.5 release and that the SHA1 of the commit - where the binaries were built was `0123abcd4567`. It is only possible to have - this level of detail in the version information when building from git, not - when building from a tarball. -- **v0.5-15-g0123abcd4567-dirty** or **v0.5-dirty**: The extra `-dirty` prefix - means that the tree had local modifications or untracked files at the time of - the build, so there's no guarantee that the source code matches exactly the - state of the tree at the `0123abcd4567` commit or at the `v0.5` git tag - (resp.) -- **v0.5-dev**: This means we are building from a tarball or using `go get` or, - if we have a git tree, we are using `go install` directly, so it is not - possible to inject the git version into the build information. Additionally, - this is not an official release, so the `-dev` prefix indicates that the - version we are building is after `v0.5` but before `v0.6`. (There is actually - an exception where a commit with `v0.5-dev` is not present on `v0.6`, see - later for details.) - -## Injecting Version into Binaries - -In order to cover the different build cases, we start by providing information -that can be used when using only Go build tools or when we do not have the git -version information available. - -To be able to provide a meaningful version in those cases, we set the contents -of variables in a Go source file that will be used when no overrides are -present. - -We are using `pkg/version/base.go` as the source of versioning in absence of -information from git. 
Here is a sample of that file's contents: - -``` - var ( - gitVersion string = "v0.4-dev" // version from git, output of $(git describe) - gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD) - ) -``` - -This means a build with `go install` or `go get` or a build from a tarball will -yield binaries that will identify themselves as `v0.4-dev` and will not be able -to provide you with a SHA1. - -To add the extra versioning information when building from git, the -`hack/build-go.sh` script will gather that information (using `git describe` and -`git rev-parse`) and then create a `-ldflags` string to pass to `go install` and -tell the Go linker to override the contents of those variables at build time. It -can, for instance, tell it to override `gitVersion` and set it to -`v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which -is the complete SHA1 of the (dirty) tree used at build time. - -## Handling Official Versions - -Handling official versions from git is easy, as long as there is an annotated -git tag pointing to a specific version then `git describe` will return that tag -exactly which will match the idea of an official version (e.g. `v0.5`). - -Handling it on tarballs is a bit harder since the exact version string must be -present in `pkg/version/base.go` for it to get embedded into the binaries. But -simply creating a commit with `v0.5` on its own would mean that the commits -coming after it would also get the `v0.5` version when built from tarball or `go -get` while in fact they do not match `v0.5` (the one that was tagged) exactly. - -To handle that case, creating a new release should involve creating two adjacent -commits where the first of them will set the version to `v0.5` and the second -will set it to `v0.5-dev`. In that case, even in the presence of merges, there -will be a single commit where the exact `v0.5` version will be used and all -others around it will either have `v0.4-dev` or `v0.5-dev`. - -The diagram below illustrates it. - -![Diagram of git commits involved in the release](./releasing.png) - -After working on `v0.4-dev` and merging PR 99 we decide it is time to release -`v0.5`. So we start a new branch, create one commit to update -`pkg/version/base.go` to include `gitVersion = "v0.5"` and `git commit` it. - -We test it and make sure everything is working as expected. - -Before sending a PR for it, we create a second commit on that same branch, -updating `pkg/version/base.go` to include `gitVersion = "v0.5-dev"`. That will -ensure that further builds (from tarball or `go install`) on that tree will -always include the `-dev` prefix and will not have a `v0.5` version (since they -do not match the official `v0.5` exactly.) - -We then send PR 100 with both commits in it. - -Once the PR is accepted, we can use `git tag -a` to create an annotated tag -*pointing to the one commit* that has `v0.5` in `pkg/version/base.go` and push -it to GitHub. (Unfortunately GitHub tags/releases are not annotated tags, so -this needs to be done from a git client and pushed to GitHub using SSH.) - -## Parallel Commits - -While we are working on releasing `v0.5`, other development takes place and -other PRs get merged. For instance, in the example above, PRs 101 and 102 get -merged to the master branch before the versioning PR gets merged. 
- -This is not a problem; it is only slightly inaccurate: checking out the tree at commit `012abc`, at commit `345cde`, or at the merge commits of PR 101 or 102 will yield a version of `v0.4-dev`, *but* those commits are not present in `v0.5`. - -In that sense, there is a small window in which commits will get a `v0.4-dev` or `v0.4-N-gXXX` label even though they are not really before `v0.5`: they are indeed later than `v0.4`, but `v0.5` does not contain those commits. - -Unfortunately, there is not much we can do about it. On the other hand, other projects seem to live with that and it does not really become a large problem. - -As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is not present in Docker `v1.2.0`: - -``` - $ git describe a327d9b91edf - v1.1.1-822-ga327d9b91edf - - $ git log --oneline v1.2.0..a327d9b91edf - a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB - - (Non-empty output here means the commit is not present on v1.2.0.) -``` - -## Release Notes - -No official release should be made final without properly matching release notes. - -For each release there should be a short summary, or preamble, of the major changes, covering both feature improvements/bug fixes and notes about functional feature changes (if any) relative to the previous released version, so that what is involved in updating to it is as obvious and trouble-free as possible. - -After this preamble, all the relevant PRs/issues that went into that version should be listed and linked, together with a small summary understandable by plain mortals (in a perfect world the PR/issue title would be enough, but often it is too cryptic/geeky/domain-specific for that). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/releasing.md?pixel)]() diff --git a/release-0.19.0/docs/devel/releasing.png b/release-0.19.0/docs/devel/releasing.png deleted file mode 100644 index 935628deddc..00000000000 Binary files a/release-0.19.0/docs/devel/releasing.png and /dev/null differ diff --git a/release-0.19.0/docs/devel/releasing.svg b/release-0.19.0/docs/devel/releasing.svg deleted file mode 100644 index f703e6e2ac9..00000000000 --- a/release-0.19.0/docs/devel/releasing.svg +++ /dev/null @@ -1,113 +0,0 @@ -[SVG markup not reproduced here: the file is the rendered form of the releasing.dot graph above, showing merge commits for PRs #99-#102, CI commits 012abc and 345cde, the version commit 678fed (gitVersion = "v0.5"), the dev commit 456dcb (gitVersion = "v0.5-dev"), and the `$ git tag -a v0.5` step.] diff --git a/release-0.19.0/docs/devel/writing-a-getting-started-guide.md b/release-0.19.0/docs/devel/writing-a-getting-started-guide.md deleted file mode 100644 index 9333cd1856a..00000000000 --- a/release-0.19.0/docs/devel/writing-a-getting-started-guide.md +++ /dev/null @@ -1,105 +0,0 @@ -# Writing a Getting Started Guide -This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes.
-It also gives some guidelines which reviewers should follow when reviewing a pull request for a -guide. - -A Getting Started Guide is instructions on how to create a Kubernetes cluster on top of a particular -type(s) of infrastructure. Infrastructure includes: the IaaS provider for VMs; -the node OS; inter-node networking; and node Configuration Management system. -A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call -the combination of all these things needed to run on a particular type of infrastructure a -**distro**. - -[The Matrix](../../docs/getting-started-guides/README.md) lists the distros. If there is already a guide -which is similar to the one you have planned, consider improving that one. - - -Distros fall into two categories: - - **versioned distros** are tested to work with a particular binary release of Kubernetes. These - come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster. - - **development distros** are tested work with the latest Kubernetes source code. But, there are - relatively few of these and the bar is much higher for creating one. - -There are different guidelines for each. - -## Versioned Distro Guidelines -These guidelines say *what* to do. See the Rationale section for *why*. - - Send us a PR. - - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily - search for uses of flags by guides. - - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your - own repo. - - Setup a cluster and run the [conformance test](../../docs/devel/conformance-test.md) against it, and report the - results in your PR. - - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). - - State the binary version of kubernetes that you tested clearly in your Guide doc and in The Matrix. - - Even if you are just updating the binary version used, please still do a conformance test. - - If it worked before and now fails, you can ask on IRC, - check the release notes since your last tested version, or look at git -logs for files in other distros - that are updated to the new version. - - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer - distros. - - If a versioned distro has not been updated for many binary releases, it may be dropped from the Matrix. - -If you have a cluster partially working, but doing all the above steps seems like too much work, -we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. -Just file an issue or chat us on IRC and one of the committers will link to it from the wiki. - -## Development Distro Guidelines -These guidelines say *what* to do. See the Rationale section for *why*. - - the main reason to add a new development distro is to support a new IaaS provider (VM and - network management). This means implementing a new `pkg/cloudprovider/$IAAS_NAME`. - - Development distros should use Saltstack for Configuration Management. - - development distros need to support automated cluster creation, deletion, upgrading, etc. - This mean writing scripts in `cluster/$IAAS_NAME`. 
- - all commits to the tip of this repo must not break any of the development distros - - the author of the change is responsible for making the changes necessary on all the cloud providers if the change affects any of them, and for reverting the change if it breaks any of the CIs. - - a development distro needs to have an organization which owns it. This organization needs to: - - set up and maintain Continuous Integration that runs e2e frequently (multiple times per day) against the distro at head, and which notifies all devs of breakage. - - be reasonably available for questions and assist with refactoring and feature additions that affect code for their IaaS. - -## Rationale - - We want people to create Kubernetes clusters with whatever IaaS, Node OS, configuration management tools, and so on, which they are familiar with. The guidelines for **versioned distros** are designed for flexibility. - - We want developers to be able to work without understanding all the permutations of IaaS, NodeOS, and configuration management. The guidelines for **developer distros** are designed for consistency. - - We want users to have a uniform experience with Kubernetes whenever they follow instructions anywhere in our Github repository. So, we ask that versioned distros pass a **conformance test** to make sure they really work. - - We ask versioned distros to **clearly state a version**. People pulling from Github may expect any instructions there to work at Head, so stuff that has not been tested at Head needs to be called out. We are still changing things really fast, and, while the REST API is versioned, it is not practical at this point to version or limit changes that affect distros. We still change flags at the Kubernetes/Infrastructure interface. - - We want to **limit the number of development distros** for several reasons. Developers should only have to change a limited number of places to add a new feature. Also, since we will gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines. - - We do not require versioned distros to do **CI** for several reasons. It is a steep learning curve to understand our automated testing scripts. And it is considerable effort to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone has the time and money to run CI. We do not want to discourage people from writing and sharing guides because of this. - - Versioned distro authors are free to run their own CI and let us know if there is breakage, but we will not include them as commit hooks -- there cannot be so many commit checks that it is impossible to pass them all. - - We prefer a single Configuration Management tool for development distros. If there were more than one, the core developers would have to learn multiple tools and update config in multiple places. **Saltstack** happens to be the one we picked when we started the project. We welcome versioned distros that use any tool; there are already examples of CoreOS Fleet, Ansible, and others. - - You can still run code from head or your own branch if you use another Configuration Management tool -- you just have to do some manual steps during testing and deployment.
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/devel/writing-a-getting-started-guide.md?pixel)]() diff --git a/release-0.19.0/docs/developer-guide.md b/release-0.19.0/docs/developer-guide.md deleted file mode 100644 index 1bc77402d07..00000000000 --- a/release-0.19.0/docs/developer-guide.md +++ /dev/null @@ -1,41 +0,0 @@ -# Kubernetes Developer Guide - -The developer guide is for anyone wanting to either write code which directly accesses the -kubernetes API, or to contribute directly to the kubernetes project. -It assumes some familiarity with concepts in the [User Guide](user-guide.md) and the [Cluster Admin -Guide](cluster-admin-guide.md). - - -## Developing against the Kubernetes API - -* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/). - -* **Annotations** ([annotations.md](annotations.md)): are for attaching arbitrary non-identifying metadata to objects. - Programs that automate Kubernetes objects may use annotations to store small amounts of their state. - -* **API Conventions** ([api-conventions.md](api-conventions.md)): - Defining the verbs and resources used in the Kubernetes API. - -* **API Client Libraries** ([client-libraries.md](client-libraries.md)): - A list of existing client libraries, both supported and user-contributed. - -## Writing Plugins - -* **Authentication Plugins** ([authentication.md](authentication.md)): - The current and planned states of authentication tokens. - -* **Authorization Plugins** ([authorization.md](authorization.md)): - Authorization applies to all HTTP requests on the main apiserver port. - This doc explains the available authorization implementations. - -* **Admission Control Plugins** ([admission_control](design/admission_control.md)) - -## Contributing to the Kubernetes Project - -See this [README](../docs/devel/README.md). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/developer-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/developer-guide.md?pixel)]() diff --git a/release-0.19.0/docs/dns.md b/release-0.19.0/docs/dns.md deleted file mode 100644 index ecdde9b27ce..00000000000 --- a/release-0.19.0/docs/dns.md +++ /dev/null @@ -1,44 +0,0 @@ -# DNS Integration with Kubernetes - -As of kubernetes 0.8, DNS is offered as a cluster add-on. If enabled, a DNS -Pod and Service will be scheduled on the cluster, and the kubelets will be -configured to tell individual containers to use the DNS Service's IP. - -Every Service defined in the cluster (including the DNS server itself) will be -assigned a DNS name. By default, a client Pod's DNS search list will -include the Pod's own namespace and the cluster's default domain. This is best -illustrated by example: - -Assume a Service named `foo` in the kubernetes namespace `bar`. A Pod running -in namespace `bar` can look up this service by simply doing a DNS query for -`foo`. A Pod running in namespace `quux` can look up this service by doing a -DNS query for `foo.bar`. - -The cluster DNS server ([SkyDNS](https://github.com/skynetservices/skydns)) -supports forward lookups (A records) and service lookups (SRV records). 
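-
-To make the example above concrete, this is what the lookups look like from a shell inside a
-running container (a minimal sketch; it assumes you already have a shell in a pod, for example via
-`kubectl exec`, and that the pod's image ships `nslookup`):
-
-```
-# From a pod in namespace `bar`, both forms resolve to the `foo` Service IP:
-nslookup foo          # short name, found via the pod's DNS search list
-nslookup foo.bar      # namespace-qualified name
-
-# From a pod in namespace `quux`, only the namespace-qualified form resolves:
-nslookup foo.bar
-```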
- -## How it Works - -The DNS pod that runs holds 3 containers - skydns, etcd (which skydns uses), -and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process -watches the kubernetes master for changes in Services, and then writes the -information to etcd, which skydns reads. This etcd instance is not linked to -any other etcd clusters that might exist, including the kubernetes master. - -## Issues - -The skydns service is reachable directly from kubernetes nodes (outside -of any container) and DNS resolution works if the skydns service is targeted -explicitly. However, nodes are not configured to use the cluster DNS service or -to search the cluster's DNS domain by default. This may be resolved at a later -time. - -## For more information - -See [the docs for the cluster addon](../cluster/addons/dns/README.md). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/dns.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/dns.md?pixel)]() diff --git a/release-0.19.0/docs/downward_api.md b/release-0.19.0/docs/downward_api.md deleted file mode 100644 index a5b097c357e..00000000000 --- a/release-0.19.0/docs/downward_api.md +++ /dev/null @@ -1,53 +0,0 @@ -# Downward API - -The downward API allows containers to consume information about the system without coupling to the -kubernetes client or REST API. - -### Capabilities - -Containers can consume the following information via the downward API: - -* Their pod's name -* Their pod's namespace - -### Consuming information about a pod in a container - -Containers consume information from the downward API using environment variables. In the future, -containers will also be able to consume the downward API via a volume plugin. The `valueFrom` -field of an environment variable allows you to specify an `ObjectFieldSelector` to select fields -from the pod's definition. The `ObjectFieldSelector` has an `apiVersion` field and a `fieldPath` -field. The `fieldPath` field is an expression designating a field on the pod. The `apiVersion` -field is the version of the API schema that the `fieldPath` is written in terms of. If the -`apiVersion` field is not specified it is defaulted to the API version of the enclosing object. - -### Example: consuming the downward API - -This is an example of a pod that consumes its name and namespace via the downward API: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: dapi-test-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - restartPolicy: Never -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/downward_api.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/downward_api.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/README.md b/release-0.19.0/docs/getting-started-guides/README.md deleted file mode 100644 index c621e43a565..00000000000 --- a/release-0.19.0/docs/getting-started-guides/README.md +++ /dev/null @@ -1,66 +0,0 @@ -If you are not sure what OSes and infrastructure is supported, the table below lists all the combinations which have -been tested recently. - -For the easiest "kick the tires" experience, please try the [local docker](docker.md) guide. 
- -If you are considering contributing a new guide, please read the -[guidelines](../../docs/devel/writing-a-getting-started-guide.md). - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Support Level | Notes --------------- | ------------ | ------ | ---------- | ---------------------------------------------------- | ---------------------------- | ----- -GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | Commercial | Uses K8s version 0.15.0 -Vagrant | Saltstack | Fedora | OVS | [docs](../../docs/getting-started-guides/vagrant.md) | Project | Uses latest via https://get.k8s.io/ -GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | Project | Tested with 0.15.0 by @robertbailey -Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) | Uses K8s version 0.17.0 -Docker Single Node | custom | N/A | local | [docs](docker.md) | Project (@brendandburns) | Tested @ 0.14.1 | -Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | Project (@brendandburns) | Tested @ 0.14.1 | -Bare-metal | Ansible | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/fedora_ansible_config.md) | Project | Uses K8s v0.13.2 -Bare-metal | custom | Fedora | _none_ | [docs](../../docs/getting-started-guides/fedora/fedora_manual_config.md) | Project | Uses K8s v0.13.2 -Bare-metal | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0 -libvirt | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0 -KVM | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0 -Mesos/GCE | | | | [docs](../../docs/getting-started-guides/mesos.md) | [Community](https://github.com/mesosphere/kubernetes-mesos) ([@jdef](https://github.com/jdef)) | Uses K8s v0.11.2 -AWS | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | Community | Uses K8s version 0.17.0 -GCE | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | Community (@kelseyhightower) | Uses K8s version 0.15.0 -Vagrant | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | Community ( [@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles) ) | Uses K8s version 0.15.0 -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos/bare_metal_offline.md) | Community([@jeffbean](https://github.com/jeffbean)) | Uses K8s version 0.15.0 -CloudStack | Ansible | CoreOS | flannel | [docs](../../docs/getting-started-guides/cloudstack.md)| Community (@runseb) | Uses K8s version 0.9.1 -Vmware | | Debian | OVS | [docs](../../docs/getting-started-guides/vsphere.md) | Community (@pietern) | Uses K8s version 0.9.1 -Bare-metal | custom | CentOS | _none_ | [docs](../../docs/getting-started-guides/centos/centos_manual_config.md) | Community(@coolsvap) | Uses K8s v0.9.1 -AWS | Juju | Ubuntu | flannel | 
[docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1 -OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1 -Joyent | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1 -AWS | Saltstack | Ubuntu | OVS | [docs](../../docs/getting-started-guides/aws.md) | Community (@justinsb) | Uses K8s version 0.5.0 -Vmware | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | Community (@kelseyhightower) | Uses K8s version 0.15.0 -Azure | Saltstack | Ubuntu | OpenVPN | [docs](../../docs/getting-started-guides/azure.md) | Community | -Bare-metal | custom | Ubuntu | flannel | [docs](../../docs/getting-started-guides/ubuntu.md) | Community (@resouer @WIZARD-CXY) | use k8s version 0.18.0 -Docker Single Node | custom | N/A | local | [docs](docker.md) | Project (@brendandburns) | Tested @ 0.14.1 | -Docker Multi Node | Flannel| N/A | local | [docs](docker-multinode.md) | Project (@brendandburns) | Tested @ 0.14.1 | -Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | Community (@preillyme) | -libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](../../docs/getting-started-guides/libvirt-coreos.md) | Community (@lhuard1A) | -oVirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | Community (@simon3z) | -Rackspace | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/rackspace.md) | Community (@doublerr) | use k8s version 0.18.0 - - -*Note*: The above table is ordered by version test/used in notes followed by support level. - -Definition of columns: - - **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on. - - **OS** is the base operating system of the nodes. - - **Config. Mgmt** is the configuration management system that helps install and maintain kubernetes software on the - nodes. - - **Networking** is what implements the [networking model](../../docs/networking.md). Those with networking type - _none_ may not support more than one node, or may support multiple VM nodes only in the same physical node. - - Support Levels - - **Project**: Kubernetes Committers regularly use this configuration, so it usually works with the latest release - of Kubernetes. - - **Commercial**: A commercial offering with its own support arrangements. - - **Community**: Actively supported by community contributions. May not work with more recent releases of kubernetes. - - **Inactive**: No active maintainer. Not recommended for first-time K8s users, and may be deleted soon. - - **Notes** is relevant information such as version k8s used. 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/README.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/aws-coreos.md b/release-0.19.0/docs/getting-started-guides/aws-coreos.md deleted file mode 100644 index ebd4fea8b80..00000000000 --- a/release-0.19.0/docs/getting-started-guides/aws-coreos.md +++ /dev/null @@ -1,220 +0,0 @@ -# Getting started on Amazon EC2 with CoreOS - -The example below creates an elastic Kubernetes cluster with a custom number of worker nodes and a master. - -**Warning:** contrary to the [supported procedure](aws.md), the examples below provision Kubernetes with an insecure API server (plain HTTP, -no security tokens, no basic auth). For demonstration purposes only. - -## Highlights - -* Cluster bootstrapping using [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) -* Cross container networking with [flannel](https://github.com/coreos/flannel#flannel) -* Auto worker registration with [kube-register](https://github.com/kelseyhightower/kube-register#kube-register) -* Kubernetes v0.17.0 [official binaries](https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.17.0) - -## Prerequisites - -* [aws CLI](http://aws.amazon.com/cli) -* [CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/) -* [kubectl CLI](aws/kubectl.md) - -## Starting a Cluster - -### CloudFormation - -The [cloudformation-template.json](aws/cloudformation-template.json) can be used to bootstrap a Kubernetes cluster with a single command: - -```bash -aws cloudformation create-stack --stack-name kubernetes --region us-west-2 \ ---template-body file://aws/cloudformation-template.json \ ---parameters ParameterKey=KeyPair,ParameterValue= \ - ParameterKey=ClusterSize,ParameterValue= \ - ParameterKey=VpcId,ParameterValue= \ - ParameterKey=SubnetId,ParameterValue= \ - ParameterKey=SubnetAZ,ParameterValue= -``` - -It will take a few minutes for the entire stack to come up. You can monitor the stack progress with the following command: - -```bash -aws cloudformation describe-stack-events --stack-name kubernetes -``` - -Record the Kubernetes Master IP address: - -```bash -aws cloudformation describe-stacks --stack-name kubernetes -``` - -[Skip to kubectl client configuration](#configure-the-kubectl-ssh-tunnel) - -### AWS CLI - -The following commands shall use the latest CoreOS alpha AMI for the `us-west-2` region. For a list of different regions and corresponding AMI IDs see the [CoreOS EC2 cloud provider documentation](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel). 
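-
-If you prefer to look up a current alpha AMI ID from the command line instead, a query along these
-lines works (a sketch only: the `CoreOS-alpha-*` name filter is an assumption, so double-check the
-result against the CoreOS documentation above):
-
-```bash
-# List recent CoreOS alpha HVM images in us-west-2, newest last.
-aws ec2 describe-images --region us-west-2 \
-  --filters "Name=name,Values=CoreOS-alpha-*" "Name=virtualization-type,Values=hvm" \
-  --query 'Images[*].[CreationDate,Name,ImageId]' \
-  --output text | sort | tail -n 5
-```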
- -#### Create the Kubernetes Security Group - -```bash -aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group" -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes -``` - -#### Save the master and node cloud-configs - -* [master.yaml](aws/cloud-configs/master.yaml) -* [node.yaml](aws/cloud-configs/node.yaml) - -#### Launch the master - -*Attention:* replace `` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). - -```bash -aws ec2 run-instances --image-id --key-name \ ---region us-west-2 --security-groups kubernetes --instance-type m3.medium \ ---user-data file://master.yaml -``` - -Record the `InstanceId` for the master. - -Gather the public and private IPs for the master node: - -```bash -aws ec2 describe-instances --instance-id -``` - -``` -{ - "Reservations": [ - { - "Instances": [ - { - "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com", - "RootDeviceType": "ebs", - "State": { - "Code": 16, - "Name": "running" - }, - "PublicIpAddress": "54.68.97.117", - "PrivateIpAddress": "172.31.9.9", -... -``` - -#### Update the node.yaml cloud-config - -Edit `node.yaml` and replace all instances of `` with the **private** IP address of the master node. - -### Launch 3 worker nodes - -*Attention:* Replace `` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel). - -```bash -aws ec2 run-instances --count 3 --image-id --key-name \ ---region us-west-2 --security-groups kubernetes --instance-type m3.medium \ ---user-data file://node.yaml -``` - -### Add additional worker nodes - -*Attention:* replace `` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel). - -```bash -aws ec2 run-instances --count 1 --image-id --key-name \ ---region us-west-2 --security-groups kubernetes --instance-type m3.medium \ ---user-data file://node.yaml -``` - -### Configure the kubectl SSH tunnel - -This command enables secure communication between the kubectl client and the Kubernetes API. - -```bash -ssh -f -nNT -L 8080:127.0.0.1:8080 core@ -``` - -### Listing worker nodes - -Once the worker instances have fully booted, they will be automatically registered with the Kubernetes API server by the kube-register service running on the master node. It may take a few mins. - -```bash -kubectl get nodes -``` - -## Starting a simple pod - -Create a pod manifest: `pod.json` - -```json -{ - "apiVersion": "v1", - "kind": "Pod", - "metadata": { - "name": "hello", - "labels": { - "name": "hello", - "environment": "testing" - } - }, - "spec": { - "containers": [{ - "name": "hello", - "image": "quay.io/kelseyhightower/hello", - "ports": [{ - "containerPort": 80, - "hostPort": 80 - }] - }] - } -} -``` - -### Create the pod using the kubectl command line tool - -```bash -kubectl create -f pod.json -``` - -### Testing - -```bash -kubectl get pods -``` - -Record the **Host** of the pod, which should be the private IP address. - -Gather the public IP address for the worker node. 
- -```bash -aws ec2 describe-instances --filters 'Name=private-ip-address,Values=' -``` - -``` -{ - "Reservations": [ - { - "Instances": [ - { - "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com", - "RootDeviceType": "ebs", - "State": { - "Code": 16, - "Name": "running" - }, - "PublicIpAddress": "54.68.97.117", -... -``` - -Visit the public IP address in your browser to view the running pod. - -### Delete the pod - -```bash -kubectl delete pods hello -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws-coreos.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/aws-coreos.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/aws.md b/release-0.19.0/docs/getting-started-guides/aws.md deleted file mode 100644 index d89199fdf44..00000000000 --- a/release-0.19.0/docs/getting-started-guides/aws.md +++ /dev/null @@ -1,89 +0,0 @@ -# Getting started on AWS EC2 - -## Prerequisites - -1. You need an AWS account. Visit [http://aws.amazon.com](http://aws.amazon.com) to get started -2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli) -3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access. - -## Cluster turnup -### Supported procedure: `get-kube` -```bash -#Using wget -export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash - -#Using cURL -export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash -``` - -NOTE: This script calls [cluster/kube-up.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/kube-up.sh) -which in turn calls [cluster/aws/util.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/util.sh) -using [cluster/aws/config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh). - -This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed, -as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security -tokens are written in `~/.kube/kubeconfig`, they will be necessary to use the CLI or the HTTP Basic Auth. - -By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu. -You can override the variables defined in [config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh) to change this behavior as follows: - -```bash -export KUBE_AWS_ZONE=eu-west-1c -export NUM_MINIONS=2 -export MINION_SIZE=m3.medium -export AWS_S3_REGION=eu-west-1 -export AWS_S3_BUCKET=mycompany-kubernetes-artifacts -export INSTANCE_PREFIX=k8s -... -``` - -It will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion". -If these already exist, make sure you want them to be used here. - -NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` key to point to your private key. - -### Alternatives -A contributed [example](aws-coreos.md) allows you to setup a Kubernetes cluster based on [CoreOS](http://www.coreos.com), either using -AWS CloudFormation or EC2 with user data (cloud-config). 
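-
-Whichever route you take, once the cluster is up you can sanity-check it from your workstation
-(a minimal sketch; it assumes `kubectl` is installed and configured as described in the next section):
-
-```bash
-# Show the master and the add-on services running in the cluster
-kubectl cluster-info
-
-# Every node should eventually report a Ready status
-kubectl get nodes
-```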
- -## Getting started with your cluster -### Command line administration tool: `kubectl` -Copy the appropriate `kubectl` binary to any location defined in your `PATH` environment variable, for example: - -```bash -# OS X -sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl - -# Linux -sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl -``` - -An up-to-date documentation page for this tool is available here: [kubectl manual](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md) - -By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API. -For more information, please read [kubeconfig files](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md) - -### Examples -See [a simple nginx example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/simple-nginx.md) to try out your new cluster. - -The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook) - -For more complete applications, please look in the [examples directory](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples) - -## Tearing down the cluster -Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the -`kubernetes` directory: - -```bash -cluster/kube-down.sh -``` - -## Further reading -Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering -and using a Kubernetes cluster. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/aws.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/aws/cloud-configs/master.yaml b/release-0.19.0/docs/getting-started-guides/aws/cloud-configs/master.yaml deleted file mode 100644 index af8d61078a7..00000000000 --- a/release-0.19.0/docs/getting-started-guides/aws/cloud-configs/master.yaml +++ /dev/null @@ -1,177 +0,0 @@ -#cloud-config - -write_files: - - path: /opt/bin/waiter.sh - owner: root - permissions: 0755 - content: | - #! 
/usr/bin/bash - until curl http://127.0.0.1:2379/v2/machines; do sleep 2; done - -coreos: - etcd2: - name: master - initial-cluster-token: k8s_etcd - initial-cluster: master=http://$private_ipv4:2380 - listen-peer-urls: http://$private_ipv4:2380,http://localhost:2380 - initial-advertise-peer-urls: http://$private_ipv4:2380 - listen-client-urls: http://$private_ipv4:2379,http://localhost:2379 - advertise-client-urls: http://$private_ipv4:2379 - fleet: - etcd_servers: http://localhost:2379 - metadata: k8srole=master - flannel: - etcd_endpoints: http://localhost:2379 - locksmithd: - endpoint: http://localhost:2379 - units: - - name: etcd2.service - command: start - - name: fleet.service - command: start - - name: etcd2-waiter.service - command: start - content: | - [Unit] - Description=etcd waiter - Wants=network-online.target - Wants=etcd2.service - After=etcd2.service - After=network-online.target - Before=flanneld.service fleet.service locksmithd.service - - [Service] - ExecStart=/usr/bin/bash /opt/bin/waiter.sh - RemainAfterExit=true - Type=oneshot - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Service] - ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker-cache.service - command: start - content: | - [Unit] - Description=Docker cache proxy - Requires=early-docker.service - After=early-docker.service - Before=early-docker.target - - [Service] - Restart=always - TimeoutStartSec=0 - RestartSec=5 - Environment=TMPDIR=/var/tmp/ - Environment=DOCKER_HOST=unix:///var/run/early-docker.sock - ExecStartPre=-/usr/bin/docker kill docker-registry - ExecStartPre=-/usr/bin/docker rm docker-registry - ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest - # GUNICORN_OPTS is an workaround for - # https://github.com/docker/docker-registry/issues/892 - ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \ - -e STANDALONE=false \ - -e GUNICORN_OPTS=[--preload] \ - -e MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io \ - -e MIRROR_TAGS_CACHE_TTL=1800 \ - quay.io/devops/docker-registry:latest - - name: docker.service - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - # making sure that docker-cache is up and that flanneld finished - # startup, otherwise containers won't land in flannel's network... 
- Requires=docker-cache.service - After=docker-cache.service - - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://$private_ipv4:5000' - - name: get-kubectl.service - command: start - content: | - [Unit] - Description=Get kubectl client tool - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=network-online.target - After=network-online.target - - [Service] - ExecStart=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl - ExecStart=/usr/bin/chmod +x /opt/bin/kubectl - Type=oneshot - RemainAfterExit=true - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2-waiter.service - After=etcd2-waiter.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --insecure-bind-address=0.0.0.0 \ - --service-cluster-ip-range=10.100.0.0/16 \ - --etcd-servers=http://localhost:2379 - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler \ - --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-register.service - command: start - content: | - [Unit] - Description=Kubernetes Registration Service - Documentation=https://github.com/kelseyhightower/kube-register - Requires=kube-apiserver.service fleet.service - After=kube-apiserver.service fleet.service - - [Service] - ExecStartPre=-/usr/bin/wget -nc -O /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64 - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register - ExecStart=/opt/bin/kube-register \ - --metadata=k8srole=node \ - --fleet-endpoint=unix:///var/run/fleet.sock \ - --api-endpoint=http://127.0.0.1:8080 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.19.0/docs/getting-started-guides/aws/cloud-configs/node.yaml b/release-0.19.0/docs/getting-started-guides/aws/cloud-configs/node.yaml deleted file mode 100644 index 9d3d61d868a..00000000000 --- a/release-0.19.0/docs/getting-started-guides/aws/cloud-configs/node.yaml +++ /dev/null @@ -1,81 +0,0 @@ -#cloud-config - -write_files: - - path: /opt/bin/wupiao - owner: root - permissions: 0755 - 
content: | - #!/bin/bash - # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen - [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \ - --silent --head --fail \ - http://${1}:${2}; do sleep 1 && echo -n .; done; - exit $? - -coreos: - etcd2: - listen-client-urls: http://localhost:2379 - advertise-client-urls: http://0.0.0.0:2379 - initial-cluster: master=http://:2380 - proxy: on - fleet: - etcd_servers: http://localhost:2379 - metadata: k8srole=node - flannel: - etcd_endpoints: http://localhost:2379 - locksmithd: - endpoint: http://localhost:2379 - units: - - name: etcd2.service - command: start - - name: fleet.service - command: start - - name: flanneld.service - command: start - - name: docker.service - command: start - drop-ins: - - name: 50-docker-mirror.conf - content: | - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://:5000' - - name: kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kubelet \ - --api-servers=:8080 \ - --hostname-override=$private_ipv4 - Restart=always - RestartSec=10 - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kube-proxy \ - --master=http://:8080 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.19.0/docs/getting-started-guides/aws/cloudformation-template.json b/release-0.19.0/docs/getting-started-guides/aws/cloudformation-template.json deleted file mode 100644 index 7617445125c..00000000000 --- a/release-0.19.0/docs/getting-started-guides/aws/cloudformation-template.json +++ /dev/null @@ -1,421 +0,0 @@ -{ - "AWSTemplateFormatVersion": "2010-09-09", - "Description": "Kubernetes 0.17.0 on EC2 powered by CoreOS 681.0.0 (alpha)", - "Mappings": { - "RegionMap": { - "eu-central-1" : { - "AMI" : "ami-4c4f7151" - }, - "ap-northeast-1" : { - "AMI" : "ami-3a35fd3a" - }, - "us-gov-west-1" : { - "AMI" : "ami-57117174" - }, - "sa-east-1" : { - "AMI" : "ami-fbcc4ae6" - }, - "ap-southeast-2" : { - "AMI" : "ami-593c4263" - }, - "ap-southeast-1" : { - "AMI" : "ami-3a083668" - }, - "us-east-1" : { - "AMI" : "ami-40322028" - }, - "us-west-2" : { - "AMI" : "ami-23b58613" - }, - "us-west-1" : { - "AMI" : "ami-15618f51" - }, - "eu-west-1" : { - "AMI" : "ami-8d1164fa" - } - } - }, - "Parameters": { - "InstanceType": { - "Description": "EC2 HVM instance type (m3.medium, etc).", - "Type": "String", - "Default": "m3.medium", - "AllowedValues": [ - "m3.medium", - "m3.large", - "m3.xlarge", - "m3.2xlarge", - "c3.large", - "c3.xlarge", - "c3.2xlarge", - "c3.4xlarge", - "c3.8xlarge", - "cc2.8xlarge", - "cr1.8xlarge", - "hi1.4xlarge", - "hs1.8xlarge", - "i2.xlarge", - 
"i2.2xlarge", - "i2.4xlarge", - "i2.8xlarge", - "r3.large", - "r3.xlarge", - "r3.2xlarge", - "r3.4xlarge", - "r3.8xlarge", - "t2.micro", - "t2.small", - "t2.medium" - ], - "ConstraintDescription": "Must be a valid EC2 HVM instance type." - }, - "ClusterSize": { - "Description": "Number of nodes in cluster (2-12).", - "Default": "2", - "MinValue": "2", - "MaxValue": "12", - "Type": "Number" - }, - "AllowSSHFrom": { - "Description": "The net block (CIDR) that SSH is available to.", - "Default": "0.0.0.0/0", - "Type": "String" - }, - "KeyPair": { - "Description": "The name of an EC2 Key Pair to allow SSH access to the instance.", - "Type": "AWS::EC2::KeyPair::KeyName" - }, - "VpcId": { - "Description": "The ID of the VPC to launch into.", - "Type": "AWS::EC2::VPC::Id" - }, - "SubnetId": { - "Description": "The ID of the subnet to launch into (that must be within the supplied VPC)", - "Type": "AWS::EC2::Subnet::Id" - }, - "SubnetAZ": { - "Description": "The availability zone of the subnet supplied (for example eu-west-1a)", - "Type": "String" - } - }, - "Conditions": { - "UseEC2Classic": {"Fn::Equals": [{"Ref": "VpcId"}, ""]} - }, - "Resources": { - "KubernetesSecurityGroup": { - "Type": "AWS::EC2::SecurityGroup", - "Properties": { - "VpcId": {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, {"Ref": "VpcId"}]}, - "GroupDescription": "Kubernetes SecurityGroup", - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": "22", - "ToPort": "22", - "CidrIp": {"Ref": "AllowSSHFrom"} - } - ] - } - }, - "KubernetesIngress": { - "Type": "AWS::EC2::SecurityGroupIngress", - "Properties": { - "GroupId": {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}, - "IpProtocol": "tcp", - "FromPort": "1", - "ToPort": "65535", - "SourceSecurityGroupId": { - "Fn::GetAtt" : [ "KubernetesSecurityGroup", "GroupId" ] - } - } - }, - "KubernetesIngressUDP": { - "Type": "AWS::EC2::SecurityGroupIngress", - "Properties": { - "GroupId": {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}, - "IpProtocol": "udp", - "FromPort": "1", - "ToPort": "65535", - "SourceSecurityGroupId": { - "Fn::GetAtt" : [ "KubernetesSecurityGroup", "GroupId" ] - } - } - }, - "KubernetesMasterInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "NetworkInterfaces" : [{ - "GroupSet" : [{"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}], - "AssociatePublicIpAddress" : "true", - "DeviceIndex" : "0", - "DeleteOnTermination" : "true", - "SubnetId" : {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, {"Ref": "SubnetId"}]} - }], - "ImageId": {"Fn::FindInMap" : ["RegionMap", {"Ref": "AWS::Region" }, "AMI"]}, - "InstanceType": {"Ref": "InstanceType"}, - "KeyName": {"Ref": "KeyPair"}, - "Tags" : [ - {"Key" : "Name", "Value" : {"Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "k8s-master" ] ]}}, - {"Key" : "KubernetesRole", "Value" : "node"} - ], - "UserData": { "Fn::Base64": {"Fn::Join" : ["", [ - "#cloud-config\n\n", - "write_files:\n", - "- path: /opt/bin/waiter.sh\n", - " owner: root\n", - " content: |\n", - " #! 
/usr/bin/bash\n", - " until curl http://127.0.0.1:2379/v2/machines; do sleep 2; done\n", - "coreos:\n", - " etcd2:\n", - " name: master\n", - " initial-cluster-token: k8s_etcd\n", - " initial-cluster: master=http://$private_ipv4:2380\n", - " listen-peer-urls: http://$private_ipv4:2380,http://localhost:2380\n", - " initial-advertise-peer-urls: http://$private_ipv4:2380\n", - " listen-client-urls: http://$private_ipv4:2379,http://localhost:2379\n", - " advertise-client-urls: http://$private_ipv4:2379\n", - " fleet:\n", - " etcd_servers: http://localhost:2379\n", - " metadata: k8srole=master\n", - " flannel:\n", - " etcd_endpoints: http://localhost:2379\n", - " locksmithd:\n", - " endpoint: http://localhost:2379\n", - " units:\n", - " - name: etcd2.service\n", - " command: start\n", - " - name: fleet.service\n", - " command: start\n", - " - name: etcd2-waiter.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=etcd waiter\n", - " Wants=network-online.target\n", - " Wants=etcd2.service\n", - " After=etcd2.service\n", - " After=network-online.target\n", - " Before=flanneld.service fleet.service locksmithd.service\n\n", - " [Service]\n", - " ExecStart=/usr/bin/bash /opt/bin/waiter.sh\n", - " RemainAfterExit=true\n", - " Type=oneshot\n", - " - name: flanneld.service\n", - " command: start\n", - " drop-ins:\n", - " - name: 50-network-config.conf\n", - " content: |\n", - " [Service]\n", - " ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\": \"10.244.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}}'\n", - " - name: docker-cache.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Docker cache proxy\n", - " Requires=early-docker.service\n", - " After=early-docker.service\n", - " Before=early-docker.target\n\n", - " [Service]\n", - " Restart=always\n", - " TimeoutStartSec=0\n", - " RestartSec=5\n", - " Environment=TMPDIR=/var/tmp/\n", - " Environment=DOCKER_HOST=unix:///var/run/early-docker.sock\n", - " ExecStartPre=-/usr/bin/docker kill docker-registry\n", - " ExecStartPre=-/usr/bin/docker rm docker-registry\n", - " ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest\n", - " # GUNICORN_OPTS is an workaround for\n", - " # https://github.com/docker/docker-registry/issues/892\n", - " ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \\\n", - " -e STANDALONE=false \\\n", - " -e GUNICORN_OPTS=[--preload] \\\n", - " -e MIRROR_SOURCE=https://registry-1.docker.io \\\n", - " -e MIRROR_SOURCE_INDEX=https://index.docker.io \\\n", - " -e MIRROR_TAGS_CACHE_TTL=1800 \\\n", - " quay.io/devops/docker-registry:latest\n", - " - name: get-kubectl.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Get kubectl client tool\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=network-online.target\n", - " After=network-online.target\n\n", - " [Service]\n", - " ExecStart=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl\n", - " ExecStart=/usr/bin/chmod +x /opt/bin/kubectl\n", - " Type=oneshot\n", - " RemainAfterExit=true\n", - " - name: kube-apiserver.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes API Server\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=etcd2-waiter.service\n", - " After=etcd2-waiter.service\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P 
/opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-apiserver\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver\n", - " ExecStart=/opt/bin/kube-apiserver \\\n", - " --insecure-bind-address=0.0.0.0 \\\n", - " --service-cluster-ip-range=10.100.0.0/16 \\\n", - " --etcd-servers=http://localhost:2379\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-controller-manager.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Controller Manager\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=kube-apiserver.service\n", - " After=kube-apiserver.service\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-controller-manager\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager\n", - " ExecStart=/opt/bin/kube-controller-manager \\\n", - " --master=127.0.0.1:8080\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-scheduler.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Scheduler\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=kube-apiserver.service\n", - " After=kube-apiserver.service\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-scheduler\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler\n", - " ExecStart=/opt/bin/kube-scheduler \\\n", - " --master=127.0.0.1:8080\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-register.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Registration Service\n", - " Documentation=https://github.com/kelseyhightower/kube-register\n", - " Requires=kube-apiserver.service fleet.service\n", - " After=kube-apiserver.service fleet.service\n\n", - " [Service]\n", - " ExecStartPre=-/usr/bin/wget -nc -O /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register\n", - " ExecStart=/opt/bin/kube-register \\\n", - " --metadata=k8srole=node \\\n", - " --fleet-endpoint=unix:///var/run/fleet.sock \\\n", - " --api-endpoint=http://127.0.0.1:8080\n", - " Restart=always\n", - " RestartSec=10\n", - " update:\n", - " group: alpha\n", - " reboot-strategy: off\n" - ]]} - } - } - }, - "KubernetesNodeLaunchConfig": { - "Type": "AWS::AutoScaling::LaunchConfiguration", - "Properties": { - "ImageId": {"Fn::FindInMap" : ["RegionMap", {"Ref": "AWS::Region" }, "AMI" ]}, - "InstanceType": {"Ref": "InstanceType"}, - "KeyName": {"Ref": "KeyPair"}, - "AssociatePublicIpAddress" : "true", - "SecurityGroups": [{"Fn::If": [ - "UseEC2Classic", - {"Ref": "KubernetesSecurityGroup"}, - {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}] - }], - "UserData": { "Fn::Base64": {"Fn::Join" : ["", [ - "#cloud-config\n\n", - "coreos:\n", - " etcd2:\n", - " listen-client-urls: http://localhost:2379\n", - " initial-cluster: master=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":2380\n", - " proxy: on\n", - " fleet:\n", - " etcd_servers: http://localhost:2379\n", - " metadata: k8srole=node\n", - " flannel:\n", - " etcd_endpoints: http://localhost:2379\n", - " locksmithd:\n", - " endpoint: 
http://localhost:2379\n", - " units:\n", - " - name: etcd2.service\n", - " command: start\n", - " - name: fleet.service\n", - " command: start\n", - " - name: flanneld.service\n", - " command: start\n", - " - name: docker.service\n", - " command: start\n", - " drop-ins:\n", - " - name: 50-docker-mirror.conf\n", - " content: |\n", - " [Service]\n", - " Environment=DOCKER_OPTS='--registry-mirror=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":5000'\n", - " - name: kubelet.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Kubelet\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=network-online.target\n", - " After=network-online.target\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubelet\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet\n", - " ExecStart=/opt/bin/kubelet \\\n", - " --api-servers=", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":8080 \\\n", - " --hostname-override=$private_ipv4\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-proxy.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Proxy\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=network-online.target\n", - " After=network-online.target\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-proxy\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy\n", - " ExecStart=/opt/bin/kube-proxy \\\n", - " --master=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":8080\n", - " Restart=always\n", - " RestartSec=10\n", - " update:\n", - " group: alpha\n", - " reboot-strategy: off\n" - ]]} - } - } - }, - "KubernetesAutoScalingGroup": { - "Type": "AWS::AutoScaling::AutoScalingGroup", - "Properties": { - "AvailabilityZones": {"Fn::If": ["UseEC2Classic", {"Fn::GetAZs": ""}, [{"Ref": "SubnetAZ"}]]}, - "VPCZoneIdentifier": {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, [{"Ref": "SubnetId"}]]}, - "LaunchConfigurationName": {"Ref": "KubernetesNodeLaunchConfig"}, - "MinSize": "2", - "MaxSize": "12", - "DesiredCapacity": {"Ref": "ClusterSize"}, - "Tags" : [ - {"Key" : "Name", "Value" : {"Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "k8s-node" ] ]}, "PropagateAtLaunch" : true}, - {"Key" : "KubernetesRole", "Value" : "node", "PropagateAtLaunch" : true} - ] - } - } - }, - "Outputs": { - "KubernetesMasterPublicIp": { - "Description": "Public Ip of the newly created Kubernetes Master instance", - "Value": {"Fn::GetAtt": ["KubernetesMasterInstance" , "PublicIp"]} - } - } -} diff --git a/release-0.19.0/docs/getting-started-guides/aws/kubectl.md b/release-0.19.0/docs/getting-started-guides/aws/kubectl.md deleted file mode 100644 index 473947855da..00000000000 --- a/release-0.19.0/docs/getting-started-guides/aws/kubectl.md +++ /dev/null @@ -1,27 +0,0 @@ -# Install and configure kubectl - -## Download the kubectl CLI tool -```bash -### Darwin -wget https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/darwin/amd64/kubectl - -### Linux -wget https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl -``` - -### Copy kubectl to your path -```bash -chmod +x kubectl -mv kubectl /usr/local/bin/ -``` - -### Create a secure tunnel for API 
communication -```bash -ssh -f -nNT -L 8080:127.0.0.1:8080 core@ -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws/kubectl.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/aws/kubectl.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/azure.md b/release-0.19.0/docs/getting-started-guides/azure.md deleted file mode 100644 index 4fefe919bc2..00000000000 --- a/release-0.19.0/docs/getting-started-guides/azure.md +++ /dev/null @@ -1,54 +0,0 @@ -## Getting started on Microsoft Azure - -### Azure Prerequisites - -1. You need an Azure account. Visit http://azure.microsoft.com/ to get started. -2. Install and configure the Azure cross-platform command-line interface. http://azure.microsoft.com/en-us/documentation/articles/xplat-cli/ -3. Make sure you have a default account set in the Azure cli, using `azure account set` - -### Prerequisites for your workstation - -1. Be running a Linux or Mac OS X. -2. Get or build a [binary release](binary_release.md) -3. If you want to build your own release, you need to have [Docker -installed](https://docs.docker.com/installation/). On Mac OS X you can use -[boot2docker](http://boot2docker.io/). - -### Setup -The cluster setup scripts can setup Kubernetes for multiple targets. First modify `cluster/kube-env.sh` to specify azure: - - KUBERNETES_PROVIDER="azure" - -Next, specify an existing virtual network and subnet in `cluster/azure/config-default.sh`: - - AZ_VNET= - AZ_SUBNET= - -You can create a virtual network: - - azure network vnet create --subnet= --location "West US" -v - -Now you're ready. - -You can then use the `cluster/kube-*.sh` scripts to manage your azure cluster, start with: - - cluster/kube-up.sh - -The script above will start (by default) a single master VM along with 4 worker VMs. You -can tweak some of these parameters by editing `cluster/azure/config-default.sh`. - -### Getting started with your cluster -See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../examples). - -### Tearing down the cluster -``` -cluster/kube-down.sh -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/azure.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/azure.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/binary_release.md b/release-0.19.0/docs/getting-started-guides/binary_release.md deleted file mode 100644 index cd9746bdd43..00000000000 --- a/release-0.19.0/docs/getting-started-guides/binary_release.md +++ /dev/null @@ -1,29 +0,0 @@ -## Getting a Binary Release - -You can either build a release from sources or download a pre-built release. If you don't plan on developing Kubernetes itself, we suggest a pre-built release. - -### Prebuilt Binary Release - -The list of binary releases is available for download from the [GitHub Kubernetes repo release page](https://github.com/GoogleCloudPlatform/kubernetes/releases). - -Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud. - -### Building from source - -Get the Kubernetes source. 
-If you are simply building a release from source there is no need to set up a full golang environment, as all building happens in a Docker container.
-
-Building a release is simple.
-
-```bash
-git clone https://github.com/GoogleCloudPlatform/kubernetes.git
-cd kubernetes
-make release
-```
-
-For more details on the release process see the [`build/` directory](../../build).
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/binary_release.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/binary_release.md?pixel)]()
diff --git a/release-0.19.0/docs/getting-started-guides/centos/centos_manual_config.md b/release-0.19.0/docs/getting-started-guides/centos/centos_manual_config.md
deleted file mode 100644
index 4853bbb33eb..00000000000
--- a/release-0.19.0/docs/getting-started-guides/centos/centos_manual_config.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-## Getting started on [CentOS](http://centos.org)
-
-This is a getting started guide for CentOS.  It is a manual configuration, so you understand all the underlying packages, services, ports, etc.
-
-This guide will only get ONE minion working.  Getting multiple minions working requires a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
-
-The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy.  These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes.  We will break the services up between the hosts.  The first host, centos-master, will be the Kubernetes master.  This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler.  In addition, the master will also run _etcd_.  The remaining host, centos-minion, will be the minion and run the kubelet, proxy, cadvisor, and docker.
-
-**System Information:**
-
-Hosts:
-```
-centos-master = 192.168.121.9
-centos-minion = 192.168.121.65
-```
-
-**Prepare the hosts:**
-
-* Create a virt7-testing repo on all hosts - centos-{master,minion} - with the following information.
-
-```
-[virt7-testing]
-name=virt7-testing
-baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
-gpgcheck=0
-```
-
-* Install kubernetes on all hosts - centos-{master,minion}.  This will also pull in etcd, docker, and cadvisor.
-
-```
-yum -y install --enablerepo=virt7-testing kubernetes
-```
-
-* *Note:* use etcd-0.4.6-7 (this is a temporary workaround in this documentation).
-
-In the current virt7-testing repo the etcd package has been updated, which causes a service failure.  If you do not get etcd-0.4.6-7 installed from the virt7-testing repo, work around it as follows.
-First remove the newer etcd package:
-
-```
-yum erase etcd
-```
-
-This uninstalls the currently available etcd package.  Then install etcd-0.4.6-7 directly and reinstall kubernetes:
-
-```
-yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
-yum -y install --enablerepo=virt7-testing kubernetes
-```
-
-* Add the master and minion to /etc/hosts on all machines (not needed if the hostnames are already in DNS)
-
-```
-echo "192.168.121.9 centos-master
-192.168.121.65 centos-minion" >> /etc/hosts
-```
-
-* Edit /etc/kubernetes/config, which will be the same on all hosts, to contain:
-
-```
-# Comma separated list of nodes in the etcd cluster
-KUBE_ETCD_SERVERS="--etcd_servers=http://centos-master:4001"
-
-# logging to stderr means we get it in the systemd journal
-KUBE_LOGTOSTDERR="--logtostderr=true"
-
-# journal message level, 0 is debug
-KUBE_LOG_LEVEL="--v=0"
-
-# Should this cluster be allowed to run privileged docker containers
-KUBE_ALLOW_PRIV="--allow_privileged=false"
-```
-
-* Disable the firewall on both the master and the minion, as docker does not play well with other firewall rule managers
-
-```
-systemctl disable iptables-services firewalld
-systemctl stop iptables-services firewalld
-```
-
-**Configure the kubernetes services on the master.**
-
-* Edit /etc/kubernetes/apiserver to appear as such:
-
-```
-# The address on the local server to listen to.
-KUBE_API_ADDRESS="--address=0.0.0.0"
-
-# The port on the local server to listen on.
-KUBE_API_PORT="--port=8080"
-
-# How the replication controller and scheduler find the kube-apiserver
-KUBE_MASTER="--master=http://centos-master:8080"
-
-# Port minions listen on
-KUBELET_PORT="--kubelet_port=10250"
-
-# Address range to use for services
-KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
-
-# Add your own!
-KUBE_API_ARGS=""
-```
-
-* Edit /etc/kubernetes/controller-manager to appear as such:
-```
-# Comma separated list of minions
-KUBELET_ADDRESSES="--machines=centos-minion"
-```
-
-* Start the appropriate services on the master:
-
-```
-for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
-    systemctl restart $SERVICES
-    systemctl enable $SERVICES
-    systemctl status $SERVICES
-done
-```
-
-**Configure the kubernetes services on the minion.**
-
-***We need to configure the kubelet and start the kubelet and proxy.***
-
-* Edit /etc/kubernetes/kubelet to appear as such:
-
-```
-# The address for the info server to serve on
-KUBELET_ADDRESS="--address=0.0.0.0"
-
-# The port for the info server to serve on
-KUBELET_PORT="--port=10250"
-
-# You may leave this blank to use the actual hostname
-KUBELET_HOSTNAME="--hostname_override=centos-minion"
-
-# Add your own!
-KUBELET_ARGS=""
-```
-
-* Start the appropriate services on the minion (centos-minion).
-
-```
-for SERVICES in kube-proxy kubelet docker; do
-    systemctl restart $SERVICES
-    systemctl enable $SERVICES
-    systemctl status $SERVICES
-done
-```
-
-*You should be finished!*
-
-* Check to make sure the cluster can see the minion (on centos-master):
-
-```
-kubectl get minions
-NAME            LABELS        STATUS
-centos-minion                 Ready
-```
-
-**The cluster should be running! Launch a test pod.**
-
-You should have a functional cluster; check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)!
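-
-If you want a quick smoke test before working through the full walkthrough, the sketch below creates
-a single nginx pod and checks that it lands on centos-minion (the pod name and image are arbitrary
-examples; `apiVersion: v1` matches Kubernetes 0.19, older packages may need `v1beta3` instead):
-
-```
-cat <<EOF > nginx-pod.yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-    ports:
-    - containerPort: 80
-EOF
-
-kubectl create -f nginx-pod.yaml
-
-# The pod should show up and eventually reach Running state on centos-minion
-kubectl get pods
-```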
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/cloudstack.md b/release-0.19.0/docs/getting-started-guides/cloudstack.md deleted file mode 100644 index 3c83ad7e4a0..00000000000 --- a/release-0.19.0/docs/getting-started-guides/cloudstack.md +++ /dev/null @@ -1,96 +0,0 @@ -## Deploying Kubernetes on [CloudStack](http://cloudstack.apache.org) - -CloudStack is a software to build public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. [Exoscale](http://exoscale.ch) for instance makes a [CoreOS](http://coreos.com) template available, therefore instructions to deploy Kubernetes on coreOS can be used. CloudStack also has a vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes. - -[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. - -There are currently two deployment techniques. - -* [Kubernetes on Exoscale](https://github.com/runseb/kubernetes-exoscale). - This uses [libcloud](http://libcloud.apache.org) to launch CoreOS instances and pass the appropriate cloud-config setup using userdata. Several manual steps are required. This is obsoleted by the Ansible playbook detailed below. - -* [Ansible playbook](https://github.com/runseb/ansible-kubernetes). - This is completely automated, a single playbook deploys Kubernetes based on the coreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md). - -#Ansible playbook - -This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init. - -Prerequisites -------------- - - $ sudo apt-get install -y python-pip - $ sudo pip install ansible - $ sudo pip install cs - -[_cs_](http://github.com/exoscale/cs) is a python module for the CloudStack API. - -Set your CloudStack endpoint, API keys and HTTP method used. - -You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`. - -Or create a `~/.cloudstack.ini` file: - - [cloudstack] - endpoint = - key = - secret = - method = post - -We need to use the http POST method to pass the _large_ userdata to the coreOS instances. - -Clone the playbook ------------------- - - $ git clone --recursive https://github.com/runseb/ansible-kubernetes.git - $ cd ansible-kubernetes - -The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is setup in this repository as a submodule, hence the `--recursive`. - -Create a Kubernetes cluster ---------------------------- - -You simply need to run the playbook. 
- - $ ansible-playbook k8s.yml - -Some variables can be edited in the `k8s.yml` file. - - vars: - ssh_key: k8s - k8s_num_nodes: 2 - k8s_security_group_name: k8s - k8s_node_prefix: k8s2 - k8s_template: Linux CoreOS alpha 435 64-bit 10GB Disk - k8s_instance_type: Tiny - -This will start a Kubernetes master node and a number of compute nodes (by default 2). -The `instance_type` and `template` by default are specific to [exoscale](http://exoscale.ch), edit them to specify your CloudStack cloud specific template and instance type (i.e service offering). - -Check the tasks and templates in `roles/k8s` if you want to modify anything. - -Once the playbook as finished, it will print out the IP of the Kubernetes master: - - TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ******** - -SSH to it using the key that was created and using the _core_ user and you can list the machines in your cluster: - - $ ssh -i ~/.ssh/id_rsa_k8s core@ - $ fleetctl list-machines - MACHINE IP METADATA - a017c422... role=node - ad13bf84... role=master - e9af8293... role=node - - - - - - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/cloudstack.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/cloudstack.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/coreos.md b/release-0.19.0/docs/getting-started-guides/coreos.md deleted file mode 100644 index d9cef74a817..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos.md +++ /dev/null @@ -1,18 +0,0 @@ -## Getting started on [CoreOS](http://coreos.com) - -There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com): - -* [Single Node Cluster](coreos/coreos_single_node_cluster.md) -* [Multi-node Cluster](coreos/coreos_multinode_cluster.md) -* [Setup Multi-node Cluster on GCE in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md) -* [Multi-node cluster using cloud-config and Weave on Vagrant](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md) -* [Multi-node cluster using cloud-config and Vagrant](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md) -* [Yet another multi-node cluster using cloud-config and Vagrant](https://github.com/AntonioMeireles/kubernetes-vagrant-coreos-cluster/blob/master/README.md) (similar to the one above but with an increased, more *aggressive* focus on features and flexibility) -* [Multi-node cluster with Vagrant and fleet units using a small OS X App](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md) -* [Resizable multi-node cluster on Azure with Weave](coreos/azure/README.md) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/coreos.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/.gitignore b/release-0.19.0/docs/getting-started-guides/coreos/azure/.gitignore deleted file mode 100644 index c2658d7d1b3..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/.gitignore +++ /dev/null @@ -1 +0,0 @@ -node_modules/ diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/README.md b/release-0.19.0/docs/getting-started-guides/coreos/azure/README.md deleted file mode 
100644 index e96524648f7..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/README.md +++ /dev/null @@ -1,195 +0,0 @@ -# Kubernetes on Azure with CoreOS and [Weave](http://weave.works) - -## Introduction - -In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease. - -## Let's go! - -To get started, you need to check out the code: -``` -git clone https://github.com/GoogleCloudPlatform/kubernetes -cd kubernetes/docs/getting-started-guides/coreos/azure/ -``` - -You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already. - -First, you need to install some of the dependencies with: - -``` -npm install -``` - -Now, all you need to do is: - -``` -./azure-login.js -u -./create-kubernetes-cluster.js -``` - -This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, a Kubernetes master, and 2 minion nodes. The `kube-00` VM will be the master; your workloads should only be deployed on the minion nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more (and bigger) VMs later. - -![VMs in Azure](initial_cluster.png) - -Once the creation of Azure VMs has finished, you should see the following: - -``` -... -azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf ` -azure_wrapper/info: The hosts in this deployment are: - [ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ] -azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml` -``` - -Let's log in to the master node like so: -``` -ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00 -``` -> Note: the config file name will be different; make sure to use the one you see. - -Check that there are 2 nodes in the cluster: -``` -core@kube-00 ~ $ kubectl get nodes -NAME LABELS STATUS -kube-01 environment=production Ready -kube-02 environment=production Ready -``` - -## Deploying the workload - -Let's follow the Guestbook example now: -``` -cd guestbook-example -kubectl create -f redis-master-controller.json -kubectl create -f redis-master-service.json -kubectl create -f redis-slave-controller.json -kubectl create -f redis-slave-service.json -kubectl create -f frontend-controller.json -kubectl create -f frontend-service.json -``` - -You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Unknown`, through `Pending`, to `Running`: -``` -kubectl get pods --watch -``` -> Note: most of the time will be spent downloading Docker container images on each of the nodes.
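If you would rather not keep a watch open, a rough polling loop along these lines should also work (it assumes the guestbook example's six pods and this release's plain-text `kubectl get pods` output):

```
until [ "$(kubectl get pods | grep -c ' Running')" -ge 6 ]; do
  echo "waiting for all pods to reach Running..."
  sleep 10
done
kubectl get pods
```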
- -Eventually you should see: -``` -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -frontend-controller-0133o 10.2.1.14 php-redis kubernetes/example-guestbook-php-redis kube-01/172.18.0.13 name=frontend,uses=redisslave,redis-master Running -frontend-controller-ls6k1 10.2.3.10 php-redis kubernetes/example-guestbook-php-redis name=frontend,uses=redisslave,redis-master Running -frontend-controller-oh43e 10.2.2.15 php-redis kubernetes/example-guestbook-php-redis kube-02/172.18.0.14 name=frontend,uses=redisslave,redis-master Running -redis-master 10.2.1.3 master redis kube-01/172.18.0.13 name=redis-master Running -redis-slave-controller-fplln 10.2.2.3 slave brendanburns/redis-slave kube-02/172.18.0.14 name=redisslave,uses=redis-master Running -redis-slave-controller-gziey 10.2.1.4 slave brendanburns/redis-slave kube-01/172.18.0.13 name=redisslave,uses=redis-master Running - -``` - -## Scaling - -Two single-core nodes are certainly not enough for a production system of today, and, as you can see, there is one _unassigned_ pod. Let's scale the cluster by adding a couple of bigger nodes. - -You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/weave-demos/coreos-azure`). - -First, lets set the size of new VMs: -``` -export AZ_VM_SIZE=Large -``` -Now, run scale script with state file of the previous deployment and number of nodes to add: -``` -./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2 -... -azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf ` -azure_wrapper/info: The hosts in this deployment are: - [ 'etcd-00', - 'etcd-01', - 'etcd-02', - 'kube-00', - 'kube-01', - 'kube-02', - 'kube-03', - 'kube-04' ] -azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml` -``` -> Note: this step has created new files in `./output`. - -Back on `kube-00`: -``` -core@kube-00 ~ $ kubectl get nodes -NAME LABELS STATUS -kube-01 environment=production Ready -kube-02 environment=production Ready -kube-03 environment=production Ready -kube-04 environment=production Ready -``` - -You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now. - -First, double-check how many replication controllers there are: - -``` -core@kube-00 ~ $ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3 -redis-master master redis name=redis-master 1 -redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2 -``` -As there are 4 nodes, let's scale proportionally: -``` -core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave -scaled -core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend -scaled -``` -Check what you have now: -``` -core@kube-00 ~ $ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4 -redis-master master redis name=redis-master 1 -redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 4 -``` - -You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node. 
- -``` -core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -frontend-controller-0133o 10.2.1.19 php-redis kubernetes/example-guestbook-php-redis kube-01/172.18.0.13 name=frontend,uses=redisslave,redis-master Running -frontend-controller-i7hvs 10.2.4.5 php-redis kubernetes/example-guestbook-php-redis kube-04/172.18.0.21 name=frontend,uses=redisslave,redis-master Running -frontend-controller-ls6k1 10.2.3.18 php-redis kubernetes/example-guestbook-php-redis kube-03/172.18.0.20 name=frontend,uses=redisslave,redis-master Running -frontend-controller-oh43e 10.2.2.22 php-redis kubernetes/example-guestbook-php-redis kube-02/172.18.0.14 name=frontend,uses=redisslave,redis-master Running -``` - -## Exposing the app to the outside world - -To make sure the app is working, you probably want to load it in the browser. To access the Guestbook service from the outside world, an Azure endpoint needs to be created, as shown in the picture below. - -![Creating an endpoint](external_access.png) - -You should then be able to access it from anywhere via the Azure virtual IP for `kube-01`, i.e. `http://104.40.211.194:8000/` as in the screenshot. - -## Next steps - -You now have a full-blown cluster running in Azure, congrats! - -You should probably try deploying other [example apps](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples) or writing your own ;) - -## Tear down... - -If you don't want to worry about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see. - -``` -./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml -``` - -> Note: make sure to use the _latest state file_, as after scaling there is a new one. - -By the way, with the scripts shown, you can deploy multiple clusters, if you like :) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/azure/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/coreos/azure/README.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/grafana-service.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/grafana-service.yaml deleted file mode 100644 index 76e49087231..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/grafana-service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "Grafana" - name: monitoring-grafana -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - name: influxGrafana - diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/heapster-controller.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/heapster-controller.yaml deleted file mode 100644 index bac59a62c7f..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/heapster-controller.yaml +++ /dev/null @@ -1,24 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - labels: - name: heapster - kubernetes.io/cluster-service: "true" - name: monitoring-heapster-controller -spec: - replicas: 1 - selector: - name: heapster - template: - metadata: - labels: - name: heapster - 
kubernetes.io/cluster-service: "true" - spec: - containers: - - image: gcr.io/google_containers/heapster:v0.12.1 - name: heapster - command: - - /heapster - - --source=kubernetes:http://kubernetes?auth= - - --sink=influxdb:http://monitoring-influxdb:8086 diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml deleted file mode 100644 index 92ee15d0c23..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml +++ /dev/null @@ -1,35 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - labels: - name: influxGrafana - kubernetes.io/cluster-service: "true" - name: monitoring-influx-grafana-controller -spec: - replicas: 1 - selector: - name: influxGrafana - template: - metadata: - labels: - name: influxGrafana - kubernetes.io/cluster-service: "true" - spec: - containers: - - image: gcr.io/google_containers/heapster_influxdb:v0.3 - name: influxdb - ports: - - containerPort: 8083 - hostPort: 8083 - - containerPort: 8086 - hostPort: 8086 - - image: gcr.io/google_containers/heapster_grafana:v0.7 - name: grafana - env: - - name: INFLUXDB_EXTERNAL_URL - value: /api/v1/proxy/namespaces/default/services/monitoring-grafana/db/ - - name: INFLUXDB_HOST - value: monitoring-influxdb - - name: INFLUXDB_PORT - value: "8086" - diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-service.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-service.yaml deleted file mode 100644 index 8301d782597..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - name: influxGrafana - name: monitoring-influxdb -spec: - ports: - - name: http - port: 8083 - targetPort: 8083 - - name: api - port: 8086 - targetPort: 8086 - selector: - name: influxGrafana - diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-controller.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-controller.yaml deleted file mode 100644 index f4cda7b032a..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-controller.yaml +++ /dev/null @@ -1,37 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: elasticsearch-logging-v1 - namespace: default - labels: - k8s-app: elasticsearch-logging - version: v1 - kubernetes.io/cluster-service: "true" -spec: - replicas: 2 - selector: - k8s-app: elasticsearch-logging - version: v1 - template: - metadata: - labels: - k8s-app: elasticsearch-logging - version: v1 - kubernetes.io/cluster-service: "true" - spec: - containers: - - image: gcr.io/google_containers/elasticsearch:1.3 - name: elasticsearch-logging - ports: - - containerPort: 9200 - name: es-port - protocol: TCP - - containerPort: 9300 - name: es-transport-port - protocol: TCP - volumeMounts: - - name: es-persistent-storage - mountPath: /data - volumes: - - name: es-persistent-storage - emptyDir: {} diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-service.yaml 
b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-service.yaml deleted file mode 100644 index 3b7ae06e7aa..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: elasticsearch-logging - namespace: default - labels: - k8s-app: elasticsearch-logging - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "Elasticsearch" -spec: - ports: - - port: 9200 - protocol: TCP - targetPort: es-port - selector: - k8s-app: elasticsearch-logging diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-controller.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-controller.yaml deleted file mode 100644 index 677bc5f664a..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-controller.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: kibana-logging-v1 - namespace: default - labels: - k8s-app: kibana-logging - version: v1 - kubernetes.io/cluster-service: "true" -spec: - replicas: 1 - selector: - k8s-app: kibana-logging - version: v1 - template: - metadata: - labels: - k8s-app: kibana-logging - version: v1 - kubernetes.io/cluster-service: "true" - spec: - containers: - - name: kibana-logging - image: gcr.io/google_containers/kibana:1.3 - env: - - name: "ELASTICSEARCH_URL" - value: "http://elasticsearch-logging:9200" - ports: - - containerPort: 5601 - name: kibana-port - protocol: TCP diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-service.yaml b/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-service.yaml deleted file mode 100644 index ac9aa5ce320..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ - -apiVersion: v1 -kind: Service -metadata: - name: kibana-logging - namespace: default - labels: - k8s-app: kibana-logging - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "Kibana" -spec: - ports: - - port: 5601 - protocol: TCP - targetPort: kibana-port - selector: - k8s-app: kibana-logging diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/azure-login.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/azure-login.js deleted file mode 100755 index 624916b2b56..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/azure-login.js +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env node - -require('child_process').fork('node_modules/azure-cli/bin/azure', ['login'].concat(process.argv)); diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-etcd-node-template.yml b/release-0.19.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-etcd-node-template.yml deleted file mode 100644 index cb1c1b254dd..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-etcd-node-template.yml +++ /dev/null @@ -1,60 +0,0 @@ -## This file is used as input to deployment script, which ammends it as needed. -## More specifically, we need to add peer hosts for each but the elected peer. 
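## For illustration only: with the default three etcd nodes, the deployment
## logic in lib/deployment_logic/kubernetes.js is expected to append a drop-in
## for etcd2.service roughly like this (values are illustrative):
##
##   50-etcd-initial-cluster.conf:
##     [Service]
##     Environment=ETCD_INITIAL_CLUSTER=etcd-00=http://etcd-00:2380,etcd-01=http://etcd-01:2380,etcd-02=http://etcd-02:2380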
- -write_files: - - path: /opt/bin/curl-retry.sh - permissions: '0755' - owner: root - content: | - #!/bin/sh -x - until curl $@ - do sleep 1 - done - -coreos: - units: - - name: download-etcd2.service - enable: true - command: start - content: | - [Unit] - After=network-online.target - Before=etcd2.service - Description=Download etcd2 Binaries - Documentation=https://github.com/coreos/etcd/ - Requires=network-online.target - [Service] - Environment=ETCD2_RELEASE_TARBALL=https://github.com/coreos/etcd/releases/download/v2.0.11/etcd-v2.0.11-linux-amd64.tar.gz - ExecStartPre=/bin/mkdir -p /opt/bin - ExecStart=/opt/bin/curl-retry.sh --silent --location $ETCD2_RELEASE_TARBALL --output /tmp/etcd2.tgz - ExecStart=/bin/tar xzvf /tmp/etcd2.tgz -C /opt - ExecStartPost=/bin/ln -s /opt/etcd-v2.0.11-linux-amd64/etcd /opt/bin/etcd2 - ExecStartPost=/bin/ln -s /opt/etcd-v2.0.11-linux-amd64/etcdctl /opt/bin/etcdctl2 - RemainAfterExit=yes - Type=oneshot - [Install] - WantedBy=multi-user.target - - name: etcd2.service - enable: true - command: start - content: | - [Unit] - After=download-etcd2.service - Description=etcd 2 - Documentation=https://github.com/coreos/etcd/ - [Service] - Environment=ETCD_NAME=%H - Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster - Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://%H:2380 - Environment=ETCD_LISTEN_PEER_URLS=http://%H:2380 - Environment=ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001 - Environment=ETCD_ADVERTISE_CLIENT_URLS=http://%H:2379,http://%H:4001 - Environment=ETCD_INITIAL_CLUSTER_STATE=new - ExecStart=/opt/bin/etcd2 - Restart=always - RestartSec=10 - [Install] - WantedBy=multi-user.target - update: - group: stable - reboot-strategy: off diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml b/release-0.19.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml deleted file mode 100644 index 16638e87199..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml +++ /dev/null @@ -1,388 +0,0 @@ -## This file is used as input to deployment script, which ammends it as needed. -## More specifically, we need to add environment files for as many nodes as we -## are going to deploy. 
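## For illustration only: for each node, the deployment logic
## (lib/deployment_logic/kubernetes.js via lib/cloud_config.js) also writes an
## environment file consumed by the weave units below; for kube-01 it is
## expected to look roughly like /etc/weave.kube-01.env containing:
##
##   WEAVE_PASSWORD="<random string>"
##   WEAVE_PEERS="kube-00"
##   BREAKOUT_ROUTE="10.2.0.0/16"
##   BRIDGE_ADDRESS_CIDR="10.2.1.1/24"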
- -write_files: - - path: /opt/bin/curl-retry.sh - permissions: '0755' - owner: root - content: | - #!/bin/sh -x - until curl $@ - do sleep 1 - done - - - path: /opt/bin/register_minion.sh - permissions: '0755' - owner: root - content: | - #!/bin/sh -xe - minion_id="${1}" - master_url="${2}" - env_label="${3}" - until healthcheck=$(curl --fail --silent "${master_url}/healthz") - do sleep 2 - done - test -n "${healthcheck}" - test "${healthcheck}" = "ok" - printf '{ - "id": "%s", - "kind": "Minion", - "apiVersion": "v1beta1", - "labels": { "environment": "%s" } - }' "${minion_id}" "${env_label}" \ - | /opt/bin/kubectl create -s "${master_url}" -f - - - - path: /etc/kubernetes/manifests/fluentd.manifest - permissions: '0755' - owner: root - content: | - apiVersion: v1 - kind: Pod - metadata: - name: fluentd-elasticsearch - spec: - containers: - - name: fluentd-elasticsearch - image: gcr.io/google_containers/fluentd-elasticsearch:1.5 - env: - - name: "FLUENTD_ARGS" - value: "-qq" - volumeMounts: - - name: varlog - mountPath: /varlog - - name: containers - mountPath: /var/lib/docker/containers - volumes: - - name: varlog - hostPath: - path: /var/log - - name: containers - hostPath: - path: /var/lib/docker/containers - -coreos: - update: - group: stable - reboot-strategy: off - units: - - name: systemd-networkd-wait-online.service - drop-ins: - - name: 50-check-github-is-reachable.conf - content: | - [Service] - ExecStart=/bin/sh -x -c \ - 'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done' - - - name: docker.service - drop-ins: - - name: 50-weave-kubernetes.conf - content: | - [Service] - Environment=DOCKER_OPTS='--bridge="weave" -r="false"' - - - name: weave-network.target - enable: true - content: | - [Unit] - Description=Weave Network Setup Complete - Documentation=man:systemd.special(7) - RefuseManualStart=no - After=network-online.target - [Install] - WantedBy=multi-user.target - WantedBy=kubernetes-master.target - WantedBy=kubernetes-minion.target - - - name: kubernetes-master.target - enable: true - command: start - content: | - [Unit] - Description=Kubernetes Cluster Master - Documentation=http://kubernetes.io/ - RefuseManualStart=no - After=weave-network.target - Requires=weave-network.target - ConditionHost=kube-00 - Wants=apiserver.service - Wants=scheduler.service - Wants=controller-manager.service - [Install] - WantedBy=multi-user.target - - - name: kubernetes-minion.target - enable: true - command: start - content: | - [Unit] - Description=Kubernetes Cluster Minion - Documentation=http://kubernetes.io/ - RefuseManualStart=no - After=weave-network.target - Requires=weave-network.target - ConditionHost=!kube-00 - Wants=proxy.service - Wants=kubelet.service - [Install] - WantedBy=multi-user.target - - - name: 10-weave.network - runtime: false - content: | - [Match] - Type=bridge - Name=weave* - [Network] - - - name: install-weave.service - enable: true - content: | - [Unit] - After=network-online.target - Before=weave.service - Before=weave-helper.service - Before=docker.service - Description=Install Weave - Documentation=http://docs.weave.works/ - Requires=network-online.target - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStartPre=/bin/mkdir -p /opt/bin/ - ExecStartPre=/opt/bin/curl-retry.sh \ - --silent \ - --location \ - https://github.com/weaveworks/weave/releases/download/latest_release/weave \ - --output /opt/bin/weave - ExecStartPre=/opt/bin/curl-retry.sh \ - --silent \ - --location \ - 
https://raw.github.com/errordeveloper/weave-demos/master/poseidon/weave-helper \ - --output /opt/bin/weave-helper - ExecStartPre=/usr/bin/chmod +x /opt/bin/weave - ExecStartPre=/usr/bin/chmod +x /opt/bin/weave-helper - ExecStart=/bin/echo Weave Installed - [Install] - WantedBy=weave-network.target - WantedBy=weave.service - - - name: weave-helper.service - enable: true - content: | - [Unit] - After=install-weave.service - After=docker.service - Description=Weave Network Router - Documentation=http://docs.weave.works/ - Requires=docker.service - Requires=install-weave.service - [Service] - ExecStart=/opt/bin/weave-helper - Restart=always - [Install] - WantedBy=weave-network.target - - - name: weave.service - enable: true - content: | - [Unit] - After=install-weave.service - After=docker.service - Description=Weave Network Router - Documentation=http://docs.weave.works/ - Requires=docker.service - Requires=install-weave.service - [Service] - TimeoutStartSec=0 - EnvironmentFile=/etc/weave.%H.env - ExecStartPre=/opt/bin/weave setup - ExecStartPre=/opt/bin/weave launch $WEAVE_PEERS - ExecStart=/usr/bin/docker attach weave - Restart=on-failure - Restart=always - ExecStop=/opt/bin/weave stop - [Install] - WantedBy=weave-network.target - - - name: weave-create-bridge.service - enable: true - content: | - [Unit] - After=network.target - After=install-weave.service - Before=weave.service - Before=docker.service - Requires=network.target - Requires=install-weave.service - [Service] - Type=oneshot - EnvironmentFile=/etc/weave.%H.env - ExecStart=/opt/bin/weave --local create-bridge - ExecStart=/usr/bin/ip addr add dev weave $BRIDGE_ADDRESS_CIDR - ExecStart=/usr/bin/ip route add $BREAKOUT_ROUTE dev weave scope link - ExecStart=/usr/bin/ip route add 224.0.0.0/4 dev weave - [Install] - WantedBy=multi-user.target - WantedBy=weave-network.target - - - name: download-kubernetes.service - enable: true - content: | - [Unit] - After=network-online.target - Before=apiserver.service - Before=controller-manager.service - Before=kubelet.service - Before=proxy.service - Description=Download Kubernetes Binaries - Documentation=http://kubernetes.io/ - Requires=network-online.target - [Service] - Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.18.0/kubernetes.tar.gz - ExecStartPre=/bin/mkdir -p /opt/ - ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz - ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/ - ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt - ExecStartPost=/bin/chmod o+rx -R /opt/kubernetes - ExecStartPost=/bin/ln -s /opt/kubernetes/server/bin/kubectl /opt/bin/ - ExecStartPost=/bin/mv /tmp/kubernetes/examples/guestbook /home/core/guestbook-example - ExecStartPost=/bin/chown core. 
-R /home/core/guestbook-example - ExecStartPost=/bin/rm -rf /tmp/kubernetes - ExecStartPost=/bin/sed 's/\("createExternalLoadBalancer":\) true/\1 false/' -i /home/core/guestbook-example/frontend-service.json - RemainAfterExit=yes - Type=oneshot - [Install] - WantedBy=kubernetes-master.target - WantedBy=kubernetes-minion.target - - - name: apiserver.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - Before=controller-manager.service - Before=scheduler.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-apiserver - Description=Kubernetes API Server - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-apiserver \ - --address=0.0.0.0 \ - --port=8080 \ - $ETCD_SERVERS \ - --service-cluster-ip-range=10.1.0.0/16 \ - --cloud_provider=vagrant \ - --logtostderr=true --v=3 - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-master.target - - - name: scheduler.service - enable: true - content: | - [Unit] - After=apiserver.service - After=download-kubernetes.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-scheduler - Description=Kubernetes Scheduler - Documentation=http://kubernetes.io/ - Wants=apiserver.service - ConditionHost=kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-scheduler \ - --logtostderr=true \ - --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-master.target - - - name: controller-manager.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - After=apiserver.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-controller-manager - Description=Kubernetes Controller Manager - Documentation=http://kubernetes.io/ - Wants=apiserver.service - Wants=download-kubernetes.service - ConditionHost=kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \ - --cloud_provider=vagrant \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-master.target - - - name: kubelet.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubelet - Description=Kubernetes Kubelet - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=!kube-00 - [Service] - ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/ - ExecStart=/opt/kubernetes/server/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=%H \ - --api_servers=http://kube-00:8080 \ - --logtostderr=true \ - --cluster_dns=10.1.0.3 \ - --cluster_domain=kube.local \ - --config=/etc/kubernetes/manifests/ - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-minion.target - - - name: proxy.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-proxy - Description=Kubernetes Proxy - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=!kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-proxy \ - --master=http://kube-00:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-minion.target - - - name: kubectl-create-minion.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - Before=proxy.service - Before=kubelet.service - 
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubectl - ConditionFileIsExecutable=/opt/bin/register_minion.sh - Description=Kubernetes Create Minion - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=!kube-00 - [Service] - ExecStart=/opt/bin/register_minion.sh %H http://kube-00:8080 production - Type=oneshot - [Install] - WantedBy=kubernetes-minion.target diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/create-kubernetes-cluster.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/create-kubernetes-cluster.js deleted file mode 100755 index 70248c596c6..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/create-kubernetes-cluster.js +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env node - -var azure = require('./lib/azure_wrapper.js'); -var kube = require('./lib/deployment_logic/kubernetes.js'); - -azure.create_config('kube', { 'etcd': 3, 'kube': 3 }); - -azure.run_task_queue([ - azure.queue_default_network(), - azure.queue_storage_if_needed(), - azure.queue_machines('etcd', 'stable', - kube.create_etcd_cloud_config), - azure.queue_machines('kube', 'stable', - kube.create_node_cloud_config), -]); diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/destroy-cluster.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/destroy-cluster.js deleted file mode 100755 index ce441e538a5..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/destroy-cluster.js +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env node - -var azure = require('./lib/azure_wrapper.js'); - -azure.destroy_cluster(process.argv[2]); - -console.log('The cluster had been destroyed, you can delete the state file now.'); diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/external_access.png b/release-0.19.0/docs/getting-started-guides/coreos/azure/external_access.png deleted file mode 100644 index 6541309b0ac..00000000000 Binary files a/release-0.19.0/docs/getting-started-guides/coreos/azure/external_access.png and /dev/null differ diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/initial_cluster.png b/release-0.19.0/docs/getting-started-guides/coreos/azure/initial_cluster.png deleted file mode 100644 index 99646a3fd06..00000000000 Binary files a/release-0.19.0/docs/getting-started-guides/coreos/azure/initial_cluster.png and /dev/null differ diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js deleted file mode 100644 index 8f48b25181a..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js +++ /dev/null @@ -1,271 +0,0 @@ -var _ = require('underscore'); - -var fs = require('fs'); -var cp = require('child_process'); - -var yaml = require('js-yaml'); - -var openssl = require('openssl-wrapper'); - -var clr = require('colors'); -var inspect = require('util').inspect; - -var util = require('./util.js'); - -var coreos_image_ids = { - 'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-647.2.0', - 'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-681.0.0', // untested - 'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-695.0.0' // untested -}; - -var conf = {}; - -var hosts = { - collection: [], - ssh_port_counter: 2200, -}; - -var task_queue = []; - -exports.run_task_queue = function (dummy) { - var tasks = { - todo: task_queue, - done: [], - }; - - var pop_task = function() { - 
console.log(clr.yellow('azure_wrapper/task:'), clr.grey(inspect(tasks))); - var ret = {}; - ret.current = tasks.todo.shift(); - ret.remaining = tasks.todo.length; - return ret; - }; - - (function iter (task) { - if (task.current === undefined) { - if (conf.destroying === undefined) { - create_ssh_conf(); - save_state(); - } - return; - } else { - if (task.current.length !== 0) { - console.log(clr.yellow('azure_wrapper/exec:'), clr.blue(inspect(task.current))); - cp.fork('node_modules/azure-cli/bin/azure', task.current) - .on('exit', function (code, signal) { - tasks.done.push({ - code: code, - signal: signal, - what: task.current.join(' '), - remaining: task.remaining, - }); - if (code !== 0 && conf.destroying === undefined) { - console.log(clr.red('azure_wrapper/fail: Exiting due to an error.')); - save_state(); - console.log(clr.cyan('azure_wrapper/info: You probably want to destroy and re-run.')); - process.abort(); - } else { - iter(pop_task()); - } - }); - } else { - iter(pop_task()); - } - } - })(pop_task()); -}; - -var save_state = function () { - var file_name = util.join_output_file_path(conf.name, 'deployment.yml'); - try { - conf.hosts = hosts.collection; - fs.writeFileSync(file_name, yaml.safeDump(conf)); - console.log(clr.yellow('azure_wrapper/info: Saved state into `%s`'), file_name); - } catch (e) { - console.log(clr.red(e)); - } -}; - -var load_state = function (file_name) { - try { - conf = yaml.safeLoad(fs.readFileSync(file_name, 'utf8')); - console.log(clr.yellow('azure_wrapper/info: Loaded state from `%s`'), file_name); - return conf; - } catch (e) { - console.log(clr.red(e)); - } -}; - -var create_ssh_key = function (prefix) { - var opts = { - x509: true, - nodes: true, - newkey: 'rsa:2048', - subj: '/O=Weaveworks, Inc./L=London/C=GB/CN=weave.works', - keyout: util.join_output_file_path(prefix, 'ssh.key'), - out: util.join_output_file_path(prefix, 'ssh.pem'), - }; - openssl.exec('req', opts, function (err, buffer) { - if (err) console.log(clr.red(err)); - fs.chmod(opts.keyout, '0600', function (err) { - if (err) console.log(clr.red(err)); - }); - }); - return { - key: opts.keyout, - pem: opts.out, - } -} - -var create_ssh_conf = function () { - var file_name = util.join_output_file_path(conf.name, 'ssh_conf'); - var ssh_conf_head = [ - "Host *", - "\tHostname " + conf.resources['service'] + ".cloudapp.net", - "\tUser core", - "\tCompression yes", - "\tLogLevel FATAL", - "\tStrictHostKeyChecking no", - "\tUserKnownHostsFile /dev/null", - "\tIdentitiesOnly yes", - "\tIdentityFile " + conf.resources['ssh_key']['key'], - "\n", - ]; - - fs.writeFileSync(file_name, ssh_conf_head.concat(_.map(hosts.collection, function (host) { - return _.template("Host <%= name %>\n\tPort <%= port %>\n")(host); - })).join('\n')); - console.log(clr.yellow('azure_wrapper/info:'), clr.green('Saved SSH config, you can use it like so: `ssh -F ', file_name, '`')); - console.log(clr.yellow('azure_wrapper/info:'), clr.green('The hosts in this deployment are:\n'), _.map(hosts.collection, function (host) { return host.name; })); -}; - -var get_location = function () { - if (process.env['AZ_AFFINITY']) { - return '--affinity-group=' + process.env['AZ_AFFINITY']; - } else if (process.env['AZ_LOCATION']) { - return '--location=' + process.env['AZ_LOCATION']; - } else { - return '--location=West Europe'; - } -} -var get_vm_size = function () { - if (process.env['AZ_VM_SIZE']) { - return '--vm-size=' + process.env['AZ_VM_SIZE']; - } else { - return '--vm-size=Small'; - } -} - -exports.queue_default_network 
= function () { - task_queue.push([ - 'network', 'vnet', 'create', - get_location(), - '--address-space=172.16.0.0', - conf.resources['vnet'], - ]); -} - -exports.queue_storage_if_needed = function() { - if (!process.env['AZURE_STORAGE_ACCOUNT']) { - conf.resources['storage_account'] = util.rand_suffix; - task_queue.push([ - 'storage', 'account', 'create', - '--type=LRS', - get_location(), - conf.resources['storage_account'], - ]); - process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account']; - } else { - // Preserve it for resizing, so we don't create a new one by accedent, - // when the environment variable is unset - conf.resources['storage_account'] = process.env['AZURE_STORAGE_ACCOUNT']; - } -}; - -exports.queue_machines = function (name_prefix, coreos_update_channel, cloud_config_creator) { - var x = conf.nodes[name_prefix]; - var vm_create_base_args = [ - 'vm', 'create', - get_location(), - get_vm_size(), - '--connect=' + conf.resources['service'], - '--virtual-network-name=' + conf.resources['vnet'], - '--no-ssh-password', - '--ssh-cert=' + conf.resources['ssh_key']['pem'], - ]; - - var cloud_config = cloud_config_creator(x, conf); - - var next_host = function (n) { - hosts.ssh_port_counter += 1; - var host = { name: util.hostname(n, name_prefix), port: hosts.ssh_port_counter }; - if (cloud_config instanceof Array) { - host.cloud_config_file = cloud_config[n]; - } else { - host.cloud_config_file = cloud_config; - } - hosts.collection.push(host); - return _.map([ - "--vm-name=<%= name %>", - "--ssh=<%= port %>", - "--custom-data=<%= cloud_config_file %>", - ], function (arg) { return _.template(arg)(host); }); - }; - - task_queue = task_queue.concat(_(x).times(function (n) { - if (conf.resizing && n < conf.old_size) { - return []; - } else { - return vm_create_base_args.concat(next_host(n), [ - coreos_image_ids[coreos_update_channel], 'core', - ]); - } - })); -}; - -exports.create_config = function (name, nodes) { - conf = { - name: name, - nodes: nodes, - weave_salt: util.rand_string(), - resources: { - vnet: [name, 'internal-vnet', util.rand_suffix].join('-'), - service: [name, util.rand_suffix].join('-'), - ssh_key: create_ssh_key(name), - } - }; - -}; - -exports.destroy_cluster = function (state_file) { - load_state(state_file); - if (conf.hosts === undefined) { - console.log(clr.red('azure_wrapper/fail: Nothing to delete.')); - process.abort(); - } - - conf.destroying = true; - task_queue = _.map(conf.hosts, function (host) { - return ['vm', 'delete', '--quiet', '--blob-delete', host.name]; - }); - - task_queue.push(['network', 'vnet', 'delete', '--quiet', conf.resources['vnet']]); - task_queue.push(['storage', 'account', 'delete', '--quiet', conf.resources['storage_account']]); - - exports.run_task_queue(); -}; - -exports.load_state_for_resizing = function (state_file, node_type, new_nodes) { - load_state(state_file); - if (conf.hosts === undefined) { - console.log(clr.red('azure_wrapper/fail: Nothing to look at.')); - process.abort(); - } - conf.resizing = true; - conf.old_size = conf.nodes[node_type]; - conf.old_state_file = state_file; - conf.nodes[node_type] += new_nodes; - hosts.collection = conf.hosts; - hosts.ssh_port_counter += conf.hosts.length; - process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account']; -} diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/cloud_config.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/cloud_config.js deleted file mode 100644 index 75cff6cf2db..00000000000 --- 
a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/cloud_config.js +++ /dev/null @@ -1,43 +0,0 @@ -var _ = require('underscore'); -var fs = require('fs'); -var yaml = require('js-yaml'); -var colors = require('colors/safe'); - - -var write_cloud_config_from_object = function (data, output_file) { - try { - fs.writeFileSync(output_file, [ - '#cloud-config', - yaml.safeDump(data), - ].join("\n")); - return output_file; - } catch (e) { - console.log(colors.red(e)); - } -}; - -exports.generate_environment_file_entry_from_object = function (hostname, environ) { - var data = { - hostname: hostname, - environ_array: _.map(environ, function (value, key) { - return [key.toUpperCase(), JSON.stringify(value.toString())].join('='); - }), - }; - - return { - permissions: '0600', - owner: 'root', - content: _.template("<%= environ_array.join('\\n') %>\n")(data), - path: _.template("/etc/weave.<%= hostname %>.env")(data), - }; -}; - -exports.process_template = function (input_file, output_file, processor) { - var data = {}; - try { - data = yaml.safeLoad(fs.readFileSync(input_file, 'utf8')); - } catch (e) { - console.log(colors.red(e)); - } - return write_cloud_config_from_object(processor(_.clone(data)), output_file); -}; diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/deployment_logic/kubernetes.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/deployment_logic/kubernetes.js deleted file mode 100644 index e497a55708d..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/deployment_logic/kubernetes.js +++ /dev/null @@ -1,76 +0,0 @@ -var _ = require('underscore'); -_.mixin(require('underscore.string').exports()); - -var util = require('../util.js'); -var cloud_config = require('../cloud_config.js'); - - -etcd_initial_cluster_conf_self = function (conf) { - var port = '2380'; - - var data = { - nodes: _(conf.nodes.etcd).times(function (n) { - var host = util.hostname(n, 'etcd'); - return [host, [host, port].join(':')].join('=http://'); - }), - }; - - return { - 'name': 'etcd2.service', - 'drop-ins': [{ - 'name': '50-etcd-initial-cluster.conf', - 'content': _.template("[Service]\nEnvironment=ETCD_INITIAL_CLUSTER=<%= nodes.join(',') %>\n")(data), - }], - }; -}; - -etcd_initial_cluster_conf_kube = function (conf) { - var port = '4001'; - - var data = { - nodes: _(conf.nodes.etcd).times(function (n) { - var host = util.hostname(n, 'etcd'); - return 'http://' + [host, port].join(':'); - }), - }; - - return { - 'name': 'apiserver.service', - 'drop-ins': [{ - 'name': '50-etcd-initial-cluster.conf', - 'content': _.template("[Service]\nEnvironment=ETCD_SERVERS=--etcd_servers=<%= nodes.join(',') %>\n")(data), - }], - }; -}; - -exports.create_etcd_cloud_config = function (node_count, conf) { - var input_file = './cloud_config_templates/kubernetes-cluster-etcd-node-template.yml'; - var output_file = util.join_output_file_path('kubernetes-cluster-etcd-nodes', 'generated.yml'); - - return cloud_config.process_template(input_file, output_file, function(data) { - data.coreos.units.push(etcd_initial_cluster_conf_self(conf)); - return data; - }); -}; - -exports.create_node_cloud_config = function (node_count, conf) { - var elected_node = 0; - - var input_file = './cloud_config_templates/kubernetes-cluster-main-nodes-template.yml'; - var output_file = util.join_output_file_path('kubernetes-cluster-main-nodes', 'generated.yml'); - - var make_node_config = function (n) { - return 
cloud_config.generate_environment_file_entry_from_object(util.hostname(n, 'kube'), { - weave_password: conf.weave_salt, - weave_peers: n === elected_node ? "" : util.hostname(elected_node, 'kube'), - breakout_route: util.ipv4([10, 2, 0, 0], 16), - bridge_address_cidr: util.ipv4([10, 2, n, 1], 24), - }); - }; - - return cloud_config.process_template(input_file, output_file, function(data) { - data.write_files = data.write_files.concat(_(node_count).times(make_node_config)); - data.coreos.units.push(etcd_initial_cluster_conf_kube(conf)); - return data; - }); -}; diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/util.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/util.js deleted file mode 100644 index 2c88b8cff35..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/lib/util.js +++ /dev/null @@ -1,33 +0,0 @@ -var _ = require('underscore'); -_.mixin(require('underscore.string').exports()); - -exports.ipv4 = function (ocets, prefix) { - return { - ocets: ocets, - prefix: prefix, - toString: function () { - return [ocets.join('.'), prefix].join('/'); - } - } -}; - -exports.hostname = function hostname (n, prefix) { - return _.template("<%= pre %>-<%= seq %>")({ - pre: prefix || 'core', - seq: _.pad(n, 2, '0'), - }); -}; - -exports.rand_string = function () { - var crypto = require('crypto'); - var shasum = crypto.createHash('sha256'); - shasum.update(crypto.randomBytes(256)); - return shasum.digest('hex'); -}; - - -exports.rand_suffix = exports.rand_string().substring(50); - -exports.join_output_file_path = function(prefix, suffix) { - return './output/' + [prefix, exports.rand_suffix, suffix].join('_'); -}; diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/package.json b/release-0.19.0/docs/getting-started-guides/coreos/azure/package.json deleted file mode 100644 index 2eb45fd03ff..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/package.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "name": "coreos-azure-weave", - "version": "1.0.0", - "description": "Small utility to bring up a woven CoreOS cluster", - "main": "index.js", - "scripts": { - "test": "echo \"Error: no test specified\" && exit 1" - }, - "author": "Ilya Dmitrichenko ", - "license": "Apache 2.0", - "dependencies": { - "azure-cli": "^0.9.2", - "colors": "^1.0.3", - "js-yaml": "^3.2.5", - "openssl-wrapper": "^0.2.1", - "underscore": "^1.7.0", - "underscore.string": "^3.0.2" - } -} diff --git a/release-0.19.0/docs/getting-started-guides/coreos/azure/scale-kubernetes-cluster.js b/release-0.19.0/docs/getting-started-guides/coreos/azure/scale-kubernetes-cluster.js deleted file mode 100755 index f606898874c..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/azure/scale-kubernetes-cluster.js +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env node - -var azure = require('./lib/azure_wrapper.js'); -var kube = require('./lib/deployment_logic/kubernetes.js'); - -azure.load_state_for_resizing(process.argv[2], 'kube', parseInt(process.argv[3] || 1)); - -azure.run_task_queue([ - azure.queue_machines('kube', 'stable', kube.create_node_cloud_config), -]); diff --git a/release-0.19.0/docs/getting-started-guides/coreos/bare_metal_offline.md b/release-0.19.0/docs/getting-started-guides/coreos/bare_metal_offline.md deleted file mode 100644 index 0745215cee6..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/bare_metal_offline.md +++ /dev/null @@ -1,645 +0,0 @@ -# Bare Metal CoreOS with Kubernetes (OFFLINE) -Deploy a CoreOS 
running Kubernetes environment. This particular guide is made to help those in an OFFLINE environment, whether for testing a POC before the real deal, or because you are restricted to being totally offline for your applications. - - -## High Level Design -1. Manage the tftp directory - * /tftpboot/(coreos)(centos)(RHEL) - * /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file -2. Update per install the link for pxelinux -3. Update the DHCP config to reflect the host needing deployment -4. Setup nodes to deploy CoreOS, creating an etcd cluster. -5. Have no access to the public [etcd discovery tool](https://discovery.etcd.io/). -6. Install the CoreOS slaves to become Kubernetes minions. - -## Prerequisites -1. Installed *CentOS 6* for the PXE server -2. At least two bare metal nodes to work with - -## This Guide's Variables -| Node Description | MAC | IP | -| :---------------------------- | :---------------: | :---------: | -| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 | -| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 | -| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 | - - -## Setup PXELINUX CentOS -To set up a CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version. - -1. Install the packages needed on CentOS - - sudo yum install tftp-server dhcp syslinux - -2. ```vi /etc/xinetd.d/tftp``` to enable the tftp service and change disable to 'no' - disable = no - -3. Copy over the syslinux images we will need. - - su - - mkdir -p /tftpboot - cd /tftpboot - cp /usr/share/syslinux/pxelinux.0 /tftpboot - cp /usr/share/syslinux/menu.c32 /tftpboot - cp /usr/share/syslinux/memdisk /tftpboot - cp /usr/share/syslinux/mboot.c32 /tftpboot - cp /usr/share/syslinux/chain.c32 /tftpboot - - /sbin/service dhcpd start - /sbin/service xinetd start - /sbin/chkconfig tftp on - -4. Setup the default boot menu - - mkdir /tftpboot/pxelinux.cfg - touch /tftpboot/pxelinux.cfg/default - -5. Edit the menu ```vi /tftpboot/pxelinux.cfg/default``` - - default menu.c32 - prompt 0 - timeout 15 - ONTIMEOUT local - display boot.msg - - MENU TITLE Main Menu - - LABEL local - MENU LABEL Boot local hard drive - LOCALBOOT 0 - -Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers. - -## Adding CoreOS to PXE -This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment. - -1. Find or create the TFTP root directory that everything will be based off of. - * For this document we will assume ```/tftpboot/``` is our root directory. -2. Once we know and have our tftp root directory we will create a new directory structure for our CoreOS images. -3. Download the CoreOS PXE files provided by the CoreOS team. - - MY_TFTPROOT_DIR=/tftpboot - mkdir -p $MY_TFTPROOT_DIR/images/coreos/ - cd $MY_TFTPROOT_DIR/images/coreos/ - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig - gpg --verify coreos_production_pxe.vmlinuz.sig - gpg --verify coreos_production_pxe_image.cpio.gz.sig - -4. 
Edit the menu ```vi /tftpboot/pxelinux.cfg/default``` again - - default menu.c32 - prompt 0 - timeout 300 - ONTIMEOUT local - display boot.msg - - MENU TITLE Main Menu - - LABEL local - MENU LABEL Boot local hard drive - LOCALBOOT 0 - - MENU BEGIN CoreOS Menu - - LABEL coreos-master - MENU LABEL CoreOS Master - KERNEL images/coreos/coreos_production_pxe.vmlinuz - APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///pxe-cloud-config-single-master.yml - - LABEL coreos-slave - MENU LABEL CoreOS Slave - KERNEL images/coreos/coreos_production_pxe.vmlinuz - APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///pxe-cloud-config-slave.yml - MENU END - -This configuration file will now boot from the local drive by default but also offer the option to PXE image CoreOS. - -## DHCP configuration -This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images. - -1. Add the ```filename``` to the _host_ or _subnet_ sections. - - filename "/tftpboot/pxelinux.0"; - -2. At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments. - - subnet 10.20.30.0 netmask 255.255.255.0 { - next-server 10.20.30.242; - option broadcast-address 10.20.30.255; - filename ""; - - ... - # http://www.syslinux.org/wiki/index.php/PXELINUX - host core_os_master { - hardware ethernet d0:00:67:13:0d:00; - option routers 10.20.30.1; - fixed-address 10.20.30.40; - option domain-name-servers 10.20.30.242; - filename "/pxelinux.0"; - } - host core_os_slave { - hardware ethernet d0:00:67:13:0d:01; - option routers 10.20.30.1; - fixed-address 10.20.30.41; - option domain-name-servers 10.20.30.242; - filename "/pxelinux.0"; - } - host core_os_slave2 { - hardware ethernet d0:00:67:13:0d:02; - option routers 10.20.30.1; - fixed-address 10.20.30.42; - option domain-name-servers 10.20.30.242; - filename "/pxelinux.0"; - } - ... - } - -We will be specifying the node configuration later in the guide. - -# Kubernetes -To deploy our configuration we need to create an ```etcd``` master. To do so we want to PXE boot CoreOS with a specific cloud-config.yml. There are two options here: -1. Template the cloud config file and programmatically create new static configs for different cluster setups. -2. Run a service discovery protocol in our stack to do auto discovery. - -For this demo we just make a single static ```etcd``` server to host our Kubernetes and ```etcd``` master servers. - -Since we are OFFLINE, most of the helper processes in CoreOS and Kubernetes are limited. For our setup we therefore have to download the Kubernetes binaries and serve them up from our local environment. - -An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines. - -To get this up and running we are going to set up a simple ```apache``` server to serve the binaries needed to bootstrap Kubernetes. 
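If ```httpd``` is not already present on the PXE/DHCP host, installing and starting it on CentOS 6 looks roughly like the sketch below (package and service names assume the stock CentOS ```httpd``` package serving ```/var/www/html```):

```sh
# Install Apache and make sure it starts on boot (CentOS 6 / SysV style)
sudo yum install -y httpd
sudo chkconfig httpd on
sudo service httpd start
```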
- -This is on the PXE server from the previous section: - - rm /etc/httpd/conf.d/welcome.conf - cd /var/www/html/ - wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64 - wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate - wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate - -This sets up our binaries we need to run Kubernetes. This would need to be enhanced to download from the Internet for updates in the future. - -Now for the good stuff! - -## Cloud Configs -The following config files are tailored for the OFFLINE version of a Kubernetes deployment. - -These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml) - -To make the setup work, you need to replace a few placeholders: - - - Replace `` with your PXE server ip address (e.g. 10.20.30.242) - - Replace `` with the kubernetes master ip address (e.g. 10.20.30.40) - - If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name. - - If you use a proxy, replace `rproxy.example.com` with your proxy server (and port) - - Add your own SSH public key(s) to the cloud config at the end - -### master.yml -On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-cloud-config-master.yml```. - - - #cloud-config - --- - write_files: - - path: /opt/bin/waiter.sh - owner: root - content: | - #! /usr/bin/bash - until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done - - path: /opt/bin/kubernetes-download.sh - owner: root - permissions: 0755 - content: | - #! /usr/bin/bash - /usr/bin/wget -N -P "/opt/bin" "http:///kubectl" - /usr/bin/wget -N -P "/opt/bin" "http:///kubernetes" - /usr/bin/wget -N -P "/opt/bin" "http:///kubecfg" - chmod +x /opt/bin/* - - path: /etc/profile.d/opt-path.sh - owner: root - permissions: 0755 - content: | - #! 
/usr/bin/bash - PATH=$PATH/opt/bin - coreos: - units: - - name: 10-eno1.network - runtime: true - content: | - [Match] - Name=eno1 - [Network] - DHCP=yes - - name: 20-nodhcp.network - runtime: true - content: | - [Match] - Name=en* - [Network] - DHCP=none - - name: get-kube-tools.service - runtime: true - command: start - content: | - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStart=/opt/bin/kubernetes-download.sh - RemainAfterExit=yes - Type=oneshot - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: etcd.service - command: start - content: | - [Unit] - Description=etcd - Requires=setup-network-environment.service - After=setup-network-environment.service - [Service] - EnvironmentFile=/etc/network-environment - User=etcd - PermissionsStartOnly=true - ExecStart=/usr/bin/etcd \ - --name ${DEFAULT_IPV4} \ - --addr ${DEFAULT_IPV4}:4001 \ - --bind-addr 0.0.0.0 \ - --cluster-active-size 1 \ - --data-dir /var/lib/etcd \ - --http-read-timeout 86400 \ - --peer-addr ${DEFAULT_IPV4}:7001 \ - --snapshot true - Restart=always - RestartSec=10s - - name: fleet.socket - command: start - content: | - [Socket] - ListenStream=/var/run/fleet.sock - - name: fleet.service - command: start - content: | - [Unit] - Description=fleet daemon - Wants=etcd.service - After=etcd.service - Wants=fleet.socket - After=fleet.socket - [Service] - Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001" - Environment="FLEET_METADATA=role=master" - ExecStart=/usr/bin/fleetd - Restart=always - RestartSec=10s - - name: etcd-waiter.service - command: start - content: | - [Unit] - Description=etcd waiter - Wants=network-online.target - Wants=etcd.service - After=etcd.service - After=network-online.target - Before=flannel.service - Before=setup-network-environment.service - [Service] - ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh - ExecStart=/usr/bin/bash /opt/bin/waiter.sh - RemainAfterExit=true - Type=oneshot - - name: flannel.service - command: start - content: | - [Unit] - Wants=etcd-waiter.service - After=etcd-waiter.service - Requires=etcd.service - After=etcd.service - After=network-online.target - Wants=network-online.target - Description=flannel is an etcd backed overlay network for containers - [Service] - Type=notify - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///flanneld - ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld - ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}' - ExecStart=/opt/bin/flanneld - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd.service - After=etcd.service - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --address=0.0.0.0 \ - --port=8080 \ - 
--service-cluster-ip-range=10.100.0.0/16 \ - --etcd_servers=http://127.0.0.1:4001 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-register.service - command: start - content: | - [Unit] - Description=Kubernetes Registration Service - Documentation=https://github.com/kelseyhightower/kube-register - Requires=kube-apiserver.service - After=kube-apiserver.service - Requires=fleet.service - After=fleet.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-register - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register - ExecStart=/opt/bin/kube-register \ - --metadata=role=node \ - --fleet-endpoint=unix:///var/run/fleet.sock \ - --healthz-port=10248 \ - --api-endpoint=http://127.0.0.1:8080 - Restart=always - RestartSec=10 - update: - group: stable - reboot-strategy: off - ssh_authorized_keys: - - ssh-rsa AAAAB3NzaC1yc2EAAAAD... - - -### node.yml -On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-cloud-config-slave.yml```. 
- - #cloud-config - --- - write_files: - - path: /etc/default/docker - content: | - DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"' - coreos: - units: - - name: 10-eno1.network - runtime: true - content: | - [Match] - Name=eno1 - [Network] - DHCP=yes - - name: 20-nodhcp.network - runtime: true - content: | - [Match] - Name=en* - [Network] - DHCP=none - - name: etcd.service - mask: true - - name: docker.service - drop-ins: - - name: 50-insecure-registry.conf - content: | - [Service] - Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com" - - name: fleet.service - command: start - content: | - [Unit] - Description=fleet daemon - Wants=fleet.socket - After=fleet.socket - [Service] - Environment="FLEET_ETCD_SERVERS=http://:4001" - Environment="FLEET_METADATA=role=node" - ExecStart=/usr/bin/fleetd - Restart=always - RestartSec=10s - - name: flannel.service - command: start - content: | - [Unit] - After=network-online.target - Wants=network-online.target - Description=flannel is an etcd backed overlay network for containers - [Service] - Type=notify - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///flanneld - ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld - ExecStart=/opt/bin/flanneld -etcd-endpoints http://:4001 - - name: docker.service - command: start - content: | - [Unit] - After=flannel.service - Wants=flannel.service - Description=Docker Application Container Engine - Documentation=http://docs.docker.io - [Service] - EnvironmentFile=-/etc/default/docker - EnvironmentFile=/run/flannel/subnet.env - ExecStartPre=/bin/mount --make-rprivate / - ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS} - [Install] - WantedBy=multi-user.target - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - ExecStart=/opt/bin/kube-proxy \ - --etcd_servers=http://:4001 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - [Service] - EnvironmentFile=/etc/network-environment - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - ExecStart=/opt/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=${DEFAULT_IPV4} \ - --api_servers=:8080 \ - --healthz_bind_address=0.0.0.0 \ - --healthz_port=10248 \ - --logtostderr=true - Restart=always - 
RestartSec=10 - update: - group: stable - reboot-strategy: off - ssh_authorized_keys: - - ssh-rsa AAAAB3NzaC1yc2EAAAAD... - - -## New pxelinux.cfg file -Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-slave``` - - default coreos - prompt 1 - timeout 15 - - display boot.msg - - label coreos - menu default - kernel images/coreos/coreos_production_pxe.vmlinuz - append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0 - -And one for the _master_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-master``` - - default coreos - prompt 1 - timeout 15 - - display boot.msg - - label coreos - menu default - kernel images/coreos/coreos_production_pxe.vmlinuz - append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0 - -## Specify the pxelinux targets -Now that we have our new targets set up for master and slave, we want to point the specific hosts at those targets. We will do this by using the pxelinux mechanism of mapping a specific MAC address to a specific pxelinux.cfg file. - -Refer to the MAC address table at the beginning of this guide. More detailed documentation can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX). - - cd /tftpboot/pxelinux.cfg - ln -s coreos-node-master 01-d0-00-67-13-0d-00 - ln -s coreos-node-slave 01-d0-00-67-13-0d-01 - ln -s coreos-node-slave 01-d0-00-67-13-0d-02 - - -Reboot these servers to get the images PXEd and ready for running containers! - -## Creating a test pod -Now that CoreOS with Kubernetes is up and running, let's spin up some Kubernetes pods to demonstrate the system. - -See [a simple nginx example](../../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../../examples). 
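As a quick smoke test before digging into the debugging commands below, you can poke the master directly; this is only a sketch: the master address comes from the table at the top of this guide, and with the v0.15.0 binaries downloaded earlier the `run` subcommand may instead be spelled `run-container`.

```sh
# Ask the apiserver on the master for the registered minions
kubectl -s http://10.20.30.40:8080 get minions
# Start a throwaway nginx pod; assumes the nginx image is mirrored on your
# private registry or pre-loaded, since this environment is offline
kubectl -s http://10.20.30.40:8080 run nginx --image=nginx --port=80
kubectl -s http://10.20.30.40:8080 get pods
```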
- -## Helping commands for debugging - -List all keys in etcd: - - etcdctl ls --recursive - -List fleet machines - - fleetctl list-machines - -Check system status of services on master node: - - systemctl status kube-apiserver - systemctl status kube-controller-manager - systemctl status kube-scheduler - systemctl status kube-register - -Check system status of services on a minion node: - - systemctl status kube-kubelet - systemctl status docker.service - -List Kubernetes - - kubectl get pods - kubectl get minions - - -Kill all pods: - - for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/master.yaml b/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/master.yaml deleted file mode 100644 index af7247414b3..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/master.yaml +++ /dev/null @@ -1,180 +0,0 @@ -#cloud-config - ---- -hostname: master -coreos: - etcd2: - name: master - listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 - initial-cluster-token: k8s_etcd - listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 - initial-advertise-peer-urls: http://$private_ipv4:2380 - initial-cluster: master=http://$private_ipv4:2380 - initial-cluster-state: new - fleet: - metadata: "role=master" - units: - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: fleet.service - command: start - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Unit] - Requires=etcd2.service - [Service] - ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker-cache.service - command: start - content: | - [Unit] - Description=Docker cache proxy - Requires=early-docker.service - After=early-docker.service - Before=early-docker.target - - [Service] - Restart=always - TimeoutStartSec=0 - RestartSec=5 - Environment="TMPDIR=/var/tmp/" - Environment="DOCKER_HOST=unix:///var/run/early-docker.sock" - ExecStartPre=-/usr/bin/docker kill docker-registry - ExecStartPre=-/usr/bin/docker rm docker-registry - ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest - # GUNICORN_OPTS is an workaround for - # https://github.com/docker/docker-registry/issues/892 - ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \ - -e STANDALONE=false \ - -e GUNICORN_OPTS=[--preload] \ - -e 
MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io \ - -e MIRROR_TAGS_CACHE_TTL=1800 \ - quay.io/devops/docker-registry:latest - - name: docker.service - content: | - [Unit] - Description=Docker Application Container Engine - Documentation=http://docs.docker.com - After=docker.socket early-docker.target network.target - Requires=docker.socket early-docker.target - - [Service] - Environment=TMPDIR=/var/tmp - EnvironmentFile=-/run/flannel_docker_opts.env - EnvironmentFile=/etc/network-environment - MountFlags=slave - LimitNOFILE=1048576 - LimitNPROC=1048576 - ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// --registry-mirror=http://${DEFAULT_IPV4}:5000 $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ - - [Install] - WantedBy=multi-user.target - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - # making sure that docker-cache is up and that flanneld finished - # startup, otherwise containers won't land in flannel's network... - Requires=docker-cache.service flanneld.service - After=docker-cache.service flanneld.service - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service setup-network-environment.service - After=etcd2.service setup-network-environment.service - - [Service] - EnvironmentFile=/etc/network-environment - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v0.18.0/bin/linux/amd64/kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --allow_privileged=true \ - --insecure_bind_address=0.0.0.0 \ - --insecure_port=8080 \ - --kubelet_https=true \ - --secure_port=6443 \ - --service-cluster-ip-range=10.100.0.0/16 \ - --etcd_servers=http://127.0.0.1:4001 \ - --public_address_override=${DEFAULT_IPV4} \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v0.18.0/bin/linux/amd64/kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v0.18.0/bin/linux/amd64/kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-register.service - command: start - content: | - [Unit] - Description=Kubernetes Registration Service - Documentation=https://github.com/kelseyhightower/kube-register - 
Requires=kube-apiserver.service - After=kube-apiserver.service - Requires=fleet.service - After=fleet.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-register -z /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64 - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register - ExecStart=/opt/bin/kube-register \ - --metadata=role=node \ - --fleet-endpoint=unix:///var/run/fleet.sock \ - --api-endpoint=http://127.0.0.1:8080 \ - --healthz-port=10248 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/node.yaml b/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/node.yaml deleted file mode 100644 index 0668a7e8bdd..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/node.yaml +++ /dev/null @@ -1,105 +0,0 @@ -#cloud-config -write-files: - - path: /opt/bin/wupiao - permissions: '0755' - content: | - #!/bin/bash - # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen - [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \ - --silent --head --fail \ - http://${1}:${2}; do sleep 1 && echo -n .; done; - exit $? -coreos: - etcd2: - listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - initial-cluster: master=http://:2380 - proxy: on - fleet: - metadata: "role=node" - units: - - name: fleet.service - command: start - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Unit] - Requires=etcd2.service - [Service] - ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker.service - command: start - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - Requires=flanneld.service - After=flanneld.service - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://:5000' - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v0.18.0/bin/linux/amd64/kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kube-proxy \ - --master=:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - 
Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - - [Service] - EnvironmentFile=/etc/network-environment - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v0.18.0/bin/linux/amd64/kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=${DEFAULT_IPV4} \ - --api_servers=:8080 \ - --allow_privileged=true \ - --logtostderr=true \ - --healthz_bind_address=0.0.0.0 \ - --healthz_port=10248 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/standalone.yaml b/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/standalone.yaml deleted file mode 100644 index a37b05e37d3..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/cloud-configs/standalone.yaml +++ /dev/null @@ -1,168 +0,0 @@ -#cloud-config - ---- -hostname: master -coreos: - etcd2: - name: master - listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - initial-cluster-token: k8s_etcd - listen-peer-urls: http://0.0.0.0:2380,http://0.0.0.0:7001 - initial-advertise-peer-urls: http://0.0.0.0:2380 - initial-cluster: master=http://0.0.0.0:2380 - initial-cluster-state: new - units: - - name: etcd2.service - command: start - - name: fleet.service - command: start - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Unit] - Requires=etcd2.service - [Service] - ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker-cache.service - command: start - content: | - [Unit] - Description=Docker cache proxy - Requires=early-docker.service - After=early-docker.service - Before=early-docker.target - - [Service] - Restart=always - TimeoutStartSec=0 - RestartSec=5 - Environment="TMPDIR=/var/tmp/" - Environment="DOCKER_HOST=unix:///var/run/early-docker.sock" - ExecStartPre=-/usr/bin/docker kill docker-registry - ExecStartPre=-/usr/bin/docker rm docker-registry - ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest - # GUNICORN_OPTS is an workaround for - # https://github.com/docker/docker-registry/issues/892 - ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \ - -e STANDALONE=false \ - -e GUNICORN_OPTS=[--preload] \ - -e MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io \ - -e MIRROR_TAGS_CACHE_TTL=1800 \ - quay.io/devops/docker-registry:latest - - name: docker.service - command: start - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - # making sure that docker-cache is up and that flanneld finished - # startup, otherwise containers won't land in flannel's network... 
- Requires=docker-cache.service flanneld.service - After=docker-cache.service flanneld.service - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://$private_ipv4:5000' - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service - After=etcd2.service - - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --allow_privileged=true \ - --insecure_bind_address=0.0.0.0 \ - --insecure_port=8080 \ - --kubelet_https=true \ - --secure_port=6443 \ - --service-cluster-ip-range=10.100.0.0/16 \ - --etcd_servers=http://127.0.0.1:4001 \ - --public_address_override=127.0.0.1 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --machines=127.0.0.1 \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service - After=etcd2.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - ExecStart=/opt/bin/kube-proxy \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service - After=etcd2.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - ExecStart=/opt/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=127.0.0.1 \ - --api_servers=127.0.0.1:8080 \ - --allow_privileged=true \ - --logtostderr=true \ - --healthz_bind_address=0.0.0.0 \ - --healthz_port=10248 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git 
a/release-0.19.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md b/release-0.19.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md deleted file mode 100644 index 5ac05ebb7b9..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md +++ /dev/null @@ -1,142 +0,0 @@ -# CoreOS Multinode Cluster - -Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster. - -> **Attention**: This requires at least CoreOS version **[653.0.0][coreos653]**, as this was the first release to include etcd2. - -[coreos653]: https://coreos.com/releases/#653.0.0 - -## Overview - -* Provision the master node -* Capture the master node private IP address -* Edit node.yaml -* Provision one or more worker nodes - -### AWS - -*Attention:* Replace `````` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). - -#### Provision the Master - -``` -aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group" -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes -``` - -``` -aws ec2 run-instances \ ---image-id \ ---key-name \ ---region us-west-2 \ ---security-groups kubernetes \ ---instance-type m3.medium \ ---user-data file://master.yaml -``` - -#### Capture the private IP address - -``` -aws ec2 describe-instances --instance-id -``` - -#### Edit node.yaml - -Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. - -#### Provision worker nodes - -``` -aws ec2 run-instances \ ---count 1 \ ---image-id \ ---key-name \ ---region us-west-2 \ ---security-groups kubernetes \ ---instance-type m3.medium \ ---user-data file://node.yaml -``` - -### GCE - -*Attention:* Replace `````` below for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/). - -#### Provision the Master - -``` -gcloud compute instances create master \ ---image-project coreos-cloud \ ---image \ ---boot-disk-size 200GB \ ---machine-type n1-standard-1 \ ---zone us-central1-a \ ---metadata-from-file user-data=master.yaml -``` - -#### Capture the private IP address - -``` -gcloud compute instances list -``` - -#### Edit node.yaml - -Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. - -#### Provision worker nodes - -``` -gcloud compute instances create node1 \ ---image-project coreos-cloud \ ---image \ ---boot-disk-size 200GB \ ---machine-type n1-standard-1 \ ---zone us-central1-a \ ---metadata-from-file user-data=node.yaml -``` - -#### Establish network connectivity - -Next, setup an ssh tunnel to the master so you can run kubectl from your local host. -In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second -run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`. 
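With the tunnel from the previous step in place, you can verify from your local host that the workers registered with the master; a minimal check (assuming `kubectl` is installed locally):

```sh
# The -L tunnel forwards local port 8080 to the master's insecure apiserver port
kubectl -s http://localhost:8080 get nodes
```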
- -### VMware Fusion - -#### Create the master config-drive - -``` -mkdir -p /tmp/new-drive/openstack/latest/ -cp master.yaml /tmp/new-drive/openstack/latest/user_data -hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o master.iso /tmp/new-drive -``` - -#### Provision the Master - -Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `master.iso` as a config drive. - -#### Capture the master private IP address - -#### Edit node.yaml - -Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. - -#### Create the node config-drive - -``` -mkdir -p /tmp/new-drive/openstack/latest/ -cp node.yaml /tmp/new-drive/openstack/latest/user_data -hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o node.iso /tmp/new-drive -``` - -#### Provision worker nodes - -Boot one or more the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `node.iso` as a config drive. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md b/release-0.19.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md deleted file mode 100644 index 5bb0b555080..00000000000 --- a/release-0.19.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md +++ /dev/null @@ -1,66 +0,0 @@ -# CoreOS - Single Node Kubernetes Cluster - -Use the [standalone.yaml](cloud-configs/standalone.yaml) cloud-config to provision a single node Kubernetes cluster. - -> **Attention**: This requires at least CoreOS version **[653.0.0][coreos653]**, as this was the first release to include etcd2. - -[coreos653]: https://coreos.com/releases/#653.0.0 - -### CoreOS image versions - -### AWS - -``` -aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group" -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes -``` - -*Attention:* Replace `````` bellow for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). - -``` -aws ec2 run-instances \ ---image-id \ ---key-name \ ---region us-west-2 \ ---security-groups kubernetes \ ---instance-type m3.medium \ ---user-data file://standalone.yaml -``` - -### GCE - -*Attention:* Replace `````` bellow for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/). - -``` -gcloud compute instances create standalone \ ---image-project coreos-cloud \ ---image \ ---boot-disk-size 200GB \ ---machine-type n1-standard-1 \ ---zone us-central1-a \ ---metadata-from-file user-data=standalone.yaml -``` - -Next, setup an ssh tunnel to the instance so you can run kubectl from your local host. -In one terminal, run `gcloud compute ssh standalone --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second -run `gcloud compute ssh standalone --ssh-flag="-R 8080:127.0.0.1:8080"`. - - -### VMware Fusion - -Create a [config-drive](https://coreos.com/docs/cluster-management/setup/cloudinit-config-drive) ISO. 
- -``` -mkdir -p /tmp/new-drive/openstack/latest/ -cp standalone.yaml /tmp/new-drive/openstack/latest/user_data -hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o standalone.iso /tmp/new-drive -``` - -Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using the `standalone.iso` as a config drive. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/docker-multinode.md b/release-0.19.0/docs/getting-started-guides/docker-multinode.md deleted file mode 100644 index deb7b281821..00000000000 --- a/release-0.19.0/docs/getting-started-guides/docker-multinode.md +++ /dev/null @@ -1,51 +0,0 @@ -### Running Multi-Node Kubernetes Using Docker - -_Note_: -These instructions are somewhat significantly more advanced than the [single node](docker.md) instructions. If you are -interested in just starting to explore Kubernetes, we recommend that you start there. - -## Table of Contents - * [Overview](#overview) - * [Installing the master node](#master-node) - * [Installing a worker node](#adding-a-worker-node) - * [Testing your cluster](#testing-your-cluster) - -## Overview -This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work -and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of -times to create larger clusters. - -Here's a diagram of what the final result will look like: -![Kubernetes Single Node on Docker](k8s-docker.png) - -### Bootstrap Docker -This guide also uses a pattern of running two instances of the Docker daemon - 1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd``` - 2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers - -This pattern is necessary because the ```flannel``` daemon is responsible for setting up and managing the network that interconnects -all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However, -it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this. - -## Master Node -The first step in the process is to initialize the master node. - -See [here](docker-multinode/master.md) for detailed instructions. - -## Adding a worker node - -Once your master is up and running you can add one or more workers on different machines. - -See [here](docker-multinode/worker.md) for detailed instructions. 
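Once both daemons are up (see the master and worker pages linked above), a quick way to sanity-check the bootstrap pattern is to list containers on each daemon separately; a sketch, assuming the bootstrap socket path used in those pages:

```sh
# System-level containers (etcd, flannel) run on the bootstrap daemon
sudo docker -H unix:///var/run/docker-bootstrap.sock ps
# Kubernetes components and user pods run on the main daemon
sudo docker ps
```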
- -## Testing your cluster - -Once your cluster has been created you can [test it out](docker-multinode/testing.md) - -For more complete applications, please look in the [examples directory](../../examples) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/docker-multinode.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/docker-multinode/master.md b/release-0.19.0/docs/getting-started-guides/docker-multinode/master.md deleted file mode 100644 index 64c93124803..00000000000 --- a/release-0.19.0/docs/getting-started-guides/docker-multinode/master.md +++ /dev/null @@ -1,149 +0,0 @@ -## Installing a Kubernetes Master Node via Docker -We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}``` - -There are two main phases to installing the master: - * [Setting up ```flanneld``` and ```etcd```](#setting-up-flanneld-and-etcd) - * [Starting the Kubernetes master components](#starting-the-kubernetes-master) - - -## Setting up flanneld and etcd - -### Setup Docker-Bootstrap -We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of -Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with -```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system. - -Run: -```sh -sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' -``` - -_Important Note_: -If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted -across reboots and failures. - - -### Startup etcd for flannel and the API server to use -Run: -``` -sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data -``` - -Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using: - -```sh -sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.9 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }' -``` - - -### Set up Flannel on the master node -Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplfied networking between our Pods of containers. - -Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker. - -#### Bring down Docker -To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker. - -Turning down Docker is system dependent, it may be: - -```sh -sudo /etc/init.d/docker stop -``` - -or - -```sh -sudo systemctl stop docker -``` - -or it may be something else. 
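If you are not sure which init system manages Docker on your machine, a small probe along these lines (a sketch assuming a systemd or SysV-style host) can pick the right stop command:

```sh
# Prefer systemd if it knows about docker.service, otherwise fall back to SysV init
if command -v systemctl >/dev/null 2>&1 && systemctl list-unit-files | grep -q '^docker\.service'; then
  sudo systemctl stop docker
else
  sudo /etc/init.d/docker stop
fi
```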
- -#### Run flannel - -Now run flanneld itself: -```sh -sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0 -``` - -The previous command should have printed a really long hash, copy this hash. - -Now get the subnet settings from flannel: -``` -sudo docker -H unix:///var/run/docker-bootstrap.sock exec cat /run/flannel/subnet.env -``` - -#### Edit the docker configuration -You now need to edit the docker configuration to activate new flags. Again, this is system specific. - -This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere. - -Regardless, you need to add the following to the docker comamnd line: -```sh ---bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -``` - -#### Remove the existing Docker bridge -Docker creates a bridge named ```docker0``` by default. You need to remove this: - -```sh -sudo /sbin/ifconfig docker0 down -sudo brctl delbr docker0 -``` - -You may need to install the ```bridge-utils``` package for the ```brctl``` binary. - -#### Restart Docker -Again this is system dependent, it may be: - -```sh -sudo /etc/init.d/docker start -``` - -it may be: -```sh -systemctl start docker -``` - -## Starting the Kubernetes Master -Ok, now that your networking is set up, you can startup Kubernetes, this is the same as the single-node case, we will use the "main" instance of the Docker daemon for the Kubernetes components. - -```sh -sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.17.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi -``` - -### Also run the service proxy -```sh -sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.17.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 -``` - -### Test it out -At this point, you should have a functioning 1-node cluster. Let's test it out! - -Download the kubectl binary -([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/darwin/amd64/kubectl)) -([linux](http://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl)) - -List the nodes - -```sh -kubectl get nodes -``` - -This should print: -``` -NAME LABELS STATUS -127.0.0.1 Ready -``` - -If the status of the node is ```NotReady``` or ```Unknown``` please check that all of the containers you created are successfully running. -If all else fails, ask questions on IRC at #google-containers. - - -### Next steps -Move on to [adding one or more workers](worker.md) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/master.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/docker-multinode/master.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/docker-multinode/testing.md b/release-0.19.0/docs/getting-started-guides/docker-multinode/testing.md deleted file mode 100644 index 9781e125852..00000000000 --- a/release-0.19.0/docs/getting-started-guides/docker-multinode/testing.md +++ /dev/null @@ -1,63 +0,0 @@ -## Testing your Kubernetes cluster. 
- -To validate that your node(s) have been added, run: - -```sh -kubectl get nodes -``` - -That should show something like: -``` -NAME LABELS STATUS -10.240.99.26 Ready -127.0.0.1 Ready -``` - -If the status of any node is ```Unknown``` or ```NotReady``` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on IRC at -```#google-containers``` for advice. - -### Run an application -```sh -kubectl -s http://localhost:8080 run nginx --image=nginx --port=80 -``` - -now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled. - -### Expose it as a service: -```sh -kubectl expose rc nginx --port=80 -``` - -This should print: -``` -NAME LABELS SELECTOR IP PORT(S) -nginx run=nginx 80/TCP -``` - -Hit the webserver: -```sh -curl -``` - -Note that you will need run this curl command on your boot2docker VM if you are running on OS X. - -### Scaling - -Now try to scale up the nginx you created before: - -```sh -kubectl scale rc nginx --replicas=3 -``` - -And list the pods - -```sh -kubectl get pods -``` - -You should see pods landing on the newly added machine. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/testing.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/docker-multinode/testing.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/docker-multinode/worker.md b/release-0.19.0/docs/getting-started-guides/docker-multinode/worker.md deleted file mode 100644 index d171f3d00e2..00000000000 --- a/release-0.19.0/docs/getting-started-guides/docker-multinode/worker.md +++ /dev/null @@ -1,139 +0,0 @@ -## Adding a Kubernetes worker node via Docker. - -These instructions are very similar to the master set-up above, but they are duplicated for clarity. -You need to repeat these instructions for each node you want to join the cluster. -We will assume that the IP address of this node is ```${NODE_IP}``` and you have the IP address of the master in ```${MASTER_IP}``` that you created in the [master instructions](master.md). - -For each worker node, there are three steps: - * [Set up ```flanneld``` on the worker node](#set-up-flanneld-on-the-worker-node) - * [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node) - * [Add the worker to the cluster](#add-the-node-to-the-cluster) - -### Set up Flanneld on the worker node -As before, the Flannel daemon is going to provide network connectivity. - -#### Set up a bootstrap docker: -As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking. - -Run: -```sh -sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' -``` - -_Important Note_: -If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted -across reboots and failures. - -#### Bring down Docker -To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker. 
- -Turning down Docker is system dependent; it may be: - -```sh -sudo /etc/init.d/docker stop -``` - -or - -```sh -sudo systemctl stop docker -``` - -or it may be something else. - -#### Run flannel - -Now run flanneld itself. This call is slightly different from the above, since we point it at the etcd instance on the master. -```sh -sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001 -``` - -The previous command should have printed a really long hash; copy this hash. - -Now get the subnet settings from flannel: -``` -sudo docker -H unix:///var/run/docker-bootstrap.sock exec cat /run/flannel/subnet.env -``` - - -#### Edit the docker configuration -You now need to edit the docker configuration to activate new flags. Again, this is system specific. - -This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere. - -Regardless, you need to add the following to the docker command line: -```sh ---bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -``` - -#### Remove the existing Docker bridge -Docker creates a bridge named ```docker0``` by default. You need to remove this: - -```sh -sudo /sbin/ifconfig docker0 down -sudo brctl delbr docker0 -``` - -You may need to install the ```bridge-utils``` package for the ```brctl``` binary. - -#### Restart Docker -Again this is system dependent; it may be: - -```sh -sudo /etc/init.d/docker start -``` - -it may be: -```sh -systemctl start docker -``` - -### Start Kubernetes on the worker node -#### Run the kubelet -Again this is similar to the above, but the ```--api_servers``` now points to the master we set up in the beginning. - -```sh -sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.17.0 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i) -``` - -#### Run the service proxy -The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```. - -```sh -sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.17.0 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2 -``` - - -### Add the node to the cluster - -On the master you created above, create a file named ```node.yaml``` and make its contents: - -```yaml -apiVersion: v1 -kind: Node -metadata: - name: ${NODE_IP} -spec: - externalID: ${NODE_IP} -status: - # Fill in appropriate values below - capacity: - cpu: "1" - memory: 3Gi -``` - -Make the API call to add the node. You should do this on the master node that you created above; otherwise you need to add ```-s=http://${MASTER_IP}:8080``` to point ```kubectl``` at the master. 
- -```sh -./kubectl create -f node.yaml -``` - -### Next steps - -Move on to [testing your cluster](testing.md) or [add another node](#adding-a-kubernetes-worker-node-via-docker) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/worker.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/docker-multinode/worker.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/docker.md b/release-0.19.0/docs/getting-started-guides/docker.md deleted file mode 100644 index dd4c3aef558..00000000000 --- a/release-0.19.0/docs/getting-started-guides/docker.md +++ /dev/null @@ -1,87 +0,0 @@ -## Running kubernetes locally via Docker - -The following instructions show you how to set up a simple, single node kubernetes cluster using Docker. - -Here's a diagram of what the final result will look like: -![Kubernetes Single Node on Docker](k8s-singlenode-docker.png) - -### Step One: Run etcd -```sh -docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data -``` - -### Step Two: Run the master -```sh -docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.17.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests -``` - -This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components. - -### Step Three: Run the service proxy -*Note, this could be combined with master above, but it requires --privileged for iptables manipulation* -```sh -docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.17.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 -``` - -### Test it out -At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl -binary -([OS X](https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/darwin/amd64/kubectl)) -([linux](https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl)) - -*Note:* -On OS/X you will need to set up port forwarding via ssh: -```sh -boot2docker ssh -L8080:localhost:8080 -``` - -List the nodes in your cluster by running:: - -```sh -kubectl get nodes -``` - -This should print: -``` -NAME LABELS STATUS -127.0.0.1 Ready -``` - -If you are running different kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster. - -### Run an application -```sh -kubectl -s http://localhost:8080 run nginx --image=nginx --port=80 -``` - -now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled. - -### Expose it as a service: -```sh -kubectl expose rc nginx --port=80 -``` - -This should print: -``` -NAME LABELS SELECTOR IP PORT(S) -nginx run=nginx 80/TCP -``` - -Hit the webserver: -```sh -curl -``` - -Note that you will need run this curl command on your boot2docker VM if you are running on OS X. - -### A note on turning down your cluster -Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down -the cluster, you need to first kill the kubelet container, and then any other containers. 
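One way to respect that ordering is to stop the kubelet container explicitly before sweeping up the rest (a sketch; the `grep kubelet` filter is an assumption about how the container's command appears in `docker ps` output and may need adjusting):

```sh
# Kill the kubelet first so it stops restarting the other containers
docker ps | grep kubelet | awk '{print $1}' | xargs docker kill
# Then kill whatever containers remain
docker ps -q | xargs docker kill
```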
- -You may use ```docker ps -a | awk '{print $1}' | xargs docker kill```, note this removes _all_ containers running under Docker, so use with caution. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/docker.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/fedora/fedora_ansible_config.md b/release-0.19.0/docs/getting-started-guides/fedora/fedora_ansible_config.md deleted file mode 100644 index 6fbf513ee26..00000000000 --- a/release-0.19.0/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ /dev/null @@ -1,239 +0,0 @@ -#Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home). - -Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort. - -Requirements: - -1. Host able to run ansible and able to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible) -2. A Fedora 20+ or RHEL7 host to act as cluster master -3. As many Fedora 20+ or RHEL7 hosts as you would like, that act as cluster minions - -The hosts can be virtual or bare metal. The only requirement to make the ansible network setup work is that all of the machines are connected via the same layer 2 network. - -Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc... This example will use one master and two minions. - -## Architecture of the cluster - -A Kubernetes cluster reqiures etcd, a master, and n minions, so we will create a cluster with three hosts, for example: - -``` - fed1 (master,etcd) = 192.168.121.205 - fed2 (minion) = 192.168.121.84 - fed3 (minion) = 192.168.121.116 -``` - -**Make sure your local machine** - - - has ansible - - has git - -**then we just clone down the kubernetes-ansible repository** - -``` - yum install -y ansible git - git clone https://github.com/eparis/kubernetes-ansible.git - cd kubernetes-ansible -``` - -**Tell ansible about each machine and its role in your cluster.** - -Get the IP addresses from the master and minions. Add those to the `inventory` file (at the root of the repo) on the host running Ansible. - -We will set the kube_ip_addr to '10.254.0.[1-3]', for now. The reason we do this is explained later... It might work for you as a default. - -``` -[masters] -192.168.121.205 - -[etcd] -192.168.121.205 - -[minions] -192.168.121.84 kube_ip_addr=[10.254.0.1] -192.168.121.116 kube_ip_addr=[10.254.0.2] -``` - -**Setup ansible access to your nodes** - -If you already are running on a machine which has passwordless ssh access to the fed[1-3] nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `group_vars/all.yaml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step... - -*Otherwise* setup ssh on the machines like so (you will need to know the root password to all machines in the cluster). - -edit: group_vars/all.yml - -``` -ansible_ssh_user: root -``` - -## Configuring ssh access to the cluster - -If you already have ssh access to every machine using ssh public keys you may skip to [configuring the network](#configuring-the-network) - -**Create a password file.** - -The password file should contain the root password for every machine in the cluster. 
It will be used in order to lay down your ssh public key. Make sure your machines sshd-config allows password logins from root. - -``` -echo "password" > ~/rootpassword -``` - -**Agree to accept each machine's ssh public key** - -After this is completed, ansible is now enabled to ssh into any of the machines you're configuring. - -``` -ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok -``` - -**Push your ssh public key to every machine** - -Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster. -``` -ansible-playbook -i inventory keys.yml -``` - -## Configuring the internal kubernetes network - -If you already have configured your network and docker will use it correctly, skip to [setting up the cluster](#setting-up-the-cluster) - -The ansible scripts are quite hacky configuring the network, you can see the [README](https://github.com/eparis/kubernetes-ansible) for details, or you can simply enter in variants of the 'kube_service_addresses' (in the all.yaml file) as `kube_ip_addr` entries in the minions field, as shown in the next section. - -**Configure the ip addresses which should be used to run pods on each machine** - -The IP address pool used to assign addresses to pods for each minion is the `kube_ip_addr`= option. Choose a /24 to use for each minion and add that to you inventory file. - -For this example, as shown earlier, we can do something like this... - -``` -[minions] -192.168.121.84 kube_ip_addr=10.254.0.1 -192.168.121.116 kube_ip_addr=10.254.0.2 -``` - -**Run the network setup playbook** - -There are two ways to do this: via flannel, or using NetworkManager. - -Flannel is a cleaner mechanism to use, and is the recommended choice. - -- If you are using flannel, you should check the kubernetes-ansible repository above. - -Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run -``` -ansible-playbook -i inventory flannel.yml -``` - -- On the other hand, if using the NetworkManager based setup (i.e. you do not want to use flannel). - -On EACH node, make sure NetworkManager is installed, and the service "NetworkManager" is running, then you can run -the network manager playbook... - -``` -ansible-playbook -i inventory ./old-network-config/hack-network.yml -``` - -## Setting up the cluster - -**Configure the IP addresses used for services** - -Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment. This must be done even if you do not use the network setup provided by the ansible scripts. - -edit: group_vars/all.yml - -``` -kube_service_addresses: 10.254.0.0/16 -``` - -**Tell ansible to get to work!** - -This will finally setup your whole kubernetes cluster for you. - -``` -ansible-playbook -i inventory setup.yml -``` - -## Testing and using your new cluster - -That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster. 
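A quick way to confirm that the minions registered with the master is to list the nodes from the master (a sketch; the output below is only illustrative for the example IPs used in this guide):

```
kubectl get nodes
NAME                LABELS    STATUS
192.168.121.84                Ready
192.168.121.116               Ready
```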
- - -**Show services running on masters and minions.** - -``` -systemctl | grep -i kube -``` - -**Show firewall rules on the masters and minions.** - -``` -iptables -nvL -``` - -**Create the following apache.json file and deploy pod to minion.** - -``` -cat << EOF > apache.json -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "fedoraapache", - "labels": { - "name": "fedoraapache" - } - }, - "spec": { - "containers": [ - { - "name": "fedoraapache", - "image": "fedora/apache", - "ports": [ - { - "hostPort": 80, - "containerPort": 80 - } - ] - } - ] - } -} -EOF - -/usr/bin/kubectl create -f apache.json - -**Testing your new kube cluster** - -``` - -**Check where the pod was created** - -``` -kubectl get pods -``` - -Important : Note that the IP of the pods IP fields are on the network which you created in the kube_ip_addr file. - -In this example, that was the 10.254 network. - -If you see 172 in the IP fields, networking was not setup correctly, and you may want to re run or dive deeper into the way networking is being setup by looking at the details of the networking scripts used above. - -**Check Docker status on minion.** - -``` -docker ps -docker images -``` - -**After the pod is 'Running' Check web server access on the minion** - -``` -curl http://localhost -``` - -That's it ! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/fedora/fedora_manual_config.md b/release-0.19.0/docs/getting-started-guides/fedora/fedora_manual_config.md deleted file mode 100644 index 3140cd63a8b..00000000000 --- a/release-0.19.0/docs/getting-started-guides/fedora/fedora_manual_config.md +++ /dev/null @@ -1,188 +0,0 @@ -##Getting started on [Fedora](http://fedoraproject.org) - -This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc... - -This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious. - -The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker. - -**System Information:** - -Hosts: -``` -fed-master = 192.168.121.9 -fed-node = 192.168.121.65 -``` - -**Prepare the hosts:** - -* Install kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.15.0 but should work with other versions too. 
-* The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive. -* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below. - -``` -yum -y install --enablerepo=updates-testing kubernetes -``` -* Install etcd and iptables - -``` -yum -y install etcd iptables -``` - -* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping. - -``` -echo "192.168.121.9 fed-master -192.168.121.65 fed-node" >> /etc/hosts -``` - -* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain: - -``` -# Comma separated list of nodes in the etcd cluster -KUBE_MASTER="--master=http://fed-master:8080" - -# logging to stderr means we get it in the systemd journal -KUBE_LOGTOSTDERR="--logtostderr=true" - -# journal message level, 0 is debug -KUBE_LOG_LEVEL="--v=0" - -# Should this cluster be allowed to run privileged docker containers -KUBE_ALLOW_PRIV="--allow_privileged=false" -``` - -* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install. - -``` -systemctl disable iptables-services firewalld -systemctl stop iptables-services firewalld -``` - -**Configure the kubernetes services on the master.** - -* Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything. - -``` -# The address on the local server to listen to. -KUBE_API_ADDRESS="--address=0.0.0.0" - -# Comma separated list of nodes in the etcd cluster -KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001" - -# Address range to use for services -KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" - -# Add your own! -KUBE_API_ARGS="" -``` - -* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused" -``` -ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001" -``` - -* Start the appropriate services on master: - -``` -for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -* Addition of nodes: - -* Create following node.json file on kubernetes master node: - -```json -{ - "apiVersion": "v1", - "kind": "Node", - "metadata": { - "name": "fed-node", - "labels":{ "name": "fed-node-label"} - }, - "spec": { - "externalID": "fed-node" - } -} -``` - -Now create a node object internally in your kubernetes cluster by running: - -``` -$ kubectl create -f node.json - -$ kubectl get nodes -NAME LABELS STATUS -fed-node name=fed-node-label Unknown - -``` - -Please note that in the above, it only creates a representation for the node -_fed-node_ internally. It does not provision the actual _fed-node_. 
Also, it -is assumed that _fed-node_ (as specified in `name`) can be resolved and is -reachable from kubernetes master node. This guide will discuss how to provision -a kubernetes node (fed-node) below. - -**Configure the kubernetes services on the node.** - -***We need to configure the kubelet on the node.*** - -* Edit /etc/kubernetes/kubelet to appear as such: - -``` -### -# kubernetes kubelet (node) config - -# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) -KUBELET_ADDRESS="--address=0.0.0.0" - -# You may leave this blank to use the actual hostname -KUBELET_HOSTNAME="--hostname_override=fed-node" - -# location of the api-server -KUBELET_API_SERVER="--api_servers=http://fed-master:8080" - -# Add your own! -#KUBELET_ARGS="" -``` - -* Start the appropriate services on the node (fed-node). - -``` -for SERVICES in kube-proxy kubelet docker; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_. - -``` -kubectl get nodes -NAME LABELS STATUS -fed-node name=fed-node-label Ready -``` -* Deletion of nodes: - -To delete _fed-node_ from your kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information): - -``` -$ kubectl delete -f node.json -``` - -*You should be finished!* - -**The cluster should be running! Launch a test pod.** - -You should have a functional cluster, check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/release-0.19.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md deleted file mode 100644 index b5f5816939f..00000000000 --- a/release-0.19.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ /dev/null @@ -1,165 +0,0 @@ -#**Kubernetes multiple nodes cluster with flannel on Fedora** - -This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes (minions). Make sure that all nodes (minions) have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes (minions) are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes (minions). flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network. - -##**Perform following commands on the kubernetes master** - -* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. 
The contents of the json are: - -``` -{ - "Network": "18.16.0.0/16", - "SubnetLen": 24, - "Backend": { - "Type": "vxlan", - "VNI": 1 - } -} -``` -**NOTE:** Choose an IP range that is *NOT* part of the public IP address range. - -* Add the configuration to the etcd server on fed-master. - -``` -# etcdctl set /coreos.com/network/config < flannel-config.json -``` - -* Verify the key exists in the etcd server on fed-master. - -``` -# etcdctl get /coreos.com/network/config -``` - -##**Perform following commands on all kubernetes nodes** - -* Edit the flannel configuration file /etc/sysconfig/flanneld as follows: - -``` -# Flanneld configuration options - -# etcd url location. Point this to the server where etcd runs -FLANNEL_ETCD="http://fed-master:4001" - -# etcd config key. This is the configuration key that flannel queries -# For address range assignment -FLANNEL_ETCD_KEY="/coreos.com/network" - -# Any additional options that you want to pass -FLANNEL_OPTIONS="" -``` - -**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line. - -* Enable the flannel service. - -``` -# systemctl enable flanneld -``` - -* If docker is not running, then starting flannel service is enough and skip the next step. - -``` -# systemctl start flanneld -``` - -* If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`). - -``` -# systemctl stop docker -# ip link delete docker0 -# systemctl start flanneld -# systemctl start docker -``` - -*** - -##**Test the cluster and flannel configuration** - -* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this: - -``` -# ip -4 a|grep inet - inet 127.0.0.1/8 scope host lo - inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0 - inet 18.16.29.0/16 scope global flannel.1 - inet 18.16.29.1/24 scope global docker0 -``` - -* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output. 
- -``` -# curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool -{ - "node": { - "key": "/coreos.com/network/subnets", - { - "key": "/coreos.com/network/subnets/18.16.29.0-24", - "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}" - }, - { - "key": "/coreos.com/network/subnets/18.16.83.0-24", - "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}" - }, - { - "key": "/coreos.com/network/subnets/18.16.90.0-24", - "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}" - } - } -} -``` - -* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel. - -``` -# cat /run/flannel/subnet.env -FLANNEL_SUBNET=18.16.29.1/24 -FLANNEL_MTU=1450 -FLANNEL_IPMASQ=false -``` - -* At this point, we have etcd running on the kubernetes master, and flannel / docker running on kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly. - -* Issue the following commands on any 2 nodes: - -``` -#docker run -it fedora:latest bash -bash-4.3# -``` - -* This will place you inside the container. Install iproute and iputils packages to install ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify capabilities of ping binary to work around "Operation not permitted" error. - -``` -bash-4.3# yum -y install iproute iputils -bash-4.3# setcap cap_net_raw-ep /usr/bin/ping -``` - -* Now note the IP address on the first node: - -``` -bash-4.3# ip -4 a l eth0 | grep inet - inet 18.16.29.4/24 scope global eth0 -``` - -* And also note the IP address on the other node: - -``` -bash-4.3# ip a l eth0 | grep inet - inet 18.16.90.4/24 scope global eth0 -``` - -* Now ping from the first node to the other node: - -``` -bash-4.3# ping 18.16.90.4 -PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data. -64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms -64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms -``` - -* Now kubernetes multi-node cluster is set up with overlay networking set up by flannel. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/gce.md b/release-0.19.0/docs/getting-started-guides/gce.md deleted file mode 100644 index acc1e06fb7b..00000000000 --- a/release-0.19.0/docs/getting-started-guides/gce.md +++ /dev/null @@ -1,124 +0,0 @@ -## Getting started on Google Compute Engine - -The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient). - -### Before you start - -If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) for hosted cluster installation and management. 
- -If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below. - -### Prerequisites - -1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details. -1. Make sure you have the `gcloud preview` command line component installed. Simply run `gcloud preview` at the command line - if it asks to install any components, go ahead and install them. If it simply shows help text, you're good to go. This is required as the cluster setup script uses GCE [Instance Groups](https://cloud.google.com/compute/docs/instance-groups/), which are in the gcloud preview namespace. You will also need to enable `Compute Engine Instance Group Manager API` in the developers console. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/) -1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project `. -1. Make sure you have credentials for GCloud by running ` gcloud auth login`. -1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/quickstart#create_an_instance) part of the GCE Quickstart. -1. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/quickstart#ssh) part of the GCE Quickstart. - -### Starting a Cluster - -You can install a client and start a cluster with this command: - -```bash -curl -sS https://get.k8s.io | bash -``` - -Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster. By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](../logging.md), while `heapster` provides [monitoring](../../cluster/addons/cluster-monitoring/README.md) services. - -If you run into trouble please see the section on [troubleshooting](gce.md#troubleshooting), or come ask questions on IRC at #google-containers on freenode. - -The next few steps will show you: - -1. how to set up the command line client on your workstation to manage the cluster -1. examples of how to use the cluster -1. how to delete the cluster -1. how to start clusters with non-default options (like larger clusters) - -### Installing the kubernetes command line tools on your workstation - -The cluster startup script will leave you with a running cluster and a ```kubernetes``` directory on your workstation. - -Add the appropriate binary folder to your ```PATH``` to access kubectl: - -```bash -# OS X -export PATH=path/to/kubernetes/platforms/darwin/amd64:$PATH - -# Linux -export PATH=path/to/kubernetes/platforms/linux/amd64:$PATH -``` - -Note: gcloud also ships with ```kubectl```, which by default is added to your path. -However the gcloud bundled kubectl version may be older than the one downloaded by the -get.k8s.io install script. We recommend you use the downloaded binary to avoid -potential issues with client/server version skew. - -### Getting started with your cluster -See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../examples). 
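If you only want a quick smoke test before working through the examples, something like the following is usually enough (a sketch; the image and names are just illustrative):

```bash
# Start a single nginx pod, expose it, and check that it gets scheduled
kubectl run nginx --image=nginx --port=80
kubectl expose rc nginx --port=80
kubectl get pods
```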
- -### Tearing down the cluster -To remove/delete/teardown the cluster, use the `kube-down.sh` script. - -```bash -cd kubernetes -cluster/kube-down.sh -``` - -Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation. - -### Customizing - -The script above relies on Google Storage to stage the Kubernetes release. It -then will start (by default) a single master VM along with 4 worker VMs. You -can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh` -You can view a transcript of a successful cluster creation -[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea). - -### Troubleshooting - -#### Project settings - -You need to have the Google Cloud Storage API, and the Google Cloud Storage -JSON API enabled. It is activated by default for new projects. Otherwise, it -can be done in the Google Cloud Console. See the [Google Cloud Storage JSON -API Overview](https://cloud.google.com/storage/docs/json_api/) for more -details. - -#### Cluster initialization hang - -If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and minion VMs and looking at logs such as `/var/log/startupscript.log`. - -Once you fix the issue, you should run `kube-down.sh` to cleanup after the partial cluster creation, before running `kube-up.sh` to try again. - -#### SSH - -If you're having trouble SSHing into your instances, ensure the GCE firewall -isn't blocking port 22 to your VMs. By default, this should work but if you -have edited firewall rules or created a new non-default network, you'll need to -expose it: `gcloud compute firewall-rules create --network= ---description "SSH allowed from anywhere" --allow tcp:22 default-ssh` - -Additionally, your GCE SSH key must either have no passcode or you need to be -using `ssh-agent`. - -#### Networking - -The instances must be able to connect to each other using their private IP. The -script uses the "default" network which should have a firewall rule called -"default-allow-internal" which allows traffic on any port on the private IPs. -If this rule is missing from the default network or if you change the network -being used in `cluster/config-default.sh` create a new rule with the following -field values: - -* Source Ranges: `10.0.0.0/8` -* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/gce.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/gce.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/juju.md b/release-0.19.0/docs/getting-started-guides/juju.md deleted file mode 100644 index fdbd77500a2..00000000000 --- a/release-0.19.0/docs/getting-started-guides/juju.md +++ /dev/null @@ -1,228 +0,0 @@ -## Getting started with Juju - -Juju handles provisioning machines and deploying complex systems to a -wide number of clouds, supporting service orchestration once the bundle of -services has been deployed. - - -### Prerequisites - -> Note: If you're running kube-up, on ubuntu - all of the dependencies -> will be handled for you. 
You may safely skip to the section: -> [Launch Kubernetes Cluster](#launch-kubernetes-cluster) - -#### On Ubuntu - -[Install the Juju client](https://juju.ubuntu.com/install) on your -local ubuntu system: - - sudo add-apt-repository ppa:juju/stable - sudo apt-get update - sudo apt-get install juju-core juju-quickstart - - -#### With Docker - -If you are not using ubuntu or prefer the isolation of docker, you may -run the following: - - mkdir ~/.juju - sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti whitmo/jujubox:latest - -At this point from either path you will have access to the `juju -quickstart` command. - -To set up the credentials for your chosen cloud run: - - juju quickstart --constraints="mem=3.75G" -i - -Follow the dialogue and choose `save` and `use`. Quickstart will now -bootstrap the juju root node and setup the juju web based user -interface. - - -## Launch Kubernetes cluster - -You will need to have the Kubernetes tools compiled before launching the cluster - - make all WHAT=cmd/kubectl - export KUBERNETES_PROVIDER=juju - cluster/kube-up.sh - -If this is your first time running the `kube-up.sh` script, it will install -the required predependencies to get started with Juju, additionally it will -launch a curses based configuration utility allowing you to select your cloud -provider and enter the proper access credentials. - -Next it will deploy the kubernetes master, etcd, 2 minions with flannel based -Software Defined Networking. - - -## Exploring the cluster - -Juju status provides information about each unit in the cluster: - - juju status --format=oneline - - docker/0: 52.4.92.78 (started) - - flannel-docker/0: 52.4.92.78 (started) - - kubernetes/0: 52.4.92.78 (started) - - docker/1: 52.6.104.142 (started) - - flannel-docker/1: 52.6.104.142 (started) - - kubernetes/1: 52.6.104.142 (started) - - etcd/0: 52.5.216.210 (started) 4001/tcp - - juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp - - kubernetes-master/0: 52.6.19.238 (started) 8080/tcp - -You can use `juju ssh` to access any of the units: - - juju ssh kubernetes-master/0 - - -## Run some containers! - -`kubectl` is available on the kubernetes master node. We'll ssh in to -launch some containers, but one could use kubectl locally setting -KUBERNETES_MASTER to point at the ip of `kubernetes-master/0`. - -No pods will be available before starting a container: - - kubectl get pods - POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS - - kubectl get replicationcontrollers - CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS - -We'll follow the aws-coreos example. Create a pod manifest: `pod.json` - -``` -{ - "apiVersion": "v1", - "kind": "Pod", - "metadata": { - "name": "hello", - "labels": { - "name": "hello", - "environment": "testing" - } - }, - "spec": { - "containers": [{ - "name": "hello", - "image": "quay.io/kelseyhightower/hello", - "ports": [{ - "containerPort": 80, - "hostPort": 80 - }] - }] - } -} -``` - -Create the pod with kubectl: - - kubectl create -f pod.json - - -Get info on the pod: - - kubectl get pods - - -To test the hello app, we'll need to locate which minion is hosting -the container. Better tooling for using juju to introspect container -is in the works but for let'suse `juju run` and `juju status` to find -our hello app. - -Exit out of our ssh session and run: - - juju run --unit kubernetes/0 "docker ps -n=1" - ... 
- juju run --unit kubernetes/1 "docker ps -n=1" - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 02beb61339d8 quay.io/kelseyhightower/hello:latest /hello About an hour ago Up About an hour k8s_hello.... - - -We see `kubernetes/1` has our container, we can open port 80: - - juju run --unit kubernetes/1 "open-port 80" - juju expose kubernetes - sudo apt-get install curl - curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3) - -Finally delete the pod: - - juju ssh kubernetes-master/0 - kubectl delete pods hello - - -## Scale out cluster - -We can add minion units like so: - - juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2 - - -## Launch the "petstore" example app - -The petstore example is available as a -[juju action](https://jujucharms.com/docs/devel/actions). - - juju action do kubernetes-master/0 - - -Note: this example includes curl statements to exercise the app. - - -## Tear down cluster - - ./kube-down.sh - -or - - juju destroy-environment --force `juju env` - - -## More Info - -Kubernetes Bundle on Github - - - [Bundle Repository](https://github.com/whitmo/bundle-kubernetes) - * [Kubernetes master charm](https://github.com/whitmo/charm-kubernetes-master) - * [Kubernetes mininion charm](https://github.com/whitmo/charm-kubernetes) - - [Bundle Documentation](http://whitmo.github.io/bundle-kubernetes) - - [More about Juju](https://juju.ubuntu.com) - - -### Cloud compatibility - -Juju runs natively against a variety of cloud providers and can be -made to work against many more using a generic manual provider. - -Provider | v0.15.0 --------------- | ------- -AWS | TBD -HPCloud | TBD -OpenStack | TBD -Joyent | TBD -Azure | TBD -Digital Ocean | TBD -MAAS (bare metal) | TBD -GCE | TBD - - -Provider | v0.8.1 --------------- | ------- -AWS | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -HPCloud | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -OpenStack | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -Joyent | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -Azure | TBD -Digital Ocean | TBD -MAAS (bare metal) | TBD -GCE | TBD - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/juju.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/juju.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/k8s-docker.png b/release-0.19.0/docs/getting-started-guides/k8s-docker.png deleted file mode 100644 index 6795e35e83d..00000000000 Binary files a/release-0.19.0/docs/getting-started-guides/k8s-docker.png and /dev/null differ diff --git a/release-0.19.0/docs/getting-started-guides/k8s-singlenode-docker.png b/release-0.19.0/docs/getting-started-guides/k8s-singlenode-docker.png deleted file mode 100644 index 5ebf812682d..00000000000 Binary files a/release-0.19.0/docs/getting-started-guides/k8s-singlenode-docker.png and /dev/null differ diff --git a/release-0.19.0/docs/getting-started-guides/libvirt-coreos.md b/release-0.19.0/docs/getting-started-guides/libvirt-coreos.md deleted file mode 100644 index 4bf14bd1e76..00000000000 --- a/release-0.19.0/docs/getting-started-guides/libvirt-coreos.md +++ /dev/null @@ -1,260 +0,0 @@ -## Getting started with libvirt CoreOS - -### Highlights - -* Super-fast cluster boot-up (few seconds instead of several minutes for vagrant) -* Reduced disk 
usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write) -* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt) - -### Prerequisites - -1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) -2. Install [ebtables](http://ebtables.netfilter.org/) -3. Install [qemu](http://wiki.qemu.org/Main_Page) -4. Install [libvirt](http://libvirt.org/) -5. Enable and start the libvirt daemon, e.g: - * ``systemctl enable libvirtd`` - * ``systemctl start libvirtd`` -6. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html) -7. Check that your $HOME is accessible to the qemu user² - -#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access. - -You can test it with the following command: -``` -virsh -c qemu:///system pool-list -``` - -If you have access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html . - -In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER` - -``` -sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF -polkit.addRule(function(action, subject) { - if (action.id == "org.libvirt.unix.manage" && - subject.user == "$USER") { - return polkit.Result.YES; - polkit.log("action=" + action); - polkit.log("subject=" + subject); - } -}); -EOF -``` - -If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket: - -``` -ls -l /var/run/libvirt/libvirt-sock -srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock - -usermod -a -G libvirtd $USER -# $USER needs to logout/login to have the new group be taken into account -``` - -(Replace `$USER` with your login name) - -#### ² Qemu will run with a specific user. It must have access to the VMs drives - -All the disk drive resources needed by the VM (CoreOS disk image, kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`. - -As we’re using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool. - -If your `$HOME` is world readable, everything is fine. If your $HOME is private, `cluster/kube-up.sh` will fail with an error message like: - -``` -error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied -``` - -In order to fix that issue, you have several possibilities: -* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory: - * backed by a filesystem with a lot of free disk space - * writable by your user; - * accessible by the qemu user. -* Grant the qemu user access to the storage pool. - -On Arch: - -``` -setfacl -m g:kvm:--x ~ -``` - -### Setup - -By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes minions. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation. 
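If you are curious how much memory KSM is actually de-duplicating across the VMs once they are running, you can inspect the kernel's counters (a sketch; the sysfs paths assume KSM is enabled on the host):

```
# pages_shared: de-duplicated pages in use; pages_sharing: references to them (savings indicator)
cat /sys/kernel/mm/ksm/pages_shared
cat /sys/kernel/mm/ksm/pages_sharing
```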
- -To start your local cluster, open a shell and run: - -```shell -cd kubernetes - -export KUBERNETES_PROVIDER=libvirt-coreos -cluster/kube-up.sh -``` - -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. - -The `NUM_MINIONS` environment variable may be set to specify the number of minions to start. If it is not set, the number of minions defaults to 3. - -The `KUBE_PUSH` environment variable may be set to specify which kubernetes binaries must be deployed on the cluster. Its possible values are: - -* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`. -* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`. - -You can check that your machines are there and running with: - -``` -virsh -c qemu:///system list - Id Name State ----------------------------------------------------- - 15 kubernetes_master running - 16 kubernetes_minion-01 running - 17 kubernetes_minion-02 running - 18 kubernetes_minion-03 running - ``` - -You can check that the kubernetes cluster is working with: - -``` -$ kubectl get nodes -NAME LABELS STATUS -192.168.10.2 Ready -192.168.10.3 Ready -192.168.10.4 Ready -``` - -The VMs are running [CoreOS](https://coreos.com/). -Your ssh keys have already been pushed to the VM. (It looks for ~/.ssh/id_*.pub) -The user to use to connect to the VM is `core`. -The IP to connect to the master is 192.168.10.1. -The IPs to connect to the minions are 192.168.10.2 and onwards. - -Connect to `kubernetes_master`: -``` -ssh core@192.168.10.1 -``` - -Connect to `kubernetes_minion-01`: -``` -ssh core@192.168.10.2 -``` - -### Interacting with your Kubernetes cluster with the `kube-*` scripts. - -All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately: - -``` -export KUBERNETES_PROVIDER=libvirt-coreos -``` - -Bring up a libvirt-CoreOS cluster of 5 minions - -``` -NUM_MINIONS=5 cluster/kube-up.sh -``` - -Destroy the libvirt-CoreOS cluster - -``` -cluster/kube-down.sh -``` - -Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`: - -``` -cluster/kube-push.sh -``` - -Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`: -``` -KUBE_PUSH=local cluster/kube-push.sh -``` - -Interact with the cluster - -``` -kubectl ... -``` - -### Troubleshooting - -#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz - -Build the release tarballs: - -``` -make release -``` - -#### Can't find virsh in PATH, please fix and retry. 
- -Install libvirt - -On Arch: - -``` -pacman -S qemu libvirt -``` - -On Ubuntu 14.04.1: - -``` -aptitude install qemu-system-x86 libvirt-bin -``` - -On Fedora 21: - -``` -yum install qemu libvirt -``` - -#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory - -Start the libvirt daemon - -On Arch: - -``` -systemctl start libvirtd -``` - -On Ubuntu 14.04.1: - -``` -service libvirt-bin start -``` - -#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied - -Fix libvirt access permission (Remember to adapt `$USER`) - -On Arch and Fedora 21: - -``` -cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <apiserver.log 2>&1 & - -$ ./bin/km controller-manager \ - --master=$servicehost:8888 \ - --mesos_master=${mesos_master} \ - --v=1 >controller.log 2>&1 & - -$ ./bin/km scheduler \ - --address=${servicehost} \ - --mesos_master=${mesos_master} \ - --etcd_servers=http://${servicehost}:4001 \ - --mesos_user=root \ - --api_servers=$servicehost:8888 \ - --v=2 >scheduler.log 2>&1 & -``` - -Also on the master node, we'll start up a proxy instance to act as a -public-facing service router, for testing the web interface a little -later on. - -```bash -$ sudo ./bin/km proxy \ - --bind_address=${servicehost} \ - --etcd_servers=http://${servicehost}:4001 \ - --logtostderr=true >proxy.log 2>&1 & -``` - -Disown your background jobs so that they'll stay running if you log out. - -```bash -$ disown -a -``` -#### Validate KM Services -Interact with the kubernetes-mesos framework via `kubectl`: - -```bash -$ bin/kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -``` - -```bash -$ bin/kubectl get services # your service IPs will likely differ -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.10.10.2 443 -``` -Lastly, use the Mesos CLI tool to validate the Kubernetes scheduler framework has been registered and running: -```bash -$ mesos state | grep "Kubernetes" - "name": "Kubernetes", -``` -Or, look for Kubernetes in the Mesos web GUI by pointing your browser to -`http://${mesos_master}`. Make sure you have an active VPN connection. -Go to the Frameworks tab, and look for an active framework named "Kubernetes". - -## Spin up a pod - -Write a JSON pod description to a local file: - -```bash -$ cat <nginx.json -{ "kind": "Pod", -"apiVersion": "v1beta1", -"id": "nginx-id-01", -"desiredState": { - "manifest": { - "version": "v1beta1", - "containers": [{ - "name": "nginx-01", - "image": "nginx", - "ports": [{ - "containerPort": 80, - "hostPort": 31000 - }], - "livenessProbe": { - "enabled": true, - "type": "http", - "initialDelaySeconds": 30, - "httpGet": { - "path": "/index.html", - "port": "8081" - } - } - }] - } -}, -"labels": { - "name": "foo" -} } -EOPOD -``` - -Send the pod description to Kubernetes using the `kubectl` CLI: - -```bash -$ bin/kubectl create -f nginx.json -nginx-id-01 -``` - -Wait a minute or two while `dockerd` downloads the image layers from the internet. -We can use the `kubectl` interface to monitor the status of our pod: - -```bash -$ bin/kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -nginx-id-01 172.17.5.27 nginx-01 nginx 10.72.72.178/10.72.72.178 cluster=gce,name=foo Running -``` - -Verify that the pod task is running in the Mesos web GUI. Click on the -Kubernetes framework. The next screen should show the running Mesos task that -started the Kubernetes pod. 
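Since the pod maps container port 80 to host port 31000, you can also check it from the command line by hitting that port on the host the pod landed on (a sketch; take the host IP from the HOST column of `bin/kubectl get pods`):

```bash
# The nginx welcome page should come back from the pod's hostPort
$ curl http://10.72.72.178:31000/
```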
- -## Run the Example Guestbook App - -Following the instructions from the kubernetes-mesos [examples/guestbook][6]: - -```bash -$ export ex=k8sm/examples/guestbook -$ bin/kubectl create -f $ex/redis-master.json -$ bin/kubectl create -f $ex/redis-master-service.json -$ bin/kubectl create -f $ex/redis-slave-controller.json -$ bin/kubectl create -f $ex/redis-slave-service.json -$ bin/kubectl create -f $ex/frontend-controller.json - -$ cat </tmp/frontend-service -{ - "id": "frontend", - "kind": "Service", - "apiVersion": "v1beta1", - "port": 9998, - "selector": { - "name": "frontend" - }, - "publicIPs": [ - "${servicehost}" - ] -} -EOS -$ bin/kubectl create -f /tmp/frontend-service -``` - -Watch your pods transition from `Pending` to `Running`: - -```bash -$ watch 'bin/kubectl get pods' -``` - -Review your Mesos cluster's tasks: - -```bash -$ mesos ps - TIME STATE RSS CPU %MEM COMMAND USER ID - 0:00:05 R 41.25 MB 0.5 64.45 none root 0597e78b-d826-11e4-9162-42010acb46e2 - 0:00:08 R 41.58 MB 0.5 64.97 none root 0595b321-d826-11e4-9162-42010acb46e2 - 0:00:10 R 41.93 MB 0.75 65.51 none root ff8fff87-d825-11e4-9162-42010acb46e2 - 0:00:10 R 41.93 MB 0.75 65.51 none root 0597fa32-d826-11e4-9162-42010acb46e2 - 0:00:05 R 41.25 MB 0.5 64.45 none root ff8e01f9-d825-11e4-9162-42010acb46e2 - 0:00:10 R 41.93 MB 0.75 65.51 none root fa1da063-d825-11e4-9162-42010acb46e2 - 0:00:08 R 41.58 MB 0.5 64.97 none root b9b2e0b2-d825-11e4-9162-42010acb46e2 -``` -The number of Kubernetes pods listed earlier (from `bin/kubectl get pods`) should equal to the number active Mesos tasks listed the previous listing (`mesos ps`). - -Next, determine the internal IP address of the front end [service][7]: - -```bash -$ bin/kubectl get services -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.10.10.2 443 -redismaster name=redis-master 10.10.10.49 10000 -redisslave name=redisslave name=redisslave 10.10.10.109 10001 -frontend name=frontend 10.10.10.149 9998 -``` - -Interact with the frontend application via curl using the front-end service IP address from above: - -```bash -$ curl http://${frontend_service_ip_address}:9998/index.php?cmd=get\&key=messages -{"data": ""} -``` - -Or via the Redis CLI: - -```bash -$ sudo apt-get install redis-tools -$ redis-cli -h ${redis_master_service_ip_address} -p 10000 -10.233.254.108:10000> dump messages -"\x00\x06,world\x06\x00\xc9\x82\x8eHj\xe5\xd1\x12" -``` -#### Test Guestbook App -Or interact with the frontend application via your browser, in 2 steps: - -First, open the firewall on the master machine. 
- -```bash -# determine the internal port for the frontend service -$ sudo iptables-save|grep -e frontend # -- port 36336 in this case --A KUBE-PORTALS-CONTAINER -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 --A KUBE-PORTALS-CONTAINER -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 --A KUBE-PORTALS-HOST -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 --A KUBE-PORTALS-HOST -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 - -# open up access to the internal port for the frontend service -$ sudo iptables -A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp \ - --dport ${internal_frontend_service_port} -j ACCEPT -``` - -Next, add a firewall rule in the Google Cloud Platform Console. Choose Compute > -Compute Engine > Networks, click on the name of your mesosphere-* network, then -click "New firewall rule" and allow access to TCP port 9998. - -![Google Cloud Platform firewall configuration][8] - -Now, you can visit the guestbook in your browser! - -![Kubernetes Guestbook app running on Mesos][9] - -[1]: http://mesosphere.com/docs/tutorials/run-hadoop-on-mesos-using-installer -[2]: http://mesosphere.com/docs/tutorials/run-spark-on-mesos -[3]: http://mesosphere.com/docs/tutorials/run-chronos-on-mesos -[4]: http://cloud.google.com -[5]: https://cloud.google.com/compute/ -[6]: https://github.com/mesosphere/kubernetes-mesos/tree/v0.4.0/examples/guestbook -[7]: https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.11.0/docs/services.md#ips-and-vips -[8]: mesos/k8s-firewall.png -[9]: mesos/k8s-guestbook.png -[10]: http://mesos.apache.org/ - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/mesos.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/mesos.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/mesos/k8s-firewall.png b/release-0.19.0/docs/getting-started-guides/mesos/k8s-firewall.png deleted file mode 100755 index ed1c57ca7d0..00000000000 Binary files a/release-0.19.0/docs/getting-started-guides/mesos/k8s-firewall.png and /dev/null differ diff --git a/release-0.19.0/docs/getting-started-guides/mesos/k8s-guestbook.png b/release-0.19.0/docs/getting-started-guides/mesos/k8s-guestbook.png deleted file mode 100755 index 07d2458b3b5..00000000000 Binary files a/release-0.19.0/docs/getting-started-guides/mesos/k8s-guestbook.png and /dev/null differ diff --git a/release-0.19.0/docs/getting-started-guides/ovirt.md b/release-0.19.0/docs/getting-started-guides/ovirt.md deleted file mode 100644 index 749ca21365e..00000000000 --- a/release-0.19.0/docs/getting-started-guides/ovirt.md +++ /dev/null @@ -1,50 +0,0 @@ -## What is oVirt - -oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center. - -## oVirt Cloud Provider Deployment - -The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your kubernetes cluster. 
-At the moment there are no community-supported or pre-loaded VM images including kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes kubernetes may work as well. - -It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to kubernetes. - -Once the kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider. - -[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html -[install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines -[generate a template]: http://www.ovirt.org/Quick_Start_Guide#Using_Templates -[install the ovirt-guest-agent]: http://www.ovirt.org/How_to_install_the_guest_agent_in_Fedora - -## Using the oVirt Cloud Provider - -The oVirt Cloud Provider requires access to the oVirt REST-API to gather the proper information, the required credential should be specified in the `ovirt-cloud.conf` file: - - [connection] - uri = https://localhost:8443/ovirt-engine/api - username = admin@internal - password = admin - -In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to kubernetes: - - [filters] - # Search query used to find nodes - vms = tag=kubernetes - -In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to kubernetes. - -The `ovirt-cloud.conf` file then must be specified in kube-controller-manager: - - kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ... - -## oVirt Cloud Provider Screencast - -This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your kubernetes cluster. - -[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ovirt.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/ovirt.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/rackspace.md b/release-0.19.0/docs/getting-started-guides/rackspace.md deleted file mode 100644 index 8dedfda7450..00000000000 --- a/release-0.19.0/docs/getting-started-guides/rackspace.md +++ /dev/null @@ -1,58 +0,0 @@ -# Rackspace - -* Supported Version: v0.16.2 - * `git checkout v0.16.2` - -In general, the dev-build-and-up.sh workflow for Rackspace is the similar to GCE. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design. - -These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification. - -NOTE: The rackspace scripts do NOT rely on `saltstack` and instead rely on cloud-init for configuration. - -The current cluster design is inspired by: -- [corekube](https://github.com/metral/corekube/) -- [Angus Lees](https://github.com/anguslees/kube-openstack/) - -## Prerequisites -1. Python2.7 -2. You need to have both `nova` and `swiftly` installed. It's recommended to use a python virtualenv to install these packages into. -3. 
Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details. - -##Provider: Rackspace - -- To install the latest released version of kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash` -- To build your own released version from source use `export KUBERNETES_PROVIDER=rackspace` and run the `bash hack/dev-build-and-up.sh` - -## Build -1. The kubernetes binaries will be built via the common build scripts in `build/`. -2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files. -2. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object. -3. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/minions nodes when booted. - -## Cluster -There is a specific `cluster/rackspace` directory with the scripts for the following steps: -1. A cloud network will be created and all instances will be attached to this network. - - flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network. -2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines since we won't capture the password. -3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems. -4. We then boot as many nodes as defined via `$NUM_MINIONS`. - -## Some notes: -- The scripts expect `eth2` to be the cloud network that the containers will communicate across. -- A number of the items in `config-default.sh` are overridable via environment variables. -- For older versions please either: - * Sync back to `v0.9` with `git checkout v0.9` - * Download a [snapshot of `v0.9`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.9.tar.gz) - * Sync back to `v0.3` with `git checkout v0.3` - * Download a [snapshot of `v0.3`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.3.tar.gz) - -## Network Design -- eth0 - Public Interface used for servers/containers to reach the internet -- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services. -- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rackspace.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/rackspace.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/rkt/README.md b/release-0.19.0/docs/getting-started-guides/rkt/README.md deleted file mode 100644 index 0d54eaf69e5..00000000000 --- a/release-0.19.0/docs/getting-started-guides/rkt/README.md +++ /dev/null @@ -1,95 +0,0 @@ -# Run Kubernetes with rkt - -This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime. 
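Before going further, it can help to confirm which rkt build is installed, since the prerequisites below require a fairly recent release. A quick, optional sanity check:

```shell
$ rkt version
```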
-We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernetes/issues/8262) to do to make the experience with rkt wonderful; please stay tuned!
-
-### **Prerequisites**
-
-- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed and enabled on your machine. The minimum version required at this moment (2015/05/28) is [215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html).
-  *(Note that systemd is not required by rkt itself; we are using it here to monitor and manage the pods launched by kubelet.)*
-
-- Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt).
-  The minimum version required for now is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6).
-
-- Make sure the `rkt metadata service` is running, because it is necessary for running pods in private network mode.
-  More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md).
-
-  To start the `rkt metadata service`, you can simply run:
-  ```shell
-  $ sudo rkt metadata-service
-  ```
-
-  If you want it to run as a systemd service, then:
-  ```shell
-  $ sudo systemd-run rkt metadata-service
-  ```
-  Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) units to start the service.
-
-
-### Local cluster
-
-To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
-```shell
-$ export CONTAINER_RUNTIME=rkt
-$ hack/local-up-cluster.sh
-```
-
-### CoreOS cluster on GCE
-
-To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
-```shell
-$ export KUBE_OS_DISTRIBUTION=coreos
-$ export KUBE_GCE_MINION_IMAGE=
-$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
-$ export KUBE_CONTAINER_RUNTIME=rkt
-```
-
-You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
-```shell
-$ export KUBE_RKT_VERSION=0.5.6
-```
-
-Then you can launch the cluster with:
-```shell
-$ kube-up.sh
-```
-
-Note that we are still working on making all of the containerized master components run smoothly in rkt. Until that is done, the master node cannot be run with rkt yet.
-
-### CoreOS cluster on AWS
-
-To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
-```shell
-$ export KUBERNETES_PROVIDER=aws
-$ export KUBE_OS_DISTRIBUTION=coreos
-$ export KUBE_CONTAINER_RUNTIME=rkt
-```
-
-You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
-```shell
-$ export KUBE_RKT_VERSION=0.5.6
-```
-
-You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
-```shell
-$ export COREOS_CHANNEL=stable
-```
-
-Then you can launch the cluster with:
-```shell
-$ kube-up.sh
-```
-
-Note: CoreOS is not supported as the master using the automated launch
-scripts. The master node is always Ubuntu.
-
-### Getting started with your cluster
-See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
-
-For more complete applications, please look in the [examples directory](../../examples).
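Once the cluster is up, a quick smoke test is to start a trivial workload and watch it appear. This is only a sketch, reusing the `kubectl run` flags documented in this release's kubectl reference; the pod name and replica count are arbitrary:

```shell
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
$ kubectl get pods
```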
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rkt/README.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/rkt/README.md?pixel)]()
diff --git a/release-0.19.0/docs/getting-started-guides/ubuntu.md b/release-0.19.0/docs/getting-started-guides/ubuntu.md
deleted file mode 100644
index d210db0954c..00000000000
--- a/release-0.19.0/docs/getting-started-guides/ubuntu.md
+++ /dev/null
@@ -1,180 +0,0 @@
-# Kubernetes Deployment On Bare-metal Ubuntu Nodes
-
-This document describes how to deploy kubernetes on ubuntu nodes, with 1 master node and 3 minion nodes; the same approach scales to **any number of minion nodes** by changing a few settings. The original idea was heavily inspired by @jainvipin's ubuntu single-node work, which has been merged into this document.
-
-[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work.
-
-### **Prerequisites:**
-*1 The minion nodes have docker version 1.2+ and bridge-utils installed, to manipulate the linux bridge*
-
-*2 All machines can communicate with each other; no Internet connection is required (use a private docker registry in that case)*
-
-*3 This guide is tested on Ubuntu 14.04 LTS 64bit server, but it should also work on most Ubuntu versions*
-
-*4 Dependencies of this guide: etcd-2.0.9, flannel-0.4.0, k8s-0.18.0, but it may work with higher versions*
-
-*5 All the remote servers can be logged into via ssh without a password, using key authentication*
-
-
-### **Main Steps**
-#### I. Make *kubernetes*, *etcd* and *flanneld* binaries
-
-First clone the kubernetes github repo with `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git`,
-then `$ cd kubernetes/cluster/ubuntu`.
-
-Then run `$ ./build.sh`; this will download all the needed binaries into `./binaries`.
-
-You can customize the etcd, flannel and k8s versions by changing the variables `ETCD_VERSION`, `FLANNEL_VERSION` and `K8S_VERSION` in build.sh; the defaults are etcd 2.0.9, flannel 0.4.0 and k8s 0.18.0.
-
-Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `kubelet`, `kube-proxy`, `etcd`, `etcdctl` and `flannel` in the binaries/master or binaries/minion directory.
-
-> We use flannel here because we want an overlay network, but please remember it is not the only choice, nor is it a required dependency of k8s. You can build up a k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like; we simply chose flannel here as an example.
-
-#### II. Configure and start the kubernetes cluster
-An example cluster is listed below:
-
-| IP Address    | Role                   |
-|---------------|------------------------|
-| 10.10.103.223 | minion                 |
-| 10.10.103.162 | minion                 |
-| 10.10.103.250 | both master and minion |
-
-First configure the cluster information in cluster/ubuntu/config-default.sh; below is a simple sample.
-
-```
-export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
-
-export roles=("ai" "i" "i")
-
-export NUM_MINIONS=${NUM_MINIONS:-3}
-
-export SERVICE_CLUSTER_IP_RANGE=11.1.1.0/24
-
-export FLANNEL_NET=172.16.0.0/16
-
-
-```
-
-The first variable `nodes` defines all your cluster nodes; the MASTER node comes first, and the entries are separated by a blank space.
-
-The `roles` variable then defines the role of each machine, in the same order: "ai" stands for a machine that acts as both master and minion, "a" stands for master, and "i" stands for minion. Together they define the k8s cluster exactly as the table above describes.
-
-The `NUM_MINIONS` variable defines the total number of minions.
-
-The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you define a valid private ip range here, because some IaaS providers may reserve private ips. You can use one of the three private network ranges below, according to rfc1918; also avoid a range that conflicts with your own private network range.
-
-    10.0.0.0 - 10.255.255.255 (10/8 prefix)
-
-    172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
-
-    192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
-
-The `FLANNEL_NET` variable defines the IP range used for the flannel overlay network; it should not conflict with the `SERVICE_CLUSTER_IP_RANGE` above.
-
-After all the above variables have been set correctly, we can use the command below in the cluster/ directory to bring up the whole cluster.
-
-`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh`
-
-The script automatically scps the binaries and config files to all the machines and starts the k8s services on them. The only thing you need to do is type the sudo password when prompted. The name of the machine currently being deployed is shown, as below, so you will not type in the wrong password.
-
-```
-
-Deploying minion on machine 10.10.103.223
-
-...
-
-[sudo] password to copy files and start minion:
-
-```
-
-If all goes right, you will see the message `Cluster validation succeeded` on the console, indicating that k8s is up.
-
-**All done !**
-
-You can also use the `kubectl` command to check that the newly created k8s cluster is working correctly. The `kubectl` binary is under the `cluster/ubuntu/binaries` directory; you can move it into your PATH and then use the commands below.
-
-For example, use `$ kubectl get nodes` to see if all your minion nodes are in Ready status. It may take some time for the minions to become ready, as below.
-
-```
-
-NAME LABELS STATUS
-
-10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready
-
-10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready
-
-10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready
-
-
-```
-
-You can also run the kubernetes [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook) to build a redis-backed cluster on the k8s.
-
-
-#### III. Deploy addons
-
-After the previous parts, you will have a working k8s cluster; this part will teach you how to deploy addons like dns onto the existing cluster.
-
-The dns addon is configured in cluster/ubuntu/config-default.sh.
-
-```
-
-ENABLE_CLUSTER_DNS=true
-
-DNS_SERVER_IP="192.168.3.10"
-
-DNS_DOMAIN="kubernetes.local"
-
-DNS_REPLICAS=1
-
-```
-`DNS_SERVER_IP` defines the ip of the dns server, which must lie within the SERVICE_CLUSTER_IP_RANGE.
-
-`DNS_REPLICAS` defines how many dns pods run in the cluster.
-
-After all the above variables have been set, just type the commands below:
-
-```
-
-$ cd cluster/ubuntu
-
-$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
-
-```
-
-After some time, you can use `$ kubectl get pods` to see that the dns pod is running in the cluster. Done!
-
-
-#### IV. Troubleshooting
-
-Generally, what this approach does is quite simple:
-
-1. Download and copy binaries and configuration files to the proper directories on every node
-
-2. Configure `etcd` using IPs based on input from the user
-
-3. Create and start the flannel network
-
-So, if you see a problem, **check the etcd configuration first**
-
-Please try:
-
-1. Check `/var/log/upstart/etcd.log` for suspicious etcd log entries
-
-2. Check `/etc/default/etcd`; as we do not have much input validation, a correct config should look like:
-   ```
-   ETCD_OPTS="-name infra1 -initial-advertise-peer-urls -listen-peer-urls -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=,infra2=,infra3= -initial-cluster-state new"
-   ```
-
-3. You can use `$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh` to bring down the cluster, and then run
-   `$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh` to start it again.
-
-4. You can also customize your own settings in `/etc/default/{component_name}` after a successful configuration.
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ubuntu.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/ubuntu.md?pixel)]()
diff --git a/release-0.19.0/docs/getting-started-guides/vagrant.md b/release-0.19.0/docs/getting-started-guides/vagrant.md
deleted file mode 100644
index c8884ab83a6..00000000000
--- a/release-0.19.0/docs/getting-started-guides/vagrant.md
+++ /dev/null
@@ -1,308 +0,0 @@
-## Getting started with Vagrant
-
-Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X).
-
-### Prerequisites
-1. Install the latest version (>= 1.6.2) of vagrant from http://www.vagrantup.com/downloads.html
-2. Install one of:
-   1. The latest version of VirtualBox from https://www.virtualbox.org/wiki/Downloads
-   2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
-   3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
-   4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
-   5. libvirt with KVM, with hardware virtualisation support enabled, plus the [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt) plugin. Fedora provides an official rpm, so it is possible to use ```yum install vagrant-libvirt```
-
-### Setup
-
-Setting up a cluster is as simple as running:
-
-```sh
-export KUBERNETES_PROVIDER=vagrant
-curl -sS https://get.k8s.io | bash
-```
-
-The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
-
-By default, the Vagrant setup will create a single kubernetes-master and a single kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space).
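If you want to confirm how much memory is actually free before bringing the VMs up, a quick check on Linux (purely optional, not part of the setup) is:

```sh
free -m
```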
To start your local cluster, open a shell and run: - -```sh -cd kubernetes - -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. - -If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: - -```sh -export VAGRANT_DEFAULT_PROVIDER=parallels -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd. - -To access the master or any minion: - -```sh -vagrant ssh master -vagrant ssh minion-1 -``` - -If you are running more than one minion, you can access the others by: - -```sh -vagrant ssh minion-2 -vagrant ssh minion-3 -``` - -To view the service status and/or logs on the kubernetes-master: -```sh -vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver - -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-controller-manager -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-controller-manager - -[vagrant@kubernetes-master ~] $ sudo systemctl status etcd -[vagrant@kubernetes-master ~] $ sudo systemctl status nginx -``` - -To view the services on any of the kubernetes-minion(s): -```sh -vagrant ssh minion-1 -[vagrant@kubernetes-minion-1] $ sudo systemctl status docker -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker -[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet -``` - -### Interacting with your Kubernetes cluster with Vagrant. - -With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. - -To push updates to new Kubernetes code after making source changes: -```sh -./cluster/kube-push.sh -``` - -To stop and then restart the cluster: -```sh -vagrant halt -./cluster/kube-up.sh -``` - -To destroy the cluster: -```sh -vagrant destroy -``` - -Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. - -You may need to build the binaries first, you can do this with ```make``` - -```sh -$ ./cluster/kubectl.sh get nodes - -NAME LABELS -10.245.1.4 -10.245.1.5 -10.245.1.3 -``` - -### Authenticating with your master - -When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. - -```sh -cat ~/.kubernetes_vagrant_auth -{ "User": "vagrant", - "Password": "vagrant", - "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", - "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", - "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" -} -``` - -You should now be set to use the `cluster/kubectl.sh` script. 
For example try to list the nodes that you have started with: - -```sh -./cluster/kubectl.sh get nodes -``` - -### Running containers - -Your cluster is running, you can list the nodes in your cluster: - -```sh -$ ./cluster/kubectl.sh get nodes - -NAME LABELS -10.245.2.4 -10.245.2.3 -10.245.2.2 -``` - -Now start running some containers! - -You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines. -Before starting a container there will be no pods, services and replication controllers. - -```sh -$ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS - -$ ./cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT - -$ ./cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -``` - -Start a container running nginx with a replication controller and three replicas - -```sh -$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 -``` - -When listing the pods, you will see that three containers have been started and are in Waiting state: - -```sh -$ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting -``` - -You need to wait for the provisioning to complete, you can monitor the nodes by doing: - -```sh -$ sudo salt '*minion-1' cmd.run 'docker images' -kubernetes-minion-1: - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - 96864a7d2df3 26 hours ago 204.4 MB - google/cadvisor latest e0575e677c50 13 days ago 12.64 MB - kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB -``` - -Once the docker image for nginx has been downloaded, the container will start and you can list it: - -```sh -$ sudo salt '*minion-1' cmd.run 'docker ps' -kubernetes-minion-1: - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f - fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b - aa2ee3ed844a google/cadvisor:latest "/usr/bin/cadvisor - 38 minutes ago Up 38 minutes k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2 - 65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561 -``` - -Going back to listing the pods, services and replicationcontrollers, you now have: - -```sh -$ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running - -$ ./cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT - -$ ./cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -myNginx nginx name=my-nginx 3 -``` - -We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service. 
-You can already play with scaling the replicas with: - -```sh -$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 -$ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running -``` - -Congratulations! - -### Troubleshooting - -#### I keep downloading the same (large) box all the time! - -By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh` - -```sh -export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box -export KUBERNETES_BOX_URL=path_of_your_kuber_box -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -#### I just created the cluster, but I am getting authorization errors! - -You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact. - -```sh -rm ~/.kubernetes_vagrant_auth -``` - -After using kubectl.sh make sure that the correct credentials are set: - -```sh -cat ~/.kubernetes_vagrant_auth -{ - "User": "vagrant", - "Password": "vagrant" -} -``` - -#### I just created the cluster, but I do not see my container running! - -If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. - -#### I want to make changes to Kubernetes code! - -To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md). - -#### I have brought Vagrant up but the nodes won't validate! - -Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). - -#### I want to change the number of nodes! - -You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so: - -```sh -export NUM_MINIONS=1 -``` - -#### I want my VMs to have more memory! - -You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable. -Just set it to the number of megabytes you would like the machines to have. For example: - -```sh -export KUBERNETES_MEMORY=2048 -``` - -If you need more granular control, you can set the amount of memory for the master and nodes independently. For example: - -```sh -export KUBERNETES_MASTER_MEMORY=1536 -export KUBERNETES_MINION_MEMORY=2048 -``` - -#### I ran vagrant suspend and nothing works! -```vagrant suspend``` seems to mess up the network. It's not supported at this time. 
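If the cluster ends up in this state, cycling the VMs with the same halt/up sequence shown earlier usually gets things back to normal:

```sh
vagrant halt
./cluster/kube-up.sh
```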
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vagrant.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/vagrant.md?pixel)]() diff --git a/release-0.19.0/docs/getting-started-guides/vsphere.md b/release-0.19.0/docs/getting-started-guides/vsphere.md deleted file mode 100644 index 5180912a2da..00000000000 --- a/release-0.19.0/docs/getting-started-guides/vsphere.md +++ /dev/null @@ -1,86 +0,0 @@ -## Getting started with vSphere - -The example below creates a Kubernetes cluster with 4 worker node Virtual -Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This -cluster is set up and controlled from your workstation (or wherever you find -convenient). - -### Prerequisites - -1. You need administrator credentials to an ESXi machine or vCenter instance. -2. You must have Go (version 1.2 or later) installed: [www.golang.org](http://www.golang.org). -3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`. - - ```sh - export GOPATH=$HOME/src/go - mkdir -p $GOPATH - export PATH=$PATH:$GOPATH/bin - ``` - -4. Install the govc tool to interact with ESXi/vCenter: - - ```sh - go get github.com/vmware/govmomi/govc - ``` - -5. Get or build a [binary release](binary_release.md) - -### Setup - -Download a prebuilt Debian 7.7 VMDK that we'll use as a base image: - -```sh -curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2014-11-11/kube.vmdk.gz{,.md5} -md5sum -c kube.vmdk.gz.md5 -gzip -d kube.vmdk.gz -``` - -Import this VMDK into your vSphere datastore: - -```sh -export GOVC_URL='user:pass@hostname' -export GOVC_INSECURE=1 # If the host above uses a self-signed cert -export GOVC_DATASTORE='target datastore' -export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore' - -govc import.vmdk kube.vmdk ./kube/ -``` - -Verify that the VMDK was correctly uploaded and expanded to ~3GiB: - -```sh -govc datastore.ls ./kube/ -``` - -Take a look at the file `cluster/vsphere/config-common.sh` fill in the required -parameters. The guest login for the image that you imported is `kube:kube`. - -### Starting a cluster - -Now, let's continue with deploying Kubernetes. -This process takes about ~10 minutes. - -```sh -cd kubernetes # Extracted binary release OR repository root -export KUBERNETES_PROVIDER=vsphere -cluster/kube-up.sh -``` - -Refer to the top level README and the getting started guide for Google Compute -Engine. Once you have successfully reached this point, your vSphere Kubernetes -deployment works just as any other one! - -**Enjoy!** - -### Extra: debugging deployment failure - -The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You -can log into any VM as the `kube` user to poke around and figure out what is -going on (find yourself authorized with your SSH key, or use the password -`kube` otherwise). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vsphere.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/getting-started-guides/vsphere.md?pixel)]() diff --git a/release-0.19.0/docs/glossary.md b/release-0.19.0/docs/glossary.md deleted file mode 100644 index 086e580f6f4..00000000000 --- a/release-0.19.0/docs/glossary.md +++ /dev/null @@ -1,61 +0,0 @@ - -# Glossary and Concept Index - -**Authorization** -:Kubernetes does not currently have an authorization system. 
Anyone with the cluster password can do anything. We plan -to add sophisticated authorization, and to make it pluggable. See the [access control design doc](./design/access.md) and -[this issue](https://github.com/GoogleCloudPlatform/kubernetes/issues/1430). - -**Annotation** -: A key/value pair that can hold large (compared to a Label), and possibly not human-readable data. Intended to store -non-identifying metadata associated with an object, such as provenance information. Not indexed. - -**Image** -: A [Docker Image](https://docs.docker.com/userguide/dockerimages/). See [images](./images.md). - -**Label** -: A key/value pair conveying user-defined identifying attributes of an object, and used to form sets of related objects, such as -pods which are replicas in a load-balanced service. Not intended to hold large or non-human-readable data. See [labels](./labels.md). - -**Name** -: A user-provided name for an object. See [identifiers](identifiers.md). - -**Namespace** -: A namespace is like a prefix to the name of an object. You can configure your client to use a particular namespace, -so you do not have to type it all the time. Namespaces allow multiple projects to prevent naming collisions between unrelated teams. - -**Pod** -: A collection of containers which will be scheduled onto the same node, which share and an IP and port space, and which -can be created/destroyed together. See [pods](./pods.md). - -**Replication Controller** -: A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. Both allows -for easy scaling of replicated systems, and handles restarting of a Pod when the machine it is on reboots or otherwise fails. - -**Resource** -: CPU, memory, and other things that a pod can request. See [resources](resources.md). - -**Secret** -: An object containing sensitive information, such as authentication tokens, which can be made available to containers upon request. See [secrets](secrets.md). - -**Selector** -: An expression that matches Labels. Can identify related objects, such as pods which are replicas in a load-balanced -service. See [labels](labels.md). - -**Service** -: A load-balanced set of `pods` which can be accessed via a single stable IP address. See [services](./services.md). - -**UID** -: An identifier on all Kubernetes objects that is set by the Kubernetes API server. Can be used to distinguish between historical -occurrences of same-Name objects. See [identifiers](identifiers.md). - -**Volume** -: A directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes -Volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the Volume -directory and/or device. See [volumes](volumes.md). 
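As a small illustration of how Labels and Selectors fit together, an object can be labeled and then selected by that label. This is only a sketch with a hypothetical pod name; the flags are as documented in this release's kubectl reference:

```sh
kubectl label pods my-pod environment=qa   # attach a Label to a pod
kubectl get pods -l environment=qa         # list objects whose Labels match the Selector
```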
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/glossary.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/glossary.md?pixel)]() diff --git a/release-0.19.0/docs/high-availability/default-kubelet b/release-0.19.0/docs/high-availability/default-kubelet deleted file mode 100644 index ad38c8d7215..00000000000 --- a/release-0.19.0/docs/high-availability/default-kubelet +++ /dev/null @@ -1 +0,0 @@ -DAEMON_ARGS="$DAEMON_ARGS --cloud_provider=gce --config=/etc/kubernetes/manifests --allow_privileged=False --v=2 --cluster_dns=10.0.0.10 --cluster_domain=cluster.local --configure-cbr0=true --cgroup_root=/ --system-container=/system" \ No newline at end of file diff --git a/release-0.19.0/docs/high-availability/init-kubelet b/release-0.19.0/docs/high-availability/init-kubelet deleted file mode 100644 index 8acf7a15dbf..00000000000 --- a/release-0.19.0/docs/high-availability/init-kubelet +++ /dev/null @@ -1,126 +0,0 @@ -#!/bin/bash -# -### BEGIN INIT INFO -# Provides: kubelet -# Required-Start: $local_fs $network $syslog -# Required-Stop: -# Default-Start: 2 3 4 5 -# Default-Stop: 0 1 6 -# Short-Description: The Kubernetes node container manager -# Description: -# The Kubernetes container manager maintains docker state against a state file. -### END INIT INFO - - -# PATH should only include /usr/* if it runs after the mountnfs.sh script -PATH=/sbin:/usr/sbin:/bin:/usr/bin -DESC="The Kubernetes container manager" -NAME=kubelet -DAEMON=/usr/local/bin/kubelet -DAEMON_ARGS="" -DAEMON_LOG_FILE=/var/log/$NAME.log -PIDFILE=/var/run/$NAME.pid -SCRIPTNAME=/etc/init.d/$NAME -DAEMON_USER=root - -# Exit if the package is not installed -[ -x "$DAEMON" ] || exit 0 - -# Read configuration variable file if it is present -[ -r /etc/default/$NAME ] && . /etc/default/$NAME - -# Define LSB log_* functions. -# Depend on lsb-base (>= 3.2-14) to ensure that this file is present -# and status_of_proc is working. -. /lib/lsb/init-functions - -# -# Function that starts the daemon/service -# -do_start() -{ - # Avoid a potential race at boot time when both monit and init.d start - # the same service - PIDS=$(pidof $DAEMON) - for PID in ${PIDS}; do - kill -9 $PID - done - - # Return - # 0 if daemon has been started - # 1 if daemon was already running - # 2 if daemon could not be started - start-stop-daemon --start --quiet --background --no-close \ - --make-pidfile --pidfile $PIDFILE \ - --exec $DAEMON -c $DAEMON_USER --test > /dev/null \ - || return 1 - start-stop-daemon --start --quiet --background --no-close \ - --make-pidfile --pidfile $PIDFILE \ - --exec $DAEMON -c $DAEMON_USER -- \ - $DAEMON_ARGS >> $DAEMON_LOG_FILE 2>&1 \ - || return 2 -} - -# -# Function that stops the daemon/service -# -do_stop() -{ - # Return - # 0 if daemon has been stopped - # 1 if daemon was already stopped - # 2 if daemon could not be stopped - # other if a failure occurred - start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME - RETVAL="$?" - [ "$RETVAL" = 2 ] && return 2 - # Many daemons don't delete their pidfiles when they exit. - rm -f $PIDFILE - return "$RETVAL" -} - - -case "$1" in - start) - log_daemon_msg "Starting $DESC" "$NAME" - do_start - case "$?" in - 0|1) log_end_msg 0 || exit 0 ;; - 2) log_end_msg 1 || exit 1 ;; - esac - ;; - stop) - log_daemon_msg "Stopping $DESC" "$NAME" - do_stop - case "$?" 
in - 0|1) log_end_msg 0 ;; - 2) exit 1 ;; - esac - ;; - status) - status_of_proc -p $PIDFILE "$DAEMON" "$NAME" && exit 0 || exit $? - ;; - - restart|force-reload) - log_daemon_msg "Restarting $DESC" "$NAME" - do_stop - case "$?" in - 0|1) - do_start - case "$?" in - 0) log_end_msg 0 ;; - 1) log_end_msg 1 ;; # Old process is still running - *) log_end_msg 1 ;; # Failed to start - esac - ;; - *) - # Failed to stop - log_end_msg 1 - ;; - esac - ;; - *) - echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2 - exit 3 - ;; -esac \ No newline at end of file diff --git a/release-0.19.0/docs/high-availability/monit-docker b/release-0.19.0/docs/high-availability/monit-docker deleted file mode 100644 index 8c2753a430a..00000000000 --- a/release-0.19.0/docs/high-availability/monit-docker +++ /dev/null @@ -1,9 +0,0 @@ -check process docker with pidfile /var/run/docker.pid -group docker -start program = "/etc/init.d/docker start" -stop program = "/etc/init.d/docker stop" -if does not exist then restart -if failed - unixsocket /var/run/docker.sock - protocol HTTP request "/version" -then restart \ No newline at end of file diff --git a/release-0.19.0/docs/high-availability/monit-kubelet b/release-0.19.0/docs/high-availability/monit-kubelet deleted file mode 100644 index d7878916702..00000000000 --- a/release-0.19.0/docs/high-availability/monit-kubelet +++ /dev/null @@ -1,11 +0,0 @@ -check process kubelet with pidfile /var/run/kubelet.pid -group kubelet -start program = "/etc/init.d/kubelet start" -stop program = "/etc/init.d/kubelet stop" -if does not exist then restart -if failed - host 127.0.0.1 - port 10248 - protocol HTTP - request "/healthz" -then restart \ No newline at end of file diff --git a/release-0.19.0/docs/high-availability/podmaster.json b/release-0.19.0/docs/high-availability/podmaster.json deleted file mode 100644 index 8fb13b5911a..00000000000 --- a/release-0.19.0/docs/high-availability/podmaster.json +++ /dev/null @@ -1,57 +0,0 @@ -{ -"apiVersion": "v1beta3", -"kind": "Pod", -"metadata": {"name":"scheduler-master"}, -"spec":{ -"hostNetwork": true, -"containers":[ - { - "name": "scheduler-elector", - "image": "gcr.io/google_containers/podmaster:1.1", - "command": [ - "/podmaster", - "--etcd-servers=http://127.0.0.1:4001", - "--key=scheduler", - "--source-file=/kubernetes/kube-scheduler.manifest", - "--dest-file=/manifests/kube-scheduler.manifest" - ], - "volumeMounts": [ - { "name": "k8s", - "mountPath": "/kubernetes", - "readOnly": true}, - { "name": "manifests", - "mountPath": "/manifests", - "readOnly": false} - ] - }, - { - "name": "controller-manager-elector", - "image": "gcr.io/google_containers/podmaster:1.1", - "command": [ - "/podmaster", - "--etcd-servers=http://127.0.0.1:4001", - "--key=controller", - "--source-file=/kubernetes/kube-controller-manager.manifest", - "--dest-file=/manifests/kube-controller-manager.manifest" - ], - "volumeMounts": [ - { "name": "k8s", - "mountPath": "/kubernetes", - "readOnly": true}, - { "name": "manifests", - "mountPath": "/manifests", - "readOnly": false} - ] - } -], -"volumes":[ - { "name": "k8s", - "hostPath": { - "path": "/srv/kubernetes"} - }, -{ "name": "manifests", - "hostPath": { - "path": "/etc/kubernetes/manifests"} - } -] -}} diff --git a/release-0.19.0/docs/identifiers.md b/release-0.19.0/docs/identifiers.md deleted file mode 100644 index a9332e32626..00000000000 --- a/release-0.19.0/docs/identifiers.md +++ /dev/null @@ -1,16 +0,0 @@ -# Identifiers -All objects in the Kubernetes REST API are unambiguously 
identified by a Name and a UID.
-
-For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
-
-## Names
-Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
-
-## UIDs
-UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/identifiers.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/identifiers.md?pixel)]()
diff --git a/release-0.19.0/docs/images.md b/release-0.19.0/docs/images.md
deleted file mode 100644
index e06548b750c..00000000000
--- a/release-0.19.0/docs/images.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Images
-Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/).
-
-You create your Docker image and push it to a registry before referring to it in a kubernetes pod.
-
-The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
-
-## Using a Private Registry
-
-### Google Container Registry
-Kubernetes has native support for the [Google Container Registry](https://cloud.google.com/tools/container-registry/) when running on Google Compute Engine. If you are running your cluster on Google Compute Engine or Google Container Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag) and the kubelet will automatically authenticate and pull down your private image.
-
-### Other Private Registries
-Docker stores keys for private registries in a `.dockercfg` file. Create a config file by running `docker login .` and then copying the resulting `.dockercfg` file to the kubelet working dir.
-The kubelet working dir varies by cloud provider. It is `/` on GCE and `/home/core` on CoreOS. You can determine the working dir by running this command:
-`sudo ls -ld /proc/$(pidof kubelet)/cwd` on a node.
-
-All users of the cluster will have access to any private registry in the `.dockercfg`.
-
-## Preloading Images
-
-By default, the kubelet will try to pull each image from the specified registry.
-However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`,
-then a local image is used (preferentially or exclusively, respectively).
-
-This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
-
-Pull Policy is per-container, but any user of the cluster will have access to all local images.
-
-## Updating Images
-
-The default pull policy is `PullIfNotPresent`, which causes the Kubelet to not pull an image if it already exists locally. If you would like to always force a pull, you must set an image pull policy of `PullAlways` or specify a `:latest` tag on your image.
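For example, the following (hypothetical) invocation uses a `:latest` tag, so the kubelet will attempt a fresh pull rather than relying on a previously cached image; the flags are those documented for `kubectl run` in this release:

```sh
kubectl run my-nginx --image=nginx:latest --replicas=1 --port=80
```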
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/images.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/images.md?pixel)]() diff --git a/release-0.19.0/docs/kibana.png b/release-0.19.0/docs/kibana.png deleted file mode 100644 index 91375ece2a5..00000000000 Binary files a/release-0.19.0/docs/kibana.png and /dev/null differ diff --git a/release-0.19.0/docs/kubeconfig-file.md b/release-0.19.0/docs/kubeconfig-file.md deleted file mode 100644 index 739eac51d7c..00000000000 --- a/release-0.19.0/docs/kubeconfig-file.md +++ /dev/null @@ -1,155 +0,0 @@ -# kubeconfig files -In order to easily switch between multiple clusters, a kubeconfig file was defined. This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname. - -Multiple kubeconfig files are allowed. At runtime they are loaded and merged together along with override options specified from the command line (see rules below). - -## Related discussion -https://github.com/GoogleCloudPlatform/kubernetes/issues/1755 - -## Example kubeconfig file -``` -apiVersion: v1 -clusters: -- cluster: - api-version: v1 - server: http://cow.org:8080 - name: cow-cluster -- cluster: - certificate-authority: path/to/my/cafile - server: https://horse.org:4443 - name: horse-cluster -- cluster: - insecure-skip-tls-verify: true - server: https://pig.org:443 - name: pig-cluster -contexts: -- context: - cluster: horse-cluster - namespace: chisel-ns - user: green-user - name: federal-context -- context: - cluster: pig-cluster - namespace: saw-ns - user: black-user - name: queen-anne-context -current-context: federal-context -kind: Config -preferences: - colors: true -users: -- name: blue-user - user: - token: blue-token -- name: green-user - user: - client-certificate: path/to/my/client/cert - client-key: path/to/my/client/key -``` - -## Loading and merging rules -The rules for loading and merging the kubeconfig files are straightforward, but there are a lot of them. The final config is built in this order: - 1. Get the kubeconfig from disk. This is done with the following hierarchy and merge rules: - - - If the CommandLineLocation (the value of the `kubeconfig` command line option) is set, use this file only. No merging. Only one instance of this flag is allowed. - - - Else, if EnvVarLocation (the value of $KUBECONFIG) is available, use it as a list of files that should be merged. - Merge files together based on the following rules. - Empty filenames are ignored. Files with non-deserializable content produced errors. - The first file to set a particular value or map key wins and the value or map key is never changed. - This means that the first file to set CurrentContext will have its context preserved. It also means that if two files specify a "red-user", only values from the first file's red-user are used. Even non-conflicting entries from the second file's "red-user" are discarded. - - - Otherwise, use HomeDirectoryLocation (~/.kube/config) with no merging. - 1. Determine the context to use based on the first hit in this chain - 1. command line argument - the value of the `context` command line option - 1. current-context from the merged kubeconfig file - 1. Empty is allowed at this stage - 1. Determine the cluster info and user to use. 
At this point, we may or may not have a context. They are built based on the first hit in this chain. (run it twice, once for user, once for cluster) - 1. command line argument - `user` for user name and `cluster` for cluster name - 1. If context is present, then use the context's value - 1. Empty is allowed - 1. Determine the actual cluster info to use. At this point, we may or may not have a cluster info. Build each piece of the cluster info based on the chain (first hit wins): - 1. command line arguments - `server`, `api-version`, `certificate-authority`, and `insecure-skip-tls-verify` - 1. If cluster info is present and a value for the attribute is present, use it. - 1. If you don't have a server location, error. - 1. Determine the actual user info to use. User is built using the same rules as cluster info, EXCEPT that you can only have one authentication technique per user. - 1. Load precedence is 1) command line flag, 2) user fields from kubeconfig - 1. The command line flags are: `client-certificate`, `client-key`, `username`, `password`, and `token`. - 1. If there are two conflicting techniques, fail. - 1. For any information still missing, use default values and potentially prompt for authentication information - -## Manipulation of kubeconfig via `kubectl config ` -In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help. -See [docs/kubectl_config.md](kubectl_config.md) for help. - -### Example -``` -$kubectl config set-credentials myself --username=admin --password=secret -$kubectl config set-cluster local-server --server=http://localhost:8080 -$kubectl config set-context default-context --cluster=local-server --user=myself -$kubectl config use-context default-context -$kubectl config set contexts.default-context.namespace the-right-prefix -$kubectl config view -``` -produces this output -``` -clusters: - local-server: - server: http://localhost:8080 -contexts: - default-context: - cluster: local-server - namespace: the-right-prefix - user: myself -current-context: default-context -preferences: {} -users: - myself: - username: admin - password: secret - -``` -and a kubeconfig file that looks like this -``` -apiVersion: v1 -clusters: -- cluster: - server: http://localhost:8080 - name: local-server -contexts: -- context: - cluster: local-server - namespace: the-right-prefix - user: myself - name: default-context -current-context: default-context -kind: Config -preferences: {} -users: -- name: myself - user: - username: admin - password: secret -``` - -#### Commands for the example file -``` -$kubectl config set preferences.colors true -$kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1 -$kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile -$kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true -$kubectl config set-credentials blue-user --token=blue-token -$kubectl config set-credentials green-user --client-certificate=path/to/my/client/cert --client-key=path/to/my/client/key -$kubectl config set-context queen-anne-context --cluster=pig-cluster --user=black-user --namespace=saw-ns -$kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns -$kubectl config use-context federal-context -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubeconfig-file.md?pixel)]() - - 
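A quick way to see the merging rules above in action is to point `$KUBECONFIG` at more than one file and inspect the result. The second path here is purely illustrative; on Linux and OS X the paths are colon-separated:

```
export KUBECONFIG=$HOME/.kube/config:/path/to/team/kubeconfig
kubectl config view
```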
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubeconfig-file.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl.md b/release-0.19.0/docs/kubectl.md deleted file mode 100644 index 6fb9360414f..00000000000 --- a/release-0.19.0/docs/kubectl.md +++ /dev/null @@ -1,73 +0,0 @@ -## kubectl - -kubectl controls the Kubernetes cluster manager - -### Synopsis - - -kubectl controls the Kubernetes cluster manager. - -Find more information at https://github.com/GoogleCloudPlatform/kubernetes. - -``` -kubectl -``` - -### Options - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - -h, --help=false: help for kubectl - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl api-versions](kubectl_api-versions.md) - Print available API versions. -* [kubectl cluster-info](kubectl_cluster-info.md) - Display cluster info -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files -* [kubectl create](kubectl_create.md) - Create a resource by filename or stdin -* [kubectl delete](kubectl_delete.md) - Delete a resource by filename, stdin, resource and ID, or by resources and label selector. -* [kubectl describe](kubectl_describe.md) - Show details of a specific resource -* [kubectl exec](kubectl_exec.md) - Execute a command in a container. -* [kubectl expose](kubectl_expose.md) - Take a replicated application and expose it as Kubernetes Service -* [kubectl get](kubectl_get.md) - Display one or many resources -* [kubectl label](kubectl_label.md) - Update the labels on a resource -* [kubectl logs](kubectl_logs.md) - Print the logs for a container in a pod. -* [kubectl namespace](kubectl_namespace.md) - SUPERCEDED: Set and view the current Kubernetes namespace -* [kubectl port-forward](kubectl_port-forward.md) - Forward one or more local ports to a pod. 
-* [kubectl proxy](kubectl_proxy.md) - Run a proxy to the Kubernetes API server -* [kubectl rolling-update](kubectl_rolling-update.md) - Perform a rolling update of the given ReplicationController. -* [kubectl run](kubectl_run.md) - Run a particular image on the cluster. -* [kubectl scale](kubectl_scale.md) - Set a new size for a Replication Controller. -* [kubectl stop](kubectl_stop.md) - Gracefully shut down a resource by id or filename. -* [kubectl update](kubectl_update.md) - Update a resource by filename or stdin. -* [kubectl version](kubectl_version.md) - Print the client and server version information. - -###### Auto generated by spf13/cobra at 2015-05-22 14:24:30.1784975 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_api-versions.md b/release-0.19.0/docs/kubectl_api-versions.md deleted file mode 100644 index f5cd8f49b7d..00000000000 --- a/release-0.19.0/docs/kubectl_api-versions.md +++ /dev/null @@ -1,57 +0,0 @@ -## kubectl api-versions - -Print available API versions. - -### Synopsis - - -Print available API versions. - -``` -kubectl api-versions -``` - -### Options - -``` - -h, --help=false: help for api-versions -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.231770799 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_api-versions.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_api-versions.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_cluster-info.md b/release-0.19.0/docs/kubectl_cluster-info.md deleted file mode 100644 index 531dc89794a..00000000000 --- a/release-0.19.0/docs/kubectl_cluster-info.md +++ /dev/null @@ -1,57 +0,0 @@ -## kubectl cluster-info - -Display cluster info - -### Synopsis - - -Display addresses of the master and services with label kubernetes.io/cluster-service=true - -``` -kubectl cluster-info -``` - -### Options - -``` - -h, --help=false: help for cluster-info -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.230831561 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_cluster-info.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_cluster-info.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config.md b/release-0.19.0/docs/kubectl_config.md deleted file mode 100644 index 6f909c6b207..00000000000 --- a/release-0.19.0/docs/kubectl_config.md +++ /dev/null @@ -1,70 +0,0 @@ -## kubectl config - -config modifies kubeconfig files - -### Synopsis - - -config modifies kubeconfig files using subcommands like "kubectl config set current-context my-context" - -The loading order follows these rules: - 1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place. - 2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. - 3. Otherwise, ${HOME}/.kube/config is used and no merging takes place. - - -``` -kubectl config SUBCOMMAND -``` - -### Options - -``` - -h, --help=false: help for config - --kubeconfig="": use a particular kubeconfig file -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server.
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager -* [kubectl config set](kubectl_config_set.md) - Sets an individual value in a kubeconfig file -* [kubectl config set-cluster](kubectl_config_set-cluster.md) - Sets a cluster entry in kubeconfig -* [kubectl config set-context](kubectl_config_set-context.md) - Sets a context entry in kubeconfig -* [kubectl config set-credentials](kubectl_config_set-credentials.md) - Sets a user entry in kubeconfig -* [kubectl config unset](kubectl_config_unset.md) - Unsets an individual value in a kubeconfig file -* [kubectl config use-context](kubectl_config_use-context.md) - Sets the current-context in a kubeconfig file -* [kubectl config view](kubectl_config_view.md) - displays Merged kubeconfig settings or a specified kubeconfig file. - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.229842268 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_set-cluster.md b/release-0.19.0/docs/kubectl_config_set-cluster.md deleted file mode 100644 index 1ca4e740cbc..00000000000 --- a/release-0.19.0/docs/kubectl_config_set-cluster.md +++ /dev/null @@ -1,72 +0,0 @@ -## kubectl config set-cluster - -Sets a cluster entry in kubeconfig - -### Synopsis - - -Sets a cluster entry in kubeconfig. -Specifying a name that already exists will merge new fields on top of existing values for those fields. - -``` -kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certficate/authority] [--api-version=apiversion] [--insecure-skip-tls-verify=true] -``` - -### Examples - -``` -// Set only the server field on the e2e cluster entry without touching other values. -$ kubectl config set-cluster e2e --server=https://1.2.3.4 - -// Embed certificate authority data for the e2e cluster entry -$ kubectl config set-cluster e2e --certificate-authority=~/.kube/e2e/kubernetes.ca.crt - -// Disable cert checking for the dev cluster entry -$ kubectl config set-cluster e2e --insecure-skip-tls-verify=true -``` - -### Options - -``` - --api-version=: api-version for the cluster entry in kubeconfig - --certificate-authority=: path to certificate-authority for the cluster entry in kubeconfig - --embed-certs=false: embed-certs for the cluster entry in kubeconfig - -h, --help=false: help for set-cluster - --insecure-skip-tls-verify=false: insecure-skip-tls-verify for the cluster entry in kubeconfig - --server=: server for the cluster entry in kubeconfig -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. 
- --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.222182293 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set-cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_set-cluster.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_set-context.md b/release-0.19.0/docs/kubectl_config_set-context.md deleted file mode 100644 index ea8a19164e4..00000000000 --- a/release-0.19.0/docs/kubectl_config_set-context.md +++ /dev/null @@ -1,65 +0,0 @@ -## kubectl config set-context - -Sets a context entry in kubeconfig - -### Synopsis - - -Sets a context entry in kubeconfig -Specifying a name that already exists will merge new fields on top of existing values for those fields. - -``` -kubectl config set-context NAME [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace] -``` - -### Examples - -``` -// Set the user field on the gce context entry without touching other values -$ kubectl config set-context gce --user=cluster-admin -``` - -### Options - -``` - --cluster=: cluster for the context entry in kubeconfig - -h, --help=false: help for set-context - --namespace=: namespace for the context entry in kubeconfig - --user=: user for the context entry in kubeconfig -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
- --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.225463229 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set-context.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_set-context.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_set-credentials.md b/release-0.19.0/docs/kubectl_config_set-credentials.md deleted file mode 100644 index 9093c8396f1..00000000000 --- a/release-0.19.0/docs/kubectl_config_set-credentials.md +++ /dev/null @@ -1,85 +0,0 @@ -## kubectl config set-credentials - -Sets a user entry in kubeconfig - -### Synopsis - - -Sets a user entry in kubeconfig -Specifying a name that already exists will merge new fields on top of existing values. - - Client-certificate flags: - --client-certificate=certfile --client-key=keyfile - - Bearer token flags: - --token=bearer_token - - Basic auth flags: - --username=basic_user --password=basic_password - - Bearer token and basic auth are mutually exclusive. - - -``` -kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password] -``` - -### Examples - -``` -// Set only the "client-key" field on the "cluster-admin" -// entry, without touching other values: -$ kubectl set-credentials cluster-admin --client-key=~/.kube/admin.key - -// Set basic auth for the "cluster-admin" entry -$ kubectl set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif - -// Embed client certificate data in the "cluster-admin" entry -$ kubectl set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true -``` - -### Options - -``` - --client-certificate=: path to client-certificate for the user entry in kubeconfig - --client-key=: path to client-key for the user entry in kubeconfig - --embed-certs=false: embed client cert/key for the user entry in kubeconfig - -h, --help=false: help for set-credentials - --password=: password for the user entry in kubeconfig - --token=: token for the user entry in kubeconfig - --username=: username for the user entry in kubeconfig -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. 
- --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --user="": The name of the kubeconfig user to use - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.22419139 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set-credentials.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_set-credentials.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_set.md b/release-0.19.0/docs/kubectl_config_set.md deleted file mode 100644 index 024b576c307..00000000000 --- a/release-0.19.0/docs/kubectl_config_set.md +++ /dev/null @@ -1,59 +0,0 @@ -## kubectl config set - -Sets an individual value in a kubeconfig file - -### Synopsis - - -Sets an individual value in a kubeconfig file -PROPERTY_NAME is a dot delimited name where each token represents either a attribute name or a map key. Map keys may not contain dots. -PROPERTY_VALUE is the new value you wish to set. - -``` -kubectl config set PROPERTY_NAME PROPERTY_VALUE -``` - -### Options - -``` - -h, --help=false: help for set -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. 
- -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.226564217 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_set.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_unset.md b/release-0.19.0/docs/kubectl_config_unset.md deleted file mode 100644 index 0cceec32ae2..00000000000 --- a/release-0.19.0/docs/kubectl_config_unset.md +++ /dev/null @@ -1,58 +0,0 @@ -## kubectl config unset - -Unsets an individual value in a kubeconfig file - -### Synopsis - - -Unsets an individual value in a kubeconfig file -PROPERTY_NAME is a dot delimited name where each token represents either a attribute name or a map key. Map keys may not contain dots. - -``` -kubectl config unset PROPERTY_NAME -``` - -### Options - -``` - -h, --help=false: help for unset -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.228039789 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_unset.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_unset.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_use-context.md b/release-0.19.0/docs/kubectl_config_use-context.md deleted file mode 100644 index 1222f008a25..00000000000 --- a/release-0.19.0/docs/kubectl_config_use-context.md +++ /dev/null @@ -1,57 +0,0 @@ -## kubectl config use-context - -Sets the current-context in a kubeconfig file - -### Synopsis - - -Sets the current-context in a kubeconfig file - -``` -kubectl config use-context CONTEXT_NAME -``` - -### Options - -``` - -h, --help=false: help for use-context -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.228948447 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_use-context.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_use-context.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_config_view.md b/release-0.19.0/docs/kubectl_config_view.md deleted file mode 100644 index 15e4f7f1776..00000000000 --- a/release-0.19.0/docs/kubectl_config_view.md +++ /dev/null @@ -1,77 +0,0 @@ -## kubectl config view - -displays Merged kubeconfig settings or a specified kubeconfig file. - -### Synopsis - - -displays Merged kubeconfig settings or a specified kubeconfig file. - -You can use --output=template --template=TEMPLATE to extract specific values. - -``` -kubectl config view -``` - -### Examples - -``` -// Show Merged kubeconfig settings. -$ kubectl config view - -// Get the password for the e2e user -$ kubectl config view -o template --template='{{range .users}}{{ if eq .name "e2e" }}{{ index .user.password }}{{end}}{{end}}' -``` - -### Options - -``` - --flatten=false: flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files) - -h, --help=false: help for view - --merge=true: merge together the full hierarchy of kubeconfig files - --minify=false: remove all information not used by current-context from the output - --no-headers=false: When using the default output, don't print headers. - -o, --output="": Output format. One of: json|yaml|template|templatefile. - --output-version="": Output the formatted object with the given version (default api-version). - --raw=false: display raw byte data - -t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview] -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": use a particular kubeconfig file - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. 
- -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl config](kubectl_config.md) - config modifies kubeconfig files - -###### Auto generated by spf13/cobra at 2015-06-09 19:55:35.92095292 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_view.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_config_view.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_create.md b/release-0.19.0/docs/kubectl_create.md deleted file mode 100644 index 106102df054..00000000000 --- a/release-0.19.0/docs/kubectl_create.md +++ /dev/null @@ -1,70 +0,0 @@ -## kubectl create - -Create a resource by filename or stdin - -### Synopsis - - -Create a resource by filename or stdin. - -JSON and YAML formats are accepted. - -``` -kubectl create -f FILENAME -``` - -### Examples - -``` -// Create a pod using the data in pod.json. -$ kubectl create -f pod.json - -// Create a pod based on the JSON passed into stdin. -$ cat pod.json | kubectl create -f - -``` - -### Options - -``` - -f, --filename=[]: Filename, directory, or URL to file to use to create the resource - -h, --help=false: help for create -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.178299587 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_create.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_create.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_delete.md b/release-0.19.0/docs/kubectl_delete.md deleted file mode 100644 index f4a93b62ef6..00000000000 --- a/release-0.19.0/docs/kubectl_delete.md +++ /dev/null @@ -1,92 +0,0 @@ -## kubectl delete - -Delete a resource by filename, stdin, resource and ID, or by resources and label selector. - -### Synopsis - - -Delete a resource by filename, stdin, resource and ID, or by resources and label selector. - -JSON and YAML formats are accepted. - -If both a filename and command line arguments are passed, the command line -arguments are used and the filename is ignored. - -Note that the delete command does NOT do resource version checks, so if someone -submits an update to a resource right when you submit a delete, their update -will be lost along with the rest of the resource. - -``` -kubectl delete ([-f FILENAME] | (RESOURCE [(ID | -l label | --all)] -``` - -### Examples - -``` -// Delete a pod using the type and ID specified in pod.json. -$ kubectl delete -f pod.json - -// Delete a pod based on the type and ID in the JSON passed into stdin. -$ cat pod.json | kubectl delete -f - - -// Delete pods and services with label name=myLabel. -$ kubectl delete pods,services -l name=myLabel - -// Delete a pod with ID 1234-56-7890-234234-456456. -$ kubectl delete pod 1234-56-7890-234234-456456 - -// Delete all pods -$ kubectl delete pods --all -``` - -### Options - -``` - --all=false: [-all] to select all the specified resources. - --cascade=true: If true, cascade the delete resources managed by this resource (e.g. Pods created by a ReplicationController). Default true. - -f, --filename=[]: Filename, directory, or URL to a file containing the resource to delete. - --grace-period=-1: Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. - -h, --help=false: help for delete - --ignore-not-found=false: Treat "resource not found" as a successful delete. - -l, --selector="": Selector (label query) to filter on. - --timeout=0: The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. 
- --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-06-03 18:21:01.053120485 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_delete.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_delete.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_describe.md b/release-0.19.0/docs/kubectl_describe.md deleted file mode 100644 index 51aa400da61..00000000000 --- a/release-0.19.0/docs/kubectl_describe.md +++ /dev/null @@ -1,70 +0,0 @@ -## kubectl describe - -Show details of a specific resource - -### Synopsis - - -Show details of a specific resource. - -This command joins many API calls together to form a detailed description of a -given resource. - -``` -kubectl describe (RESOURCE NAME | RESOURCE/NAME) -``` - -### Examples - -``` -// Describe a node -$ kubectl describe nodes kubernetes-minion-emt8.c.myproject.internal - -// Describe a pod -$ kubectl describe pods/nginx -``` - -### Options - -``` - -h, --help=false: help for describe -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. 
- -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.177122438 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_describe.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_describe.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_exec.md b/release-0.19.0/docs/kubectl_exec.md deleted file mode 100644 index a31c8c0e1e2..00000000000 --- a/release-0.19.0/docs/kubectl_exec.md +++ /dev/null @@ -1,74 +0,0 @@ -## kubectl exec - -Execute a command in a container. - -### Synopsis - - -Execute a command in a container. - -``` -kubectl exec POD -c CONTAINER -- COMMAND [args...] -``` - -### Examples - -``` -// get output from running 'date' from pod 123456-7890, using the first container by default -$ kubectl exec 123456-7890 date - -// get output from running 'date' in ruby-container from pod 123456-7890 -$ kubectl exec 123456-7890 -c ruby-container date - -//switch to raw terminal mode, sends stdin to 'bash' in ruby-container from pod 123456-780 and sends stdout/stderr from 'bash' back to the client -$ kubectl exec 123456-7890 -c ruby-container -i -t -- bash -il -``` - -### Options - -``` - -c, --container="": Container name - -h, --help=false: help for exec - -p, --pod="": Pod name - -i, --stdin=false: Pass stdin to the container - -t, --tty=false: Stdin is a TTY -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. 
- --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-27 22:47:02.898315735 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_exec.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_exec.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_expose.md b/release-0.19.0/docs/kubectl_expose.md deleted file mode 100644 index ef564e15bc1..00000000000 --- a/release-0.19.0/docs/kubectl_expose.md +++ /dev/null @@ -1,91 +0,0 @@ -## kubectl expose - -Take a replicated application and expose it as Kubernetes Service - -### Synopsis - - -Take a replicated application and expose it as Kubernetes Service. - -Looks up a replication controller or service by name and uses the selector for that resource as the -selector for a new Service on the specified port. If no labels are specified, the new service will -re-use the labels from the resource it exposes. - -``` -kubectl expose RESOURCE NAME --port=port [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--public-ip=ip] [--type=type] -``` - -### Examples - -``` -// Creates a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000. -$ kubectl expose rc nginx --port=80 --target-port=8000 - -// Creates a second service based on the above service, exposing the container port 8443 as port 443 with the name "nginx-https" -$ kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https - -// Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'. -$ kubectl expose rc streamer --port=4100 --protocol=udp --name=video-stream -``` - -### Options - -``` - --container-port="": Synonym for --target-port - --create-external-load-balancer=false: If true, create an external load balancer for this service (trumped by --type). Implementation is cloud provider dependent. Default is 'false'. - --dry-run=false: If true, only print the object that would be sent, without creating it. - --generator="service/v1": The name of the API generator to use. Default is 'service/v1'. - -h, --help=false: help for expose - -l, --labels="": Labels to apply to the service created by this call. - --name="": The name for the newly created object. - --no-headers=false: When using the default output, don't print headers. - -o, --output="": Output format. One of: json|yaml|template|templatefile. - --output-version="": Output the formatted object with the given version (default api-version). - --overrides="": An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field. - --port=-1: The port that the service should serve on. Required. - --protocol="TCP": The network protocol for the service to be created. Default is 'tcp'. - --public-ip="": Name of a public IP address to set for the service. The service will be assigned this IP in addition to its generated service IP. - --selector="": A label selector to use for this service. 
If empty (the default) infer the selector from the replication controller. - --target-port="": Name or number for the port on the container that the service should direct traffic to. Optional. - -t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview] - --type="": Type for this service: ClusterIP, NodePort, or LoadBalancer. Default is 'ClusterIP' unless --create-external-load-balancer is specified. -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-06-02 11:05:52.857144556 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_expose.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_expose.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_get.md b/release-0.19.0/docs/kubectl_get.md deleted file mode 100644 index a3a2c35c43e..00000000000 --- a/release-0.19.0/docs/kubectl_get.md +++ /dev/null @@ -1,95 +0,0 @@ -## kubectl get - -Display one or many resources - -### Synopsis - - -Display one or many resources. - -Possible resources include pods (po), replication controllers (rc), services -(svc), nodes, events (ev), component statuses (cs), limit ranges (limits), -nodes (no), persistent volumes (pv), persistent volume claims (pvc) -or resource quotas (quota). - -By specifying the output as 'template' and providing a Go template as the value -of the --template flag, you can filter the attributes of the fetched resource(s). - -``` -kubectl get [(-o|--output=)json|yaml|template|...] (RESOURCE [NAME] | RESOURCE/NAME ...) 
-``` - -### Examples - -``` -// List all pods in ps output format. -$ kubectl get pods - -// List a single replication controller with specified NAME in ps output format. -$ kubectl get replicationcontroller web - -// List a single pod in JSON output format. -$ kubectl get -o json pod web-pod-13je7 - -// Return only the phase value of the specified pod. -$ kubectl get -o template web-pod-13je7 --template={{.status.phase}} --api-version=v1 - -// List all replication controllers and services together in ps output format. -$ kubectl get rc,services - -// List one or more resources by their type and names -$ kubectl get rc/web service/frontend pods/web-pod-13je7 -``` - -### Options - -``` - --all-namespaces=false: If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace. - -h, --help=false: help for get - --no-headers=false: When using the default output, don't print headers. - -o, --output="": Output format. One of: json|yaml|template|templatefile. - --output-version="": Output the formatted object with the given version (default api-version). - -l, --selector="": Selector (label query) to filter on - -t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview] - -w, --watch=false: After listing/getting the requested object, watch for changes. - --watch-only=false: Watch for changes to the requested object(s), without listing/getting first. -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-06-05 21:08:36.511279339 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_get.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_get.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_label.md b/release-0.19.0/docs/kubectl_label.md deleted file mode 100644 index 6f39e6d4e1d..00000000000 --- a/release-0.19.0/docs/kubectl_label.md +++ /dev/null @@ -1,89 +0,0 @@ -## kubectl label - -Update the labels on a resource - -### Synopsis - - -Update the labels on a resource. - -A label must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters. -If --overwrite is true, then existing labels can be overwritten, otherwise attempting to overwrite a label will result in an error. -If --resource-version is specified, then updates will use this resource version, otherwise the existing resource-version will be used. - -``` -kubectl label [--overwrite] RESOURCE NAME KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version] -``` - -### Examples - -``` -// Update pod 'foo' with the label 'unhealthy' and the value 'true'. -$ kubectl label pods foo unhealthy=true - -// Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value. -$ kubectl label --overwrite pods foo status=unhealthy - -// Update all pods in the namespace -$ kubectl label pods --all status=unhealthy - -// Update pod 'foo' only if the resource is unchanged from version 1. -$ kubectl label pods foo status=unhealthy --resource-version=1 - -// Update pod 'foo' by removing a label named 'bar' if it exists. -// Does not require the --overwrite flag. -$ kubectl label pods foo bar- -``` - -### Options - -``` - --all=false: select all resources in the namespace of the specified resource types - -h, --help=false: help for label - --no-headers=false: When using the default output, don't print headers. - -o, --output="": Output format. One of: json|yaml|template|templatefile. - --output-version="": Output the formatted object with the given version (default api-version). - --overwrite=false: If true, allow labels to be overwritten, otherwise reject label updates that overwrite existing labels. - --resource-version="": If non-empty, the labels update will only succeed if this is the current resource-version for the object. Only valid when specifying a single resource. - -l, --selector="": Selector (label query) to filter on - -t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview] -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. 
- --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-28 08:44:48.996047458 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_label.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_label.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_logs.md b/release-0.19.0/docs/kubectl_logs.md deleted file mode 100644 index 6168af61819..00000000000 --- a/release-0.19.0/docs/kubectl_logs.md +++ /dev/null @@ -1,73 +0,0 @@ -## kubectl logs - -Print the logs for a container in a pod. - -### Synopsis - - -Print the logs for a container in a pod. If the pod has only one container, the container name is optional. - -``` -kubectl logs [-f] [-p] POD [CONTAINER] -``` - -### Examples - -``` -// Returns snapshot of ruby-container logs from pod 123456-7890. -$ kubectl logs 123456-7890 ruby-container - -// Returns snapshot of previous terminated ruby-container logs from pod 123456-7890. -$ kubectl logs -p 123456-7890 ruby-container - -// Starts streaming of ruby-container logs from pod 123456-7890. -$ kubectl logs -f 123456-7890 ruby-container -``` - -### Options - -``` - -f, --follow=false: Specify if the logs should be streamed. - -h, --help=false: help for logs - --interactive=true: If true, prompt the user for input when required. Default true. - -p, --previous=false: If true, print the logs for the previous instance of the container in a pod if it exists. -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 20:24:03.06578685 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_logs.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_logs.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_namespace.md b/release-0.19.0/docs/kubectl_namespace.md deleted file mode 100644 index 8b79872f7c5..00000000000 --- a/release-0.19.0/docs/kubectl_namespace.md +++ /dev/null @@ -1,60 +0,0 @@ -## kubectl namespace - -SUPERCEDED: Set and view the current Kubernetes namespace - -### Synopsis - - -SUPERCEDED: Set and view the current Kubernetes namespace scope for command line requests. - -namespace has been superceded by the context.namespace field of .kubeconfig files. See 'kubectl config set-context --help' for more details. - - -``` -kubectl namespace [namespace] -``` - -### Options - -``` - -h, --help=false: help for namespace -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. 
- -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.181662849 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_namespace.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_namespace.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_port-forward.md b/release-0.19.0/docs/kubectl_port-forward.md deleted file mode 100644 index b9a4abfa7fc..00000000000 --- a/release-0.19.0/docs/kubectl_port-forward.md +++ /dev/null @@ -1,75 +0,0 @@ -## kubectl port-forward - -Forward one or more local ports to a pod. - -### Synopsis - - -Forward one or more local ports to a pod. - -``` -kubectl port-forward -p POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] -``` - -### Examples - -``` - -// listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod -$ kubectl port-forward -p mypod 5000 6000 - -// listens on port 8888 locally, forwarding to 5000 in the pod -$ kubectl port-forward -p mypod 8888:5000 - -// listens on a random port locally, forwarding to 5000 in the pod -$ kubectl port-forward -p mypod :5000 - -// listens on a random port locally, forwarding to 5000 in the pod -$ kubectl port-forward -p mypod 0:5000 -``` - -### Options - -``` - -h, --help=false: help for port-forward - -p, --pod="": Pod name -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.187520496 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_port-forward.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_port-forward.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_proxy.md b/release-0.19.0/docs/kubectl_proxy.md deleted file mode 100644 index 572a851a4e3..00000000000 --- a/release-0.19.0/docs/kubectl_proxy.md +++ /dev/null @@ -1,87 +0,0 @@ -## kubectl proxy - -Run a proxy to the Kubernetes API server - -### Synopsis - - -To proxy all of the kubernetes api and nothing else, use: - -kubectl proxy --api-prefix=/ - -To proxy only part of the kubernetes api and also some static files: - -kubectl proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ - -The above lets you 'curl localhost:8001/api/v1/pods'. - -To proxy the entire kubernetes api at a different root, use: - -kubectl proxy --api-prefix=/custom/ - -The above lets you 'curl localhost:8001/custom/api/v1/pods' - - -``` -kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] -``` - -### Examples - -``` -// Run a proxy to kubernetes apiserver on port 8011, serving static content from ./local/www/ -$ kubectl proxy --port=8011 --www=./local/www/ - -// Run a proxy to kubernetes apiserver, changing the api prefix to k8s-api -// This makes e.g. the pods api available at localhost:8011/k8s-api/v1/pods/ -$ kubectl proxy --api-prefix=/k8s-api -``` - -### Options - -``` - --api-prefix="/api/": Prefix to serve the proxied API under. - -h, --help=false: help for proxy - -p, --port=8001: The port on which to run the proxy. - -w, --www="": Also serve static files from the given directory under the specified prefix. - -P, --www-prefix="/static/": Prefix to serve static files under, if static file directory is specified. -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. 
- -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-06-05 21:08:36.513099878 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_proxy.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_proxy.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_rolling-update.md b/release-0.19.0/docs/kubectl_rolling-update.md deleted file mode 100644 index 06a8fa38dcd..00000000000 --- a/release-0.19.0/docs/kubectl_rolling-update.md +++ /dev/null @@ -1,91 +0,0 @@ -## kubectl rolling-update - -Perform a rolling update of the given ReplicationController. - -### Synopsis - - -Perform a rolling update of the given ReplicationController. - -Replaces the specified controller with new controller, updating one pod at a time to use the -new PodTemplate. The new-controller.json must specify the same namespace as the -existing controller and overwrite at least one (common) label in its replicaSelector. - -``` -kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) -``` - -### Examples - -``` -// Update pods of frontend-v1 using new controller data in frontend-v2.json. -$ kubectl rolling-update frontend-v1 -f frontend-v2.json - -// Update pods of frontend-v1 using JSON data passed into stdin. -$ cat frontend-v2.json | kubectl rolling-update frontend-v1 -f - - -// Update the pods of frontend-v1 to frontend-v2 by just changing the image, and switching the -// name of the replication controller. -$ kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 - -// Update the pods of frontend by just changing the image, and keeping the old name -$ kubectl rolling-update frontend --image=image:v2 - -``` - -### Options - -``` - --deployment-label-key="deployment": The key to use to differentiate between two different controllers, default 'deployment'. Only relevant when --image is specified, ignored otherwise - --dry-run=false: If true, print out the changes that would be made, but don't actually make them. - -f, --filename="": Filename or URL to file to use to create the new controller. - -h, --help=false: help for rolling-update - --image="": Image to upgrade the controller to. Can not be used with --filename/-f - --no-headers=false: When using the default output, don't print headers. - -o, --output="": Output format. One of: json|yaml|template|templatefile. - --output-version="": Output the formatted object with the given version (default api-version). - --poll-interval="3s": Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - --rollback=false: If true, this is a request to abort an existing rollout that is partially rolled out. 
It effectively reverses current and next and runs a rollout - -t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview] - --timeout="5m0s": Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - --update-period="1m0s": Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.184123104 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_rolling-update.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_rolling-update.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_run.md b/release-0.19.0/docs/kubectl_run.md deleted file mode 100644 index 349cba853b3..00000000000 --- a/release-0.19.0/docs/kubectl_run.md +++ /dev/null @@ -1,86 +0,0 @@ -## kubectl run - -Run a particular image on the cluster. - -### Synopsis - - -Create and run a particular image, possibly replicated. -Creates a replication controller to manage the created container(s). - -``` -kubectl run NAME --image=image [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] -``` - -### Examples - -``` -// Starts a single instance of nginx. -$ kubectl run nginx --image=nginx - -// Starts a replicated instance of nginx. -$ kubectl run nginx --image=nginx --replicas=5 - -// Dry run. Print the corresponding API objects without creating them. 
-$ kubectl run nginx --image=nginx --dry-run - -// Start a single instance of nginx, but overload the spec of the replication controller with a partial set of values parsed from JSON. -$ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' -``` - -### Options - -``` - --dry-run=false: If true, only print the object that would be sent, without sending it. - --generator="run/v1": The name of the API generator to use. Default is 'run-controller/v1'. - -h, --help=false: help for run - --hostport=-1: The host port mapping for the container port. To demonstrate a single-machine container. - --image="": The image for the container to run. - -l, --labels="": Labels to apply to the pod(s). - --no-headers=false: When using the default output, don't print headers. - -o, --output="": Output format. One of: json|yaml|template|templatefile. - --output-version="": Output the formatted object with the given version (default api-version). - --overrides="": An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field. - --port=-1: The port that this container exposes. - -r, --replicas=1: Number of replicas to create for this container. Default is 1. - -t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview] -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-06-05 21:08:36.513272503 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_run.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_run.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_scale.md b/release-0.19.0/docs/kubectl_scale.md deleted file mode 100644 index 546951adaf1..00000000000 --- a/release-0.19.0/docs/kubectl_scale.md +++ /dev/null @@ -1,75 +0,0 @@ -## kubectl scale - -Set a new size for a Replication Controller. - -### Synopsis - - -Set a new size for a Replication Controller. - -Scale also allows users to specify one or more preconditions for the scale action. -If --current-replicas or --resource-version is specified, it is validated before the -scale is attempted, and it is guaranteed that the precondition holds true when the -scale is sent to the server. - -``` -kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT RESOURCE ID -``` - -### Examples - -``` -// Scale replication controller named 'foo' to 3. -$ kubectl scale --replicas=3 replicationcontrollers foo - -// If the replication controller named foo's current size is 2, scale foo to 3. -$ kubectl scale --current-replicas=2 --replicas=3 replicationcontrollers foo -``` - -### Options - -``` - --current-replicas=-1: Precondition for current size. Requires that the current size of the replication controller match this value in order to scale. - -h, --help=false: help for scale - --replicas=-1: The new desired number of replicas. Required. - --resource-version="": Precondition for resource version. Requires that the current resource version match this value in order to scale. -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. 
- --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.185268791 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_scale.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_scale.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_stop.md b/release-0.19.0/docs/kubectl_stop.md deleted file mode 100644 index ba56a33f478..00000000000 --- a/release-0.19.0/docs/kubectl_stop.md +++ /dev/null @@ -1,82 +0,0 @@ -## kubectl stop - -Gracefully shut down a resource by id or filename. - -### Synopsis - - -Gracefully shut down a resource by id or filename. - -Attempts to shut down and delete a resource that supports graceful termination. -If the resource is scalable it will be scaled to 0 before deletion. - -``` -kubectl stop (-f FILENAME | RESOURCE (ID | -l label | --all)) -``` - -### Examples - -``` -// Shut down foo. -$ kubectl stop replicationcontroller foo - -// Stop pods and services with label name=myLabel. -$ kubectl stop pods,services -l name=myLabel - -// Shut down the service defined in service.json -$ kubectl stop -f service.json - -// Shut down all resources in the path/to/resources directory -$ kubectl stop -f path/to/resources -``` - -### Options - -``` - --all=false: [-all] to select all the specified resources. - -f, --filename=[]: Filename, directory, or URL to file of resource(s) to be stopped. - --grace-period=-1: Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. - -h, --help=false: help for stop - --ignore-not-found=false: Treat "resource not found" as a successful stop. - -l, --selector="": Selector (label query) to filter on. - --timeout=0: The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. 
- -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. - --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-29 23:14:50.709764383 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_stop.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_stop.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_update.md b/release-0.19.0/docs/kubectl_update.md deleted file mode 100644 index 9471efb1a6b..00000000000 --- a/release-0.19.0/docs/kubectl_update.md +++ /dev/null @@ -1,70 +0,0 @@ -## kubectl update - -Update a resource by filename or stdin. - -### Synopsis - - -Update a resource by filename or stdin. - -JSON and YAML formats are accepted. - -``` -kubectl update -f FILENAME -``` - -### Examples - -``` -// Update a pod using the data in pod.json. -$ kubectl update -f pod.json - -// Update a pod based on the JSON passed into stdin. -$ cat pod.json | kubectl update -f - -``` - -### Options - -``` - -f, --filename=[]: Filename, directory, or URL to file to use to update the resource. - -h, --help=false: help for update -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-29 01:11:24.431126385 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_update.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_update.md?pixel)]() diff --git a/release-0.19.0/docs/kubectl_version.md b/release-0.19.0/docs/kubectl_version.md deleted file mode 100644 index 1c1dfe0fc38..00000000000 --- a/release-0.19.0/docs/kubectl_version.md +++ /dev/null @@ -1,58 +0,0 @@ -## kubectl version - -Print the client and server version information. - -### Synopsis - - -Print the client and server version information. - -``` -kubectl version -``` - -### Options - -``` - -c, --client=false: Client version only (no server required). - -h, --help=false: help for version -``` - -### Options inherited from parent commands - -``` - --alsologtostderr=false: log to standard error as well as files - --api-version="": The API version to use when talking to the server - --certificate-authority="": Path to a cert. file for the certificate authority. - --client-certificate="": Path to a client key file for TLS. - --client-key="": Path to a client key file for TLS. - --cluster="": The name of the kubeconfig cluster to use - --context="": The name of the kubeconfig context to use - --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - --kubeconfig="": Path to the kubeconfig file to use for CLI requests. - --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace - --log-dir=: If non-empty, write log files in this directory - --log-flush-frequency=5s: Maximum number of seconds between log flushes - --logtostderr=true: log to standard error instead of files - --match-server-version=false: Require server version to match client version - --namespace="": If present, the namespace scope for this CLI request. - --password="": Password for basic authentication to the API server. - -s, --server="": The address and port of the Kubernetes API server - --stderrthreshold=2: logs at or above this threshold go to stderr - --token="": Bearer token for authentication to the API server. - --user="": The name of the kubeconfig user to use - --username="": Username for basic authentication to the API server. 
- --v=0: log level for V logs - --validate=false: If true, use a schema to validate the input before sending it - --vmodule=: comma-separated list of pattern=N settings for file-filtered logging -``` - -### SEE ALSO -* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager - -###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.232741611 +0000 UTC - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_version.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/kubectl_version.md?pixel)]() diff --git a/release-0.19.0/docs/labels.md b/release-0.19.0/docs/labels.md deleted file mode 100644 index 50c7d3ff266..00000000000 --- a/release-0.19.0/docs/labels.md +++ /dev/null @@ -1,110 +0,0 @@ -# Labels - -_Labels_ are key/value pairs that are attached to objects, such as pods. -Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but which do not directly imply semantics to the core system. -Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. -Each object can have a set of key/value labels defined. Each Key must be unique for a given object. -``` -"labels": { - "key1" : "value1", - "key2" : "value2" -} -``` - -We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations.md). - - -## Motivation - -Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings. - -Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users. - -Example labels: - - * `"release" : "stable"`, `"release" : "canary"`, ... - * `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"` - * `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "middleware"` - * `"partition" : "customerA"`, `"partition" : "customerB"`, ... - * `"track" : "daily"`, `"track" : "weekly"` - -These are just examples; you are free to develop your own conventions. - - -## Syntax and character set - -_Labels_ are key value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`). -If the prefix is omitted, the label key is presumed to be private to the user. System components which use labels must specify a prefix. The `kubernetes.io/` prefix is reserved for kubernetes core components. 
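
For illustration only (this snippet is not taken from any shipped manifest), a label set that mixes a user-private key with a prefixed key owned by a hypothetical `example.com` component could look like:

```
"labels": {
  "environment" : "production",
  "example.com/release-track" : "canary"
}
```

Both keys satisfy the rules above: each name segment is a short alphanumeric string, and the prefix is a valid DNS subdomain followed by a slash.
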
- -Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. - -## Label selectors - -Unlike [names and UIDs](identifiers.md), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s). - -Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes. - -The API currently supports two types of selectors: _equality-based_ and _set-based_. -A label selector can be made of multiple _requirements_ which are comma-separated. In the case of multiple requirements, all must be satisfied so comma separator acts as an AND logical operator. - -### _Equality-based_ requirement - -_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must have all of the specified labels (both keys and values), though they may have additional labels as well. -Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ and are simply synonyms. While the latter represents _inequality_. For example: -``` -environment = production -tier != frontend -``` - -The former selects all resources with key equal to `environment` and value equal to `production`. -The latter selects all resources with key equal to `tier` and value distinct from `frontend`. -One could filter for resources in `production` but not `frontend` using the comma operator: `environment=production,tier!=frontend` - - -### _Set-based_ requirement - -_Set-based_ label requirements allow filtering keys according to a set of values. Matching objects must have all of the specified labels (i.e. all keys and at least one of the values specified for each key). Three kind of operators are supported: `in`,`notin` and exists (only the key identifier). For example: -``` -environment in (production, qa) -tier notin (frontend, backend) -partition -``` -The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`. -The second example selects all resources with key equal to `tier` and value other than `frontend` and `backend`. -The third example selects all resources including a label with key `partition`; no values are checked. -Similarly the comma separator acts as an _AND_ operator for example filtering resource with a `partition` key (not matter the value) and with `environment` different than `qa`. For example: `partition,environment notin (qa)`. -The _set-based_ label selector is a general form of equality since `environment=production` is equivalent to `environment in (production)`; similarly for `!=` and `notin`. - -_Set-based_ requirements can be mixed with _equality-based_ requirements. For example: `partition in (customerA, customerB),environment!=qa`. - - -## API - -LIST and WATCH operations may specify label selectors to filter the sets of objects returned using a query parameter. Both requirements are permitted: - - * _equality-based_ requirements: `?label-selector=key1%3Dvalue1,key2%3Dvalue2` - * _set-based_ requirements: `?label-selector=key+in+%28value1%2Cvalue2%29%2Ckey2+notin+%28value3` - -Kubernetes also currently supports two objects that use label selectors to keep track of their members, `service`s and `replicationcontroller`s: - -* `service`: A [service](services.md) is a configuration unit for the proxies that run on every worker node. 
It is named and points to one or more pods. -* `replicationcontroller`: A [replication controller](replication-controller.md) ensures that a specified number of pod "replicas" are running at any one time. - -The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` is monitoring is also defined with a label selector. For management convenience and consistency, `services` and `replicationcontrollers` may themselves have labels and would generally carry the labels their corresponding pods have in common. - -Sets identified by labels could be overlapping (think Venn diagrams). For instance, a service might target all pods with `"tier": "frontend"` and `"environment" : "prod"`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationcontroller` (with `replicas` set to 9) for the bulk of the replicas with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "stable"` and another `replicationcontroller` (with `replicas` set to 1) for the canary with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "canary"`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationcontrollers` separately to test things out, monitor the results, etc. - -Note that the superset described in the previous example is also heterogeneous. In long-lived, highly available, horizontally scaled, distributed, continuously evolving service applications, heterogeneity is inevitable, due to canaries, incremental rollouts, live reconfiguration, simultaneous updates and auto-scaling, hardware upgrades, and so on. - -Pods (and other objects) may belong to multiple sets simultaneously, which enables representation of service substructure and/or superstructure. In particular, labels are intended to facilitate the creation of non-hierarchical, multi-dimensional deployment structures. They are useful for a variety of management purposes (e.g., configuration, deployment) and for application introspection and analysis (e.g., logging, monitoring, alerting, analytics). Without the ability to form sets by intersecting labels, many implicitly related, overlapping flat sets would need to be created, for each subset and/or superset desired, which would lose semantic information and be difficult to keep consistent. Purely hierarchically nested sets wouldn't readily support slicing sets across different dimensions. - - -## Future developments - -Concerning API: we may extend such filtering to DELETE operations in the future. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/labels.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/labels.md?pixel)]() diff --git a/release-0.19.0/docs/logging.md b/release-0.19.0/docs/logging.md deleted file mode 100644 index 2c667f64a73..00000000000 --- a/release-0.19.0/docs/logging.md +++ /dev/null @@ -1,52 +0,0 @@ -# Logging - -## Logging by Kubernetes Components -Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [devel/logging.md](devel/logging.md). - -## Logging in Containers -There are no Kubernetes-specific requirements for logging from within containers. 
[search](https://www.google.com/?q=docker+container+logging) will turn up any number of articles about logging and -Docker containers. However, we do provide an example of how to collect, index, and view pod logs [using Fluentd, Elasticsearch, and Kibana](./getting-started-guides/logging.md) - - -## Logging to Elasticsearch on the GCE platform -Currently the collection of container logs using the [Fluentd](http://www.fluentd.org/) log collector is -enabled by default for clusters created for the GCE platform. Each node uses Fluentd to collect -the container logs which are submitted in [Logstash](http://logstash.net/docs/1.4.2/tutorials/getting-started-with-logstash) -format (in JSON) to an [Elasticsearch](http://www.elasticsearch.org/) cluster which runs as a Kubernetes service. -As of Kubernetes 0.11, when you create a cluster the console output reports the URL of both the Elasticsearch cluster as well as -a URL for a [Kibana](http://www.elasticsearch.org/overview/kibana/) dashboard viewer for the logs that have been ingested -into Elasticsearch. -``` -Elasticsearch is running at https://104.197.10.10/api/v1/proxy/namespaces/default/services/elasticsearch-logging -Kibana is running at https://104.197.10.10/api/v1/proxy/namespaces/default/services/kibana-logging -``` -Visiting the Kibana dashboard URL in a browser should give a display like this: -![Kibana](kibana.png) - -To learn how to query, filter etc. using Kibana you might like to look at this [tutorial](http://www.elasticsearch.org/guide/en/kibana/current/working-with-queries-and-filters.html). - -You can check to see if any logs are being ingested into Elasticsearch by curling against its URL. You will need to provide the username and password that was generated when your cluster was created. This can be found in the `kubernetes_auth` file for your cluster. -``` -$ curl -k -u admin:Drt3KdRGnoQL6TQM https://130.211.152.93/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?size=10 -``` -A [demonstration](../examples/logging-demo/README.md) of two synthetic logging sources can be used -to check that logging is working correctly. - -Cluster logging can be turned on or off using the environment variable `ENABLE_NODE_LOGGING` which is defined in the -`config-default.sh` file for each provider. For the GCE provider this is set by default to `true`. Set this -to `false` to disable cluster logging. - -The type of logging is used is specified by the environment variable `LOGGING_DESTINATION` which for the -GCE provider has the default value `elasticsearch`. If this is set to `gcp` for the GCE provider then -logs will be sent to the Google Cloud Logging system instead. - -When using Elasticsearch the number of Elasticsearch instances can be controlled by setting the -variable `ELASTICSEARCH_LOGGING_REPLICAS` which has the default value of `1`. For large clusters -or clusters that are generating log information at a high rate you may wish to use more -Elasticsearch instances. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/logging.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/logging.md?pixel)]() diff --git a/release-0.19.0/docs/making-release-notes.md b/release-0.19.0/docs/making-release-notes.md deleted file mode 100644 index 9002b90afd1..00000000000 --- a/release-0.19.0/docs/making-release-notes.md +++ /dev/null @@ -1,36 +0,0 @@ -## Making release notes -This documents the process for making release notes for a release. 
- -### 1) Note the PR number of the previous release -Find the PR that was merged with the previous release. Remember this number -_TODO_: Figure out a way to record this somewhere to save the next release engineer time. - -### 2) Build the release-notes tool -```bash -${KUBERNETES_ROOT}/build/make-release-notes.sh -``` - -### 3) Trim the release notes -This generates a list of the entire set of PRs merged since the last release. It is likely long -and many PRs aren't worth mentioning. - -Open up ```candidate-notes.md``` in your favorite editor. - -Remove, regroup, organize to your hearts content. - - -### 4) Update CHANGELOG.md -With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md``` - -### 5) Update the Release page - * Switch to the [releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page. - * Open up the release you are working on. - * Cut and paste the final markdown from above into the release notes - * Press Save. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/making-release-notes.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/making-release-notes.md?pixel)]() diff --git a/release-0.19.0/docs/man/Dockerfile b/release-0.19.0/docs/man/Dockerfile deleted file mode 100644 index 9910bd48f90..00000000000 --- a/release-0.19.0/docs/man/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM golang:1.3 -RUN mkdir -p /go/src/github.com/cpuguy83 -RUN mkdir -p /go/src/github.com/cpuguy83 \ - && git clone -b v1 https://github.com/cpuguy83/go-md2man.git /go/src/github.com/cpuguy83/go-md2man \ - && cd /go/src/github.com/cpuguy83/go-md2man \ - && go get -v ./... -CMD ["/go/bin/go-md2man", "--help"] diff --git a/release-0.19.0/docs/man/README.md b/release-0.19.0/docs/man/README.md deleted file mode 100644 index 3c24f7b2798..00000000000 --- a/release-0.19.0/docs/man/README.md +++ /dev/null @@ -1,49 +0,0 @@ -Kubernetes Documentation -==================== - -This directory contains the Kubernetes user manual in the Markdown format. -Do *not* edit the man pages in the man1 directory. Instead, amend the -Markdown (*.md) files. - -# File List - - kube-apiserver.1.md - kube-controller-manager.1.md - kubelet.1.md - kube-proxy.1.md - kube-scheduler.1.md - Dockerfile - md2man-all.sh - -# Generating man pages from the Markdown files - -The recommended approach for generating the man pages is via a Docker -container using the supplied `Dockerfile` to create an image with the correct -environment. This uses `go-md2man`, a pure Go Markdown to man page generator. - -## Building the md2man image - -There is a `Dockerfile` provided in the `kubernetes/docs/man` directory. - -Using this `Dockerfile`, create a Docker image tagged `docker/md2man`: - - docker build -t docker/md2man . - -## Utilizing the image - -Once the image is built, run a container using the image with *volumes*: - - docker run -v //kubernetes/docs/man:/docs:rw \ - -w /docs -i docker/md2man /docs/md2man-all.sh - -The `md2man` Docker container will process the Markdown files and generate -the man pages inside the `docker/docs/man/man1` directory using -Docker volumes. For more information on Docker volumes see the man page for -`docker run` and also look at the article [Sharing Directories via Volumes] -(http://docs.docker.com/use/working_with_volumes/). 
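
Assuming the build succeeds, a generated roff page can be previewed straight from the output directory with the standard `man` command. The file name below is an assumption based on go-md2man naming its output after the Markdown source:

```
man ./man1/kube-apiserver.1
```
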
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/man/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/man/README.md?pixel)]() diff --git a/release-0.19.0/docs/man/kube-apiserver.1.md b/release-0.19.0/docs/man/kube-apiserver.1.md deleted file mode 100644 index 89221a6b522..00000000000 --- a/release-0.19.0/docs/man/kube-apiserver.1.md +++ /dev/null @@ -1,198 +0,0 @@ -% KUBERNETES(1) kubernetes User Manuals -% Scott Collier -% October 2014 -# NAME -kube-apiserver \- Provides the API for kubernetes orchestration. - -# SYNOPSIS -**kube-apiserver** [OPTIONS] - -# DESCRIPTION - -The **kubernetes** API server validates and configures data for 3 types of objects: pods, services, and replicationcontrollers. Beyond just servicing REST operations, the API Server does two other things as well: 1. Schedules pods to worker nodes. Right now the scheduler is very simple. 2. Synchronize pod information (where they are, what ports they are exposing) with the service configuration. - -The the kube-apiserver several options. - -# OPTIONS -**--address**=127.0.0.1 - DEPRECATED: see --insecure-bind-address instead - -**--admission-control**="AlwaysAdmit" - Ordered list of plug-ins to do admission control of resources into cluster. Comma-delimited list of: AlwaysDeny, AlwaysAdmit, ServiceAccount, NamespaceExists, NamespaceLifecycle, NamespaceAutoProvision, LimitRanger, SecurityContextDeny, ResourceQuota - -**--admission-control-config-file**="" - File with admission control configuration. - -**--allow-privileged**=false - If true, allow privileged containers. - -**--alsologtostderr**=false - log to standard error as well as files - -**--api-burst**=200 - API burst amount for the read only port - -**--api-prefix**="/api" - The prefix for API requests on the server. Default '/api'. - -**--api-rate**=10 - API rate limit as QPS for the read only port - -**--authorization-mode**="AlwaysAllow" - Selects how to do authorization on the secure port. One of: AlwaysAllow,AlwaysDeny,ABAC - -**--authorization-policy-file**="" - File with authorization policy in csv format, used with --authorization-mode=ABAC, on the secure port. - -**--basic-auth-file**="" - If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication. - -**--bind-address**=0.0.0.0 - The IP address on which to serve the --read-only-port and --secure-port ports. This address must be reachable by the rest of the cluster. If blank, all interfaces will be used. - -**--cert-dir**="/var/run/kubernetes" - The directory where the TLS certs are located (by default /var/run/kubernetes). If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. - -**--client-ca-file**="" - If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. - -**--cloud-config**="" - The path to the cloud provider configuration file. Empty string for no configuration file. - -**--cloud-provider**="" - The provider for cloud services. Empty string for no provider. - -**--cluster-name**="kubernetes" - The instance prefix for the cluster - -**--cors-allowed-origins**=[] - List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled. 
- -**--etcd-config**="" - The config file for the etcd client. Mutually exclusive with -etcd-servers. - -**--etcd-prefix**="/registry" - The prefix for all resource paths in etcd. - -**--etcd-servers**=[] - List of etcd servers to watch (http://ip:port), comma separated. Mutually exclusive with -etcd-config - -**--event-ttl**=1h0m0s - Amount of time to retain events. Default 1 hour. - -**--external-hostname**="" - The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs.) - -**--insecure-bind-address**=127.0.0.1 - The IP address on which to serve the --insecure-port (set to 0.0.0.0 for all interfaces). Defaults to localhost. - -**--insecure-port**=8080 - The port on which to serve unsecured, unauthenticated access. Default 8080. It is assumed that firewall rules are set up such that this port is not reachable from outside of the cluster and that port 443 on the cluster's public address is proxied to this port. This is performed by nginx in the default setup. - -**--kubelet_certificate_authority**="" - Path to a cert. file for the certificate authority. - -**--kubelet_client_certificate**="" - Path to a client key file for TLS. - -**--kubelet_client_key**="" - Path to a client key file for TLS. - -**--kubelet_https**=true - Use https for kubelet connections - -**--kubelet_port**=10250 - Kubelet port - -**--kubelet_timeout**=5s - Timeout for kubelet operations - -**--log_backtrace_at**=:0 - when logging hits line file:N, emit a stack trace - -**--log_dir**= - If non-empty, write log files in this directory - -**--log_flush_frequency**=5s - Maximum number of seconds between log flushes - -**--logtostderr**=true - log to standard error instead of files - -**--long-running-request-regexp**="[.*\\/watch$][^\\/proxy.*]" - A regular expression matching long running requests which should be excluded from maximum inflight request handling. - -**--master-service-namespace**="default" - The namespace from which the kubernetes master services should be injected into pods - -**--max-requests-inflight**=400 - The maximum number of requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. - -**--old-etcd-prefix**="/registry" - The previous prefix for all resource paths in etcd, if any. - -**--port**=8080 - DEPRECATED: see --insecure-port instead - -**--service-cluster-ip-range**= - A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. - -**--profiling**=true - Enable profiling via web interface host:port/debug/pprof/ - -**--public-address-override**=0.0.0.0 - DEPRECATED: see --bind-address instead - -**--read-only-port**=7080 - The port on which to serve read-only resources. If 0, don't serve read-only at all. It is assumed that firewall rules are set up such that this port is not reachable from outside of the cluster. - -**--runtime-config**= - A set of key=value pairs that describe runtime configuration that may be passed to the apiserver. api/ key can be used to turn on/off specific api versions. api/all and api/legacy are special keys to control all and legacy api versions respectively. - -**--secure-port**=6443 - The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. - -**--service-account-key-file**="" - File containing PEM-encoded x509 RSA private or public key, used to verify ServiceAccount tokens. If unspecified, --tls-private-key-file is used. 
- -**--service-account-lookup**=false - If true, validate ServiceAccount tokens exist in etcd as part of authentication. - -**--stderrthreshold**=2 - logs at or above this threshold go to stderr - -**--storage-version**="" - The version to store resources with. Defaults to server preferred - -**--tls-cert-file**="" - File containing x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to /var/run/kubernetes. - -**--tls-private-key-file**="" - File containing x509 private key matching --tls-cert-file. - -**--token-auth-file**="" - If set, the file that will be used to secure the secure port of the API server via token authentication. - -**--v**=0 - log level for V logs - -**--version**=false - Print version information and quit - -**--vmodule**= - comma-separated list of pattern=N settings for file-filtered logging - -# EXAMPLES -``` -/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4001 --insecure_bind_address=127.0.0.1 --insecure_port=8080 --kubelet_port=10250 --service-cluster-ip-range=10.1.1.0/24 --allow_privileged=false -``` - -# HISTORY -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/man/kube-apiserver.1.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/man/kube-apiserver.1.md?pixel)]() diff --git a/release-0.19.0/docs/man/kube-controller-manager.1.md b/release-0.19.0/docs/man/kube-controller-manager.1.md deleted file mode 100644 index a9081b47fc5..00000000000 --- a/release-0.19.0/docs/man/kube-controller-manager.1.md +++ /dev/null @@ -1,141 +0,0 @@ -% KUBERNETES(1) kubernetes User Manuals -% Scott Collier -% October 2014 -# NAME -kube-controller-manager \- Enforces kubernetes services. - -# SYNOPSIS -**kube-controller-manager** [OPTIONS] - -# DESCRIPTION - -The **kubernetes** controller manager is really a service that is layered on top of the simple pod API. To enforce this layering, the logic for the replicationcontroller is actually broken out into another server. This server watches etcd for changes to replicationcontroller objects and then uses the public Kubernetes API to implement the replication algorithm. - -The kube-controller-manager has several options. - -# OPTIONS -**--address**=127.0.0.1 - The IP address to serve on (set to 0.0.0.0 for all interfaces) - -**--allocate-node-cidrs**=false - Should CIDRs for Pods be allocated and set on the cloud provider. - -**--alsologtostderr**=false - log to standard error as well as files - -**--cloud-config**="" - The path to the cloud provider configuration file. Empty string for no configuration file. - -**--cloud-provider**="" - The provider for cloud services. Empty string for no provider. - -**--cluster-cidr**= - CIDR Range for Pods in cluster. - -**--concurrent-endpoint-syncs**=5 - The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load - -**--concurrent_rc_syncs**=5 - The number of replication controllers that are allowed to sync concurrently. 
Larger number = more responsive replica management, but more CPU (and network) load - -**--deleting-pods-burst**=10 - Number of nodes on which pods are deleted in bursts in case of node failure. For more details look into RateLimiter. - -**--deleting-pods-qps**=0.1 - Number of nodes per second on which pods are deleted in case of node failure. - -**--kubeconfig**="" - Path to kubeconfig file with authorization and master location information. - -**--log_backtrace_at**=:0 - when logging hits line file:N, emit a stack trace - -**--log_dir**= - If non-empty, write log files in this directory - -**--log_flush_frequency**=5s - Maximum number of seconds between log flushes - -**--logtostderr**=true - log to standard error instead of files - -**--machines**=[] - List of machines to schedule onto, comma separated. - -**--master**="" - The address of the Kubernetes API server (overrides any value in kubeconfig) - -**--minion-regexp**="" - If non-empty, and --cloud-provider is specified, a regular expression for matching minion VMs. - -**--namespace-sync-period**=5m0s - The period for syncing namespace life-cycle updates - -**--node-memory**=3Gi - The amount of memory (in bytes) provisioned on each node - -**--node-milli-cpu**=1000 - The amount of MilliCPU provisioned on each node - -**--node-monitor-grace-period**=40s - Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status. - -**--node-monitor-period**=5s - The period for syncing NodeStatus in NodeController. - -**--node-startup-grace-period**=1m0s - Amount of time which we allow starting Node to be unresponsive before marking it unhealthy. - -**--node-sync-period**=10s - The period for syncing nodes from the cloud provider. Longer periods will result in fewer calls to cloud provider, but may delay addition of new nodes to cluster. - -**--pod-eviction-timeout**=5m0s - The grace period for deleting pods on failed nodes. - -**--port**=10252 - The port that the controller-manager's http service runs on - -**--profiling**=true - Enable profiling via web interface host:port/debug/pprof/ - -**--pvclaimbinder-sync-period**=10s - The period for syncing persistent volumes and persistent volume claims - -**--register-retry-count**=10 - The number of retries for initial node registration. Retry interval equals node-sync-period. - -**--resource-quota-sync-period**=10s - The period for syncing quota usage status in the system - -**--service-account-private-key-file**="" - Filename containing a PEM-encoded private RSA key used to sign service account tokens. - -**--stderrthreshold**=2 - logs at or above this threshold go to stderr - -**--sync-nodes**=true - If true, and --cloud-provider is specified, sync nodes from the cloud provider. Default true. - -**--v**=0 - log level for V logs - -**--version**=false - Print version information and quit - -**--vmodule**= - comma-separated list of pattern=N settings for file-filtered logging - -# EXAMPLES -``` -/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=127.0.0.1:8080 --machines=127.0.0.1 -``` - -# HISTORY -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work.
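The `--service-account-private-key-file` flag above pairs with the apiserver's `--service-account-key-file`: the controller manager signs ServiceAccount tokens with the private key and the apiserver verifies them with the same key (or its public half). A rough sketch of wiring the two together; the paths and 2048-bit key size are only examples:

```
# Generate an RSA key pair for ServiceAccount token signing.
openssl genrsa -out /var/run/kubernetes/serviceaccount.key 2048
openssl rsa -in /var/run/kubernetes/serviceaccount.key -pubout \
    -out /var/run/kubernetes/serviceaccount.pub

# The controller manager signs tokens with the private key...
/usr/bin/kube-controller-manager --master=127.0.0.1:8080 \
    --service-account-private-key-file=/var/run/kubernetes/serviceaccount.key

# ...and the apiserver verifies them with the matching key.
/usr/bin/kube-apiserver --etcd-servers=http://127.0.0.1:4001 \
    --service-cluster-ip-range=10.1.1.0/24 \
    --service-account-key-file=/var/run/kubernetes/serviceaccount.pub
```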
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/man/kube-controller-manager.1.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/man/kube-controller-manager.1.md?pixel)]() diff --git a/release-0.19.0/docs/man/kube-proxy.1.md b/release-0.19.0/docs/man/kube-proxy.1.md deleted file mode 100644 index a49b2af07ed..00000000000 --- a/release-0.19.0/docs/man/kube-proxy.1.md +++ /dev/null @@ -1,78 +0,0 @@ -% KUBERNETES(1) kubernetes User Manuals -% Scott Collier -% October 2014 -# NAME -kube-proxy \- Provides network proxy services. - -# SYNOPSIS -**kube-proxy** [OPTIONS] - -# DESCRIPTION - -The **kubernetes** network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP stream forwarding or round robin TCP forwarding across a set of backends. Service endpoints are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy. Currently the user must select a port to expose the service on on the proxy, as well as the container's port to target. - -The kube-proxy takes several options. - -# OPTIONS -**--alsologtostderr**=false - log to standard error as well as files - -**--bind-address**=0.0.0.0 - The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces) - -**--healthz-bind-address**=127.0.0.1 - The IP address for the health check server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces) - -**--healthz-port**=10249 - The port to bind the health check server. Use 0 to disable. - -**--kubeconfig**="" - Path to kubeconfig file with authorization information (the master location is set by the master flag). - -**--log_backtrace_at**=:0 - when logging hits line file:N, emit a stack trace - -**--log_dir**= - If non-empty, write log files in this directory - -**--log_flush_frequency**=5s - Maximum number of seconds between log flushes - -**--logtostderr**=true - log to standard error instead of files - -**--master**="" - The address of the Kubernetes API server (overrides any value in kubeconfig) - -**--oom-score-adj**=-899 - The oom_score_adj value for kube-proxy process. Values must be within the range [-1000, 1000] - -**--resource-container**="/kube-proxy" - Absolute name of the resource-only container to create and run the Kube-proxy in (Default: /kube-proxy). - -**--stderrthreshold**=2 - logs at or above this threshold go to stderr - -**--v**=0 - log level for V logs - -**--version**=false - Print version information and quit - -**--vmodule**= - comma-separated list of pattern=N settings for file-filtered logging - -# EXAMPLES -``` -/usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://127.0.0.1:8080 -``` - -# HISTORY -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. 
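Since `--healthz-bind-address` and `--healthz-port` default to 127.0.0.1:10249, a quick liveness check can be made from the node itself. A small sketch; the `/healthz` path is the conventional health endpoint and is assumed here rather than stated in the flag descriptions above:

```
# Run the proxy against a local apiserver, then probe its health check server.
/usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://127.0.0.1:8080 &
curl -f http://127.0.0.1:10249/healthz && echo "kube-proxy is healthy"
```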
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/man/kube-proxy.1.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/man/kube-proxy.1.md?pixel)]() diff --git a/release-0.19.0/docs/man/kube-scheduler.1.md b/release-0.19.0/docs/man/kube-scheduler.1.md deleted file mode 100644 index c470bd2472a..00000000000 --- a/release-0.19.0/docs/man/kube-scheduler.1.md +++ /dev/null @@ -1,78 +0,0 @@ -% KUBERNETES(1) kubernetes User Manuals -% Scott Collier -% October 2014 -# NAME -kube-scheduler \- Schedules containers on hosts. - -# SYNOPSIS -**kube-scheduler** [OPTIONS] - -# DESCRIPTION - -The **kubernetes** scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary. - -The kube-scheduler can take several options. - -# OPTIONS -**--address**=127.0.0.1 - The IP address to serve on (set to 0.0.0.0 for all interfaces) - -**--algorithm-provider**="DefaultProvider" - The scheduling algorithm provider to use, one of: DefaultProvider - -**--alsologtostderr**=false - log to standard error as well as files - -**--kubeconfig**="" - Path to kubeconfig file with authorization and master location information. - -**--log_backtrace_at**=:0 - when logging hits line file:N, emit a stack trace - -**--log_dir**= - If non-empty, write log files in this directory - -**--log_flush_frequency**=5s - Maximum number of seconds between log flushes - -**--logtostderr**=true - log to standard error instead of files - -**--master**="" - The address of the Kubernetes API server (overrides any value in kubeconfig) - -**--policy-config-file**="" - File with scheduler policy configuration - -**--port**=10251 - The port that the scheduler's http service runs on - -**--profiling**=true - Enable profiling via web interface host:port/debug/pprof/ - -**--stderrthreshold**=2 - logs at or above this threshold go to stderr - -**--v**=0 - log level for V logs - -**--version**=false - Print version information and quit - -**--vmodule**= - comma-separated list of pattern=N settings for file-filtered logging - -# EXAMPLES -``` -/usr/bin/kube-scheduler --logtostderr=true --v=0 --master=127.0.0.1:8080 -``` - -# HISTORY -October 2014, Originally compiled by Scott Collier (scollier@redhat.com) based - on the kubernetes source material and internal work. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/man/kube-scheduler.1.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/man/kube-scheduler.1.md?pixel)]() diff --git a/release-0.19.0/docs/man/kubelet.1.md b/release-0.19.0/docs/man/kubelet.1.md deleted file mode 100644 index 62b09c675ff..00000000000 --- a/release-0.19.0/docs/man/kubelet.1.md +++ /dev/null @@ -1,259 +0,0 @@ -% KUBERNETES(1) kubernetes User Manuals -% Scott Collier -% October 2014 -# NAME -kubelet \- Processes a container manifest so the containers are launched according to how they are described. - -# SYNOPSIS -**kubelet** [OPTIONS] - -# DESCRIPTION - -The **kubernetes** kubelet runs on each node. 
- -The Kubelet ensures that pods defined by "container manifests" are running. -Container manifests simply refer to the YAML or JSON files which we use to represent pods, but viewed from the perspective of the kubelet. -Thus, the Kubelet watches for these manifests (which can be provided by different mechanisms) and ensures that the containers described in those manifests are started. - -By "watch", we specifically mean, that the Kubelet monitors either an HTTP endpoint, or a directory, a file, or a server. - -There are 3 ways that a container manifest can be provided to the Kubelet: - - File: Path to a file OR directory passed as a flag on the command line. This file is rechecked every 20 seconds (configurable with a flag). See the --config option. - HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag). - HTTP server: The kubelet can also listen for HTTP and respond to a simple API submissions of new manifests (currently, this is underspecified). - -# OPTIONS -**--address**=0.0.0.0 - The IP address for the info server to serve on (set to 0.0.0.0 for all interfaces) - -**--allow_dynamic_housekeeping**=true - Whether to allow the housekeeping interval to be dynamic - -**--allow-privileged**=false - If true, allow containers to request privileged mode. [default=false] - -**--alsologtostderr**=false - log to standard error as well as files - -**--api-servers**=[] - List of Kubernetes API servers for publishing events, and reading pods and services. (ip:port), comma separated. Although this is a critical argument for common kube deployments, note that kubelets can still run pods from manifests without an api-server. - -**--boot_id_file**=/proc/sys/kernel/random/boot_id - Comma-separated list of files to check for boot-id. Use the first one that exists. - -**--cadvisor-port**=4194 - The port of the localhost cAdvisor endpoint - -**--cert-dir**="/var/run/kubernetes" - The directory where the TLS certs are located (by default /var/run/kubernetes). If --tls_cert_file and --tls_private_key_file are provided, this flag will be ignored. - -**--cgroup_root**="" - Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default. - -**--cloud-config**="" - The path to the cloud provider configuration file. Empty string for no configuration file. - -**--cloud-provider**="" - The provider for cloud services. Empty string for no provider. - -**--cluster-dns**= - IP address for a cluster DNS server. If set, kubelet will configure all containers to use this for DNS resolution in addition to the host's DNS servers - -**--cluster-domain**="" - Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains - -**--config**="" - Path to the config file or directory of manifest files. For example, --config=/foo/ would run .manifest files under /foo on startup of the kubelet (even if no api-server was yet running). - -**--configure-cbr0**=false - If true, kubelet will configure cbr0 based on Node.Spec.PodCIDR. - -**--container_hints**=/etc/cadvisor/container_hints.json - location of the container hints file - -**--container_runtime**="docker" - The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'. 
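The file source described in the DESCRIPTION above is the simplest way to hand the kubelet work without an apiserver: drop a manifest into the directory passed to `--config` and it is picked up on the next recheck. A minimal sketch; the directory, file name, and nginx image are illustrative, and the manifest assumes the plain v1 Pod schema:

```
mkdir -p /etc/kubernetes/manifests

# A minimal pod manifest; the kubelet starts it even with no apiserver running.
cat > /etc/kubernetes/manifests/web.manifest <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF

# Point the kubelet at the manifest directory; it is rechecked periodically
# (see --file-check-frequency below).
/usr/bin/kubelet --config=/etc/kubernetes/manifests --logtostderr=true
```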
- -**--docker**=unix:///var/run/docker.sock - docker endpoint - -**--docker-daemon-container**="/docker-daemon" - Optional resource-only container in which to place the Docker Daemon. Empty for no container (Default: /docker-daemon). - -**--docker-endpoint**="" - If non-empty, use this for the docker endpoint to communicate with - -**--docker_only**=false - Only report docker containers in addition to root stats - -**--docker_root**=/var/lib/docker - Absolute path to the Docker state root directory (default: /var/lib/docker) - -**--docker_run**=/var/run/docker - Absolute path to the Docker run directory (default: /var/run/docker) - -**--enable-debugging-handlers**=true - Enables server endpoints for log collection and local running of containers and commands - -**--enable_load_reader**=false - Whether to enable cpu load reader - -**--enable-server**=true - Enable the info server - -**--event_storage_age_limit**=default=24h - Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non-specified event types - -**--event_storage_event_limit**=default=100000 - Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non-specified event types - -**--file-check-frequency**=20s - Duration between checking config files for new data - -**--global_housekeeping_interval**=1m0s - Interval between global housekeepings - -**--google-json-key**="" - The Google Cloud Platform Service Account JSON Key to use for authentication. - -**--healthz-bind-address**=127.0.0.1 - The IP address for the healthz server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces) - -**--healthz-port**=10248 - The port of the localhost healthz endpoint - -**--host-network-sources**="file" - Comma-separated list of sources from which the Kubelet allows pods to use of host network. For all sources use "*" [default="file"] - -**--hostname-override**="" - If non-empty, will use this string as identification instead of the actual hostname. - -**--housekeeping_interval**=1s - Interval between container housekeepings - -**--http-check-frequency**=20s - Duration between checking http for new data - -**--image-gc-high-threshold**=90 - The percent of disk usage after which image garbage collection is always run. Default: 90%% - -**--image-gc-low-threshold**=80 - The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80%% - -**--kubeconfig**=/var/lib/kubelet/kubeconfig - Path to a kubeconfig file, specifying how to authenticate to API server (the master location is set by the api-servers flag). - -**--log_backtrace_at**=:0 - when logging hits line file:N, emit a stack trace - -**--log_cadvisor_usage**=false - Whether to log the usage of the cAdvisor container - -**--log_dir**= - If non-empty, write log files in this directory - -**--log_flush_frequency**=5s - Maximum number of seconds between log flushes - -**--logtostderr**=true - log to standard error instead of files - -**--low-diskspace-threshold-mb**=256 - The absolute free disk space, in MB, to maintain. When disk space falls below this threshold, new pods would be rejected. 
Default: 256 - -**--machine_id_file**=/etc/machine-id,/var/lib/dbus/machine-id - Comma-separated list of files to check for machine-id. Use the first one that exists. - -**--manifest-url**="" - URL for accessing the container manifest - -**--master-service-namespace**="default" - The namespace from which the kubernetes master services should be injected into pods - -**--max_housekeeping_interval**=1m0s - Largest interval to allow between container housekeepings - -**--max_pods**=100 - Number of Pods that can run on this Kubelet. - -**--maximum-dead-containers**=100 - Maximum number of old instances of a containers to retain globally. Each container takes up some disk space. Default: 100. - -**--maximum-dead-containers-per-container**=5 - Maximum number of old instances of a container to retain per container. Each container takes up some disk space. Default: 5. - -**--minimum-container-ttl-duration**=1m0s - Minimum age for a finished container before it is garbage collected. Examples: '300ms', '10s' or '2h45m' - -**--network-plugin**="" - The name of the network plugin to be invoked for various events in kubelet/pod lifecycle - -**--node-status-update-frequency**=10s - Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller. Default: 10s - -**--oom-score-adj**=-900 - The oom_score_adj value for kubelet process. Values must be within the range [-1000, 1000] - -**--pod-infra-container-image**="gcr.io/google_containers/pause:0.8.0" - The image whose network/ipc namespaces containers in each pod will use. - -**--port**=10250 - The port for the info server to serve on - -**--read-only-port**=10255 - The read-only port for the info server to serve on (set to 0 to disable) - -**--registry-burst**=10 - Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry_qps. Only used if --registry_qps > 0 - -**--registry-qps**=0 - If > 0, limit registry pull QPS to this value. If 0, unlimited. [default=0.0] - -**--resource-container**="/kubelet" - Absolute name of the resource-only container to create and run the Kubelet in (Default: /kubelet). - -**--root-dir**="/var/lib/kubelet" - Directory path for managing kubelet files (volume mounts,etc). - -**--runonce**=false - If true, exit after spawning pods from local manifests or remote urls. Exclusive with --api_servers, and --enable-server - -**--stderrthreshold**=2 - logs at or above this threshold go to stderr - -**--streaming-connection-idle-timeout**=0 - Maximum time a streaming connection can be idle before the connection is automatically closed. Example: '5m' - -**--sync-frequency**=10s - Max period between synchronizing running containers and config - -**--tls-cert-file**="" - File /gmrvcontaining x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If --tls_cert_file and --tls_private_key_file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert_dir. - -**--tls-private-key-file**="" - File containing x509 private key matching --tls_cert_file. 
- -**--v**=0 - log level for V logs - -**--version**=false - Print version information and quit - -**--vmodule**= - comma-separated list of pattern=N settings for file-filtered logging - -# EXAMPLES -``` -/usr/bin/kubelet --logtostderr=true --v=0 --api_servers=http://127.0.0.1:8080 --address=127.0.0.1 --port=10250 --hostname_override=127.0.0.1 --allow-privileged=false -``` - -# HISTORY -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/man/kubelet.1.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/man/kubelet.1.md?pixel)]() diff --git a/release-0.19.0/docs/man/man1/.files_generated b/release-0.19.0/docs/man/man1/.files_generated deleted file mode 100644 index 241e191b410..00000000000 --- a/release-0.19.0/docs/man/man1/.files_generated +++ /dev/null @@ -1,28 +0,0 @@ -kubectl-api-versions.1 -kubectl-cluster-info.1 -kubectl-config-set-cluster.1 -kubectl-config-set-context.1 -kubectl-config-set-credentials.1 -kubectl-config-set.1 -kubectl-config-unset.1 -kubectl-config-use-context.1 -kubectl-config-view.1 -kubectl-config.1 -kubectl-create.1 -kubectl-delete.1 -kubectl-describe.1 -kubectl-exec.1 -kubectl-expose.1 -kubectl-get.1 -kubectl-label.1 -kubectl-logs.1 -kubectl-namespace.1 -kubectl-port-forward.1 -kubectl-proxy.1 -kubectl-rolling-update.1 -kubectl-run.1 -kubectl-scale.1 -kubectl-stop.1 -kubectl-update.1 -kubectl-version.1 -kubectl.1 diff --git a/release-0.19.0/docs/man/man1/kube-apiserver.1 b/release-0.19.0/docs/man/man1/kube-apiserver.1 deleted file mode 100644 index 2fa1600b7ea..00000000000 --- a/release-0.19.0/docs/man/man1/kube-apiserver.1 +++ /dev/null @@ -1,259 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Scott Collier" "October 2014" "" - -.SH NAME -.PP -kube\-apiserver \- Provides the API for kubernetes orchestration. - -.SH SYNOPSIS -.PP -\fBkube\-apiserver\fP [OPTIONS] - -.SH DESCRIPTION -.PP -The \fBkubernetes\fP API server validates and configures data for 3 types of objects: pods, services, and replicationcontrollers. Beyond just servicing REST operations, the API Server does two other things as well: 1. Schedules pods to worker nodes. Right now the scheduler is very simple. 2. Synchronize pod information (where they are, what ports they are exposing) with the service configuration. - -.PP -The the kube\-apiserver several options. - -.SH OPTIONS -.PP -\fB\-\-address\fP=127.0.0.1 - DEPRECATED: see \-\-insecure\-bind\-address instead - -.PP -\fB\-\-admission\-control\fP="AlwaysAdmit" - Ordered list of plug\-ins to do admission control of resources into cluster. Comma\-delimited list of: AlwaysDeny, AlwaysAdmit, ServiceAccount, NamespaceExists, NamespaceLifecycle, NamespaceAutoProvision, LimitRanger, SecurityContextDeny, ResourceQuota - -.PP -\fB\-\-admission\-control\-config\-file\fP="" - File with admission control configuration. - -.PP -\fB\-\-allow\-privileged\fP=false - If true, allow privileged containers. - -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-burst\fP=200 - API burst amount for the read only port - -.PP -\fB\-\-api\-prefix\fP="/api" - The prefix for API requests on the server. Default '/api'. 
- -.PP -\fB\-\-api\-rate\fP=10 - API rate limit as QPS for the read only port - -.PP -\fB\-\-authorization\-mode\fP="AlwaysAllow" - Selects how to do authorization on the secure port. One of: AlwaysAllow,AlwaysDeny,ABAC - -.PP -\fB\-\-authorization\-policy\-file\fP="" - File with authorization policy in csv format, used with \-\-authorization\-mode=ABAC, on the secure port. - -.PP -\fB\-\-basic\-auth\-file\fP="" - If set, the file that will be used to admit requests to the secure port of the API server via http basic authentication. - -.PP -\fB\-\-bind\-address\fP=0.0.0.0 - The IP address on which to serve the \-\-read\-only\-port and \-\-secure\-port ports. This address must be reachable by the rest of the cluster. If blank, all interfaces will be used. - -.PP -\fB\-\-cert\-dir\fP="/var/run/kubernetes" - The directory where the TLS certs are located (by default /var/run/kubernetes). If \-\-tls\-cert\-file and \-\-tls\-private\-key\-file are provided, this flag will be ignored. - -.PP -\fB\-\-client\-ca\-file\fP="" - If set, any request presenting a client certificate signed by one of the authorities in the client\-ca\-file is authenticated with an identity corresponding to the CommonName of the client certificate. - -.PP -\fB\-\-cloud\-config\fP="" - The path to the cloud provider configuration file. Empty string for no configuration file. - -.PP -\fB\-\-cloud\-provider\fP="" - The provider for cloud services. Empty string for no provider. - -.PP -\fB\-\-cluster\-name\fP="kubernetes" - The instance prefix for the cluster - -.PP -\fB\-\-cors\-allowed\-origins\fP=[] - List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled. - -.PP -\fB\-\-etcd\-config\fP="" - The config file for the etcd client. Mutually exclusive with \-etcd\-servers. - -.PP -\fB\-\-etcd\-prefix\fP="/registry" - The prefix for all resource paths in etcd. - -.PP -\fB\-\-etcd\-servers\fP=[] - List of etcd servers to watch ( -\[la]http://ip:port\[ra]), comma separated. Mutually exclusive with \-etcd\-config - -.PP -\fB\-\-event\-ttl\fP=1h0m0s - Amount of time to retain events. Default 1 hour. - -.PP -\fB\-\-external\-hostname\fP="" - The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs.) - -.PP -\fB\-\-insecure\-bind\-address\fP=127.0.0.1 - The IP address on which to serve the \-\-insecure\-port (set to 0.0.0.0 for all interfaces). Defaults to localhost. - -.PP -\fB\-\-insecure\-port\fP=8080 - The port on which to serve unsecured, unauthenticated access. Default 8080. It is assumed that firewall rules are set up such that this port is not reachable from outside of the cluster and that port 443 on the cluster's public address is proxied to this port. This is performed by nginx in the default setup. - -.PP -\fB\-\-kubelet\_certificate\_authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-kubelet\_client\_certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-kubelet\_client\_key\fP="" - Path to a client key file for TLS. 
- -.PP -\fB\-\-kubelet\_https\fP=true - Use https for kubelet connections - -.PP -\fB\-\-kubelet\_port\fP=10250 - Kubelet port - -.PP -\fB\-\-kubelet\_timeout\fP=5s - Timeout for kubelet operations - -.PP -\fB\-\-log\_backtrace\_at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\_dir\fP= - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\_flush\_frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-long\-running\-request\-regexp\fP="[.\fI\\/watch\$][^\\/proxy.\fP]" - A regular expression matching long running requests which should be excluded from maximum inflight request handling. - -.PP -\fB\-\-master\-service\-namespace\fP="default" - The namespace from which the kubernetes master services should be injected into pods - -.PP -\fB\-\-max\-requests\-inflight\fP=400 - The maximum number of requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. - -.PP -\fB\-\-old\-etcd\-prefix\fP="/registry" - The previous prefix for all resource paths in etcd, if any. - -.PP -\fB\-\-port\fP=8080 - DEPRECATED: see \-\-insecure\-port instead - -.PP -\fB\-\-service\-cluster\-ip\-range\fP= - A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. - -.PP -\fB\-\-profiling\fP=true - Enable profiling via web interface host:port/debug/pprof/ - -.PP -\fB\-\-public\-address\-override\fP=0.0.0.0 - DEPRECATED: see \-\-bind\-address instead - -.PP -\fB\-\-read\-only\-port\fP=7080 - The port on which to serve read\-only resources. If 0, don't serve read\-only at all. It is assumed that firewall rules are set up such that this port is not reachable from outside of the cluster. - -.PP -\fB\-\-runtime\-config\fP= - A set of key=value pairs that describe runtime configuration that may be passed to the apiserver. api/ key can be used to turn on/off specific api versions. api/all and api/legacy are special keys to control all and legacy api versions respectively. - -.PP -\fB\-\-secure\-port\fP=6443 - The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. - -.PP -\fB\-\-service\-account\-key\-file\fP="" - File containing PEM\-encoded x509 RSA private or public key, used to verify ServiceAccount tokens. If unspecified, \-\-tls\-private\-key\-file is used. - -.PP -\fB\-\-service\-account\-lookup\fP=false - If true, validate ServiceAccount tokens exist in etcd as part of authentication. - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-storage\-version\fP="" - The version to store resources with. Defaults to server preferred - -.PP -\fB\-\-tls\-cert\-file\fP="" - File containing x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and \-\-tls\-cert\-file and \-\-tls\-private\-key\-file are not provided, a self\-signed certificate and key are generated for the public address and saved to /var/run/kubernetes. - -.PP -\fB\-\-tls\-private\-key\-file\fP="" - File containing x509 private key matching \-\-tls\-cert\-file. - -.PP -\fB\-\-token\-auth\-file\fP="" - If set, the file that will be used to secure the secure port of the API server via token authentication. 
- -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-version\fP=false - Print version information and quit - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - -.SH EXAMPLES -.PP -.RS - -.nf -/usr/bin/kube\-apiserver \-\-logtostderr=true \-\-v=0 \-\-etcd\_servers=http://127.0.0.1:4001 \-\-insecure\_bind\_address=127.0.0.1 \-\-insecure\_port=8080 \-\-kubelet\_port=10250 \-\-service\-cluster\-ip\-range=10.1.1.0/24 \-\-allow\_privileged=false - -.fi - -.SH HISTORY -.PP -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. - -.PP -[]() diff --git a/release-0.19.0/docs/man/man1/kube-controller-manager.1 b/release-0.19.0/docs/man/man1/kube-controller-manager.1 deleted file mode 100644 index df0d45603f3..00000000000 --- a/release-0.19.0/docs/man/man1/kube-controller-manager.1 +++ /dev/null @@ -1,182 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Scott Collier" "October 2014" "" - -.SH NAME -.PP -kube\-controller\-manager \- Enforces kubernetes services. - -.SH SYNOPSIS -.PP -\fBkube\-controller\-manager\fP [OPTIONS] - -.SH DESCRIPTION -.PP -The \fBkubernetes\fP controller manager is really a service that is layered on top of the simple pod API. To enforce this layering, the logic for the replicationcontroller is actually broken out into another server. This server watches etcd for changes to replicationcontroller objects and then uses the public Kubernetes API to implement the replication algorithm. - -.PP -The kube\-controller\-manager has several options. - -.SH OPTIONS -.PP -\fB\-\-address\fP=127.0.0.1 - The IP address to serve on (set to 0.0.0.0 for all interfaces) - -.PP -\fB\-\-allocate\-node\-cidrs\fP=false - Should CIDRs for Pods be allocated and set on the cloud provider. - -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-cloud\-config\fP="" - The path to the cloud provider configuration file. Empty string for no configuration file. - -.PP -\fB\-\-cloud\-provider\fP="" - The provider for cloud services. Empty string for no provider. - -.PP -\fB\-\-cluster\-cidr\fP= - CIDR Range for Pods in cluster. - -.PP -\fB\-\-concurrent\-endpoint\-syncs\fP=5 - The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load - -.PP -\fB\-\-concurrent\_rc\_syncs\fP=5 - The number of replication controllers that are allowed to sync concurrently. Larger number = more reponsive replica management, but more CPU (and network) load - -.PP -\fB\-\-deleting\-pods\-burst\fP=10 - Number of nodes on which pods are bursty deleted in case of node failure. For more details look into RateLimiter. - -.PP -\fB\-\-deleting\-pods\-qps\fP=0.1 - Number of nodes per second on which pods are deleted in case of node failure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to kubeconfig file with authorization and master location information. - -.PP -\fB\-\-log\_backtrace\_at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\_dir\fP= - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\_flush\_frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-machines\fP=[] - List of machines to schedule onto, comma separated. 
- -.PP -\fB\-\-master\fP="" - The address of the Kubernetes API server (overrides any value in kubeconfig) - -.PP -\fB\-\-minion\-regexp\fP="" - If non empty, and \-\-cloud\-provider is specified, a regular expression for matching minion VMs. - -.PP -\fB\-\-namespace\-sync\-period\fP=5m0s - The period for syncing namespace life\-cycle updates - -.PP -\fB\-\-node\-memory\fP=3Gi - The amount of memory (in bytes) provisioned on each node - -.PP -\fB\-\-node\-milli\-cpu\fP=1000 - The amount of MilliCPU provisioned on each node - -.PP -\fB\-\-node\-monitor\-grace\-period\fP=40s - Amount of time which we allow running Node to be unresponsive before marking it unhealty. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status. - -.PP -\fB\-\-node\-monitor\-period\fP=5s - The period for syncing NodeStatus in NodeController. - -.PP -\fB\-\-node\-startup\-grace\-period\fP=1m0s - Amount of time which we allow starting Node to be unresponsive before marking it unhealty. - -.PP -\fB\-\-node\-sync\-period\fP=10s - The period for syncing nodes from cloudprovider. Longer periods will result in fewer calls to cloud provider, but may delay addition of new nodes to cluster. - -.PP -\fB\-\-pod\-eviction\-timeout\fP=5m0s - The grace peroid for deleting pods on failed nodes. - -.PP -\fB\-\-port\fP=10252 - The port that the controller\-manager's http service runs on - -.PP -\fB\-\-profiling\fP=true - Enable profiling via web interface host:port/debug/pprof/ - -.PP -\fB\-\-pvclaimbinder\-sync\-period\fP=10s - The period for syncing persistent volumes and persistent volume claims - -.PP -\fB\-\-register\-retry\-count\fP=10 - The number of retries for initial node registration. Retry interval equals node\-sync\-period. - -.PP -\fB\-\-resource\-quota\-sync\-period\fP=10s - The period for syncing quota usage status in the system - -.PP -\fB\-\-service\-account\-private\-key\-file\fP="" - Filename containing a PEM\-encoded private RSA key used to sign service account tokens. - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-sync\-nodes\fP=true - If true, and \-\-cloud\-provider is specified, sync nodes from the cloud provider. Default true. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-version\fP=false - Print version information and quit - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - -.SH EXAMPLES -.PP -.RS - -.nf -/usr/bin/kube\-controller\-manager \-\-logtostderr=true \-\-v=0 \-\-master=127.0.0.1:8080 \-\-machines=127.0.0.1 - -.fi - -.SH HISTORY -.PP -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. - -.PP -[]() diff --git a/release-0.19.0/docs/man/man1/kube-proxy.1 b/release-0.19.0/docs/man/man1/kube-proxy.1 deleted file mode 100644 index ffb10f811fa..00000000000 --- a/release-0.19.0/docs/man/man1/kube-proxy.1 +++ /dev/null @@ -1,98 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Scott Collier" "October 2014" "" - -.SH NAME -.PP -kube\-proxy \- Provides network proxy services. - -.SH SYNOPSIS -.PP -\fBkube\-proxy\fP [OPTIONS] - -.SH DESCRIPTION -.PP -The \fBkubernetes\fP network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP stream forwarding or round robin TCP forwarding across a set of backends. 
Service endpoints are currently found through Docker\-links\-compatible environment variables specifying ports opened by the service proxy. Currently the user must select a port to expose the service on on the proxy, as well as the container's port to target. - -.PP -The kube\-proxy takes several options. - -.SH OPTIONS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-bind\-address\fP=0.0.0.0 - The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces) - -.PP -\fB\-\-healthz\-bind\-address\fP=127.0.0.1 - The IP address for the health check server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces) - -.PP -\fB\-\-healthz\-port\fP=10249 - The port to bind the health check server. Use 0 to disable. - -.PP -\fB\-\-kubeconfig\fP="" - Path to kubeconfig file with authorization information (the master location is set by the master flag). - -.PP -\fB\-\-log\_backtrace\_at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\_dir\fP= - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\_flush\_frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-master\fP="" - The address of the Kubernetes API server (overrides any value in kubeconfig) - -.PP -\fB\-\-oom\-score\-adj\fP=\-899 - The oom\_score\_adj value for kube\-proxy process. Values must be within the range [\-1000, 1000] - -.PP -\fB\-\-resource\-container\fP="/kube\-proxy" - Absolute name of the resource\-only container to create and run the Kube\-proxy in (Default: /kube\-proxy). - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-version\fP=false - Print version information and quit - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - -.SH EXAMPLES -.PP -.RS - -.nf -/usr/bin/kube\-proxy \-\-logtostderr=true \-\-v=0 \-\-master=http://127.0.0.1:8080 - -.fi - -.SH HISTORY -.PP -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. - -.PP -[]() diff --git a/release-0.19.0/docs/man/man1/kube-scheduler.1 b/release-0.19.0/docs/man/man1/kube-scheduler.1 deleted file mode 100644 index 85e749eed79..00000000000 --- a/release-0.19.0/docs/man/man1/kube-scheduler.1 +++ /dev/null @@ -1,98 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Scott Collier" "October 2014" "" - -.SH NAME -.PP -kube\-scheduler \- Schedules containers on hosts. - -.SH SYNOPSIS -.PP -\fBkube\-scheduler\fP [OPTIONS] - -.SH DESCRIPTION -.PP -The \fBkubernetes\fP scheduler is a policy\-rich, topology\-aware, workload\-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti\-affinity specifications, data locality, inter\-workload interference, deadlines, and so on. Workload\-specific requirements will be exposed through the API as necessary. - -.PP -The kube\-scheduler can take several options. 
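Once the scheduler is running, its HTTP service (see `--port` and `--profiling` in the options below) can be spot-checked from the master. A small sketch; 10251 is the default port and the `/debug/pprof/` path comes from the `--profiling` description:

```
# Start the scheduler against a local apiserver, then confirm its HTTP service answers.
/usr/bin/kube-scheduler --logtostderr=true --v=0 --master=127.0.0.1:8080 &
curl -s http://127.0.0.1:10251/debug/pprof/ > /dev/null && echo "scheduler responding"
```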
- -.SH OPTIONS -.PP -\fB\-\-address\fP=127.0.0.1 - The IP address to serve on (set to 0.0.0.0 for all interfaces) - -.PP -\fB\-\-algorithm\-provider\fP="DefaultProvider" - The scheduling algorithm provider to use, one of: DefaultProvider - -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-kubeconfig\fP="" - Path to kubeconfig file with authorization and master location information. - -.PP -\fB\-\-log\_backtrace\_at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\_dir\fP= - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\_flush\_frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-master\fP="" - The address of the Kubernetes API server (overrides any value in kubeconfig) - -.PP -\fB\-\-policy\-config\-file\fP="" - File with scheduler policy configuration - -.PP -\fB\-\-port\fP=10251 - The port that the scheduler's http service runs on - -.PP -\fB\-\-profiling\fP=true - Enable profiling via web interface host:port/debug/pprof/ - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-version\fP=false - Print version information and quit - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - -.SH EXAMPLES -.PP -.RS - -.nf -/usr/bin/kube\-scheduler \-\-logtostderr=true \-\-v=0 \-\-master=127.0.0.1:8080 - -.fi - -.SH HISTORY -.PP -October 2014, Originally compiled by Scott Collier (scollier@redhat.com) based - on the kubernetes source material and internal work. - -.PP -[]() diff --git a/release-0.19.0/docs/man/man1/kubectl-api-versions.1 b/release-0.19.0/docs/man/man1/kubectl-api-versions.1 deleted file mode 100644 index c4212fd1d46..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-api-versions.1 +++ /dev/null @@ -1,130 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl api\-versions \- Print available API versions. - - -.SH SYNOPSIS -.PP -\fBkubectl api\-versions\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Print available API versions. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for api\-versions - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. 
- -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-cluster-info.1 b/release-0.19.0/docs/man/man1/kubectl-cluster-info.1 deleted file mode 100644 index 3f64ae37ab1..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-cluster-info.1 +++ /dev/null @@ -1,130 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl cluster\-info \- Display cluster info - - -.SH SYNOPSIS -.PP -\fBkubectl cluster\-info\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Display addresses of the master and services with label kubernetes.io/cluster\-service=true - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for cluster\-info - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. 
- -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-set-cluster.1 b/release-0.19.0/docs/man/man1/kubectl-config-set-cluster.1 deleted file mode 100644 index 374e37bbad7..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-set-cluster.1 +++ /dev/null @@ -1,153 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config set\-cluster \- Sets a cluster entry in kubeconfig - - -.SH SYNOPSIS -.PP -\fBkubectl config set\-cluster\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Sets a cluster entry in kubeconfig. -Specifying a name that already exists will merge new fields on top of existing values for those fields. - - -.SH OPTIONS -.PP -\fB\-\-api\-version\fP="" - api\-version for the cluster entry in kubeconfig - -.PP -\fB\-\-certificate\-authority\fP="" - path to certificate\-authority for the cluster entry in kubeconfig - -.PP -\fB\-\-embed\-certs\fP=false - embed\-certs for the cluster entry in kubeconfig - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for set\-cluster - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - insecure\-skip\-tls\-verify for the cluster entry in kubeconfig - -.PP -\fB\-\-server\fP="" - server for the cluster entry in kubeconfig - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. 
- -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Set only the server field on the e2e cluster entry without touching other values. -$ kubectl config set\-cluster e2e \-\-server=https://1.2.3.4 - -// Embed certificate authority data for the e2e cluster entry -$ kubectl config set\-cluster e2e \-\-certificate\-authority=\~/.kube/e2e/kubernetes.ca.crt - -// Disable cert checking for the dev cluster entry -$ kubectl config set\-cluster e2e \-\-insecure\-skip\-tls\-verify=true - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-set-context.1 b/release-0.19.0/docs/man/man1/kubectl-config-set-context.1 deleted file mode 100644 index 4e7928418ed..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-set-context.1 +++ /dev/null @@ -1,143 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config set\-context \- Sets a context entry in kubeconfig - - -.SH SYNOPSIS -.PP -\fBkubectl config set\-context\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Sets a context entry in kubeconfig -Specifying a name that already exists will merge new fields on top of existing values for those fields. - - -.SH OPTIONS -.PP -\fB\-\-cluster\fP="" - cluster for the context entry in kubeconfig - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for set\-context - -.PP -\fB\-\-namespace\fP="" - namespace for the context entry in kubeconfig - -.PP -\fB\-\-user\fP="" - user for the context entry in kubeconfig - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. 
- -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Set the user field on the gce context entry without touching other values -$ kubectl config set\-context gce \-\-user=cluster\-admin - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-set-credentials.1 b/release-0.19.0/docs/man/man1/kubectl-config-set-credentials.1 deleted file mode 100644 index 1638f4bed7c..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-set-credentials.1 +++ /dev/null @@ -1,169 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config set\-credentials \- Sets a user entry in kubeconfig - - -.SH SYNOPSIS -.PP -\fBkubectl config set\-credentials\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Sets a user entry in kubeconfig -Specifying a name that already exists will merge new fields on top of existing values. - -.PP -Client\-certificate flags: - \-\-client\-certificate=certfile \-\-client\-key=keyfile - -.PP -Bearer token flags: - \-\-token=bearer\_token - -.PP -Basic auth flags: - \-\-username=basic\_user \-\-password=basic\_password - -.PP -Bearer token and basic auth are mutually exclusive. - - -.SH OPTIONS -.PP -\fB\-\-client\-certificate\fP="" - path to client\-certificate for the user entry in kubeconfig - -.PP -\fB\-\-client\-key\fP="" - path to client\-key for the user entry in kubeconfig - -.PP -\fB\-\-embed\-certs\fP=false - embed client cert/key for the user entry in kubeconfig - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for set\-credentials - -.PP -\fB\-\-password\fP="" - password for the user entry in kubeconfig - -.PP -\fB\-\-token\fP="" - token for the user entry in kubeconfig - -.PP -\fB\-\-username\fP="" - username for the user entry in kubeconfig - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
- -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Set only the "client\-key" field on the "cluster\-admin" -// entry, without touching other values: -$ kubectl set\-credentials cluster\-admin \-\-client\-key=\~/.kube/admin.key - -// Set basic auth for the "cluster\-admin" entry -$ kubectl set\-credentials cluster\-admin \-\-username=admin \-\-password=uXFGweU9l35qcif - -// Embed client certificate data in the "cluster\-admin" entry -$ kubectl set\-credentials cluster\-admin \-\-client\-certificate=\~/.kube/admin.crt \-\-embed\-certs=true - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-set.1 b/release-0.19.0/docs/man/man1/kubectl-config-set.1 deleted file mode 100644 index f83ea2edaa2..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-set.1 +++ /dev/null @@ -1,132 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config set \- Sets an individual value in a kubeconfig file - - -.SH SYNOPSIS -.PP -\fBkubectl config set\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Sets an individual value in a kubeconfig file -PROPERTY\_NAME is a dot delimited name where each token represents either a attribute name or a map key. Map keys may not contain dots. -PROPERTY\_VALUE is the new value you wish to set. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for set - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
- -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-unset.1 b/release-0.19.0/docs/man/man1/kubectl-config-unset.1 deleted file mode 100644 index cea12d2e81a..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-unset.1 +++ /dev/null @@ -1,131 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config unset \- Unsets an individual value in a kubeconfig file - - -.SH SYNOPSIS -.PP -\fBkubectl config unset\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Unsets an individual value in a kubeconfig file -PROPERTY\_NAME is a dot delimited name where each token represents either a attribute name or a map key. Map keys may not contain dots. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for unset - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
- -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-use-context.1 b/release-0.19.0/docs/man/man1/kubectl-config-use-context.1 deleted file mode 100644 index 4ae194bd2a6..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-use-context.1 +++ /dev/null @@ -1,130 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config use\-context \- Sets the current\-context in a kubeconfig file - - -.SH SYNOPSIS -.PP -\fBkubectl config use\-context\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Sets the current\-context in a kubeconfig file - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for use\-context - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
- -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config-view.1 b/release-0.19.0/docs/man/man1/kubectl-config-view.1 deleted file mode 100644 index c4e237d5818..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config-view.1 +++ /dev/null @@ -1,181 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config view \- displays Merged kubeconfig settings or a specified kubeconfig file. - - -.SH SYNOPSIS -.PP -\fBkubectl config view\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -displays Merged kubeconfig settings or a specified kubeconfig file. - -.PP -You can use \-\-output=template \-\-template=TEMPLATE to extract specific values. - - -.SH OPTIONS -.PP -\fB\-\-flatten\fP=false - flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files) - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for view - -.PP -\fB\-\-merge\fP=true - merge together the full hierarchy of kubeconfig files - -.PP -\fB\-\-minify\fP=false - remove all information not used by current\-context from the output - -.PP -\fB\-\-no\-headers\fP=false - When using the default output, don't print headers. - -.PP -\fB\-o\fP, \fB\-\-output\fP="" - Output format. One of: json|yaml|template|templatefile. - -.PP -\fB\-\-output\-version\fP="" - Output the formatted object with the given version (default api\-version). - -.PP -\fB\-\-raw\fP=false - display raw byte data - -.PP -\fB\-t\fP, \fB\-\-template\fP="" - Template string or path to template file to use when \-o=template or \-o=templatefile. The template format is golang templates [ -\[la]http://golang.org/pkg/text/template/#pkg-overview\[ra]] - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. 
file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Show Merged kubeconfig settings. -$ kubectl config view - -// Get the password for the e2e user -$ kubectl config view \-o template \-\-template='\{\{range .users\}\}\{\{ if eq .name "e2e" \}\}\{\{ index .user.password \}\}\{\{end\}\}\{\{end\}\}' - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl\-config(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-config.1 b/release-0.19.0/docs/man/man1/kubectl-config.1 deleted file mode 100644 index 66eb5e8a1a2..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-config.1 +++ /dev/null @@ -1,136 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl config \- config modifies kubeconfig files - - -.SH SYNOPSIS -.PP -\fBkubectl config\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -config modifies kubeconfig files using subcommands like "kubectl config set current\-context my\-context" - -.PP -The loading order follows these rules: - 1. If the \-\-kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place. - 2. If $KUBECONFIG environment variable is set, then it is used a list of paths (normal path delimitting rules for your system). These paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. - 3. 
Otherwise, $\{HOME\}/.kube/config is used and no merging takes place. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for config - -.PP -\fB\-\-kubeconfig\fP="" - use a particular kubeconfig file - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, \fBkubectl\-config\-view(1)\fP, \fBkubectl\-config\-set\-cluster(1)\fP, \fBkubectl\-config\-set\-credentials(1)\fP, \fBkubectl\-config\-set\-context(1)\fP, \fBkubectl\-config\-set(1)\fP, \fBkubectl\-config\-unset(1)\fP, \fBkubectl\-config\-use\-context(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-create.1 b/release-0.19.0/docs/man/man1/kubectl-create.1 deleted file mode 100644 index b0c511d36d1..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-create.1 +++ /dev/null @@ -1,152 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl create \- Create a resource by filename or stdin - - -.SH SYNOPSIS -.PP -\fBkubectl create\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Create a resource by filename or stdin. - -.PP -JSON and YAML formats are accepted. 
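-
-.PP
-For example (a minimal sketch, assuming a manifest file named pod.yaml describes a valid pod), a YAML file is passed exactly like JSON:
-
-.RS
-
-.nf
-// Create a pod using the data in pod.yaml (hypothetical YAML manifest).
-$ kubectl create \-f pod.yaml
-
-.fi
-.RE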
- - -.SH OPTIONS -.PP -\fB\-f\fP, \fB\-\-filename\fP=[] - Filename, directory, or URL to file to use to create the resource - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for create - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Create a pod using the data in pod.json. -$ kubectl create \-f pod.json - -// Create a pod based on the JSON passed into stdin. -$ cat pod.json | kubectl create \-f \- - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-delete.1 b/release-0.19.0/docs/man/man1/kubectl-delete.1 deleted file mode 100644 index c1c74f614fd..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-delete.1 +++ /dev/null @@ -1,194 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl delete \- Delete a resource by filename, stdin, resource and ID, or by resources and label selector. - - -.SH SYNOPSIS -.PP -\fBkubectl delete\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Delete a resource by filename, stdin, resource and ID, or by resources and label selector. - -.PP -JSON and YAML formats are accepted. 
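-
-.PP
-For example (a sketch assuming a hypothetical pod.yaml manifest describing the pod to remove), deletion by filename accepts YAML as well:
-
-.RS
-
-.nf
-// Delete the pod described in pod.yaml (hypothetical YAML manifest).
-$ kubectl delete \-f pod.yaml
-
-.fi
-.RE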
- -.PP -If both a filename and command line arguments are passed, the command line -arguments are used and the filename is ignored. - -.PP -Note that the delete command does NOT do resource version checks, so if someone -submits an update to a resource right when you submit a delete, their update -will be lost along with the rest of the resource. - - -.SH OPTIONS -.PP -\fB\-\-all\fP=false - [\-all] to select all the specified resources. - -.PP -\fB\-\-cascade\fP=true - If true, cascade the delete resources managed by this resource (e.g. Pods created by a ReplicationController). Default true. - -.PP -\fB\-f\fP, \fB\-\-filename\fP=[] - Filename, directory, or URL to a file containing the resource to delete. - -.PP -\fB\-\-grace\-period\fP=\-1 - Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for delete - -.PP -\fB\-\-ignore\-not\-found\fP=false - Treat "resource not found" as a successful delete. - -.PP -\fB\-l\fP, \fB\-\-selector\fP="" - Selector (label query) to filter on. - -.PP -\fB\-\-timeout\fP=0 - The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Delete a pod using the type and ID specified in pod.json. -$ kubectl delete \-f pod.json - -// Delete a pod based on the type and ID in the JSON passed into stdin. 
-$ cat pod.json | kubectl delete \-f \- - -// Delete pods and services with label name=myLabel. -$ kubectl delete pods,services \-l name=myLabel - -// Delete a pod with ID 1234\-56\-7890\-234234\-456456. -$ kubectl delete pod 1234\-56\-7890\-234234\-456456 - -// Delete all pods -$ kubectl delete pods \-\-all - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-describe.1 b/release-0.19.0/docs/man/man1/kubectl-describe.1 deleted file mode 100644 index f8856055a3f..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-describe.1 +++ /dev/null @@ -1,149 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl describe \- Show details of a specific resource - - -.SH SYNOPSIS -.PP -\fBkubectl describe\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Show details of a specific resource. - -.PP -This command joins many API calls together to form a detailed description of a -given resource. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for describe - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. 
- -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Describe a node -$ kubectl describe nodes kubernetes\-minion\-emt8.c.myproject.internal - -// Describe a pod -$ kubectl describe pods/nginx - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-exec.1 b/release-0.19.0/docs/man/man1/kubectl-exec.1 deleted file mode 100644 index c06547c8181..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-exec.1 +++ /dev/null @@ -1,164 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl exec \- Execute a command in a container. - - -.SH SYNOPSIS -.PP -\fBkubectl exec\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Execute a command in a container. - - -.SH OPTIONS -.PP -\fB\-c\fP, \fB\-\-container\fP="" - Container name - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for exec - -.PP -\fB\-p\fP, \fB\-\-pod\fP="" - Pod name - -.PP -\fB\-i\fP, \fB\-\-stdin\fP=false - Pass stdin to the container - -.PP -\fB\-t\fP, \fB\-\-tty\fP=false - Stdin is a TTY - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. 
- -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// get output from running 'date' from pod 123456\-7890, using the first container by default -$ kubectl exec 123456\-7890 date - -// get output from running 'date' in ruby\-container from pod 123456\-7890 -$ kubectl exec 123456\-7890 \-c ruby\-container date - -//switch to raw terminal mode, sends stdin to 'bash' in ruby\-container from pod 123456\-780 and sends stdout/stderr from 'bash' back to the client -$ kubectl exec 123456\-7890 \-c ruby\-container \-i \-t \-\- bash \-il - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-expose.1 b/release-0.19.0/docs/man/man1/kubectl-expose.1 deleted file mode 100644 index 55aaec9d511..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-expose.1 +++ /dev/null @@ -1,222 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl expose \- Take a replicated application and expose it as Kubernetes Service - - -.SH SYNOPSIS -.PP -\fBkubectl expose\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Take a replicated application and expose it as Kubernetes Service. - -.PP -Looks up a replication controller or service by name and uses the selector for that resource as the -selector for a new Service on the specified port. If no labels are specified, the new service will -re\-use the labels from the resource it exposes. - - -.SH OPTIONS -.PP -\fB\-\-container\-port\fP="" - Synonym for \-\-target\-port - -.PP -\fB\-\-create\-external\-load\-balancer\fP=false - If true, create an external load balancer for this service (trumped by \-\-type). Implementation is cloud provider dependent. Default is 'false'. - -.PP -\fB\-\-dry\-run\fP=false - If true, only print the object that would be sent, without creating it. - -.PP -\fB\-\-generator\fP="service/v1" - The name of the API generator to use. Default is 'service/v1'. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for expose - -.PP -\fB\-l\fP, \fB\-\-labels\fP="" - Labels to apply to the service created by this call. - -.PP -\fB\-\-name\fP="" - The name for the newly created object. - -.PP -\fB\-\-no\-headers\fP=false - When using the default output, don't print headers. - -.PP -\fB\-o\fP, \fB\-\-output\fP="" - Output format. One of: json|yaml|template|templatefile. - -.PP -\fB\-\-output\-version\fP="" - Output the formatted object with the given version (default api\-version). - -.PP -\fB\-\-overrides\fP="" - An inline JSON override for the generated object. If this is non\-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field. - -.PP -\fB\-\-port\fP=\-1 - The port that the service should serve on. Required. - -.PP -\fB\-\-protocol\fP="TCP" - The network protocol for the service to be created. Default is 'tcp'. - -.PP -\fB\-\-public\-ip\fP="" - Name of a public IP address to set for the service. The service will be assigned this IP in addition to its generated service IP. - -.PP -\fB\-\-selector\fP="" - A label selector to use for this service. 
If empty (the default) infer the selector from the replication controller. - -.PP -\fB\-\-target\-port\fP="" - Name or number for the port on the container that the service should direct traffic to. Optional. - -.PP -\fB\-t\fP, \fB\-\-template\fP="" - Template string or path to template file to use when \-o=template or \-o=templatefile. The template format is golang templates [ -\[la]http://golang.org/pkg/text/template/#pkg-overview\[ra]] - -.PP -\fB\-\-type\fP="" - Type for this service: ClusterIP, NodePort, or LoadBalancer. Default is 'ClusterIP' unless \-\-create\-external\-load\-balancer is specified. - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Creates a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000. -$ kubectl expose rc nginx \-\-port=80 \-\-target\-port=8000 - -// Creates a second service based on the above service, exposing the container port 8443 as port 443 with the name "nginx\-https" -$ kubectl expose service nginx \-\-port=443 \-\-target\-port=8443 \-\-name=nginx\-https - -// Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video\-stream'. 
-$ kubectl expose rc streamer \-\-port=4100 \-\-protocol=udp \-\-name=video\-stream - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-get.1 b/release-0.19.0/docs/man/man1/kubectl-get.1 deleted file mode 100644 index e71f884a218..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-get.1 +++ /dev/null @@ -1,200 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl get \- Display one or many resources - - -.SH SYNOPSIS -.PP -\fBkubectl get\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Display one or many resources. - -.PP -Possible resources include pods (po), replication controllers (rc), services -(svc), nodes, events (ev), component statuses (cs), limit ranges (limits), -nodes (no), persistent volumes (pv), persistent volume claims (pvc) -or resource quotas (quota). - -.PP -By specifying the output as 'template' and providing a Go template as the value -of the \-\-template flag, you can filter the attributes of the fetched resource(s). - - -.SH OPTIONS -.PP -\fB\-\-all\-namespaces\fP=false - If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with \-\-namespace. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for get - -.PP -\fB\-\-no\-headers\fP=false - When using the default output, don't print headers. - -.PP -\fB\-o\fP, \fB\-\-output\fP="" - Output format. One of: json|yaml|template|templatefile. - -.PP -\fB\-\-output\-version\fP="" - Output the formatted object with the given version (default api\-version). - -.PP -\fB\-l\fP, \fB\-\-selector\fP="" - Selector (label query) to filter on - -.PP -\fB\-t\fP, \fB\-\-template\fP="" - Template string or path to template file to use when \-o=template or \-o=templatefile. The template format is golang templates [ -\[la]http://golang.org/pkg/text/template/#pkg-overview\[ra]] - -.PP -\fB\-w\fP, \fB\-\-watch\fP=false - After listing/getting the requested object, watch for changes. - -.PP -\fB\-\-watch\-only\fP=false - Watch for changes to the requested object(s), without listing/getting first. - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. 
- -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// List all pods in ps output format. -$ kubectl get pods - -// List a single replication controller with specified NAME in ps output format. -$ kubectl get replicationcontroller web - -// List a single pod in JSON output format. -$ kubectl get \-o json pod web\-pod\-13je7 - -// Return only the phase value of the specified pod. -$ kubectl get \-o template web\-pod\-13je7 \-\-template=\{\{.status.phase\}\} \-\-api\-version=v1 - -// List all replication controllers and services together in ps output format. -$ kubectl get rc,services - -// List one or more resources by their type and names -$ kubectl get rc/web service/frontend pods/web\-pod\-13je7 - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-label.1 b/release-0.19.0/docs/man/man1/kubectl-label.1 deleted file mode 100644 index 9fddbcd4a07..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-label.1 +++ /dev/null @@ -1,193 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl label \- Update the labels on a resource - - -.SH SYNOPSIS -.PP -\fBkubectl label\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Update the labels on a resource. - -.PP -A label must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters. -If \-\-overwrite is true, then existing labels can be overwritten, otherwise attempting to overwrite a label will result in an error. -If \-\-resource\-version is specified, then updates will use this resource version, otherwise the existing resource\-version will be used. - - -.SH OPTIONS -.PP -\fB\-\-all\fP=false - select all resources in the namespace of the specified resource types - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for label - -.PP -\fB\-\-no\-headers\fP=false - When using the default output, don't print headers. - -.PP -\fB\-o\fP, \fB\-\-output\fP="" - Output format. One of: json|yaml|template|templatefile. 
- -.PP -\fB\-\-output\-version\fP="" - Output the formatted object with the given version (default api\-version). - -.PP -\fB\-\-overwrite\fP=false - If true, allow labels to be overwritten, otherwise reject label updates that overwrite existing labels. - -.PP -\fB\-\-resource\-version\fP="" - If non\-empty, the labels update will only succeed if this is the current resource\-version for the object. Only valid when specifying a single resource. - -.PP -\fB\-l\fP, \fB\-\-selector\fP="" - Selector (label query) to filter on - -.PP -\fB\-t\fP, \fB\-\-template\fP="" - Template string or path to template file to use when \-o=template or \-o=templatefile. The template format is golang templates [ -\[la]http://golang.org/pkg/text/template/#pkg-overview\[ra]] - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Update pod 'foo' with the label 'unhealthy' and the value 'true'. -$ kubectl label pods foo unhealthy=true - -// Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value. -$ kubectl label \-\-overwrite pods foo status=unhealthy - -// Update all pods in the namespace -$ kubectl label pods \-\-all status=unhealthy - -// Update pod 'foo' only if the resource is unchanged from version 1. -$ kubectl label pods foo status=unhealthy \-\-resource\-version=1 - -// Update pod 'foo' by removing a label named 'bar' if it exists. -// Does not require the \-\-overwrite flag. 
-$ kubectl label pods foo bar\- - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-logs.1 b/release-0.19.0/docs/man/man1/kubectl-logs.1 deleted file mode 100644 index 148efd87678..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-logs.1 +++ /dev/null @@ -1,160 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl logs \- Print the logs for a container in a pod. - - -.SH SYNOPSIS -.PP -\fBkubectl logs\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Print the logs for a container in a pod. If the pod has only one container, the container name is optional. - - -.SH OPTIONS -.PP -\fB\-f\fP, \fB\-\-follow\fP=false - Specify if the logs should be streamed. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for logs - -.PP -\fB\-\-interactive\fP=true - If true, prompt the user for input when required. Default true. - -.PP -\fB\-p\fP, \fB\-\-previous\fP=false - If true, print the logs for the previous instance of the container in a pod if it exists. - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Returns snapshot of ruby\-container logs from pod 123456\-7890. 
-$ kubectl logs 123456\-7890 ruby\-container - -// Returns snapshot of previous terminated ruby\-container logs from pod 123456\-7890. -$ kubectl logs \-p 123456\-7890 ruby\-container - -// Starts streaming of ruby\-container logs from pod 123456\-7890. -$ kubectl logs \-f 123456\-7890 ruby\-container - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-namespace.1 b/release-0.19.0/docs/man/man1/kubectl-namespace.1 deleted file mode 100644 index 94b04c52a9e..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-namespace.1 +++ /dev/null @@ -1,133 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl namespace \- SUPERCEDED: Set and view the current Kubernetes namespace - - -.SH SYNOPSIS -.PP -\fBkubectl namespace\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -SUPERCEDED: Set and view the current Kubernetes namespace scope for command line requests. - -.PP -namespace has been superceded by the context.namespace field of .kubeconfig files. See 'kubectl config set\-context \-\-help' for more details. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for namespace - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. 
- -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-port-forward.1 b/release-0.19.0/docs/man/man1/kubectl-port-forward.1 deleted file mode 100644 index 6d52308dd3e..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-port-forward.1 +++ /dev/null @@ -1,156 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl port\-forward \- Forward one or more local ports to a pod. - - -.SH SYNOPSIS -.PP -\fBkubectl port\-forward\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Forward one or more local ports to a pod. - - -.SH OPTIONS -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for port\-forward - -.PP -\fB\-p\fP, \fB\-\-pod\fP="" - Pod name - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. 
- -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf - -// listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod -$ kubectl port\-forward \-p mypod 5000 6000 - -// listens on port 8888 locally, forwarding to 5000 in the pod -$ kubectl port\-forward \-p mypod 8888:5000 - -// listens on a random port locally, forwarding to 5000 in the pod -$ kubectl port\-forward \-p mypod :5000 - -// listens on a random port locally, forwarding to 5000 in the pod -$ kubectl port\-forward \-p mypod 0:5000 - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-proxy.1 b/release-0.19.0/docs/man/man1/kubectl-proxy.1 deleted file mode 100644 index 49256350367..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-proxy.1 +++ /dev/null @@ -1,183 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl proxy \- Run a proxy to the Kubernetes API server - - -.SH SYNOPSIS -.PP -\fBkubectl proxy\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -To proxy all of the kubernetes api and nothing else, use: - -.PP -kubectl proxy \-\-api\-prefix=/ - -.PP -To proxy only part of the kubernetes api and also some static files: - -.PP -kubectl proxy \-\-www=/my/files \-\-www\-prefix=/static/ \-\-api\-prefix=/api/ - -.PP -The above lets you 'curl localhost:8001/api/v1/pods'. - -.PP -To proxy the entire kubernetes api at a different root, use: - -.PP -kubectl proxy \-\-api\-prefix=/custom/ - -.PP -The above lets you 'curl localhost:8001/custom/api/v1/pods' - - -.SH OPTIONS -.PP -\fB\-\-api\-prefix\fP="/api/" - Prefix to serve the proxied API under. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for proxy - -.PP -\fB\-p\fP, \fB\-\-port\fP=8001 - The port on which to run the proxy. - -.PP -\fB\-w\fP, \fB\-\-www\fP="" - Also serve static files from the given directory under the specified prefix. - -.PP -\fB\-P\fP, \fB\-\-www\-prefix\fP="/static/" - Prefix to serve static files under, if static file directory is specified. - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. 
- -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Run a proxy to kubernetes apiserver on port 8011, serving static content from ./local/www/ -$ kubectl proxy \-\-port=8011 \-\-www=./local/www/ - -// Run a proxy to kubernetes apiserver, changing the api prefix to k8s\-api -// This makes e.g. the pods api available at localhost:8011/k8s\-api/v1/pods/ -$ kubectl proxy \-\-api\-prefix=/k8s\-api - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-rolling-update.1 b/release-0.19.0/docs/man/man1/kubectl-rolling-update.1 deleted file mode 100644 index 6483932c16e..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-rolling-update.1 +++ /dev/null @@ -1,207 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl rolling\-update \- Perform a rolling update of the given ReplicationController. - - -.SH SYNOPSIS -.PP -\fBkubectl rolling\-update\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Perform a rolling update of the given ReplicationController. - -.PP -Replaces the specified controller with new controller, updating one pod at a time to use the -new PodTemplate. The new\-controller.json must specify the same namespace as the -existing controller and overwrite at least one (common) label in its replicaSelector. - - -.SH OPTIONS -.PP -\fB\-\-deployment\-label\-key\fP="deployment" - The key to use to differentiate between two different controllers, default 'deployment'. Only relevant when \-\-image is specified, ignored otherwise - -.PP -\fB\-\-dry\-run\fP=false - If true, print out the changes that would be made, but don't actually make them. - -.PP -\fB\-f\fP, \fB\-\-filename\fP="" - Filename or URL to file to use to create the new controller. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for rolling\-update - -.PP -\fB\-\-image\fP="" - Image to upgrade the controller to. Can not be used with \-\-filename/\-f - -.PP -\fB\-\-no\-headers\fP=false - When using the default output, don't print headers. - -.PP -\fB\-o\fP, \fB\-\-output\fP="" - Output format. 
One of: json|yaml|template|templatefile. - -.PP -\fB\-\-output\-version\fP="" - Output the formatted object with the given version (default api\-version). - -.PP -\fB\-\-poll\-interval\fP="3s" - Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - -.PP -\fB\-\-rollback\fP=false - If true, this is a request to abort an existing rollout that is partially rolled out. It effectively reverses current and next and runs a rollout - -.PP -\fB\-t\fP, \fB\-\-template\fP="" - Template string or path to template file to use when \-o=template or \-o=templatefile. The template format is golang templates [ -\[la]http://golang.org/pkg/text/template/#pkg-overview\[ra]] - -.PP -\fB\-\-timeout\fP="5m0s" - Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - -.PP -\fB\-\-update\-period\fP="1m0s" - Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Update pods of frontend\-v1 using new controller data in frontend\-v2.json. -$ kubectl rolling\-update frontend\-v1 \-f frontend\-v2.json - -// Update pods of frontend\-v1 using JSON data passed into stdin. 
-$ cat frontend\-v2.json | kubectl rolling\-update frontend\-v1 \-f \- - -// Update the pods of frontend\-v1 to frontend\-v2 by just changing the image, and switching the -// name of the replication controller. -$ kubectl rolling\-update frontend\-v1 frontend\-v2 \-\-image=image:v2 - -// Update the pods of frontend by just changing the image, and keeping the old name -$ kubectl rolling\-update frontend \-\-image=image:v2 - - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-run.1 b/release-0.19.0/docs/man/man1/kubectl-run.1 deleted file mode 100644 index 10244c69d65..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-run.1 +++ /dev/null @@ -1,201 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl run \- Run a particular image on the cluster. - - -.SH SYNOPSIS -.PP -\fBkubectl run\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Create and run a particular image, possibly replicated. -Creates a replication controller to manage the created container(s). - - -.SH OPTIONS -.PP -\fB\-\-dry\-run\fP=false - If true, only print the object that would be sent, without sending it. - -.PP -\fB\-\-generator\fP="run/v1" - The name of the API generator to use. Default is 'run\-controller/v1'. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for run - -.PP -\fB\-\-hostport\fP=\-1 - The host port mapping for the container port. To demonstrate a single\-machine container. - -.PP -\fB\-\-image\fP="" - The image for the container to run. - -.PP -\fB\-l\fP, \fB\-\-labels\fP="" - Labels to apply to the pod(s). - -.PP -\fB\-\-no\-headers\fP=false - When using the default output, don't print headers. - -.PP -\fB\-o\fP, \fB\-\-output\fP="" - Output format. One of: json|yaml|template|templatefile. - -.PP -\fB\-\-output\-version\fP="" - Output the formatted object with the given version (default api\-version). - -.PP -\fB\-\-overrides\fP="" - An inline JSON override for the generated object. If this is non\-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field. - -.PP -\fB\-\-port\fP=\-1 - The port that this container exposes. - -.PP -\fB\-r\fP, \fB\-\-replicas\fP=1 - Number of replicas to create for this container. Default is 1. - -.PP -\fB\-t\fP, \fB\-\-template\fP="" - Template string or path to template file to use when \-o=template or \-o=templatefile. The template format is golang templates [ -\[la]http://golang.org/pkg/text/template/#pkg-overview\[ra]] - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. 
- -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Starts a single instance of nginx. -$ kubectl run nginx \-\-image=nginx - -// Starts a replicated instance of nginx. -$ kubectl run nginx \-\-image=nginx \-\-replicas=5 - -// Dry run. Print the corresponding API objects without creating them. -$ kubectl run nginx \-\-image=nginx \-\-dry\-run - -// Start a single instance of nginx, but overload the spec of the replication controller with a partial set of values parsed from JSON. -$ kubectl run nginx \-\-image=nginx \-\-overrides='\{ "apiVersion": "v1", "spec": \{ ... \} \}' - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-scale.1 b/release-0.19.0/docs/man/man1/kubectl-scale.1 deleted file mode 100644 index aa3cef1b1a0..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-scale.1 +++ /dev/null @@ -1,163 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl scale \- Set a new size for a Replication Controller. - - -.SH SYNOPSIS -.PP -\fBkubectl scale\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Set a new size for a Replication Controller. - -.PP -Scale also allows users to specify one or more preconditions for the scale action. -If \-\-current\-replicas or \-\-resource\-version is specified, it is validated before the -scale is attempted, and it is guaranteed that the precondition holds true when the -scale is sent to the server. - - -.SH OPTIONS -.PP -\fB\-\-current\-replicas\fP=\-1 - Precondition for current size. Requires that the current size of the replication controller match this value in order to scale. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for scale - -.PP -\fB\-\-replicas\fP=\-1 - The new desired number of replicas. Required. - -.PP -\fB\-\-resource\-version\fP="" - Precondition for resource version. Requires that the current resource version match this value in order to scale. 
- - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Scale replication controller named 'foo' to 3. -$ kubectl scale \-\-replicas=3 replicationcontrollers foo - -// If the replication controller named foo's current size is 2, scale foo to 3. -$ kubectl scale \-\-current\-replicas=2 \-\-replicas=3 replicationcontrollers foo - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! diff --git a/release-0.19.0/docs/man/man1/kubectl-stop.1 b/release-0.19.0/docs/man/man1/kubectl-stop.1 deleted file mode 100644 index a00bcce9382..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-stop.1 +++ /dev/null @@ -1,179 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl stop \- Gracefully shut down a resource by id or filename. - - -.SH SYNOPSIS -.PP -\fBkubectl stop\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Gracefully shut down a resource by id or filename. - -.PP -Attempts to shut down and delete a resource that supports graceful termination. -If the resource is scalable it will be scaled to 0 before deletion. - - -.SH OPTIONS -.PP -\fB\-\-all\fP=false - [\-all] to select all the specified resources. 
- -.PP -\fB\-f\fP, \fB\-\-filename\fP=[] - Filename, directory, or URL to file of resource(s) to be stopped. - -.PP -\fB\-\-grace\-period\fP=\-1 - Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for stop - -.PP -\fB\-\-ignore\-not\-found\fP=false - Treat "resource not found" as a successful stop. - -.PP -\fB\-l\fP, \fB\-\-selector\fP="" - Selector (label query) to filter on. - -.PP -\fB\-\-timeout\fP=0 - The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Shut down foo. -$ kubectl stop replicationcontroller foo - -// Stop pods and services with label name=myLabel. -$ kubectl stop pods,services \-l name=myLabel - -// Shut down the service defined in service.json -$ kubectl stop \-f service.json - -// Shut down all resources in the path/to/resources directory -$ kubectl stop \-f path/to/resources - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! 
diff --git a/release-0.19.0/docs/man/man1/kubectl-update.1 b/release-0.19.0/docs/man/man1/kubectl-update.1 deleted file mode 100644 index 3441a6e09e8..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-update.1 +++ /dev/null @@ -1,152 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl update \- Update a resource by filename or stdin. - - -.SH SYNOPSIS -.PP -\fBkubectl update\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Update a resource by filename or stdin. - -.PP -JSON and YAML formats are accepted. - - -.SH OPTIONS -.PP -\fB\-f\fP, \fB\-\-filename\fP=[] - Filename, directory, or URL to file to use to update the resource. - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for update - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH EXAMPLE -.PP -.RS - -.nf -// Update a pod using the data in pod.json. -$ kubectl update \-f pod.json - -// Update a pod based on the JSON passed into stdin. -$ cat pod.json | kubectl update \-f \- - -.fi -.RE - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! 
diff --git a/release-0.19.0/docs/man/man1/kubectl-version.1 b/release-0.19.0/docs/man/man1/kubectl-version.1 deleted file mode 100644 index d91fca6c10f..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl-version.1 +++ /dev/null @@ -1,134 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl version \- Print the client and server version information. - - -.SH SYNOPSIS -.PP -\fBkubectl version\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -Print the client and server version information. - - -.SH OPTIONS -.PP -\fB\-c\fP, \fB\-\-client\fP=false - Client version only (no server required). - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for version - - -.SH OPTIONS INHERITED FROM PARENT COMMANDS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! 
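.PP
As an illustrative sketch, typical invocations using only the flags documented above might look like:
.PP
.RS

.nf
// Print only the client version; no running apiserver is required.
$ kubectl version \-c

// Print both the client and server versions (requires a reachable apiserver).
$ kubectl version

.fi
.RE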
diff --git a/release-0.19.0/docs/man/man1/kubectl.1 b/release-0.19.0/docs/man/man1/kubectl.1 deleted file mode 100644 index 5b92e11e358..00000000000 --- a/release-0.19.0/docs/man/man1/kubectl.1 +++ /dev/null @@ -1,132 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Eric Paris" "Jan 2015" "" - - -.SH NAME -.PP -kubectl \- kubectl controls the Kubernetes cluster manager - - -.SH SYNOPSIS -.PP -\fBkubectl\fP [OPTIONS] - - -.SH DESCRIPTION -.PP -kubectl controls the Kubernetes cluster manager. - -.PP -Find more information at -\[la]https://github.com/GoogleCloudPlatform/kubernetes\[ra]. - - -.SH OPTIONS -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-version\fP="" - The API version to use when talking to the server - -.PP -\fB\-\-certificate\-authority\fP="" - Path to a cert. file for the certificate authority. - -.PP -\fB\-\-client\-certificate\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-client\-key\fP="" - Path to a client key file for TLS. - -.PP -\fB\-\-cluster\fP="" - The name of the kubeconfig cluster to use - -.PP -\fB\-\-context\fP="" - The name of the kubeconfig context to use - -.PP -\fB\-h\fP, \fB\-\-help\fP=false - help for kubectl - -.PP -\fB\-\-insecure\-skip\-tls\-verify\fP=false - If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. - -.PP -\fB\-\-kubeconfig\fP="" - Path to the kubeconfig file to use for CLI requests. - -.PP -\fB\-\-log\-backtrace\-at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\-dir\fP="" - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\-flush\-frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-match\-server\-version\fP=false - Require server version to match client version - -.PP -\fB\-\-namespace\fP="" - If present, the namespace scope for this CLI request. - -.PP -\fB\-\-password\fP="" - Password for basic authentication to the API server. - -.PP -\fB\-s\fP, \fB\-\-server\fP="" - The address and port of the Kubernetes API server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-token\fP="" - Bearer token for authentication to the API server. - -.PP -\fB\-\-user\fP="" - The name of the kubeconfig user to use - -.PP -\fB\-\-username\fP="" - Username for basic authentication to the API server. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-validate\fP=false - If true, use a schema to validate the input before sending it - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - - -.SH SEE ALSO -.PP -\fBkubectl\-get(1)\fP, \fBkubectl\-describe(1)\fP, \fBkubectl\-create(1)\fP, \fBkubectl\-update(1)\fP, \fBkubectl\-delete(1)\fP, \fBkubectl\-namespace(1)\fP, \fBkubectl\-logs(1)\fP, \fBkubectl\-rolling\-update(1)\fP, \fBkubectl\-scale(1)\fP, \fBkubectl\-exec(1)\fP, \fBkubectl\-port\-forward(1)\fP, \fBkubectl\-proxy(1)\fP, \fBkubectl\-run(1)\fP, \fBkubectl\-stop(1)\fP, \fBkubectl\-expose(1)\fP, \fBkubectl\-label(1)\fP, \fBkubectl\-config(1)\fP, \fBkubectl\-cluster\-info(1)\fP, \fBkubectl\-api\-versions(1)\fP, \fBkubectl\-version(1)\fP, - - -.SH HISTORY -.PP -January 2015, Originally compiled by Eric Paris (eparis at redhat dot com) based on the kubernetes source material, but hopefully they have been automatically generated since! 
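.PP
As an illustrative sketch, the global flags documented above can be combined with any subcommand; the kubeconfig path, context name and namespace below are placeholders:
.PP
.RS

.nf
// Run a single request against a specific kubeconfig, context and namespace.
$ kubectl \-\-kubeconfig=/path/to/kubeconfig \-\-context=my\-context \-\-namespace=staging get pods

.fi
.RE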
diff --git a/release-0.19.0/docs/man/man1/kubelet.1 b/release-0.19.0/docs/man/man1/kubelet.1 deleted file mode 100644 index 3215e20ace5..00000000000 --- a/release-0.19.0/docs/man/man1/kubelet.1 +++ /dev/null @@ -1,339 +0,0 @@ -.TH "KUBERNETES" "1" " kubernetes User Manuals" "Scott Collier" "October 2014" "" - -.SH NAME -.PP -kubelet \- Processes a container manifest so the containers are launched according to how they are described. - -.SH SYNOPSIS -.PP -\fBkubelet\fP [OPTIONS] - -.SH DESCRIPTION -.PP -The \fBkubernetes\fP kubelet runs on each node. The Kubelet works in terms of a container manifest. A container manifest is a YAML or JSON file that describes a pod. The Kubelet takes a set of manifests that are provided in various mechanisms and ensures that the containers described in those manifests are started and continue running. - -.PP -There are 3 ways that a container manifest can be provided to the Kubelet: - -.PP -.RS - -.nf -File: Path passed as a flag on the command line. This file is rechecked every 20 seconds (configurable with a flag). -HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag). -HTTP server: The kubelet can also listen for HTTP and respond to a simple API (underspec'd currently) to submit a new manifest. - -.fi - -.SH OPTIONS -.PP -\fB\-\-address\fP=0.0.0.0 - The IP address for the info server to serve on (set to 0.0.0.0 for all interfaces) - -.PP -\fB\-\-allow\_dynamic\_housekeeping\fP=true - Whether to allow the housekeeping interval to be dynamic - -.PP -\fB\-\-allow\-privileged\fP=false - If true, allow containers to request privileged mode. [default=false] - -.PP -\fB\-\-alsologtostderr\fP=false - log to standard error as well as files - -.PP -\fB\-\-api\-servers\fP=[] - List of Kubernetes API servers for publishing events, and reading pods and services. (ip:port), comma separated. - -.PP -\fB\-\-boot\_id\_file\fP=/proc/sys/kernel/random/boot\_id - Comma\-separated list of files to check for boot\-id. Use the first one that exists. - -.PP -\fB\-\-cadvisor\-port\fP=4194 - The port of the localhost cAdvisor endpoint - -.PP -\fB\-\-cert\-dir\fP="/var/run/kubernetes" - The directory where the TLS certs are located (by default /var/run/kubernetes). If \-\-tls\_cert\_file and \-\-tls\_private\_key\_file are provided, this flag will be ignored. - -.PP -\fB\-\-cgroup\_root\fP="" - Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default. - -.PP -\fB\-\-cloud\-config\fP="" - The path to the cloud provider configuration file. Empty string for no configuration file. - -.PP -\fB\-\-cloud\-provider\fP="" - The provider for cloud services. Empty string for no provider. - -.PP -\fB\-\-cluster\-dns\fP= - IP address for a cluster DNS server. If set, kubelet will configure all containers to use this for DNS resolution in addition to the host's DNS servers - -.PP -\fB\-\-cluster\-domain\fP="" - Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains - -.PP -\fB\-\-config\fP="" - Path to the config file or directory of files - -.PP -\fB\-\-configure\-cbr0\fP=false - If true, kubelet will configure cbr0 based on Node.Spec.PodCIDR. 
- -.PP -\fB\-\-container\_hints\fP=/etc/cadvisor/container\_hints.json - location of the container hints file - -.PP -\fB\-\-container\_runtime\fP="docker" - The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'. - -.PP -\fB\-\-docker\fP=unix:///var/run/docker.sock - docker endpoint - -.PP -\fB\-\-docker\-daemon\-container\fP="/docker\-daemon" - Optional resource\-only container in which to place the Docker Daemon. Empty for no container (Default: /docker\-daemon). - -.PP -\fB\-\-docker\-endpoint\fP="" - If non\-empty, use this for the docker endpoint to communicate with - -.PP -\fB\-\-docker\_only\fP=false - Only report docker containers in addition to root stats - -.PP -\fB\-\-docker\_root\fP=/var/lib/docker - Absolute path to the Docker state root directory (default: /var/lib/docker) - -.PP -\fB\-\-docker\_run\fP=/var/run/docker - Absolute path to the Docker run directory (default: /var/run/docker) - -.PP -\fB\-\-enable\-debugging\-handlers\fP=true - Enables server endpoints for log collection and local running of containers and commands - -.PP -\fB\-\-enable\_load\_reader\fP=false - Whether to enable cpu load reader - -.PP -\fB\-\-enable\-server\fP=true - Enable the info server - -.PP -\fB\-\-event\_storage\_age\_limit\fP=default=24h - Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non\-specified event types - -.PP -\fB\-\-event\_storage\_event\_limit\fP=default=100000 - Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non\-specified event types - -.PP -\fB\-\-file\-check\-frequency\fP=20s - Duration between checking config files for new data - -.PP -\fB\-\-global\_housekeeping\_interval\fP=1m0s - Interval between global housekeepings - -.PP -\fB\-\-google\-json\-key\fP="" - The Google Cloud Platform Service Account JSON Key to use for authentication. - -.PP -\fB\-\-healthz\-bind\-address\fP=127.0.0.1 - The IP address for the healthz server to serve on, defaulting to 127.0.0.1 (set to 0.0.0.0 for all interfaces) - -.PP -\fB\-\-healthz\-port\fP=10248 - The port of the localhost healthz endpoint - -.PP -\fB\-\-host\-network\-sources\fP="file" - Comma\-separated list of sources from which the Kubelet allows pods to use of host network. For all sources use "*" [default="file"] - -.PP -\fB\-\-hostname\-override\fP="" - If non\-empty, will use this string as identification instead of the actual hostname. - -.PP -\fB\-\-housekeeping\_interval\fP=1s - Interval between container housekeepings - -.PP -\fB\-\-http\-check\-frequency\fP=20s - Duration between checking http for new data - -.PP -\fB\-\-image\-gc\-high\-threshold\fP=90 - The percent of disk usage after which image garbage collection is always run. Default: 90%% - -.PP -\fB\-\-image\-gc\-low\-threshold\fP=80 - The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80%% - -.PP -\fB\-\-kubeconfig\fP=/var/lib/kubelet/kubeconfig - Path to a kubeconfig file, specifying how to authenticate to API server (the master location is set by the api\-servers flag). 
- -.PP -\fB\-\-log\_backtrace\_at\fP=:0 - when logging hits line file:N, emit a stack trace - -.PP -\fB\-\-log\_cadvisor\_usage\fP=false - Whether to log the usage of the cAdvisor container - -.PP -\fB\-\-log\_dir\fP= - If non\-empty, write log files in this directory - -.PP -\fB\-\-log\_flush\_frequency\fP=5s - Maximum number of seconds between log flushes - -.PP -\fB\-\-logtostderr\fP=true - log to standard error instead of files - -.PP -\fB\-\-low\-diskspace\-threshold\-mb\fP=256 - The absolute free disk space, in MB, to maintain. When disk space falls below this threshold, new pods would be rejected. Default: 256 - -.PP -\fB\-\-machine\_id\_file\fP=/etc/machine\-id,/var/lib/dbus/machine\-id - Comma\-separated list of files to check for machine\-id. Use the first one that exists. - -.PP -\fB\-\-manifest\-url\fP="" - URL for accessing the container manifest - -.PP -\fB\-\-master\-service\-namespace\fP="default" - The namespace from which the kubernetes master services should be injected into pods - -.PP -\fB\-\-max\_housekeeping\_interval\fP=1m0s - Largest interval to allow between container housekeepings - -.PP -\fB\-\-max\_pods\fP=100 - Number of Pods that can run on this Kubelet. - -.PP -\fB\-\-maximum\-dead\-containers\fP=100 - Maximum number of old instances of a containers to retain globally. Each container takes up some disk space. Default: 100. - -.PP -\fB\-\-maximum\-dead\-containers\-per\-container\fP=5 - Maximum number of old instances of a container to retain per container. Each container takes up some disk space. Default: 5. - -.PP -\fB\-\-minimum\-container\-ttl\-duration\fP=1m0s - Minimum age for a finished container before it is garbage collected. Examples: '300ms', '10s' or '2h45m' - -.PP -\fB\-\-network\-plugin\fP="" - The name of the network plugin to be invoked for various events in kubelet/pod lifecycle - -.PP -\fB\-\-node\-status\-update\-frequency\fP=10s - Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller. Default: 10s - -.PP -\fB\-\-oom\-score\-adj\fP=\-900 - The oom\_score\_adj value for kubelet process. Values must be within the range [\-1000, 1000] - -.PP -\fB\-\-pod\-infra\-container\-image\fP="gcr.io/google\_containers/pause:0.8.0" - The image whose network/ipc namespaces containers in each pod will use. - -.PP -\fB\-\-port\fP=10250 - The port for the info server to serve on - -.PP -\fB\-\-read\-only\-port\fP=10255 - The read\-only port for the info server to serve on (set to 0 to disable) - -.PP -\fB\-\-registry\-burst\fP=10 - Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry\_qps. Only used if \-\-registry\_qps > 0 - -.PP -\fB\-\-registry\-qps\fP=0 - If > 0, limit registry pull QPS to this value. If 0, unlimited. [default=0.0] - -.PP -\fB\-\-resource\-container\fP="/kubelet" - Absolute name of the resource\-only container to create and run the Kubelet in (Default: /kubelet). - -.PP -\fB\-\-root\-dir\fP="/var/lib/kubelet" - Directory path for managing kubelet files (volume mounts,etc). - -.PP -\fB\-\-runonce\fP=false - If true, exit after spawning pods from local manifests or remote urls. Exclusive with \-\-api\_servers, and \-\-enable\-server - -.PP -\fB\-\-stderrthreshold\fP=2 - logs at or above this threshold go to stderr - -.PP -\fB\-\-streaming\-connection\-idle\-timeout\fP=0 - Maximum time a streaming connection can be idle before the connection is automatically closed. 
Example: '5m' - -.PP -\fB\-\-sync\-frequency\fP=10s - Max period between synchronizing running containers and config - -.PP -\fB\-\-tls\-cert\-file\fP="" - File /gmrvcontaining x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If \-\-tls\_cert\_file and \-\-tls\_private\_key\_file are not provided, a self\-signed certificate and key are generated for the public address and saved to the directory passed to \-\-cert\_dir. - -.PP -\fB\-\-tls\-private\-key\-file\fP="" - File containing x509 private key matching \-\-tls\_cert\_file. - -.PP -\fB\-\-v\fP=0 - log level for V logs - -.PP -\fB\-\-version\fP=false - Print version information and quit - -.PP -\fB\-\-vmodule\fP= - comma\-separated list of pattern=N settings for file\-filtered logging - -.SH EXAMPLES -.PP -.RS - -.nf -/usr/bin/kubelet \-\-logtostderr=true \-\-v=0 \-\-api\_servers=http://127.0.0.1:8080 \-\-address=127.0.0.1 \-\-port=10250 \-\-hostname\_override=127.0.0.1 \-\-allow\-privileged=false - -.fi - -.SH HISTORY -.PP -October 2014, Originally compiled by Scott Collier (scollier at redhat dot com) based - on the kubernetes source material and internal work. - -.PP -May 2015, Revised by Victor HU(huruifeng at huawei dot com) by kubernetes version 0.17 - -.PP -[]() diff --git a/release-0.19.0/docs/man/md2man-all.sh b/release-0.19.0/docs/man/md2man-all.sh deleted file mode 100755 index 5665b49d8f0..00000000000 --- a/release-0.19.0/docs/man/md2man-all.sh +++ /dev/null @@ -1,41 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -e - -if [[ -z ${GO_MD2MAN} ]]; then - GO_MD2MAN="go-md2man" -fi - -# get into this script's directory -cd "$(dirname "$(readlink -f "$BASH_SOURCE")")" - -[ "$1" = '-q' ] || { - set -x - pwd -} - -for FILE in *.md; do - base="$(basename "$FILE")" - name="${base%.md}" - num="${name##*.}" - if [ -z "$num" -o "$name" = "$num" ]; then - # skip files that aren't of the format xxxx.N.md (like README.md) - continue - fi - mkdir -p "./man${num}" - ${GO_MD2MAN} -in "$FILE" -out "./man${num}/${name}" -done diff --git a/release-0.19.0/docs/namespaces.md b/release-0.19.0/docs/namespaces.md deleted file mode 100644 index 1807156c5af..00000000000 --- a/release-0.19.0/docs/namespaces.md +++ /dev/null @@ -1,13 +0,0 @@ -# Namespaces - -Namespaces help different projects, teams, or customers to share a kubernetes cluster. First, they provide a scope for [Names](identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces. - -Use of multiple namespaces is optional. For small teams, they may not be needed. - -Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](design/namespaces.md). 
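As a rough, illustrative sketch of how a namespace is used from the command line (the object layout follows the v1 API, and the namespace name `team-a` is a placeholder):

```
# Define the namespace as an API object and create it.
$ cat <<EOF > team-a-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
EOF
$ kubectl create -f team-a-namespace.yaml

# Scope individual requests to that namespace with the --namespace flag.
$ kubectl get pods --namespace=team-a
```

Requests that omit `--namespace` keep operating in the default namespace.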
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/namespaces.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/namespaces.md?pixel)]()
diff --git a/release-0.19.0/docs/networking.md b/release-0.19.0/docs/networking.md
deleted file mode 100644
index 43dbab9f06c..00000000000
--- a/release-0.19.0/docs/networking.md
+++ /dev/null
@@ -1,183 +0,0 @@
-# Networking in Kubernetes
-
-## Summary
-
-Kubernetes approaches networking somewhat differently from Docker's defaults.
-We give every pod its own IP address allocated from an internal network, so you
-do not need to explicitly create links between communicating pods. To do this,
-you must set up your cluster networking correctly.
-
-Since pods can fail and be replaced with new pods with different IP addresses
-on different nodes, we do not recommend having a pod directly talk to the IP
-address of another Pod. Instead, if a pod, or collection of pods, provides some
-service, then you should create a `service` object spanning those pods, and
-clients should connect to the IP of the service object. See
-[services](services.md).
-
-## Docker model
-
-Before discussing the Kubernetes approach to networking, it is worthwhile to
-review the "normal" way that networking works with Docker. By default, Docker
-uses host-private networking. It creates a virtual bridge, called `docker0` by
-default, and allocates a subnet from one of the private address blocks defined
-in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each
-container that Docker creates, it allocates a virtual ethernet device (called
-`veth`) which is attached to the bridge. The veth is mapped to appear as eth0
-in the container, using Linux namespaces. The in-container eth0 interface is
-given an IP address from the bridge's address range.
-
-The result is that Docker containers can talk to other containers only if they
-are on the same machine (and thus the same virtual bridge). Containers on
-different machines cannot reach each other - in fact they may end up with the
-exact same network ranges and IP addresses.
-
-In order for Docker containers to communicate across nodes, they must be
-allocated ports on the machine's own IP address, which are then forwarded or
-proxied to the containers. This obviously means that containers must either
-coordinate which ports they use very carefully or else be allocated ports
-dynamically.
-
-## Kubernetes model
-
-Coordinating ports across multiple developers is very difficult to do at
-scale and exposes users to cluster-level issues outside of their control.
-Dynamic port allocation brings a lot of complications to the system - every
-application has to take ports as flags, the API servers have to know how to
-insert dynamic port numbers into configuration blocks, services have to know
-how to find each other, etc. Rather than deal with this, Kubernetes takes a
-different approach.
-
-Kubernetes imposes the following fundamental requirements on any networking
-implementation (barring any intentional network segmentation policies):
- * all containers can communicate with all other containers without NAT
- * all nodes can communicate with all containers (and vice-versa) without NAT
 * the IP that a container sees itself as is the same IP that others see it as
-
-What this means in practice is that you cannot just take two computers
-running Docker and expect Kubernetes to work. You must ensure that the
-fundamental requirements are met.
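As a purely illustrative sketch of what these requirements mean in practice, the IP the apiserver reports for a pod is the same IP that other pods and nodes use to reach it, with no NAT in between (the pod name, port and address below are placeholders):

```
# Ask the apiserver which IP was assigned to a running pod.
$ kubectl get pods my-pod -o yaml | grep podIP
  podIP: 10.244.1.7

# From any node, or from inside any other pod, that address is directly reachable.
$ curl http://10.244.1.7:8080/
```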
- -This model is not only less complex overall, but it is principally compatible -with the desire for Kubernetes to enable low-friction porting of apps from VMs -to containers. If your job previously ran in a VM, your VM had an IP and could -talk to other VMs in your project. This is the same basic model. - -Until now this document has talked about containers. In reality, Kubernetes -applies IP addresses at the `Pod` scope - containers within a `Pod` share their -network namespaces - including their IP address. This means that containers -within a `Pod` can all reach each other’s ports on `localhost`. This does imply -that containers within a `Pod` must coordinate port usage, but this is no -different that processes in a VM. We call this the "IP-per-pod" model. This -is implemented in Docker as a "pod container" which holds the network namespace -open while "app containers" (the things the user specified) join that namespace -with Docker's `--net=container:` function. - -As with Docker, it is possible to request host ports, but this is reduced to a -very niche operation. In this case a port will be allocated on the host `Node` -and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the -existence or non-existence of host ports. - -## How to achieve this - -There are a number of ways that this network model can be implemented. This -document is not an exhaustive study of the various methods, but hopefully serves -as an introduction to various technologies and serves as a jumping-off point. -If some techniques become vastly preferable to others, we might detail them more -here. - -### Google Compute Engine - -For the Google Compute Engine cluster configuration scripts, we use [advanced -routing](https://developers.google.com/compute/docs/networking#routing) to -assign each VM a subnet (default is /24 - 254 IPs). Any traffic bound for that -subnet will be routed directly to the VM by the GCE network fabric. This is in -addition to the "main" IP address assigned to the VM, which is NAT'ed for -outbound internet access. A linux bridge (called `cbr0`) is configured to exist -on that subnet, and is passed to docker's `--bridge` flag. - -We start Docker with: - -``` - DOCKER_OPTS="--bridge cbr0 --iptables=false --ip-masq=false" -``` - -We set up this bridge on each node with SaltStack, in -[container_bridge.py](../cluster/saltbase/salt/_states/container_bridge.py). - -``` -cbr0: - container_bridge.ensure: - - cidr: {{ grains['cbr-cidr'] }} - - mtu: 1460 -``` - -Docker will now allocate `Pod` IPs from the `cbr-cidr` block. Containers -can reach each other and `Nodes` over the `cbr0` bridge. Those IPs are all -routable within the GCE project network. - -GCE itself does not know anything about these IPs, though, so it will not NAT -them for outbound internet traffic. To achieve that we use an iptables rule to -masquerade (aka SNAT - to make it seem as if packets came from the `Node` -itself) traffic that is bound for IPs outside the GCE project network -(10.0.0.0/8). - -``` -iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE -``` - -Lastly we enable IP forwarding in the kernel (so the kernel will process -packets for bridged containers): - -``` -sysctl net.ipv4.ip_forward=1 -``` - -The result of all this is that all `Pods` can reach each other and can egress -traffic to the internet. 
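Taken together, a rough sketch of the per-node setup described above might look like the following. This is only an outline: the 10.244.1.0/24 pod subnet and the Docker defaults file path are illustrative, and on GCE the subnet really comes from the node's `cbr-cidr` grain via SaltStack as shown above.

```
# Create the cbr0 bridge on this node's pod subnet (illustrative CIDR)
brctl addbr cbr0
ip addr add 10.244.1.1/24 dev cbr0
ip link set dev cbr0 mtu 1460 up

# Point Docker at cbr0 and leave NAT/iptables management to us
echo 'DOCKER_OPTS="--bridge cbr0 --iptables=false --ip-masq=false"' >> /etc/default/docker

# Masquerade only traffic leaving the project network
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE

# Let the kernel forward packets for bridged containers
sysctl net.ipv4.ip_forward=1
```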
- -### L2 networks and linux bridging - -If you have a "dumb" L2 network, such as a simple switch in a "bare-metal" -environment, you should be able to do something similar to the above GCE setup. -Note that these instructions have only been tried very casually - it seems to -work, but has not been thoroughly tested. If you use this technique and -perfect the process, please let us know. - -Follow the "With Linux Bridge devices" section of [this very nice -tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from -Lars Kellogg-Stedman. - -### Flannel - -[Flannel](https://github.com/coreos/flannel#flannel) is a very simple overlay -network that satisfies the Kubernetes requirements. It installs in minutes and -should get you up and running if the above techniques are not working. Many -people have reported success with Flannel and Kubernetes. - -### OpenVSwitch - -[OpenVSwitch](./ovs-networking.md) is a somewhat more mature but also -complicated way to build an overlay network. This is endorsed by several of the -"Big Shops" for networking. - -### Weave - -[Weave](https://github.com/zettio/weave) is yet another way to build an overlay -network, primarily aiming at Docker integration. - -### Calico - -[Calico](https://github.com/Metaswitch/calico) uses BGP to enable real container -IPs. - -## Other reading - -The early design of the networking model and its rationale, and some future -plans are described in more detail in the [networking design -document](design/networking.md). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/networking.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/networking.md?pixel)]() diff --git a/release-0.19.0/docs/node.md b/release-0.19.0/docs/node.md deleted file mode 100644 index a5c0a6086ae..00000000000 --- a/release-0.19.0/docs/node.md +++ /dev/null @@ -1,142 +0,0 @@ -# Node - -## What is a node? - -`Node` is a worker node in Kubernetes, previously known as `Minion`. Node -may be a VM or physical machine, depending on the cluster. Each node has -the services necessary to run [Pods](pods.md) and be managed from the master -systems. The services include docker, kubelet and network proxy. See -[The Kubernetes Node](design/architecture.md#the-kubernetes-node) section in design -doc for more details. - -## Node Status - -Node status describes current status of a node. For now, there are three -pieces of information: - -### HostIP - -Host IP address is queried from cloudprovider and stored as part of node -status. If kubernetes runs without cloudprovider, node's ID will be used. -IP address can change, and there are different kind of IPs, e.g. public -IP, private IP, dynamic IP, ipv6, etc. It makes more sense to save it as -a status rather than spec. - -### Node Phase - -Node Phase is the current lifecycle phase of node, one of `Pending`, -`Running` and `Terminated`. Node Phase management is under development, -here is a brief overview: In kubernetes, node will be created in `Pending` -phase, until it is discovered and checked in by kubernetes, at which time, -kubernetes will mark it as `Running`. The end of a node's lifecycle is -`Terminated`. A terminated node will not receive any scheduling request, -and any running pods will be removed from the node. - -Node with `Running` phase is necessary but not sufficient requirement for -scheduling Pods. For a node to be considered a scheduling candidate, it -must have appropriate conditions, see below. 
- -### Node Condition -Node Condition describes the conditions of `Running` nodes. Current valid -condition is `NodeReady`. In the future, we plan to add more. -`NodeReady` means kubelet is healthy and ready to accept pods. Different -condition provides different level of understanding for node health. -Node condition is represented as a json object. For example, -the following conditions mean the node is in sane state: -```json -"conditions": [ - { - "kind": "Ready", - "status": "True", - }, -] -``` - -## Node Management - -Unlike [Pod](pods.md) and [Service](services.md), `Node` is not inherently -created by Kubernetes: it is either created from cloud providers like GCE, -or from your physical or virtual machines. What this means is that when -Kubernetes creates a node, it only creates a representation for the node. -After creation, Kubernetes will check whether the node is valid or not. -For example, if you try to create a node from the following content: -```json -{ - "kind": "Node", - "apiVersion": "v1", - "metadata": { - "name": "10.240.79.157", - "labels": { - "name": "my-first-k8s-node" - } - } -} -``` - -Kubernetes will create a `Node` object internally (the representation), and -validate the node by health checking based on the `metadata.name` field: we -assume `metadata.name` can be resolved. If the node is valid, i.e. all necessary -services are running, it is eligible to run a `Pod`; otherwise, it will be -ignored for any cluster activity, until it becomes valid. Note that Kubernetes -will keep invalid node unless explicitly deleted by client, and it will keep -checking to see if it becomes valid. - -Currently, there are two agents that interacts with Kubernetes node interface: -Node Controller and Kube Admin. - -### Node Controller - -Node controller is a component in Kubernetes master which manages `Node` -objects. It performs two major functions: cluster-wide node synchronization -and single node life-cycle management. - -Node controller has a sync loop that creates/deletes `Node`s from Kubernetes -based on all matching VM instances listed from cloud provider. The sync period -can be controlled via flag "--node_sync_period". If a new instance -gets created, Node Controller creates a representation for it. If an existing -instance gets deleted, Node Controller deletes the representation. Note however, -Node Controller is unable to provision the node for you, i.e. it won't install -any binary; therefore, to -join Kubernetes cluster, you as an admin need to make sure proper services are -running in the node. In the future, we plan to automatically provision some node -services. - -### Self-Registration of nodes - -When kubelet flag `--register-node` is true (the default), then the kubelet will attempt to -register itself with the API server. This is the preferred pattern, used by most distros. - -For self-registration, the kubelet is started with the following options: - - `--apiservers=` tells the kubelet the location of the apiserver. - - `--kubeconfig` tells kubelet where to find credentials to authenticate itself to the apiserver. - - `--cloud_provider=` tells the kubelet how to talk to a cloud provider to read metadata about itself. - - `--register-node` tells the kubelet to create its own node resource. - -Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies -its own. (In the future, we plan to limit authorization to only allow a kubelet to modify its own Node resource.) 
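As a rough illustration, a kubelet configured for self-registration might be started along these lines. The flag names are the ones listed above; the apiserver address, kubeconfig path, and cloud provider are illustrative:

```
kubelet \
  --apiservers=https://203.0.113.10:6443 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cloud_provider=gce \
  --register-node=true
```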
- -#### Manual Node Administration - -A cluster administrator can create and modify Node objects. - -If the administrator wishes to create node objects manually, set kubelet flag -`--register-node=false`. - -The administrator can modify Node resources (regardless of the setting of `--register-node`). -Modifications include setting labels on the Node, and marking it unschedulable. - -Labels on nodes can be used in conjunction with node selectors on pods to control scheduling. - -Making a node unscheduleable will prevent new pods from being scheduled to that -node, but will not affect any existing pods on the node. This is useful as a -preparatory step before a node reboot, etc. For example, to mark a node -unschedulable, run this command: -``` -kubectl update nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}' -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/node.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/node.md?pixel)]() diff --git a/release-0.19.0/docs/overview.md b/release-0.19.0/docs/overview.md deleted file mode 100644 index 01d0674467e..00000000000 --- a/release-0.19.0/docs/overview.md +++ /dev/null @@ -1,35 +0,0 @@ -# Kubernetes User Documentation - -Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. It provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure that the state of the cluster continually matches the user's intentions. - -Today, Kubernetes supports just [Docker](http://www.docker.io) containers, but other container image formats and container runtimes will be supported in the future (e.g., [Rocket](https://coreos.com/blog/rocket/) support is in progress). Similarly, while Kubernetes currently focuses on continuously-running stateless (e.g. web server or in-memory object cache) and "cloud native" stateful applications (e.g. NoSQL datastores), in the near future it will support all the other workload types commonly found in production cluster environments, such as batch, stream processing, and traditional databases. - -In Kubernetes, all containers run inside [pods](pods.md). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](volumes.md), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](replication-controller.md), which we discuss next. - -Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. 
The Kubernetes API object that manages these behaviors is called a [replication controller](replication-controller.md). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails. - -Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md). - -Kubernetes supports a unique [networking model](networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod. - -Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes. - -Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object’s name, and the object’s [namespace](namespaces.md). Every name is unique within its namespace, and in contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space. 
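For example, both label selectors and namespaces can be exercised directly from `kubectl`; the label keys and values below are illustrative:

```
# All pods carrying the label tier=frontend
kubectl get pods -l tier=frontend

# The conjunction (AND) of two label requirements
kubectl get pods -l app=guestbook,tier=frontend

# Names are scoped to a namespace; "default" is assumed when none is given
kubectl get pods --namespace=default
```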
- -Other details: - -* [API](api.md) -* [Client libraries](client-libraries.md) -* [Command-line interface](kubectl.md) -* [UI](ui.md) -* [Images and registries](images.md) -* [Container environment](container-environment.md) -* [Logging](logging.md) -* Monitoring using [CAdvisor](https://github.com/google/cadvisor) and [Heapster](https://github.com/GoogleCloudPlatform/heapster) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/overview.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/overview.md?pixel)]() diff --git a/release-0.19.0/docs/ovs-networking.md b/release-0.19.0/docs/ovs-networking.md deleted file mode 100644 index 34f2bd8f54c..00000000000 --- a/release-0.19.0/docs/ovs-networking.md +++ /dev/null @@ -1,20 +0,0 @@ -# Kubernetes OpenVSwitch GRE/VxLAN networking - -This document describes how OpenVSwitch is used to setup networking between pods across nodes. -The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network. - -![ovs-networking](./ovs-networking.png "OVS Networking") - -The vagrant setup in Kubernetes does the following: - -The docker bridge is replaced with a brctl generated linux bridge (kbr0) with a 256 address space subnet. Basically, a node gets 10.244.x.0/24 subnet and docker is configured to use that bridge instead of the default docker0 bridge. - -Also, an OVS bridge is created(obr0) and added as a port to the kbr0 bridge. All OVS bridges across all nodes are linked with GRE tunnels. So, each node has an outgoing GRE tunnel to all other nodes. It does not need to be a complete mesh really, just meshier the better. STP (spanning tree) mode is enabled in the bridges to prevent loops. - -Routing rules enable any 10.244.0.0/16 target to become reachable via the OVS bridge connected with the tunnels. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/ovs-networking.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/ovs-networking.md?pixel)]() diff --git a/release-0.19.0/docs/ovs-networking.png b/release-0.19.0/docs/ovs-networking.png deleted file mode 100644 index ca75ab305b8..00000000000 Binary files a/release-0.19.0/docs/ovs-networking.png and /dev/null differ diff --git a/release-0.19.0/docs/pod-states.md b/release-0.19.0/docs/pod-states.md deleted file mode 100644 index b3326652e68..00000000000 --- a/release-0.19.0/docs/pod-states.md +++ /dev/null @@ -1,111 +0,0 @@ -# The life of a pod - -Updated: 4/14/2015 - -This document covers the lifecycle of a pod. It is not an exhaustive document, but an introduction to the topic. - -## Pod Phase - -As consistent with the overall [API convention](api-conventions.md#typical-status-properties), phase is a simple, high-level summary of the phase of the lifecycle of a pod. It is not intended to be a comprehensive rollup of observations of container-level or even pod-level conditions or other state, nor is it intended to be a comprehensive state machine. - -The number and meanings of `PodPhase` values are tightly guarded. Other than what is documented here, nothing should be assumed about pods with a given `PodPhase`. - -* Pending: The pod has been accepted by the system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. 
-* Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting. -* Succeeded: All containers in the pod have terminated in success, and will not be restarted. -* Failed: All containers in the pod have terminated, at least one container has terminated in failure (exited with non-zero exit status or was terminated by the system). - -## Pod Conditions - -A pod containing containers that specify readiness probes will also report the Ready condition. Condition status values may be `True`, `False`, or `Unknown`. - -## Container Probes - -A [Probe](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#Probe) is a diagnostic performed periodically by the kubelet on a container. Specifically the diagnostic is one of three [Handlers](https://godoc.org/github.com/GoogleCloudPlatform/kubernetes/pkg/api/v1#Handler): - -* `ExecAction`: executes a specified command inside the container expecting on success that the command exits with status code 0. -* `TCPAction`: performs a tcp check against the container's IP address on a specified port expecting on success that the port is open. -* `HTTPGetAction`: performs an HTTP Get againsts the container's IP address on a specified port and path expecting on success that the response has a status code greater than or equal to 200 and less than 400. - -Each probe will have one of three results: - -* `Success`: indicates that the container passed the diagnostic. -* `Failure`: indicates that the container failed the diagnostic. -* `Unknown`: indicates that the diagnostic failed so no action should be taken. - -Currently, the kubelet optionally performs two independent diagnostics on running containers which trigger action: - -* `LivenessProbe`: indicates whether the container is *live*, i.e. still running. The LivenessProbe hints to the kubelet when a container is unhealthy. If the LivenessProbe fails, the kubelet will kill the container and the container will be subjected to it's [RestartPolicy](#restartpolicy). The default state of Liveness before the initial delay is "Success". The state of Liveness for a container when no probe is provided is assumed to be "Success". -* `ReadinessProbe`: indicates whether the container is *ready* to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod's IP address from the endpoints of all services that match the pod. Thus, the ReadinessProbe is sometimes useful to signal to the endpoints controller that even though a pod may be running, it should not receive traffic from the proxy (e.g. the container has a long startup time before it starts listening or the container is down for maintenance). The default state of Readiness before the initial delay is "Failure". The state of Readiness for a container when no probe is provided is assumed to be "Success". - -## Container Statuses - -More detailed information about the current (and previous) container statuses can be found in `containerStatuses`. The information reported depends on the current ContainerState, which may be Waiting, Running, or Termination (sic). - -## RestartPolicy - -The possible values for RestartPolicy are `Always`, `OnFailure`, or `Never`. If RestartPolicy is not set, the default value is `Always`. RestartPolicy applies to all containers in the pod. RestartPolicy only refers to restarts of the containers by the Kubelet on the same node. 
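To make the probe and restart policy fields concrete, here is a sketch of a pod whose single container defines an HTTP liveness probe and an explicit restart policy. The name, image, endpoint, and timings are illustrative:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": { "name": "liveness-example" },
  "spec": {
    "restartPolicy": "Always",
    "containers": [
      {
        "name": "web",
        "image": "nginx",
        "livenessProbe": {
          "httpGet": { "path": "/healthz", "port": 80 },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 1
        }
      }
    ]
  }
}
```

If the probe fails, the kubelet kills the container and, with a restart policy of `Always`, restarts it in place.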
As discussed in the [pods document](pods.md#durability-of-pods-or-lack-thereof), once bound to a node, a pod may never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired. - -The only controller we have today is [`ReplicationController`](replication-controller.md). `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always`. `ReplicationController` should refuse to instantiate any pod that has a different restart policy. - -There is a legitimate need for a controller which keeps pods with other policies alive. Both of the other policies (`OnFailure` and `Never`) eventually terminate, at which point the controller should stop recreating them. Because of this fundamental distinction, let's hypothesize a new controller, called [`JobController`](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624) for the sake of this document, which can implement this policy. - -## Pod lifetime - -In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped. - -If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the NodeController for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`. - -## Examples - - * Pod is `Running`, 1 container, container exits success - * Log completion event - * If RestartPolicy is: - * Always: restart container, pod stays `Running` - * OnFailure: pod becomes `Succeeded` - * Never: pod becomes `Succeeded` - - * Pod is `Running`, 1 container, container exits failure - * Log failure event - * If RestartPolicy is: - * Always: restart container, pod stays `Running` - * OnFailure: restart container, pod stays `Running` - * Never: pod becomes `Failed` - - * Pod is `Running`, 2 containers, container 1 exits failure - * Log failure event - * If RestartPolicy is: - * Always: restart container, pod stays `Running` - * OnFailure: restart container, pod stays `Running` - * Never: pod stays `Running` - * When container 2 exits... 
- * Log failure event - * If RestartPolicy is: - * Always: restart container, pod stays `Running` - * OnFailure: restart container, pod stays `Running` - * Never: pod becomes `Failed` - - * Pod is `Running`, container becomes OOM - * Container terminates in failure - * Log OOM event - * If RestartPolicy is: - * Always: restart container, pod stays `Running` - * OnFailure: restart container, pod stays `Running` - * Never: log failure event, pod becomes `Failed` - - * Pod is `Running`, a disk dies - * All containers are killed - * Log appropriate event - * Pod becomes `Failed` - * If running under a controller, pod will be recreated elsewhere - - * Pod is `Running`, its node is segmented out - * NodeController waits for timeout - * NodeController marks pod `Failed` - * If running under a controller, pod will be recreated elsewhere - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/pod-states.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/pod-states.md?pixel)]() diff --git a/release-0.19.0/docs/pods.md b/release-0.19.0/docs/pods.md deleted file mode 100644 index 5a5f22918f1..00000000000 --- a/release-0.19.0/docs/pods.md +++ /dev/null @@ -1,85 +0,0 @@ -# Pods - -In Kubernetes, rather than individual application containers, _pods_ are the smallest deployable units that can be created, scheduled, and managed. - -## What is a _pod_? - -A _pod_ (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context. Within that context, the applications may also have individual cgroup isolations applied. A pod models an application-specific "logical host" in a containerized environment. It may contain one or more applications which are relatively tightly coupled -- in a pre-container world, they would have executed on the same physical or virtual host. - -The context of the pod can be defined as the conjunction of several Linux namespaces: - -* PID namespace (applications within the pod can see each other's processes) -* network namespace (applications within the pod have access to the same IP and port space) -* IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate) -* UTS namespace (applications within the pod share a hostname) - -Applications within a pod also have access to shared volumes, which are defined at the pod level and made available in each application's filesystem. Additionally, a pod may define top-level cgroup isolations which form an outer bound to any individual isolation applied to constituent applications. - -In terms of [Docker](https://www.docker.com/) constructs, a pod consists of a colocated group of Docker containers with shared [volumes](volumes.md). PID namespace sharing is not yet implemented with Docker. - -Like individual application containers, pods are considered to be relatively ephemeral rather than durable entities. As discussed in [life of a pod](pod-states.md), pods are scheduled to nodes and remain there until termination (according to restart policy) or deletion. When a node dies, the pods scheduled to that node are deleted. Specific pods are never rescheduled to new nodes; instead, they must be replaced (see [replication controller](replication-controller.md) for more details). (In the future, a higher-level API may support pod migration.) 
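As an illustration of that shared context, the following sketch defines a pod with two containers sharing an `emptyDir` volume; the names, images, and commands are illustrative:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": { "name": "shared-volume-example" },
  "spec": {
    "volumes": [
      { "name": "shared-data", "emptyDir": {} }
    ],
    "containers": [
      {
        "name": "writer",
        "image": "busybox",
        "command": ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"],
        "volumeMounts": [{ "name": "shared-data", "mountPath": "/data" }]
      },
      {
        "name": "reader",
        "image": "busybox",
        "command": ["sh", "-c", "sleep 10; tail -f /data/out.log"],
        "volumeMounts": [{ "name": "shared-data", "mountPath": "/data" }]
      }
    ]
  }
}
```

Both containers see the same files under `/data`, and could equally reach each other over `localhost` if they exposed ports.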
- -## Motivation for pods - -### Resource sharing and communication - -Pods facilitate data sharing and communication among their constituents. - -The applications in the pod all use the same network namespace/IP and port space, and can find and communicate with each other using localhost. Each pod has an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. The hostname is set to the pod's Name for the application containers within the pod. [More details on networking](networking.md). - -In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod. - -### Management - -Pods also simplify application deployment and management by providing a higher-level abstraction than the raw, low-level container interface. Pods serve as units of deployment and horizontal scaling/replication. Co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management are handled automatically. - -## Uses of pods - -Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs, such as: - -* content management systems, file and data loaders, local cache managers, etc. -* log and checkpoint backup, compression, rotation, snapshotting, etc. -* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc. -* proxies, bridges, and adapters -* controllers, managers, configurators, and updaters - -Individual pods are not intended to run multiple instances of the same application, in general. - -## Alternatives considered - -_Why not just run multiple programs in a single (Docker) container?_ - -1. Transparency. Making the containers within the pod visible to the infrastructure enables the infrastructure to provide services to those containers, such as process management and resource monitoring. This facilitates a number of conveniences for users. -2. Decoupling software dependencies. The individual containers may be rebuilt and redeployed independently. Kubernetes may even support live updates of individual containers someday. -3. Ease of use. Users don't need to run their own process managers, worry about signal and exit-code propagation, etc. -4. Efficiency. Because the infrastructure takes on more responsibility, containers can be lighter weight. - -_Why not support affinity-based co-scheduling of containers?_ - -That approach would provide co-location, but would not provide most of the benefits of pods, such as resource sharing, IPC, guaranteed fate sharing, and simplified management. - -## Durability of pods (or lack thereof) - -Pods aren't intended to be treated as durable [pets](https://blog.engineyard.com/2014/pets-vs-cattle). They won't survive scheduling failures, node failures, or other evictions, such as due to lack of resources, or in the case of node maintenance. - -In general, users shouldn't need to create pods directly. They should almost always use controllers (e.g., [replication controller](replication-controller.md)), even for singletons. Controllers provide self-healing with a cluster scope, as well as replication and rollout management. 
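For instance, a minimal sketch of running a singleton under a replication controller rather than as a bare pod (the name, labels, and image are illustrative):

```json
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": { "name": "my-singleton" },
  "spec": {
    "replicas": 1,
    "selector": { "app": "my-singleton" },
    "template": {
      "metadata": { "labels": { "app": "my-singleton" } },
      "spec": {
        "containers": [
          { "name": "app", "image": "nginx", "ports": [{ "containerPort": 80 }] }
        ]
      }
    }
  }
}
```

If the single pod (or its node) dies, the controller creates a replacement; a bare pod would simply stay gone.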
- -The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997). - -Pod is exposed as a primitive in order to facilitate: - -* scheduler and controller pluggability -* support for pod-level operations without the need to "proxy" them via controller APIs -* decoupling of pod lifetime from controller lifetime, such as for bootstrapping -* decoupling of controllers and services -- the endpoint controller just watches pods -* clean composition of Kubelet-level functionality with cluster-level functionality -- Kubelet is effectively the "pod controller" -* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](https://github.com/GoogleCloudPlatform/kubernetes/issues/3949) - -The current best practice for pets is to create a replication controller with `replicas` equal to `1` and a corresponding service. If you find this cumbersome, please comment on [issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260). - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/pods.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/pods.md?pixel)]() diff --git a/release-0.19.0/docs/proposals/autoscaling.md b/release-0.19.0/docs/proposals/autoscaling.md deleted file mode 100644 index 5657e20222c..00000000000 --- a/release-0.19.0/docs/proposals/autoscaling.md +++ /dev/null @@ -1,260 +0,0 @@ -## Abstract -Auto-scaling is a data-driven feature that allows users to increase or decrease capacity as needed by controlling the -number of pods deployed within the system automatically. - -## Motivation - -Applications experience peaks and valleys in usage. In order to respond to increases and decreases in load, administrators -scale their applications by adding computing resources. In the cloud computing environment this can be -done automatically based on statistical analysis and thresholds. - -### Goals - -* Provide a concrete proposal for implementing auto-scaling pods within Kubernetes -* Implementation proposal should be in line with current discussions in existing issues: - * Scale verb - [1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629) - * Config conflicts - [Config](https://github.com/GoogleCloudPlatform/kubernetes/blob/c7cb991987193d4ca33544137a5cb7d0292cf7df/docs/config.md#automated-re-configuration-processes) - * Rolling updates - [1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353) - * Multiple scalable types - [1624](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624) - -## Constraints and Assumptions - -* This proposal is for horizontal scaling only. Vertical scaling will be handled in [issue 2072](https://github.com/GoogleCloudPlatform/kubernetes/issues/2072) -* `ReplicationControllers` will not know about the auto-scaler, they are the target of the auto-scaler. 
The `ReplicationController` responsibilities are -constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](http://docs.k8s.io/replication-controller.md#responsibilities-of-the-replication-controller) -* Auto-scalers will be loosely coupled with data gathering components in order to allow a wide variety of input sources -* Auto-scalable resources will support a scale verb ([1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629)) -such that the auto-scaler does not directly manipulate the underlying resource. -* Initially, most thresholds will be set by application administrators. It should be possible for an autoscaler to be -written later that sets thresholds automatically based on past behavior (CPU used vs incoming requests). -* The auto-scaler must be aware of user defined actions so it does not override them unintentionally (for instance someone -explicitly setting the replica count to 0 should mean that the auto-scaler does not try to scale the application up) -* It should be possible to write and deploy a custom auto-scaler without modifying existing auto-scalers -* Auto-scalers must be able to monitor multiple replication controllers while only targeting a single scalable -object (for now a ReplicationController, but in the future it could be a job or any resource that implements scale) - -## Use Cases - -### Scaling based on traffic - -The current, most obvious, use case is scaling an application based on network traffic like requests per second. Most -applications will expose one or more network endpoints for clients to connect to. Many of those endpoints will be load -balanced or situated behind a proxy - the data from those proxies and load balancers can be used to estimate client to -server traffic for applications. This is the primary, but not sole, source of data for making decisions. - -Within Kubernetes a [kube proxy](http://docs.k8s.io/services.md#ips-and-vips) -running on each node directs service requests to the underlying implementation. - -While the proxy provides internal inter-pod connections, there will be L3 and L7 proxies and load balancers that manage -traffic to backends. OpenShift, for instance, adds a "route" resource for defining external to internal traffic flow. -The "routers" are HAProxy or Apache load balancers that aggregate many different services and pods and can serve as a -data source for the number of backends. - -### Scaling based on predictive analysis - -Scaling may also occur based on predictions of system state like anticipated load, historical data, etc. Hand in hand -with scaling based on traffic, predictive analysis may be used to determine anticipated system load and scale the application automatically. - -### Scaling based on arbitrary data - -Administrators may wish to scale the application based on any number of arbitrary data points such as job execution time or -duration of active sessions. There are any number of reasons an administrator may wish to increase or decrease capacity which -means the auto-scaler must be a configurable, extensible component. - -## Specification - -In order to facilitate talking about auto-scaling the following definitions are used: - -* `ReplicationController` - the first building block of auto scaling. Pods are deployed and scaled by a `ReplicationController`. 
-* kube proxy - The proxy handles internal inter-pod traffic, an example of a data source to drive an auto-scaler -* L3/L7 proxies - A routing layer handling outside to inside traffic requests, an example of a data source to drive an auto-scaler -* auto-scaler - scales replicas up and down by using the `scale` endpoint provided by scalable resources (`ReplicationController`) - - -### Auto-Scaler - -The Auto-Scaler is a state reconciler responsible for checking data against configured scaling thresholds -and calling the `scale` endpoint to change the number of replicas. The scaler will -use a client/cache implementation to receive watch data from the data aggregators and respond to them by -scaling the application. Auto-scalers are created and defined like other resources via REST endpoints and belong to the -namespace just as a `ReplicationController` or `Service`. - -Since an auto-scaler is a durable object it is best represented as a resource. - -```go - //The auto scaler interface - type AutoScalerInterface interface { - //ScaleApplication adjusts a resource's replica count. Calls scale endpoint. - //Args to this are based on what the endpoint - //can support. See https://github.com/GoogleCloudPlatform/kubernetes/issues/1629 - ScaleApplication(num int) error - } - - type AutoScaler struct { - //common construct - TypeMeta - //common construct - ObjectMeta - - //Spec defines the configuration options that drive the behavior for this auto-scaler - Spec AutoScalerSpec - - //Status defines the current status of this auto-scaler. - Status AutoScalerStatus - } - - type AutoScalerSpec struct { - //AutoScaleThresholds holds a collection of AutoScaleThresholds that drive the auto scaler - AutoScaleThresholds []AutoScaleThreshold - - //Enabled turns auto scaling on or off - Enabled boolean - - //MaxAutoScaleCount defines the max replicas that the auto scaler can use. - //This value must be greater than 0 and >= MinAutoScaleCount - MaxAutoScaleCount int - - //MinAutoScaleCount defines the minimum number replicas that the auto scaler can reduce to, - //0 means that the application is allowed to idle - MinAutoScaleCount int - - //TargetSelector provides the scalable target(s). Right now this is a ReplicationController - //in the future it could be a job or any resource that implements scale. - TargetSelector map[string]string - - //MonitorSelector defines a set of capacity that the auto-scaler is monitoring - //(replication controllers). Monitored objects are used by thresholds to examine - //statistics. 
Example: get statistic X for object Y to see if threshold is passed - MonitorSelector map[string]string - } - - type AutoScalerStatus struct { - // TODO: open for discussion on what meaningful information can be reported in the status - // The status may return the replica count here but we may want more information - // such as if the count reflects a threshold being passed - } - - - //AutoScaleThresholdInterface abstracts the data analysis from the auto-scaler - //example: scale by 1 (Increment) when RequestsPerSecond (Type) pass - //comparison (Comparison) of 50 (Value) for 30 seconds (Duration) - type AutoScaleThresholdInterface interface { - //called by the auto-scaler to determine if this threshold is met or not - ShouldScale() boolean - } - - - //AutoScaleThreshold is a single statistic used to drive the auto-scaler in scaling decisions - type AutoScaleThreshold struct { - // Type is the type of threshold being used, intention or value - Type AutoScaleThresholdType - - // ValueConfig holds the config for value based thresholds - ValueConfig AutoScaleValueThresholdConfig - - // IntentionConfig holds the config for intention based thresholds - IntentionConfig AutoScaleIntentionThresholdConfig - } - - // AutoScaleIntentionThresholdConfig holds configuration for intention based thresholds - // a intention based threshold defines no increment, the scaler will adjust by 1 accordingly - // and maintain once the intention is reached. Also, no selector is defined, the intention - // should dictate the selector used for statistics. Same for duration although we - // may want a configurable duration later so intentions are more customizable. - type AutoScaleIntentionThresholdConfig struct { - // Intent is the lexicon of what intention is requested - Intent AutoScaleIntentionType - - // Value is intention dependent in terms of above, below, equal and represents - // the value to check against - Value float - } - - // AutoScaleValueThresholdConfig holds configuration for value based thresholds - type AutoScaleValueThresholdConfig struct { - //Increment determines how the auot-scaler should scale up or down (positive number to - //scale up based on this threshold negative number to scale down by this threshold) - Increment int - //Selector represents the retrieval mechanism for a statistic value from statistics - //storage. Once statistics are better defined the retrieval mechanism may change. - //Ultimately, the selector returns a representation of a statistic that can be - //compared against the threshold value. - Selector map[string]string - //Duration is the time lapse after which this threshold is considered passed - Duration time.Duration - //Value is the number at which, after the duration is passed, this threshold is considered - //to be triggered - Value float - //Comparison component to be applied to the value. - Comparison string - } - - // AutoScaleThresholdType is either intention based or value based - type AutoScaleThresholdType string - - // AutoScaleIntentionType is a lexicon for intentions such as "cpu-utilization", - // "max-rps-per-endpoint" - type AutoScaleIntentionType string -``` - -#### Boundary Definitions -The `AutoScaleThreshold` definitions provide the boundaries for the auto-scaler. By defining comparisons that form a range -along with positive and negative increments you may define bi-directional scaling. 
For example the upper bound may be -specified as "when requests per second rise above 50 for 30 seconds scale the application up by 1" and a lower bound may -be specified as "when requests per second fall below 25 for 30 seconds scale the application down by 1 (implemented by using -1)". - -### Data Aggregator - -This section has intentionally been left empty. I will defer to folks who have more experience gathering and analyzing -time series statistics. - -Data aggregation is opaque to the the auto-scaler resource. The auto-scaler is configured to use `AutoScaleThresholds` -that know how to work with the underlying data in order to know if an application must be scaled up or down. Data aggregation -must feed a common data structure to ease the development of `AutoScaleThreshold`s but it does not matter to the -auto-scaler whether this occurs in a push or pull implementation, whether or not the data is stored at a granular level, -or what algorithm is used to determine the final statistics value. Ultimately, the auto-scaler only requires that a statistic -resolves to a value that can be checked against a configured threshold. - -Of note: If the statistics gathering mechanisms can be initialized with a registry other components storing statistics can -potentially piggyback on this registry. - -### Multi-target Scaling Policy -If multiple scalable targets satisfy the `TargetSelector` criteria the auto-scaler should be configurable as to which -target(s) are scaled. To begin with, if multiple targets are found the auto-scaler will scale the largest target up -or down as appropriate. In the future this may be more configurable. - -### Interactions with a deployment - -In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](http://docs.k8s.io/replication-controller.md#rolling-updates) -there will be multiple replication controllers, with one scaling up and another scaling down. This means that an -auto-scaler must be aware of the entire set of capacity that backs a service so it does not fight with the deployer. `AutoScalerSpec.MonitorSelector` -is what provides this ability. By using a selector that spans the entire service the auto-scaler can monitor capacity -of multiple replication controllers and check that capacity against the `AutoScalerSpec.MaxAutoScaleCount` and -`AutoScalerSpec.MinAutoScaleCount` while still only targeting a specific set of `ReplicationController`s with `TargetSelector`. - -In the course of a deployment it is up to the deployment orchestration to decide how to manage the labels -on the replication controllers if it needs to ensure that only specific replication controllers are targeted by -the auto-scaler. By default, the auto-scaler will scale the largest replication controller that meets the target label -selector criteria. - -During deployment orchestration the auto-scaler may be making decisions to scale its target up or down. In order to prevent -the scaler from fighting with a deployment process that is scaling one replication controller up and scaling another one -down the deployment process must assume that the current replica count may be changed by objects other than itself and -account for this in the scale up or down process. Therefore, the deployment process may no longer target an exact number -of instances to be deployed. It must be satisfied that the replica count for the deployment meets or exceeds the number -of requested instances. 
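To make the interaction concrete, here is one possible sketch of such an auto-scaler expressed as a resource. It follows the proposed Go types above with lower-camel-cased field names, but it is illustrative only, not a defined API; the selectors, counts, and threshold values are made up:

```json
{
  "kind": "AutoScaler",
  "apiVersion": "v1",
  "metadata": { "name": "frontend-scaler" },
  "spec": {
    "enabled": true,
    "minAutoScaleCount": 2,
    "maxAutoScaleCount": 10,
    "monitorSelector": { "service": "frontend" },
    "targetSelector": { "service": "frontend", "deployment": "frontend-v2" },
    "autoScaleThresholds": [
      {
        "type": "value",
        "valueConfig": {
          "increment": 1,
          "selector": { "statistic": "requests-per-second" },
          "durationSeconds": 30,
          "value": 50,
          "comparison": ">"
        }
      }
    ]
  }
}
```

Here `monitorSelector` spans every replication controller backing the service (so capacity is judged across an in-progress deployment), while `targetSelector` narrows scaling actions to the controller the deployment currently designates.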
- -Auto-scaling down in a deployment scenario is a special case. In order for the deployment to complete successfully the -deployment orchestration must ensure that the desired number of instances that are supposed to be deployed has been met. -If the auto-scaler is trying to scale the application down (due to no traffic, or other statistics) then the deployment -process and auto-scaler are fighting to increase and decrease the count of the targeted replication controller. In order -to prevent this, deployment orchestration should notify the auto-scaler that a deployment is occurring. This will -temporarily disable negative decrement thresholds until the deployment process is completed. It is more important for -an auto-scaler to be able to grow capacity during a deployment than to shrink the number of instances precisely. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/autoscaling.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/proposals/autoscaling.md?pixel)]() diff --git a/release-0.19.0/docs/proposals/federation-high-level-arch.png b/release-0.19.0/docs/proposals/federation-high-level-arch.png deleted file mode 100644 index 8a416cc1e68..00000000000 Binary files a/release-0.19.0/docs/proposals/federation-high-level-arch.png and /dev/null differ diff --git a/release-0.19.0/docs/proposals/federation.md b/release-0.19.0/docs/proposals/federation.md deleted file mode 100644 index a9792e5eb21..00000000000 --- a/release-0.19.0/docs/proposals/federation.md +++ /dev/null @@ -1,437 +0,0 @@ -#Kubernetes Cluster Federation -##(a.k.a. "Ubernetes") - -## Requirements Analysis and Product Proposal - -## _by Quinton Hoole ([quinton@google.com](mailto:quinton@google.com))_ -_Initial revision: 2015-03-05_ -_Last updated: 2015-03-09_ -This doc: [tinyurl.com/ubernetes](http://tinyurl.com/ubernetes) -Slides: [tinyurl.com/ubernetes-slides](http://tinyurl.com/ubernetes-slides) - -## Introduction - -Today, each Kubernetes cluster is a relatively self-contained unit, -which typically runs in a single "on-premise" data centre or single -availability zone of a cloud provider (Google's GCE, Amazon's AWS, -etc). - -Several current and potential Kubernetes users and customers have -expressed a keen interest in tying together ("federating") multiple -clusters in some sensible way in order to enable the following kinds -of use cases (intentionally vague): - -1. _"Preferentially run my workloads in my on-premise cluster(s), but - automatically overflow to my cloud-hosted cluster(s) if I run out - of on-premise capacity"_. -1. _"Most of my workloads should run in my preferred cloud-hosted - cluster(s), but some are privacy-sensitive, and should be - automatically diverted to run in my secure, on-premise - cluster(s)"_. -1. _"I want to avoid vendor lock-in, so I want my workloads to run - across multiple cloud providers all the time. I change my set of - such cloud providers, and my pricing contracts with them, - periodically"_. -1. _"I want to be immune to any single data centre or cloud - availability zone outage, so I want to spread my service across - multiple such zones (and ideally even across multiple cloud - providers)."_ - -The above use cases are by necessity left imprecisely defined. The -rest of this document explores these use cases and their implications -in further detail, and compares a few alternative high level -approaches to addressing them. 
The idea of cluster federation has -informally become known as_ "Ubernetes"_. - -## Summary/TL;DR - -TBD - -## What exactly is a Kubernetes Cluster? - -A central design concept in Kubernetes is that of a _cluster_. While -loosely speaking, a cluster can be thought of as running in a single -data center, or cloud provider availability zone, a more precise -definition is that each cluster provides: - -1. a single Kubernetes API entry point, -1. a consistent, cluster-wide resource naming scheme -1. a scheduling/container placement domain -1. a service network routing domain -1. (in future) an authentication and authorization model. -1. .... - -The above in turn imply the need for a relatively performant, reliable -and cheap network within each cluster. - -There is also assumed to be some degree of failure correlation across -a cluster, i.e. whole clusters are expected to fail, at least -occasionally (due to cluster-wide power and network failures, natural -disasters etc). Clusters are often relatively homogenous in that all -compute nodes are typically provided by a single cloud provider or -hardware vendor, and connected by a common, unified network fabric. -But these are not hard requirements of Kubernetes. - -Other classes of Kubernetes deployments than the one sketched above -are technically feasible, but come with some challenges of their own, -and are not yet common or explicitly supported. - -More specifically, having a Kubernetes cluster span multiple -well-connected availability zones within a single geographical region -(e.g. US North East, UK, Japan etc) is worthy of further -consideration, in particular because it potentially addresses -some of these requirements. - -## What use cases require Cluster Federation? - -Let's name a few concrete use cases to aid the discussion: - -## 1.Capacity Overflow - -_"I want to preferentially run my workloads in my on-premise cluster(s), but automatically "overflow" to my cloud-hosted cluster(s) when I run out of on-premise capacity."_ - -This idea is known in some circles as "[cloudbursting](http://searchcloudcomputing.techtarget.com/definition/cloud-bursting)". - -**Clarifying questions:** What is the unit of overflow? Individual - pods? Probably not always. Replication controllers and their - associated sets of pods? Groups of replication controllers - (a.k.a. distributed applications)? How are persistent disks - overflowed? Can the "overflowed" pods communicate with their - brethren and sistren pods and services in the other cluster(s)? - Presumably yes, at higher cost and latency, provided that they use - external service discovery. Is "overflow" enabled only when creating - new workloads/replication controllers, or are existing workloads - dynamically migrated between clusters based on fluctuating available - capacity? If so, what is the desired behaviour, and how is it - achieved? How, if at all, does this relate to quota enforcement - (e.g. if we run out of on-premise capacity, can all or only some - quotas transfer to other, potentially more expensive off-premise - capacity?) - -It seems that most of this boils down to: - -1. **location affinity** (pods relative to each other, and to other - stateful services like persistent storage - how is this expressed - and enforced?) -1. **cross-cluster scheduling** (given location affinity constraints - and other scheduling policy, which resources are assigned to which - clusters, and by what?) -1. 
**cross-cluster service discovery** (how do pods in one cluster - discover and communicate with pods in another cluster?) -1. **cross-cluster migration** (how do compute and storage resources, - and the distributed applications to which they belong, move from - one cluster to another) - -## 2. Sensitive Workloads - -_"I want most of my workloads to run in my preferred cloud-hosted -cluster(s), but some are privacy-sensitive, and should be -automatically diverted to run in my secure, on-premise cluster(s). The -list of privacy-sensitive workloads changes over time, and they're -subject to external auditing."_ - -**Clarifying questions:** What kinds of rules determine which - workloads go where? Is a static mapping from container (or more - typically, replication controller) to cluster maintained and - enforced? If so, is it only enforced on startup, or are things - migrated between clusters when the mappings change? This starts to - look quite similar to "1. Capacity Overflow", and again seems to - boil down to: - -1. location affinity -1. cross-cluster scheduling -1. cross-cluster service discovery -1. cross-cluster migration -with the possible addition of: - -+ cross-cluster monitoring and auditing (which is conveniently deemed - to be outside the scope of this document, for the time being at - least) - -## 3. Vendor lock-in avoidance - -_"My CTO wants us to avoid vendor lock-in, so she wants our workloads -to run across multiple cloud providers at all times. She changes our -set of preferred cloud providers and pricing contracts with them -periodically, and doesn't want to have to communicate and manually -enforce these policy changes across the organization every time this -happens. She wants it centrally and automatically enforced, monitored -and audited."_ - -**Clarifying questions:** Again, I think that this can potentially be - reformulated as a Capacity Overflow problem - the fundamental - principles seem to be the same or substantially similar to those - above. - -## 4. "Unavailability Zones" - -_"I want to be immune to any single data centre or cloud availability -zone outage, so I want to spread my service across multiple such zones -(and ideally even across multiple cloud providers), and have my -service remain available even if one of the availability zones or -cloud providers "goes down"_. - -It seems useful to split this into two sub use cases: - -1. Multiple availability zones within a single cloud provider (across - which feature sets like private networks, load balancing, - persistent disks, data snapshots etc are typically consistent and - explicitly designed to inter-operate). -1. Multiple cloud providers (typically with inconsistent feature sets - and more limited interoperability). - -The single cloud provider case might be easier to implement (although -the multi-cloud provider implementation should just work for a single -cloud provider). Propose high-level design catering for both, with -initial implementation targeting single cloud provider only. - -**Clarifying questions:** -**How does global external service discovery work?** In the steady - state, which external clients connect to which clusters? GeoDNS or - similar? What is the tolerable failover latency if a cluster goes - down? 
Maybe something like (make up some numbers, notwithstanding - some buggy DNS resolvers, TTL's, caches etc) ~3 minutes for ~90% of - clients to re-issue DNS lookups and reconnect to a new cluster when - their home cluster fails is good enough for most Kubernetes users - (or at least way better than the status quo), given that these sorts - of failure only happen a small number of times a year? - -**How does dynamic load balancing across clusters work, if at all?** - One simple starting point might be "it doesn't". i.e. if a service - in a cluster is deemed to be "up", it receives as much traffic as is - generated "nearby" (even if it overloads). If the service is deemed - to "be down" in a given cluster, "all" nearby traffic is redirected - to some other cluster within some number of seconds (failover could - be automatic or manual). Failover is essentially binary. An - improvement would be to detect when a service in a cluster reaches - maximum serving capacity, and dynamically divert additional traffic - to other clusters. But how exactly does all of this work, and how - much of it is provided by Kubernetes, as opposed to something else - bolted on top (e.g. external monitoring and manipulation of GeoDNS)? - -**How does this tie in with auto-scaling of services?** More - specifically, if I run my service across _n_ clusters globally, and - one (or more) of them fail, how do I ensure that the remaining _n-1_ - clusters have enough capacity to serve the additional, failed-over - traffic? Either: - -1. I constantly over-provision all clusters by 1/n (potentially expensive), or -1. I "manually" update my replica count configurations in the - remaining clusters by 1/n when the failure occurs, and Kubernetes - takes care of the rest for me, or -1. Auto-scaling (not yet available) in the remaining clusters takes - care of it for me automagically as the additional failed-over - traffic arrives (with some latency). -1. I manually specify "additional resources to be provisioned" per - remaining cluster, possibly proportional to both the remaining functioning resources - and the unavailable resources in the failed cluster(s). - (All the benefits of over-provisioning, without expensive idle resources.) - -Doing nothing (i.e. forcing users to choose between 1 and 2 on their -own) is probably an OK starting point. Kubernetes autoscaling can get -us to 3 at some later date. - -Up to this point, this use case ("Unavailability Zones") seems materially different from all the others above. It does not require dynamic cross-cluster service migration (we assume that the service is already running in more than one cluster when the failure occurs). Nor does it necessarily involve cross-cluster service discovery or location affinity. As a result, I propose that we address this use case somewhat independently of the others (although I strongly suspect that it will become substantially easier once we've solved the others). - -All of the above (regarding "Unavailibility Zones") refers primarily -to already-running user-facing services, and minimizing the impact on -end users of those services becoming unavailable in a given cluster. -What about the people and systems that deploy Kubernetes services -(devops etc)? Should they be automatically shielded from the impact -of the cluster outage? i.e. have their new resource creation requests -automatically diverted to another cluster during the outage? 
While -this specific requirement seems non-critical (manual fail-over seems -relatively non-arduous, ignoring the user-facing issues above), it -smells a lot like the first three use cases listed above ("Capacity -Overflow, Sensitive Services, Vendor lock-in..."), so if we address -those, we probably get this one free of charge. - -## Core Challenges of Cluster Federation - -As we saw above, a few common challenges fall out of most of the use -cases considered above, namely: - -## Location Affinity - -Can the pods comprising a single distributed application be -partitioned across more than one cluster? More generally, how far -apart, in network terms, can a given client and server within a -distributed application reasonably be? A server need not necessarily -be a pod, but could instead be a persistent disk housing data, or some -other stateful network service. What is tolerable is typically -application-dependent, primarily influenced by network bandwidth -consumption, latency requirements and cost sensitivity. - -For simplicity, lets assume that all Kubernetes distributed -applications fall into one of three categories with respect to relative -location affinity: - -1. **"Strictly Coupled"**: Those applications that strictly cannot be - partitioned between clusters. They simply fail if they are - partitioned. When scheduled, all pods _must_ be scheduled to the - same cluster. To move them, we need to shut the whole distributed - application down (all pods) in one cluster, possibly move some - data, and then bring the up all of the pods in another cluster. To - avoid downtime, we might bring up the replacement cluster and - divert traffic there before turning down the original, but the - principle is much the same. In some cases moving the data might be - prohibitively expensive or time-consuming, in which case these - applications may be effectively _immovable_. -1. **"Strictly Decoupled"**: Those applications that can be - indefinitely partitioned across more than one cluster, to no - disadvantage. An embarrassingly parallel YouTube porn detector, - where each pod repeatedly dequeues a video URL from a remote work - queue, downloads and chews on the video for a few hours, and - arrives at a binary verdict, might be one such example. The pods - derive no benefit from being close to each other, or anything else - (other than the source of YouTube videos, which is assumed to be - equally remote from all clusters in this example). Each pod can be - scheduled independently, in any cluster, and moved at any time. -1. **"Preferentially Coupled"**: Somewhere between Coupled and Decoupled. These applications prefer to have all of their pods located in the same cluster (e.g. for failure correlation, network latency or bandwidth cost reasons), but can tolerate being partitioned for "short" periods of time (for example while migrating the application from one cluster to another). Most small to medium sized LAMP stacks with not-very-strict latency goals probably fall into this category (provided that they use sane service discovery and reconnect-on-fail, which they need to do anyway to run effectively, even in a single Kubernetes cluster). - -And then there's what I'll call _absolute_ location affinity. Some -applications are required to run in bounded geographical or network -topology locations. 
The reasons for this are typically -political/legislative (data privacy laws etc), or driven by network -proximity to consumers (or data providers) of the application ("most -of our users are in Western Europe, U.S. West Coast" etc). - -**Proposal:** First tackle Strictly Decoupled applications (which can - be trivially scheduled, partitioned or moved, one pod at a time). - Then tackle Preferentially Coupled applications (which must be - scheduled in totality in a single cluster, and can be moved, but - ultimately in total, and necessarily within some bounded time). - Leave strictly coupled applications to be manually moved between - clusters as required for the foreseeable future. - -## Cross-cluster service discovery - -I propose having pods use standard discovery methods used by external clients of Kubernetes applications (i.e. DNS). DNS might resolve to a public endpoint in the local or a remote cluster. Other than Strictly Coupled applications, software should be largely oblivious of which of the two occurs. -_Aside:_ How do we avoid "tromboning" through an external VIP when DNS -resolves to a public IP on the local cluster? Strictly speaking this -would be an optimization, and probably only matters to high bandwidth, -low latency communications. We could potentially eliminate the -trombone with some kube-proxy magic if necessary. More detail to be -added here, but feel free to shoot down the basic DNS idea in the mean -time. - -## Cross-cluster Scheduling - -This is closely related to location affinity above, and also discussed -there. The basic idea is that some controller, logically outside of -the basic kubernetes control plane of the clusters in question, needs -to be able to: - -1. Receive "global" resource creation requests. -1. Make policy-based decisions as to which cluster(s) should be used - to fulfill each given resource request. In a simple case, the - request is just redirected to one cluster. In a more complex case, - the request is "demultiplexed" into multiple sub-requests, each to - a different cluster. Knowledge of the (albeit approximate) - available capacity in each cluster will be required by the - controller to sanely split the request. Similarly, knowledge of - the properties of the application (Location Affinity class -- - Strictly Coupled, Strictly Decoupled etc, privacy class etc) will - be required. -1. Multiplex the responses from the individual clusters into an - aggregate response. - -## Cross-cluster Migration - -Again this is closely related to location affinity discussed above, -and is in some sense an extension of Cross-cluster Scheduling. When -certain events occur, it becomes necessary or desirable for the -cluster federation system to proactively move distributed applications -(either in part or in whole) from one cluster to another. Examples of -such events include: - -1. A low capacity event in a cluster (or a cluster failure). -1. A change of scheduling policy ("we no longer use cloud provider X"). -1. A change of resource pricing ("cloud provider Y dropped their prices - lets migrate there"). - -Strictly Decoupled applications can be trivially moved, in part or in whole, one pod at a time, to one or more clusters. 
-For Preferentially Decoupled applications, the federation system must first locate a single cluster with sufficient capacity to accommodate the entire application, then reserve that capacity, and incrementally move the application, one (or more) resources at a time, over to the new cluster, within some bounded time period (and possibly within a predefined "maintenance" window). -Strictly Coupled applications (with the exception of those deemed -completely immovable) require the federation system to: - -1. start up an entire replica application in the destination cluster -1. copy persistent data to the new application instance -1. switch traffic across -1. tear down the original application instance - -It is proposed that support for automated migration of Strictly Coupled applications be -deferred to a later date. - -## Other Requirements - -These are often left implicit by customers, but are worth calling out explicitly: - -1. Software failure isolation between Kubernetes clusters should be - retained as far as is practically possible. The federation system - should not materially increase the failure correlation across - clusters. For this reason the federation system should ideally be - completely independent of the Kubernetes cluster control software, - and look just like any other Kubernetes API client, with no special - treatment. If the federation system fails catastrophically, the - underlying Kubernetes clusters should remain independently usable. -1. Unified monitoring, alerting and auditing across federated Kubernetes clusters. -1. Unified authentication, authorization and quota management across - clusters (this is in direct conflict with failure isolation above, - so there are some tough trade-offs to be made here). - -## Proposed High-Level Architecture - -TBD: All very hand-wavey still, but some initial thoughts to get the conversation going... - -![image](federation-high-level-arch.png) - -## Ubernetes API - -This looks a lot like the existing Kubernetes API but is explicitly multi-cluster. - -+ Clusters become first class objects, which can be registered, listed, described, deregistered etc via the API. -+ Compute resources can be explicitly requested in specific clusters, or automatically scheduled to the "best" cluster by Ubernetes (by a pluggable Policy Engine). -+ There is a federated equivalent of a replication controller type, which is multicluster-aware, and delegates to cluster-specific replication controllers as required (e.g. a federated RC for n replicas might simply spawn multiple replication controllers in different clusters to do the hard work). -+ These federated replication controllers (and in fact all the - services comprising the Ubernetes Control Plane) have to run - somewhere. For high availability Ubernetes deployments, these - services may run in a dedicated Kubernetes cluster, not physically - co-located with any of the federated clusters. But for simpler - deployments, they may be run in one of the federated clusters (but - when that cluster goes down, Ubernetes is down, obviously). - -## Policy Engine and Migration/Replication Controllers - -The Policy Engine decides which parts of each application go into each -cluster at any point in time, and stores this desired state in the -Desired Federation State store (an etcd or -similar). 
Migration/Replication Controllers reconcile this against the -desired states stored in the underlying Kubernetes clusters (by -watching both, and creating or updating the underlying Replication -Controllers and related Services accordingly). - -## Authentication and Authorization - -This should ideally be delegated to some external auth system, shared -by the underlying clusters, to avoid duplication and inconsistency. -Either that, or we end up with multilevel auth. Local readonly -eventually consistent auth slaves in each cluster and in Ubernetes -could potentially cache auth, to mitigate an SPOF auth system. - -## Proposed Next Steps - -Identify concrete applications of each use case and configure a proof -of concept service that exercises the use case. For example, cluster -failure tolerance seems popular, so set up an apache frontend with -replicas in each of three availability zones with either an Amazon Elastic -Load Balancer or Google Cloud Load Balancer pointing at them? What -does the zookeeper config look like for N=3 across 3 AZs -- and how -does each replica find the other replicas and how do clients find -their primary zookeeper replica? And now how do I do a shared, highly -available redis database? - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/federation.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/proposals/federation.md?pixel)]() diff --git a/release-0.19.0/docs/proposals/high-availability.md b/release-0.19.0/docs/proposals/high-availability.md deleted file mode 100644 index 679ed9f3a74..00000000000 --- a/release-0.19.0/docs/proposals/high-availability.md +++ /dev/null @@ -1,52 +0,0 @@ -# High Availability of Scheduling and Controller Components in Kubernetes -This document serves as a proposal for high availability of the scheduler and controller components in kubernetes. This proposal is intended to provide a simple High Availability api for kubernetes components with the potential to extend to services running on kubernetes. Those services would be subject to their own constraints. - -## Design Options -For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en) - -1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby deamon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time. - -2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the apprach that this proposal will leverage. - -3. Active-Active (Load Balanced): Clients can simply load-balance across any number of servers that are currently running. 
Their general availability can be continuously updated, or published, such that load balancing only occurs across active participants. This aspect of HA is outside of the scope of *this* proposal because there is already a partial implementation in the apiserver. - -## Design Discussion Notes on Leader Election -Implementation References: -* [zookeeper](http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection) -* [etcd](https://groups.google.com/forum/#!topic/etcd-dev/EbAa4fjypb4) -* [initialPOC](https://github.com/rrati/etcd-ha) - -In HA, the apiserver will provide an api for sets of replicated clients to do master election: acquire the lease, renew the lease, and release the lease. This api is component agnostic, so a client will need to provide the component type and the lease duration when attemping to become master. The lease duration should be tuned per component. The apiserver will attempt to create a key in etcd based on the component type that contains the client's hostname/ip and port information. This key will be created with a ttl from the lease duration provided in the request. Failure to create this key means there is already a master of that component type, and the error from etcd will propigate to the client. Successfully creating the key means the client making the request is the master. Only the current master can renew the lease. When renewing the lease, the apiserver will update the existing key with a new ttl. The location in etcd for the HA keys is TBD. - -The first component to request leadership will become the master. All other components of that type will fail until the current leader releases the lease, or fails to renew the lease within the expiration time. On startup, all components should attempt to become master. The component that succeeds becomes the master, and should perform all functions of that component. The components that fail to become the master should not perform any tasks and sleep for their lease duration and then attempt to become the master again. A clean shutdown of the leader will cause a release of the lease and a new master will be elected. - -The component that becomes master should create a thread to manage the lease. This thread should be created with a channel that the main process can use to release the master lease. The master should release the lease in cases of an unrecoverable error and clean shutdown. Otherwise, this process will renew the lease and sleep, waiting for the next renewal time or notification to release the lease. If there is a failure to renew the lease, this process should force the entire component to exit. Daemon exit is meant to prevent potential split-brain conditions. Daemon restart is implied in this scenario, by either the init system (systemd), or possible watchdog processes. (See Design Discussion Notes) - -## Options added to components with HA functionality -Some command line options would be added to components that can do HA: - -* Lease Duration - How long a component can be master - -## Design Discussion Notes -Some components may run numerous threads in order to perform tasks in parallel. Upon losing master status, such components should exit instantly instead of attempting to gracefully shut down such threads. This is to ensure that, in the case there's some propagation delay in informing the threads they should stop, the lame-duck threads won't interfere with the new master. The component should exit with an exit code indicating that the component is not the master. 
Since all components will be run by systemd or some other monitoring system, this will just result in a restart. - -There is a short window after a new master acquires the lease, during which data from the old master might be committed. This is because there is currently no way to condition a write on its source being the master. Having the daemons exit shortens this window but does not eliminate it. A proper solution for this problem will be addressed at a later date. The proposed solution is: - -1. This requires transaction support in etcd (which is already planned - see [coreos/etcd#2675](https://github.com/coreos/etcd/pull/2675)) - -2. The entry in etcd that is tracking the lease for a given component (the "current master" entry) would have as its value the host:port of the lease-holder (as described earlier) and a sequence number. The sequence number is incremented whenever a new master gets the lease. - -3. Master replica is aware of the latest sequence number. - -4. Whenever master replica sends a mutating operation to the API server, it includes the sequence number. - -5. When the API server makes the corresponding write to etcd, it includes it in a transaction that does a compare-and-swap on the "current master" entry (old value == new value == host:port and sequence number from the replica that sent the mutating operation). This basically guarantees that if we elect the new master, all transactions coming from the old master will fail. You can think of this as the master attaching a "precondition" of its belief about who is the latest master. - -## Open Questions: -* Is there a desire to keep track of all nodes for a specific component type? - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/high-availability.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/proposals/high-availability.md?pixel)]() diff --git a/release-0.19.0/docs/replication-controller.md b/release-0.19.0/docs/replication-controller.md deleted file mode 100644 index 646aeab6a62..00000000000 --- a/release-0.19.0/docs/replication-controller.md +++ /dev/null @@ -1,71 +0,0 @@ -# Replication Controller - -## What is a _replication controller_? - -A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker). - -As discussed in [life of a pod](pod-states.md), `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always` (Note: If `RestartPolicy` is not set, the default value is `Always`.). `ReplicationController` should refuse to instantiate any pod that has a different restart policy. 
As discussed in [issue #503](https://github.com/GoogleCloudPlatform/kubernetes/issues/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future. - -A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service. Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services. - -## How does a replication controller work? - -### Pod template - -A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170). - -Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive, as demonstrated by the use cases explained below. - -Pods created by a replication controller are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but replication controllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [etcd lock module](https://coreos.com/docs/distributed-configuration/etcd-modules/) or [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (e.g., cpu or memory), should be performed by another online controller process, not unlike the replication controller itself. - -### Labels - -The population of pods that a `ReplicationController` is monitoring is defined with a [label selector](labels.md), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system. - -The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets. 
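For illustration, a minimal `ReplicationController` manifest consistent with this rule might look like the sketch below (the controller name, label key/value, and image are invented placeholders, not taken from this document). The point to notice is that `spec.selector` and `spec.template.metadata.labels` carry the same key/value pair, so the controller manages exactly the pods its own template creates:

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 3
  # The selector below must be satisfied by the labels in the pod template.
  selector:
    app: frontend
  template:
    metadata:
      labels:
        app: frontend              # matches spec.selector above
    spec:
      containers:
      - name: frontend
        image: example/frontend:v1   # placeholder image
        ports:
        - containerPort: 80
      # restartPolicy is omitted, so it defaults to Always, as required for
      # pods managed by a replication controller.
```

Creating this with `kubectl create -f` and later changing the `app` label on one of its pods would remove that pod from the controller's target set, as discussed below.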
- -Note that `ReplicationController`s may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers. - -Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed). - -Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. In the future, we may provide a feature to do this and the deletion in a single client operation. - -## Responsibilities of the replication controller - -The replication controller simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://github.com/GoogleCloudPlatform/kubernetes/issues/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies. - -The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://github.com/GoogleCloudPlatform/kubernetes/issues/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](https://github.com/GoogleCloudPlatform/kubernetes/issues/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)). - -The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc. - -## Common usage patterns - -### Rescheduling - -As mentioned above, whether you have 1 pod you want to keep running, or 1000, a replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent). - -### Scaling - -The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field. 
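As a sketch of what this means in practice (the numbers are invented), a scale-up is nothing more than a change to that one field, made by updating the controller or via the `scale` macro operation mentioned above:

```
spec:
  replicas: 5   # previously 3; the controller observes the change and starts 2 more pods
```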
- -### Rolling updates - -The replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one. - -As explained in [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures. - -Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time. - -The two replication controllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates. - -### Multiple release tracks - -In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels. - -For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `ReplicationController` with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another `ReplicationController` with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the `ReplicationController`s separately to test things out, monitor the results, etc. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/replication-controller.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/replication-controller.md?pixel)]() diff --git a/release-0.19.0/docs/resource_quota_admin.md b/release-0.19.0/docs/resource_quota_admin.md deleted file mode 100644 index 59b08dc4efe..00000000000 --- a/release-0.19.0/docs/resource_quota_admin.md +++ /dev/null @@ -1,107 +0,0 @@ -# Administering Resource Quotas - -Kubernetes can limit both the number of objects created in a namespace, and the -total amount of resources requested by pods in a namespace. This facilitates -sharing of a single Kubernetes cluster by several teams or tenants, each in -a namespace. - -## Enabling Resource Quota - -Resource Quota support is enabled by default for many kubernetes distributions. It is -enabled when the apiserver `--admission_control=` flag has `ResourceQuota` as -one of its arguments. - -Resource Quota is enforced in a particular namespace when there is a -`ResourceQuota` object in that namespace. There should be at most one -`ResourceQuota` object in a namespace. - -## Object Count Quota -The number of objects of a given type can be restricted. 
The following types are supported:

| ResourceName | Description |
| ------------ | ----------- |
| pods | Total number of pods |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of resource quotas |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of persistent volume claims |

For example, the `pods` quota counts and enforces a maximum on the number of `pods` created in a single namespace.

## Compute Resource Quota
The total amount of compute resources requested by pods in a namespace can be restricted. The following types are supported:

| ResourceName | Description |
| ------------ | ----------- |
| cpu | Total cpu limits of containers |
| memory | Total memory usage limits of containers |
| `example.com/customresource` | Total of `resources.limits."example.com/customresource"` of containers |

For example, the `cpu` quota sums up the `resources.limits.cpu` fields of every container of every pod in the namespace, and enforces a maximum on that sum.

Any resource that is not part of core Kubernetes must follow the resource naming convention prescribed by Kubernetes: the resource must have a fully-qualified name (e.g. `mycompany.org/shinynewresource`).

## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
```
$ kubectl namespace myspace
$ cat <<EOF > quota.json
{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "quota"
  },
  "spec": {
    "hard": {
      "memory": "1Gi",
      "cpu": "20",
      "pods": "10",
      "services": "5",
      "replicationcontrollers": "20",
      "resourcequotas": "1"
    }
  }
}
EOF
$ kubectl create -f quota.json
$ kubectl get quota
NAME
quota
$ kubectl describe quota quota
Name:                   quota
Resource                Used    Hard
--------                ----    ----
cpu                     0m      20
memory                  0       1Gi
pods                    5       10
replicationcontrollers  5       20
resourcequotas          1       1
services                3       5
```

## Quota and Cluster Capacity
Resource Quota objects are independent of the cluster capacity. They are expressed in absolute units.

Sometimes more complex policies may be desired, such as:
  - proportionally divide total cluster resources among several teams.
  - allow each tenant to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.

Such policies could be implemented using ResourceQuota as a building block, by writing a controller which watches the quota usage and adjusts the quota hard limits of each namespace.


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/resource_quota_admin.md?pixel)]()


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/resource_quota_admin.md?pixel)]()

diff --git a/release-0.19.0/docs/resources.md b/release-0.19.0/docs/resources.md deleted file mode 100644 index ae98d4709d2..00000000000 --- a/release-0.19.0/docs/resources.md +++ /dev/null @@ -1,214 +0,0 @@

**Note that the model described in this document has not yet been implemented. The tracking issue for implementation of this model is [#168](https://github.com/GoogleCloudPlatform/kubernetes/issues/168). Currently, only memory and cpu limits on containers (not pods) are supported. "memory" is in bytes and "cpu" is in milli-cores.**

# The Kubernetes resource model

To do good pod placement, Kubernetes needs to know how big pods are, as well as the sizes of the nodes onto which they are being placed.
The definition of "how big" is given by the Kubernetes resource model - the subject of this document. - -The resource model aims to be: -* simple, for common cases; -* extensible, to accommodate future growth; -* regular, with few special cases; and -* precise, to avoid misunderstandings and promote pod portability. - -## The resource model -A Kubernetes _resource_ is something that can be requested by, allocated to, or consumed by a pod or container. Examples include memory (RAM), CPU, disk-time, and network bandwidth. - -Once resources on a node have been allocated to one pod, they should not be allocated to another until that pod is removed or exits. This means that Kubernetes schedulers should ensure that the sum of the resources allocated (requested and granted) to its pods never exceeds the usable capacity of the node. Testing whether a pod will fit on a node is called _feasibility checking_. - -Note that the resource model currently prohibits over-committing resources; we will want to relax that restriction later. - -### Resource types - -All resources have a _type_ that is identified by their _typename_ (a string, e.g., "memory"). Several resource types are predefined by Kubernetes (a full list is below), although only two will be supported at first: CPU and memory. Users and system administrators can define their own resource types if they wish (e.g., Hadoop slots). - -A fully-qualified resource typename is constructed from a DNS-style _subdomain_, followed by a slash `/`, followed by a name. -* The subdomain must conform to [RFC 1123](http://www.ietf.org/rfc/rfc1123.txt) (e.g., `kubernetes.io`, `example.com`). -* The name must be not more than 63 characters, consisting of upper- or lower-case alphanumeric characters, with the `-`, `_`, and `.` characters allowed anywhere except the first or last character. -* As a shorthand, any resource typename that does not start with a subdomain and a slash will automatically be prefixed with the built-in Kubernetes _namespace_, `kubernetes.io/` in order to fully-qualify it. This namespace is reserved for code in the open source Kubernetes repository; as a result, all user typenames MUST be fully qualified, and cannot be created in this namespace. - -Some example typenames include `memory` (which will be fully-qualified as `kubernetes.io/memory`), and `example.com/Shiny_New-Resource.Type`. - -For future reference, note that some resources, such as CPU and network bandwidth, are _compressible_, which means that their usage can potentially be throttled in a relatively benign manner. All other resources are _incompressible_, which means that any attempt to throttle them is likely to cause grief. This distinction will be important if a Kubernetes implementation supports over-committing of resources. - -### Resource quantities - -Initially, all Kubernetes resource types are _quantitative_, and have an associated _unit_ for quantities of the associated resource (e.g., bytes for memory, bytes per seconds for bandwidth, instances for software licences). The units will always be a resource type's natural base units (e.g., bytes, not MB), to avoid confusion between binary and decimal multipliers and the underlying unit multiplier (e.g., is memory measured in MiB, MB, or GB?). 
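To make the allocation arithmetic described in the next paragraph concrete, here is a toy feasibility-checking example; all of the numbers are invented, and the quantities are written using the milli-unit and suffix conventions explained below:

```
# Hypothetical node and the pods already allocated on it:
nodeCapacity:  { cpu: 4000m, memory: 8Gi }
allocatedPodA: { cpu: 2500m, memory: 3Gi }
allocatedPodB: { cpu: 1000m, memory: 2Gi }
# Remaining allocatable: cpu 500m, memory 3Gi.
# A new pod requesting { cpu: 1000m, memory: 1Gi } fails feasibility checking
# on cpu (1000m > 500m) even though its memory request would fit, so without
# over-committing it must be scheduled onto some other node.
```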
- -Resource quantities can be added and subtracted: for example, a node has a fixed quantity of each resource type that can be allocated to pods/containers; once such an allocation has been made, the allocated resources cannot be made available to other pods/containers without over-committing the resources. - -To make life easier for people, quantities can be represented externally as unadorned integers, or as fixed-point integers with one of these SI suffices (E, P, T, G, M, K, m) or their power-of-two equivalents (Ei, Pi, Ti, Gi, Mi, Ki). For example, the following represent roughly the same value: 128974848, "129e6", "129M" , "123Mi". Small quantities can be represented directly as decimals (e.g., 0.3), or using milli-units (e.g., "300m"). - * "Externally" means in user interfaces, reports, graphs, and in JSON or YAML resource specifications that might be generated or read by people. - * Case is significant: "m" and "M" are not the same, so "k" is not a valid SI suffix. There are no power-of-two equivalents for SI suffixes that represent multipliers less than 1. - * These conventions only apply to resource quantities, not arbitrary values. - -Internally (i.e., everywhere else), Kubernetes will represent resource quantities as integers so it can avoid problems with rounding errors, and will not use strings to represent numeric values. To achieve this, quantities that naturally have fractional parts (e.g., CPU seconds/second) will be scaled to integral numbers of milli-units (e.g., milli-CPUs) as soon as they are read in. Internal APIs, data structures, and protobufs will use these scaled integer units. Raw measurement data such as usage may still need to be tracked and calculated using floating point values, but internally they should be rescaled to avoid some values being in milli-units and some not. - * Note that reading in a resource quantity and writing it out again may change the way its values are represented, and truncate precision (e.g., 1.0001 may become 1.000), so comparison and difference operations (e.g., by an updater) must be done on the internal representations. - * Avoiding milli-units in external representations has advantages for people who will use Kubernetes, but runs the risk of developers forgetting to rescale or accidentally using floating-point representations. That seems like the right choice. We will try to reduce the risk by providing libraries that automatically do the quantization for JSON/YAML inputs. - -### Resource specifications - -Both users and a number of system components, such as schedulers, (horizontal) auto-scalers, (vertical) auto-sizers, load balancers, and worker-pool managers need to reason about resource requirements of workloads, resource capacities of nodes, and resource usage. Kubernetes divides specifications of *desired state*, aka the Spec, and representations of *current state*, aka the Status. Resource requirements and total node capacity fall into the specification category, while resource usage, characterizations derived from usage (e.g., maximum usage, histograms), and other resource demand signals (e.g., CPU load) clearly fall into the status category and are discussed in the Appendix for now. - -Resource requirements for a container or pod should have the following form: -``` -resourceRequirementSpec: [ - request: [ cpu: 2.5, memory: "40Mi" ], - limit: [ cpu: 4.0, memory: "99Mi" ], -] -``` -Where: -* _request_ [optional]: the amount of resources being requested, or that were requested and have been allocated. 
Scheduler algorithms will use these quantities to test feasibility (whether a pod will fit onto a node). If a container (or pod) tries to use more resources than its _request_, any associated SLOs are voided - e.g., the program it is running may be throttled (compressible resource types), or the attempt may be denied. If _request_ is omitted for a container, it defaults to _limit_ if that is explicitly specified, otherwise to an implementation-defined value; this will always be 0 for a user-defined resource type. If _request_ is omitted for a pod, it defaults to the sum of the (explicit or implicit) _request_ values for the containers it encloses. - -* _limit_ [optional]: an upper bound or cap on the maximum amount of resources that will be made available to a container or pod; if a container or pod uses more resources than its _limit_, it may be terminated. The _limit_ defaults to "unbounded"; in practice, this probably means the capacity of an enclosing container, pod, or node, but may result in non-deterministic behavior, especially for memory. - -Total capacity for a node should have a similar structure: -``` -resourceCapacitySpec: [ - total: [ cpu: 12, memory: "128Gi" ] -] -``` -Where: -* _total_: the total allocatable resources of a node. Initially, the resources at a given scope will bound the resources of the sum of inner scopes. - -#### Notes - - * It is an error to specify the same resource type more than once in each list. - - * It is an error for the _request_ or _limit_ values for a pod to be less than the sum of the (explicit or defaulted) values for the containers it encloses. (We may relax this later.) - - * If multiple pods are running on the same node and attempting to use more resources than they have requested, the result is implementation-defined. For example: unallocated or unused resources might be spread equally across claimants, or the assignment might be weighted by the size of the original request, or as a function of limits, or priority, or the phase of the moon, perhaps modulated by the direction of the tide. Thus, although it's not mandatory to provide a _request_, it's probably a good idea. (Note that the _request_ could be filled in by an automated system that is observing actual usage and/or historical data.) - - * Internally, the Kubernetes master can decide the defaulting behavior and the kubelet implementation may expected an absolute specification. For example, if the master decided that "the default is unbounded" it would pass 2^64 to the kubelet. - - - -## Kubernetes-defined resource types -The following resource types are predefined ("reserved") by Kubernetes in the `kubernetes.io` namespace, and so cannot be used for user-defined resources. Note that the syntax of all resource types in the resource spec is deliberately similar, but some resource types (e.g., CPU) may receive significantly more support than simply tracking quantities in the schedulers and/or the Kubelet. - -### Processor cycles - * Name: `cpu` (or `kubernetes.io/cpu`) - * Units: Kubernetes Compute Unit seconds/second (i.e., CPU cores normalized to a canonical "Kubernetes CPU") - * Internal representation: milli-KCUs - * Compressible? 
yes - * Qualities: this is a placeholder for the kind of thing that may be supported in the future -- see [#147](https://github.com/GoogleCloudPlatform/kubernetes/issues/147) - * [future] `schedulingLatency`: as per lmctfy - * [future] `cpuConversionFactor`: property of a node: the speed of a CPU core on the node's processor divided by the speed of the canonical Kubernetes CPU (a floating point value; default = 1.0). - -To reduce performance portability problems for pods, and to avoid worse-case provisioning behavior, the units of CPU will be normalized to a canonical "Kubernetes Compute Unit" (KCU, pronounced ˈkoÍokoÍžo), which will roughly be equivalent to a single CPU hyperthreaded core for some recent x86 processor. The normalization may be implementation-defined, although some reasonable defaults will be provided in the open-source Kubernetes code. - -Note that requesting 2 KCU won't guarantee that precisely 2 physical cores will be allocated - control of aspects like this will be handled by resource _qualities_ (a future feature). - - -### Memory - * Name: `memory` (or `kubernetes.io/memory`) - * Units: bytes - * Compressible? no (at least initially) - -The precise meaning of what "memory" means is implementation dependent, but the basic idea is to rely on the underlying `memcg` mechanisms, support, and definitions. - -Note that most people will want to use power-of-two suffixes (Mi, Gi) for memory quantities -rather than decimal ones: "64MiB" rather than "64MB". - - -## Resource metadata -A resource type may have an associated read-only ResourceType structure, that contains metadata about the type. For example: -``` -resourceTypes: [ - "kubernetes.io/memory": [ - isCompressible: false, ... - ] - "kubernetes.io/cpu": [ - isCompressible: true, internalScaleExponent: 3, ... - ] - "kubernetes.io/disk-space": [ ... } -] -``` - -Kubernetes will provide ResourceType metadata for its predefined types. If no resource metadata can be found for a resource type, Kubernetes will assume that it is a quantified, incompressible resource that is not specified in milli-units, and has no default value. - -The defined properties are as follows: - -| field name | type | contents | -| ---------- | ---- | -------- | -| name | string, required | the typename, as a fully-qualified string (e.g., `kubernetes.io/cpu`) | -| internalScaleExponent | int, default=0 | external values are multiplied by 10 to this power for internal storage (e.g., 3 for milli-units) | -| units | string, required | format: `unit* [per unit+]` (e.g., `second`, `byte per second`). An empty unit field means "dimensionless". | -| isCompressible | bool, default=false | true if the resource type is compressible | -| defaultRequest | string, default=none | in the same format as a user-supplied value | -| _[future]_ quantization | number, default=1 | smallest granularity of allocation: requests may be rounded up to a multiple of this unit; implementation-defined unit (e.g., the page size for RAM). | - - -# Appendix: future extensions - -The following are planned future extensions to the resource model, included here to encourage comments. - -## Usage data - -Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](pods.md) and Nodes, but will provide separate APIs for accessing and managing that data. 
See the Appendix for possible representations of usage data, but the representation we'll use is TBD.

Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:

```
resourceStatus: [
  usage:     [ cpu: <CPU-info>, memory: <memory-info> ],
  maxusage:  [ cpu: <CPU-info>, memory: <memory-info> ],
  predicted: [ cpu: <CPU-info>, memory: <memory-info> ],
]
```

where a `<CPU-info>` or `<memory-info>` structure looks like this:
```
{
  mean: <value>      # arithmetic mean
  max: <value>       # maximum observed value
  min: <value>       # minimum observed value
  count: <value>     # number of data points
  percentiles: [     # map from %iles to values
    "10": <10th-percentile-value>,
    "50": <50th-percentile-value>,
    "99": <99th-percentile-value>,
    "99.9": <99.9th-percentile-value>,
    ...
  ]
}
```
All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_

## Future resource types

### _[future] Network bandwidth_
 * Name: "network-bandwidth" (or `kubernetes.io/network-bandwidth`)
 * Units: bytes per second
 * Compressible? yes

### _[future] Network operations_
 * Name: "network-iops" (or `kubernetes.io/network-iops`)
 * Units: operations (messages) per second
 * Compressible? yes

### _[future] Storage space_
 * Name: "storage-space" (or `kubernetes.io/storage-space`)
 * Units: bytes
 * Compressible? no

The amount of secondary storage space available to a container. The main target is local disk drives and SSDs, although this could also be used to qualify remotely-mounted volumes. Specifying whether a resource is a raw disk, an SSD, a disk array, or a file system fronting any of these, is left for future work.

### _[future] Storage time_
 * Name: "storage-time" (or `kubernetes.io/storage-time`)
 * Units: seconds per second of disk time
 * Internal representation: milli-units
 * Compressible? yes

This is the amount of time a container spends accessing disk, including actuator and transfer time. A standard disk drive provides 1.0 diskTime seconds per second.

### _[future] Storage operations_
 * Name: "storage-iops" (or `kubernetes.io/storage-iops`)
 * Units: operations per second
 * Compressible? yes


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/resources.md?pixel)]()


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/resources.md?pixel)]()

diff --git a/release-0.19.0/docs/roadmap.md b/release-0.19.0/docs/roadmap.md deleted file mode 100644 index bb7d49634a0..00000000000 --- a/release-0.19.0/docs/roadmap.md +++ /dev/null @@ -1,97 +0,0 @@

# Kubernetes v1

Updated May 28, 2015

This document is intended to capture the set of supported use cases, features, docs, and patterns that we feel are required to call Kubernetes "feature complete" for a 1.0 release candidate.

This list does not emphasize the bug fixes and stabilization that will be required to take it all the way to production readiness. Please see the [Github issues](https://github.com/GoogleCloudPlatform/kubernetes/issues) for a more detailed view.

This is a living document, where suggested changes can be made via a pull request.
- -## Target workloads - -Most realistic examples of production services include a load-balanced web -frontend exposed to the public Internet, with a stateful backend, such as a -clustered database or key-value store. We will target such workloads for our -1.0 release. - -## v1 APIs -For existing and future workloads, we want to provide a consistent, stable set of APIs, over which developers can build and extend Kubernetes. This includes input validation, a consistent API structure, clean semantics, and improved diagnosability of the system. -||||||| merged common ancestors -## APIs and core features -1. Consistent v1 API - - Status: DONE. [v1beta3](http://kubernetesio.blogspot.com/2015/04/introducing-kubernetes-v1beta3.html) was developed as the release candidate for the v1 API. -2. Multi-port services for apps which need more than one port on the same portal IP ([#1802](https://github.com/GoogleCloudPlatform/kubernetes/issues/1802)) - - Status: DONE. Released in 0.15.0 -3. Nominal services for applications which need one stable IP per pod instance ([#260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260)) - - Status: #2585 covers some design options. -4. API input is scrubbed of status fields in favor of a new API to set status ([#4248](https://github.com/GoogleCloudPlatform/kubernetes/issues/4248)) - - Status: DONE -5. Input validation reporting versioned field names ([#3084](https://github.com/GoogleCloudPlatform/kubernetes/issues/3084)) - - Status: in progress -6. Error reporting: Report common problems in ways that users can discover - - Status: -7. Event management: Make events usable and useful - - Status: -8. Persistent storage support ([#5105](https://github.com/GoogleCloudPlatform/kubernetes/issues/5105)) - - Status: in progress -9. Allow nodes to join/leave a cluster ([#6087](https://github.com/GoogleCloudPlatform/kubernetes/issues/6087),[#3168](https://github.com/GoogleCloudPlatform/kubernetes/issues/3168)) - - Status: in progress ([#6949](https://github.com/GoogleCloudPlatform/kubernetes/pull/6949)) -10. Handle node death - - Status: mostly covered by nodes joining/leaving a cluster -11. Allow live cluster upgrades ([#6075](https://github.com/GoogleCloudPlatform/kubernetes/issues/6075),[#6079](https://github.com/GoogleCloudPlatform/kubernetes/issues/6079)) - - Status: design in progress -12. Allow kernel upgrades - - Status: mostly covered by nodes joining/leaving a cluster, need demonstration -13. Allow rolling-updates to fail gracefully ([#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353)) - - Status: -14. Easy .dockercfg - - Status: -15. Demonstrate cluster stability over time - - Status -16. Kubelet use the kubernetes API to fetch jobs to run (instead of etcd) on supported platforms - - Status: DONE - -## Reliability and performance - -1. Restart system components in case of crash (#2884) - - Status: in progress -2. Scale to 100 nodes (#3876) - - Status: in progress -3. Scale to 30-50 pods (1-2 containers each) per node (#4188) - - Status: -4. Scheduling throughput: 99% of scheduling decisions made in less than 1s on 100 node, 3000 pod cluster; linear time to number of nodes and pods (#3954) -5. Startup time: 99% of end-to-end pod startup time with prepulled images is less than 5s on 100 node, 3000 pod cluster; linear time to number of nodes and pods (#3952, #3954) - - Status: -6. API performance: 99% of API calls return in less than 1s; constant time to number of nodes and pods (#4521) - - Status: -7. 
Manage and report disk space on nodes (#4135) - - Status: in progress -8. API test coverage more than 85% in e2e tests - - Status: - -In addition, we will provide versioning and deprecation policies for the APIs. - -## Cluster Environment -Currently, a cluster is a set of nodes (VMs, machines), managed by a master, running a version of Kubernetes. This master is the cluster-level control-plane. For the purpose of running production workloads, members of the cluster must be serviceable and upgradeable. - -## Micro-services and Resources -For applications / micro-services that run on Kubernetes, we want deployments to be easy but powerful. An Operations user should be able to launch a micro-service, letting the scheduler find the right placement. That micro-service should be able to require “pet storage†resources, fulfilled by external storage and with help from the cluster. We also want to improve the tools, experience for how users can roll-out applications through patterns like canary deployments. - -## Performance and Reliability -The system should be performant, especially from the perspective of micro-service running on top of the cluster and for Operations users. As part of being production grade, the system should have a measured availability and be resilient to failures, including fatal failures due to hardware. - -In terms of performance, the objectives include: -- API call return times at 99%tile ([#4521](https://github.com/GoogleCloudPlatform/kubernetes/issues/4521)) -- scale to 100 nodes with 30-50 pods (1-2 containers) per node -- scheduling throughput at the 99%tile ([#3954](https://github.com/GoogleCloudPlatform/kubernetes/issues/3954)) -- startup time at the 99%tile ([#3552](https://github.com/GoogleCloudPlatform/kubernetes/issues/3952)) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/roadmap.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/roadmap.md?pixel)]() diff --git a/release-0.19.0/docs/salt.md b/release-0.19.0/docs/salt.md deleted file mode 100644 index 6e4e1b5676a..00000000000 --- a/release-0.19.0/docs/salt.md +++ /dev/null @@ -1,104 +0,0 @@ -# Using Salt to configure Kubernetes - -The Kubernetes cluster can be configured using Salt. - -The Salt scripts are shared across multiple hosting providers, so it's important to understand some background information prior to making a modification to ensure your changes do not break hosting Kubernetes across multiple environments. Depending on where you host your Kubernetes cluster, you may be using different operating systems and different networking configurations. As a result, it's important to understand some background information before making Salt changes in order to minimize introducing failures for other hosting providers. - -## Salt cluster setup - -The **salt-master** service runs on the kubernetes-master node [(except on the default GCE setup)](#standalone-salt-configuration-on-gce). - -The **salt-minion** service runs on the kubernetes-master node and each kubernetes-minion node in the cluster. - -Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce). 
- -``` -[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf -master: kubernetes-master -``` -The salt-master is contacted by each salt-minion and depending upon the machine information presented, the salt-master will provision the machine as either a kubernetes-master or kubernetes-minion with all the required capabilities needed to run Kubernetes. - -If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API. - -## Standalone Salt Configuration on GCE - -On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state. - -All remaining sections that refer to master/minion setups should be ignored for GCE. One fallout of the GCE setup is that the Salt mine doesn't exist - there is no sharing of configuration amongst nodes. - -## Salt security - -*(Not applicable on default GCE setup.)* - -Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.) - -``` -[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf -open_mode: True -auto_accept: True -``` - -## Salt minion configuration - -Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine. - -An example file is presented below using the Vagrant based environment. - -``` -[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf -grains: - etcd_servers: $MASTER_IP - cloud_provider: vagrant - roles: - - kubernetes-master -``` - -Each hosting environment has a slightly different grains.conf file that is used to build conditional logic where required in the Salt files. - -The following enumerates the set of defined key/value pairs that are supported today. If you add new ones, please make sure to update this list. - -Key | Value -------------- | ------------- -`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver -`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge. -`cloud` | (Optional) Which IaaS platform is used to host kubernetes, *gce*, *azure*, *aws*, *vagrant* -`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE. -`hostnamef` | (Optional) The full host name of the machine, i.e. 
uname -n -`node_ip` | (Optional) The IP address to use to address this node -`hostname_override` | (Optional) Mapped to the kubelet hostname_override -`network_mode` | (Optional) Networking model to use among nodes: *openvswitch* -`networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0* -`publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access -`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine. - -These keys may be leveraged by the Salt sls files to branch behavior. - -In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, its important to sometimes distinguish behavior based on operating system using if branches like the following. - -``` -{% if grains['os_family'] == 'RedHat' %} -// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc. -{% else %} -// something specific to Debian environment (apt-get, initd) -{% endif %} -``` - -## Best Practices - -1. When configuring default arguments for processes, its best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors that may not be familiar with the particulars of each distribution. - -## Future enhancements (Networking) - -Per pod IP configuration is provider specific, so when making networking changes, its important to sand-box these as all providers may not use the same mechanisms (iptables, openvswitch, etc.) - -We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers. - -## Further reading - -The [cluster/saltbase](../cluster/saltbase) tree has more details on the current SaltStack configuration. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/salt.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/salt.md?pixel)]() diff --git a/release-0.19.0/docs/secrets.md b/release-0.19.0/docs/secrets.md deleted file mode 100644 index 848ecd0922a..00000000000 --- a/release-0.19.0/docs/secrets.md +++ /dev/null @@ -1,367 +0,0 @@ -# Secrets - -Objects of type `secret` are intended to hold sensitive information, such as -passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` -is safer and more flexible than putting it verbatim in a `pod` definition or in -a docker image. - -### Creating and Using Secrets -To make use of secrets requires at least two steps: - 1. create a `secret` resource with secret data - 1. create a pod that has a volume of type `secret` and a container - which mounts that volume. - -This is an example of a simple secret, in json format: -```json -{ - "apiVersion": "v1", - "kind": "Secret", - "metadata" : { - "name": "mysecret", - "namespace": "myns" - }, - "data": { - "username": "dmFsdWUtMQ0K", - "password": "dmFsdWUtMg0KDQo=" - } -} -``` - -The data field is a map. -Its keys must match [DNS_SUBDOMAIN](design/identifiers.md). 
-The values are arbitrary data, encoded using base64. - -This is an example of a pod that uses a secret, in json format: -```json -{ - "apiVersion": "v1", - "kind": "Pod", - "metadata": { - "name": "mypod", - "namespace": "myns" - }, - "spec": { - "containers": [{ - "name": "mypod", - "image": "redis", - "volumeMounts": [{ - "name": "foo", - "mountPath": "/etc/foo", - "readOnly": true - }] - }], - "volumes": [{ - "name": "foo", - "secret": { - "secretName": "mysecret" - } - }] - } -} -``` - -### Restrictions -Secret volume sources are validated to ensure that the specified object -reference actually points to an object of type `Secret`. Therefore, a secret -needs to be created before any pods that depend on it. - -Secret API objects reside in a namespace. They can only be referenced by pods -in that same namespace. - -Individual secrets are limited to 1MB in size. This is to discourage creation -of very large secrets which would exhaust apiserver and kubelet memory. -However, creation of many smaller secrets could also exhaust memory. More -comprehensive limits on memory usage due to secrets is a planned feature. - -Kubelet only supports use of secrets for Pods it gets from the API server. -This includes any pods created using kubectl, or indirectly via a replication -controller. It does not include pods created via the kubelets -`--manifest-url` flag, its `--config` flag, or its REST API (these are -not common ways to create pods.) - -### Consuming Secret Values - -The program in a container is responsible for reading the secret(s) from the -files. Currently, if a program expects a secret to be stored in an environment -variable, then the user needs to modify the image to populate the environment -variable from the file as an step before running the main program. Future -versions of Kubernetes are expected to provide more automation for populating -environment variables from files. - - -## Changes to Secrets - -Once a pod is created, its secret volumes will not change, even if the secret -resource is modified. To change the secret used, the original pod must be -deleted, and a new pod (perhaps with an identical PodSpec) must be created. -Therefore, updating a secret follows the same workflow as deploying a new -container image. The `kubectl rolling-update` command can be used ([man -page](kubectl_rolling-update.md)). - -The resourceVersion of the secret is not specified when it is referenced. -Therefore, if a secret is updated at about the same time as pods are starting, -then it is not defined which version of the secret will be used for the pod. It -is not possible currently to check what resource version of a secret object was -used when a pod was created. It is planned that pods will report this -information, so that a controller could restart ones using a old -resourceVersion. In the interim, if this is a concern, it is recommended to not -update the data of existing secrets, but to create new ones with distinct names. - -## Use cases - -### Use-Case: Pod with ssh keys - -To create a pod that uses an ssh key stored as a secret, we first need to create a secret: - -```json -{ - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "ssh-key-secret" - }, - "data": { - "id-rsa": "dmFsdWUtMg0KDQo=", - "id-rsa.pub": "dmFsdWUtMQ0K" - } -} -``` - -**Note:** The serialized JSON and YAML values of secret data are encoded as -base64 strings. Newlines are not valid within these strings and must be -omitted. 
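One way to produce these base64 strings is with the standard `base64` utility found on most Linux and OS X systems. The key material shown below is purely illustrative:

```
$ echo -n "ssh-rsa AAAA... user@example.com" | base64
c3NoLXJzYSBBQUFBLi4uIHVzZXJAZXhhbXBsZS5jb20=
$ cat ~/.ssh/id_rsa | base64 | tr -d '\n'
```

The `-n` flag to `echo` and the `tr -d '\n'` step strip out newlines (including the line wrapping that `base64` may add to long output), since newlines are not allowed inside the encoded values.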
- -Now we can create a pod which references the secret with the ssh key and -consumes it in a volume: - -```json -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "secret-test-pod", - "labels": { - "name": "secret-test" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "ssh-key-secret" - } - } - ], - "containers": [ - { - "name": "ssh-test-container", - "image": "mySshImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } -} -``` - -When the container's command runs, the pieces of the key will be available in: - - /etc/secret-volume/id-rsa.pub - /etc/secret-volume/id-rsa - -The container is then free to use the secret data to establish an ssh connection. - -### Use-Case: Pods with prod / test credentials - -This example illustrates a pod which consumes a secret containing prod -credentials and another pod which consumes a secret with test environment -credentials. - -The secrets: - -```json -{ - "apiVersion": "v1", - "kind": "List", - "items": - [{ - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "prod-db-secret" - }, - "data": { - "password": "dmFsdWUtMg0KDQo=", - "username": "dmFsdWUtMQ0K" - } - }, - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "test-db-secret" - }, - "data": { - "password": "dmFsdWUtMg0KDQo=", - "username": "dmFsdWUtMQ0K" - } - }] -} -``` - -The pods: - -```json -{ - "apiVersion": "v1", - "kind": "List", - "items": - [{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "prod-db-client-pod", - "labels": { - "name": "prod-db-client" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "prod-db-secret" - } - } - ], - "containers": [ - { - "name": "db-client-container", - "image": "myClientImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } - }, - { - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "test-db-client-pod", - "labels": { - "name": "test-db-client" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "test-db-secret" - } - } - ], - "containers": [ - { - "name": "db-client-container", - "image": "myClientImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } - }] -} -``` - -Both containers will have the following files present on their filesystems: -``` - /etc/secret-volume/username - /etc/secret-volume/password -``` - -Note how the specs for the two pods differ only in one field; this facilitates -creating pods with different capabilities from a common pod config template. - -### Use-case: Secret visible to one container in a pod - - -Consider a program that needs to handle HTTP requests, do some complex business -logic, and then sign some messages with an HMAC. Because it has complex -application logic, there might be an unnoticed remote file reading exploit in -the server, which could expose the private key to an attacker. - -This could be divided into two processes in two containers: a frontend container -which handles user interaction and business logic, but which cannot see the -private key; and a signer container that can see the private key, and responds -to simple signing requests from the frontend (e.g. over localhost networking). 
- -With this partitioned approach, an attacker now has to trick the application -server into doing something rather arbitrary, which may be harder than getting -it to read a file. - -## Security Properties - -### Protections - -Because `secret` objects can be created independently of the `pods` that use -them, there is less risk of the secret being exposed during the workflow of -creating, viewing, and editing pods. The system can also take additional -precautions with `secret` objects, such as avoiding writing them to disk where -possible. - -A secret is only sent to a node if a pod on that node requires it. It is not -written to disk. It is stored in a tmpfs. It is deleted once the pod that -depends on it is deleted. - -On most Kubernetes-project-maintained distributions, communication between user -to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. -Secrets are protected when transmitted over these channels. - -There may be secrets for several pods on the same node. However, only the -secrets that a pod requests are potentially visible within its containers. -Therefore, one Pod does not have access to the secrets of another pod. - -There may be several containers in a pod. However, each container in a pod has -to request the secret volume in its `volumeMounts` for it to be visible within -the container. This can be used to construct useful [security partitions at the -Pod level](#use-case-two-containers). - -### Risks - - - Applications still need to protect the value of secret after reading it from the volume, - such as not accidentally logging it or transmitting it to an untrusted party. - - A user who can create a pod that uses a secret can also see the value of that secret. Even - if apiserver policy does not allow that user to read the secret object, the user could - run a pod which exposes the secret. - If multiple replicas of etcd are run, then the secrets will be shared between them. - By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured. - - It is not possible currently to control which users of a kubernetes cluster can - access a secret. Support for this is planned. - - Currently, anyone with root on any node can read any secret from the apiserver, - by impersonating the kubelet. It is a planned feature to only send secrets to - nodes that actually require them, to restrict the impact of a root exploit on a - single node. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/secrets.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/secrets.md?pixel)]() diff --git a/release-0.19.0/docs/security_context.md b/release-0.19.0/docs/security_context.md deleted file mode 100644 index 6fb10065da7..00000000000 --- a/release-0.19.0/docs/security_context.md +++ /dev/null @@ -1,9 +0,0 @@ -# Security Contexts - -A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc..) applied to a container. 
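As a rough sketch, a per-container security context might be expressed in a pod spec along the following lines. The field names follow the v1 API and the values are illustrative only; consult the API object documentation for the exact schema supported in this release:

```json
{
  "name": "my-container",
  "image": "nginx",
  "securityContext": {
    "runAsUser": 1000,
    "privileged": false,
    "capabilities": {
      "add": ["NET_BIND_SERVICE"],
      "drop": ["MKNOD"]
    }
  }
}
```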
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/security_context.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/security_context.md?pixel)]() diff --git a/release-0.19.0/docs/service_accounts.md b/release-0.19.0/docs/service_accounts.md deleted file mode 100644 index 470871f2324..00000000000 --- a/release-0.19.0/docs/service_accounts.md +++ /dev/null @@ -1,17 +0,0 @@ -# Service Accounts -A serviceAccount provides an identity for processes that run in a Pod. -The behavior of the the serviceAccount object is implemented via a plugin -called an [Admission Controller]( admission_controllers.md). When this plugin is active -(and it is by default on most distributions), then it does the following when a pod is created or modified: - 1. If the pod does not have a ```ServiceAccount```, it modifies the pod's ```ServiceAccount``` to "default". - 2. It ensures that the ```ServiceAccount``` referenced by a pod exists. - 3. If ```LimitSecretReferences``` is true, it rejects the pod if the pod references ```Secret``` objects which the pods -```ServiceAccount``` does not reference. - 4. If the pod does not contain any ```ImagePullSecrets```, the ```ImagePullSecrets``` of the -```ServiceAccount``` are added to the pod. - 5. If ```MountServiceAccountToken``` is true, it adds a ```VolumeMount``` with the pod's ```ServiceAccount``` API token secret to containers in the pod. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/service_accounts.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/service_accounts.md?pixel)]() diff --git a/release-0.19.0/docs/services-firewalls.md b/release-0.19.0/docs/services-firewalls.md deleted file mode 100644 index d36138d5349..00000000000 --- a/release-0.19.0/docs/services-firewalls.md +++ /dev/null @@ -1,40 +0,0 @@ -# Services and Firewalls - -Many cloud providers (e.g. Google Compute Engine) define firewalls that help keep prevent inadvertent -exposure to the internet. When exposing a service to the external world, you may need to open up -one or more ports in these firewalls to serve traffic. This document describes this process, as -well as any provider specific details that may be necessary. - - -### Google Compute Engine -Google Compute Engine firewalls are documented [elsewhere](https://cloud.google.com/compute/docs/networking#firewalls_1). - -You can add a firewall with the ```gcloud``` command line tool: - -``` -gcloud compute firewall-rules create my-rule --allow=tcp: -``` - -**Note** -There is one important security note when using firewalls on Google Compute Engine: - -Firewalls are defined per-vm, rather than per-ip address. This means that if you open a firewall for that service's ports, -anything that serves on that port on that VM's host IP address may potentially serve traffic. - -Note that this is not a problem for other Kubernetes services, as they listen on IP addresses that are different than the -host node's external IP address. - -Consider: - * You create a Service with an external load balancer (IP Address 1.2.3.4) and port 80 - * You open the firewall for port 80 for all nodes in your cluster, so that the external Service actually can deliver packets to your Service - * You start an nginx server, running on port 80 on the host virtual machine (IP Address 2.3.4.5). This nginx is **also** exposed to the internet on the VM's external IP address. 
- -Consequently, please be careful when opening firewalls in Google Compute Engine or Google Container Engine. You may accidentally be exposing other services to the wilds of the internet. - -### Other cloud providers -Coming soon. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/services-firewalls.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/services-firewalls.md?pixel)]() diff --git a/release-0.19.0/docs/services.md b/release-0.19.0/docs/services.md deleted file mode 100644 index d05a61e54f8..00000000000 --- a/release-0.19.0/docs/services.md +++ /dev/null @@ -1,468 +0,0 @@ -# Services in Kubernetes - -## Overview - -Kubernetes [`Pods`](pods.md) are mortal. They are born and they die, and they -are not resurrected. [`ReplicationControllers`](replication-controller.md) in -particular create and destroy `Pods` dynamically (e.g. when scaling up or down -or when doing rolling updates). While each `Pod` gets its own IP address, even -those IP addresses cannot be relied upon to be stable over time. This leads to -a problem: if some set of `Pods` (let's call them backends) provides -functionality to other `Pods` (let's call them frontends) inside the Kubernetes -cluster, how do those frontends find out and keep track of which backends are -in that set? - -Enter `Services`. - -A Kubernetes `Service` is an abstraction which defines a logical set of `Pods` -and a policy by which to access them - sometimes called a micro-service. The -set of `Pods` targeted by a `Service` is (usually) determined by a [`Label -Selector`](labels.md) (see below for why you might want a `Service` without a -selector). - -As an example, consider an image-processing backend which is running with 3 -replicas. Those replicas are fungible - frontends do not care which backend -they use. While the actual `Pods` that compose the backend set may change, the -frontend clients should not need to be aware of that or keep track of the list -of backends themselves. The `Service` abstraction enables this decoupling. - -For Kubernetes-native applications, Kubernetes offers a simple `Endpoints` API -that is updated whenever the set of `Pods` in a `Service` changes. For -non-native applications, Kubernetes offers a virtual-IP-based bridge to Services -which redirects to the backend `Pods`. - -## Defining a service - -A `Service` in Kubernetes is a REST object, similar to a `Pod`. Like all of the -REST objects, a `Service` definition can be POSTed to the apiserver to create a -new instance. For example, suppose you have a set of `Pods` that each expose -port 9376 and carry a label "app=MyApp". - -```json -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "my-service" - }, - "spec": { - "selector": { - "app": "MyApp" - }, - "ports": [ - { - "protocol": "TCP", - "port": 80, - "targetPort": 9376 - } - ] - } -} -``` - -This specification will create a new `Service` object named "my-service" which -targets TCP port 9376 on any `Pod` with the "app=MyApp" label. This `Service` -will also be assigned an IP address (sometimes called the "cluster IP"), which -is used by the service proxies (see below). The `Service`'s selector will be -evaluated continuously and the results will be posted in an `Endpoints` object -also named "my-service". - -Note that a `Service` can map an incoming port to any `targetPort`. By default -the `targetPort` is the same as the `port` field. 
Perhaps more interesting is -that `targetPort` can be a string, referring to the name of a port in the -backend `Pod`s. The actual port number assigned to that name can be different -in each backend `Pod`. This offers a lot of flexibility for deploying and -evolving your `Service`s. For example, you can change the port number that -pods expose in the next version of your backend software, without breaking -clients. - -Kubernetes `Service`s support `TCP` and `UDP` for protocols. The default -is `TCP`. - -### Services without selectors - -Services generally abstract access to Kubernetes `Pods`, but they can also -abstract other kinds of backends. For example: - - * You want to have an external database cluster in production, but in test - you use your own databases. - * You want to point your service to a service in another - [`Namespace`](namespaces.md) or on another cluster. - * You are migrating your workload to Kubernetes and some of your backends run - outside of Kubernetes. - -In any of these scenarios you can define a service without a selector: - -```json -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "my-service" - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 80, - "targetPort": 9376 - } - ] - } -} -``` - -Because this has no selector, the corresponding `Endpoints` object will not be -created. You can manually map the service to your own specific endpoints: - -```json -{ - "kind": "Endpoints", - "apiVersion": "v1", - "metadata": { - "name": "my-service" - }, - "subsets": [ - { - "addresses": [ - { "IP": "1.2.3.4" } - ], - "ports": [ - { "port": 80 } - ] - } - ] -} -``` - -Accessing a `Service` without a selector works the same as if it had selector. -The traffic will be routed to endpoints defined by the user (`1.2.3.4:80` in -this example). - -## Virtual IPs and service proxies - -Every node in a Kubernetes cluster runs a `kube-proxy`. This application -watches the Kubernetes master for the addition and removal of `Service` -and `Endpoints` objects. For each `Service` it opens a port (random) on the -local node. Any connections made to that port will be proxied to one of the -corresponding backend `Pods`. Which backend to use is decided based on the -`SessionAffinity` of the `Service`. Lastly, it installs iptables rules which -capture traffic to the `Service`'s `Port` on the `Service`'s cluster IP (which -is entirely virtual) and redirects that traffic to the previously described -port. - -The net result is that any traffic bound for the `Service` is proxied to an -appropriate backend without the clients knowing anything about Kubernetes or -`Services` or `Pods`. - -![Services overview diagram](services_overview.png) - -By default, the choice of backend is random. Client-IP based session affinity -can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the -default is `"None"`). - -As of Kubernetes 1.0, `Service`s are a "layer 3" (TCP/UDP over IP) construct. We do not -yet have a concept of "layer 7" (HTTP) services. - -## Multi-Port Services - -Many `Service`s need to expose more than one port. For this case, Kubernetes -supports multiple port definitions on a `Service` object. When using multiple -ports you must give all of your ports names, so that endpoints can be -disambiguated. 
For example: - -```json -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "my-service" - }, - "spec": { - "selector": { - "app": "MyApp" - }, - "ports": [ - { - "name": "http", - "protocol": "TCP", - "port": 80, - "targetPort": 9376 - }, - { - "name": "https", - "protocol": "TCP", - "port": 443, - "targetPort": 9377 - } - ] - } -} -``` - -## Choosing your own IP address - -A user can specify their own cluster IP address as part of a `Service` creation -request. To do this, set the `spec.clusterIP` field. For example, if they -already have an existing DNS entry that they wish to replace, or legacy systems -that are configured for a specific IP address and difficult to re-configure. -The IP address that a user chooses must be a valid IP address and within the -service_cluster_ip_range CIDR range that is specified by flag to the API server. -If the IP address value is invalid, the apiserver returns a 422 HTTP status code -to indicate that the value is invalid. - -### Why not use round-robin DNS? - -A question that pops up every now and then is why we do all this stuff with -virtual IPs rather than just use standard round-robin DNS. There are a few -reasons: - - * There is a long history of DNS libraries not respecting DNS TTLs and - caching the results of name lookups. - * Many apps do DNS lookups once and cache the results. - * Even if apps and libraries did proper re-resolution, the load of every - client re-resolving DNS over and over would be difficult to manage. - -We try to discourage users from doing things that hurt themselves. That said, -if enough people ask for this, we may implement it as an alternative. - -## Discovering services - -Kubernetes supports 2 primary modes of finding a `Service` - environment -variables and DNS. - -### Environment variables - -When a `Pod` is run on a `Node`, the kubelet adds a set of environment variables -for each active `Service`. It supports both [Docker links -compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see -[makeLinkVariables](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubelet/envvars/envvars.go#L49)) -and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, -where the Service name is upper-cased and dashes are converted to underscores. - -For example, the Service "redis-master" which exposes TCP port 6379 and has been -allocated cluster IP address 10.0.0.11 produces the following environment -variables: - -``` -REDIS_MASTER_SERVICE_HOST=10.0.0.11 -REDIS_MASTER_SERVICE_PORT=6379 -REDIS_MASTER_PORT=tcp://10.0.0.11:6379 -REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379 -REDIS_MASTER_PORT_6379_TCP_PROTO=tcp -REDIS_MASTER_PORT_6379_TCP_PORT=6379 -REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11 -``` - -*This does imply an ordering requirement* - any `Service` that a `Pod` wants to -access must be created before the `Pod` itself, or else the environment -variables will not be populated. DNS does not have this restriction. - -### DNS - -An optional (though strongly recommended) cluster add-on is a DNS server. The -DNS server watches the Kubernetes API for new `Services` and creates a set of -DNS records for each. If DNS has been enabled throughout the cluster then all -`Pods` should be able to do name resolution of `Services` automatically. - -For example, if you have a `Service` called "my-service" in Kubernetes -`Namespace` "my-ns" a DNS record for "my-service.my-ns" is created. 
`Pods` -which exist in the "my-ns" `Namespace` should be able to find it by simply doing -a name lookup for "my-service". `Pods` which exist in other `Namespace`s must -qualify the name as "my-service.my-ns". The result of these name lookups is the -cluster IP. - -We will soon add DNS support for multi-port `Service`s in the form of SRV -records. - -## Headless services - -Sometimes you don't need or want load-balancing and a single service IP. In -this case, you can create "headless" services by specifying `"None"` for the -cluster IP (`spec.clusterIP`). -For such `Service`s, a cluster IP is not allocated and service-specific -environment variables for `Pod`s are not created. DNS is configured to return -multiple A records (addresses) for the `Service` name, which point directly to -the `Pod`s backing the `Service`. Additionally, the kube proxy does not handle -these services and there is no load balancing or proxying done by the platform -for them. The endpoints controller will still create `Endpoints` records in -the API. - -This option allows developers to reduce coupling to the Kubernetes system, if -they desire, but leaves them freedom to do discovery in their own way. -Applications can still use a self-registration pattern and adapters for other -discovery systems could easily be built upon this API. - -##External services - -For some parts of your application (e.g. frontends) you may want to expose a -Service onto an external (outside of your cluster, maybe public internet) IP -address. Kubernetes supports two ways of doing this: `NodePort`s and -`LoadBalancer`s. - -Every `Service` has a `Type` field which defines how the `Service` can be -accessed. Valid values for this field are: - - * `ClusterIP`: use a cluster-internal IP only - this is the default - * `NodePort`: use a cluster IP, but also expose the service on a port on each - node of the cluster (the same port on each) - * `LoadBalancer`: use a ClusterIP and a NodePort, but also ask the cloud - provider for a load balancer which forwards to the `Service` - -Note that while `NodePort`s can be TCP or UDP, `LoadBalancer`s only support TCP -as of Kubernetes 1.0. - -### Type = NodePort - -If you set the `type` field to `"NodePort"`, the Kubernetes master will -allocate you a port (from a flag-configured range) on each node for each port -exposed by your `Service`. That port will be reported in your `Service`'s -`spec.ports[*].nodePort` field. If you specify a value in that field, the -system will allocate you that port or else will fail the API transaction. - -This gives developers the freedom to set up their own load balancers, to -configure cloud environments that are not fully supported by Kubernetes, or -even to just expose one or more nodes' IPs directly. - -### Type = LoadBalancer - -On cloud providers which support external load balancers, setting the `type` -field to `"LoadBalancer"` will provision a load balancer for your `Service`. -The actual creation of the load balancer happens asynchronously, and -information about the provisioned balancer will be published in the `Service`'s -`status.loadBalancer` field. 
For example: - -```json -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "my-service" - }, - "spec": { - "selector": { - "app": "MyApp" - }, - "ports": [ - { - "protocol": "TCP", - "port": 80, - "targetPort": 9376, - "nodePort": 30061 - } - ], - "clusterIP": "10.0.171.239", - "type": "LoadBalancer" - }, - "status": { - "loadBalancer": { - "ingress": [ - { - "ip": "146.148.47.155" - } - ] - } - } -} -``` - -Traffic from the external load balancer will be directed at the backend `Pods`, -though exactly how that works depends on the cloud provider. - -## Shortcomings - -We expect that using iptables and userspace proxies for VIPs will work at -small to medium scale, but may not scale to very large clusters with thousands -of Services. See [the original design proposal for -portals](https://github.com/GoogleCloudPlatform/kubernetes/issues/1107) for more -details. - -Using the kube-proxy obscures the source-IP of a packet accessing a `Service`. -This makes some kinds of firewalling impossible. - -LoadBalancers only support TCP, not UDP. - -The `Type` field is designed as nested functionality - each level adds to the -previous. This is not strictly required on all cloud providers (e.g. GCE does -not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does) -but the current API requires it. - -## Future work - -In the future we envision that the proxy policy can become more nuanced than -simple round robin balancing, for example master-elected or sharded. We also -envision that some `Services` will have "real" load balancers, in which case the -VIP will simply transport the packets there. - -There's a -[proposal](https://github.com/GoogleCloudPlatform/kubernetes/issues/3760) to -eliminate userspace proxying in favor of doing it all in iptables. This should -perform better and fix the source-IP obfuscation, though is less flexible than -arbitrary userspace code. - -We intend to have first-class support for L7 (HTTP) `Service`s. - -We intend to have more flexible ingress modes for `Service`s which encompass -the current `ClusterIP`, `NodePort`, and `LoadBalancer` modes and more. - -## The gory details of virtual IPs - -The previous information should be sufficient for many people who just want to -use `Services`. However, there is a lot going on behind the scenes that may be -worth understanding. - -### Avoiding collisions - -One of the primary philosophies of Kubernetes is that users should not be -exposed to situations that could cause their actions to fail through no fault -of their own. In this situation, we are looking at network ports - users -should not have to choose a port number if that choice might collide with -another user. That is an isolation failure. - -In order to allow users to choose a port number for their `Services`, we must -ensure that no two `Services` can collide. We do that by allocating each -`Service` its own IP address. - -To ensure each service receives a unique IP, an internal allocator atomically -updates a global allocation map in etcd prior to each service. The map object -must exist in the registry for services to get IPs, otherwise creations will -fail with a message indicating an IP could not be allocated. A background -controller is responsible for creating that map (to migrate from older versions -of Kubernetes that used in memory locking) as well as checking for invalid -assignments due to administrator intervention and cleaning up any any IPs -that were allocated but which no service currently uses. 
- -### IPs and VIPs - -Unlike `Pod` IP addresses, which actually route to a fixed destination, -`Service` IPs are not actually answered by a single host. Instead, we use -`iptables` (packet processing logic in Linux) to define virtual IP addresses -which are transparently redirected as needed. When clients connect to the -VIP, their traffic is automatically transported to an appropriate endpoint. -The environment variables and DNS for `Services` are actually populated in -terms of the `Service`'s VIP and port. - -As an example, consider the image processing application described above. -When the backend `Service` is created, the Kubernetes master assigns a virtual -IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the -`Service` is observed by all of the `kube-proxy` instances in the cluster. -When a proxy sees a new `Service`, it opens a new random port, establishes an -iptables redirect from the VIP to this new port, and starts accepting -connections on it. - -When a client connects to the VIP the iptables rule kicks in, and redirects -the packets to the `Service proxy`'s own port. The `Service proxy` chooses a -backend, and starts proxying traffic from the client to the backend. - -This means that `Service` owners can choose any port they want without risk of -collision. Clients can simply connect to an IP and port, without being aware -of which `Pod`s they are actually accessing. - -![Services detailed diagram](services_detail.png) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/services.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/services.md?pixel)]() diff --git a/release-0.19.0/docs/services_detail.png b/release-0.19.0/docs/services_detail.png deleted file mode 100644 index 7ff19b8209b..00000000000 Binary files a/release-0.19.0/docs/services_detail.png and /dev/null differ diff --git a/release-0.19.0/docs/services_detail.svg b/release-0.19.0/docs/services_detail.svg deleted file mode 100644 index cafaf29eb8f..00000000000 --- a/release-0.19.0/docs/services_detail.svg +++ /dev/null @@ -1,570 +0,0 @@ - - - - - - - - - - image/svg+xml - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Backend Pod 1 - labels: app=MyAppport: 9376 - - - - - - Backend Pod 2 - labels: app=MyAppport: 9376 - - - - - - Backend Pod 3 - labels: app=MyAppport: 9376 - - - - - - - - - - - - - - - Client - - - - - iptables - - - - - kube-proxy - - - - - - - apiserver - - - - 3) connect to 10.0.0.1:1234 - 4) redirect to (random)proxy port - 1) watch Services and Endpoints - 2) open proxy port and set portal rules - 5) proxy to a backend - - diff --git a/release-0.19.0/docs/services_overview.png b/release-0.19.0/docs/services_overview.png deleted file mode 100644 index 564bd857e87..00000000000 Binary files a/release-0.19.0/docs/services_overview.png and /dev/null differ diff --git a/release-0.19.0/docs/services_overview.svg b/release-0.19.0/docs/services_overview.svg deleted file mode 100644 index 8b45677ad00..00000000000 --- a/release-0.19.0/docs/services_overview.svg +++ /dev/null @@ -1,417 +0,0 @@ - - - - - - - - - - image/svg+xml - - - - - - - - - - - - - - - - - - - - - - - Backend Pod 1 - labels: app=MyAppport: 9376 - - - - - - Backend Pod 2 - labels: app=MyAppport: 9376 - - - - - - Backend Pod 3 - labels: app=MyAppport: 9376 - - - - - - - - - - - - - - - Client - - - - - - kube-proxy - - - - - - - apiserver - - - - - diff --git a/release-0.19.0/docs/sharing-clusters.md 
b/release-0.19.0/docs/sharing-clusters.md deleted file mode 100644 index 269a3594d1a..00000000000 --- a/release-0.19.0/docs/sharing-clusters.md +++ /dev/null @@ -1,112 +0,0 @@ -# Sharing Cluster Access - -Client access to a running kubernetes cluster can be shared by copying -the `kubectl` client config bundle ([.kubeconfig](kubeconfig-file.md)). -This config bundle lives in `$HOME/.kube/config`, and is generated -by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below. - -**1. Create a cluster** -```bash -cluster/kube-up.sh -``` -**2. Copy `kubeconfig` to new host** -```bash -scp $HOME/.kube/config user@remotehost:/path/to/.kube/config -``` - -**3. On new host, make copied `config` available to `kubectl`** - -* Option A: copy to default location -```bash -mv /path/to/.kube/config $HOME/.kube/config -``` -* Option B: copy to working directory (from which kubectl is run) -```bash -mv /path/to/.kube/config $PWD -``` -* Option C: manually pass `kubeconfig` location to `.kubectl` -```bash -# via environment variable -export KUBECONFIG=/path/to/.kube/config - -# via commandline flag -kubectl ... --kubeconfig=/path/to/.kube/config -``` - -## Manually Generating `kubeconfig` - -`kubeconfig` is generated by `kube-up` but you can generate your own -using (any desired subset of) the following commands. - -```bash -# create kubeconfig entry -kubectl config set-cluster $CLUSTER_NICK - --server=https://1.1.1.1 \ - --certificate-authority=/path/to/apiserver/ca_file \ - --embed-certs=true \ - # Or if tls not needed, replace --certificate-authority and --embed-certs with - --insecure-skip-tls-verify=true - --kubeconfig=/path/to/standalone/.kube/config - -# create user entry -kubectl config set-credentials $USER_NICK - # bearer token credentials, generated on kube master - --token=$token \ - # use either username|password or token, not both - --username=$username \ - --password=$password \ - --client-certificate=/path/to/crt_file \ - --client-key=/path/to/key_file \ - --embed-certs=true - --kubeconfig=/path/to/standalone/.kubeconfig - -# create context entry -kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USER_NICK -``` -Notes: -* The `--embed-certs` flag is needed to generate a standalone -`kubeconfig`, that will work as-is on another host. -* `--kubeconfig` is both the preferred file to load config from and the file to -save config too. In the above commands the `--kubeconfig` file could be -omitted if you first run -```bash -export KUBECONFIG=/path/to/standalone/.kube/config -``` -* The ca_file, key_file, and cert_file referenced above are generated on the -kube master at cluster turnup. They can be found on the master under -`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master. - -For more details on `kubeconfig` see [kubeconfig-file.md](kubeconfig-file.md), -and/or run `kubectl config -h`. - -## Merging `kubeconfig` Example - -`kubectl` loads and merges config from the following locations (in order) - -1. `--kubeconfig=path/to/.kube/config` commandline flag -2. `KUBECONFIG=path/to/.kube/config` env variable -3. `$PWD/.kubeconfig` -4. 
`$HOME/.kube/config` - -If you create clusters A, B on host1, and clusters C, D on host2, you can -make all four clusters available on both hosts by running - -```bash -# on host2, copy host1's default kubeconfig, and merge it from env -scp host1:/path/to/home1/.kube/config path/to/other/.kube/config - -export $KUBECONFIG=path/to/other/.kube/config - -# on host1, copy host2's default kubeconfig and merge it from env -scp host2:/path/to/home2/.kube/config path/to/other/.kube/config - -export $KUBECONFIG=path/to/other/.kube/config -``` -Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](http://docs.k8s.io/kubeconfig-file.md). - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/sharing-clusters.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/sharing-clusters.md?pixel)]() diff --git a/release-0.19.0/docs/ui.md b/release-0.19.0/docs/ui.md deleted file mode 100644 index 04111536b82..00000000000 --- a/release-0.19.0/docs/ui.md +++ /dev/null @@ -1,23 +0,0 @@ -# Kubernetes UI Instructions - -## Kubernetes User Interface -Kubernetes has an extensible user interface with default functionality that describes the current cluster. See the [README](../www/README.md) in the www directory for more information. - -### Running locally -Assuming that you have a cluster running locally at `localhost:8080`, as described [here](getting-started-guides/locally.md), you can run the UI against it with kubectl: - -```sh -kubectl proxy --www=www/app --www-prefix=/ -``` - -You should now be able to access it by visiting [localhost:8001](http://localhost:8001/). - -You can also use other web servers to serve the contents of the www/app directory, as described [here](../www/README.md#serving-the-app-during-development). - -### Running remotely -When Kubernetes is deployed remotely, the api server deploys the UI. To access it, visit `/static/app/` or `/ui`, which redirects to `/static/app/`, on your master server. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/ui.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/ui.md?pixel)]() diff --git a/release-0.19.0/docs/user-guide.md b/release-0.19.0/docs/user-guide.md deleted file mode 100644 index 8c257b22a57..00000000000 --- a/release-0.19.0/docs/user-guide.md +++ /dev/null @@ -1,99 +0,0 @@ -# Kubernetes User Guide - -The user guide is intended for anyone who wants to run programs and services -on an existing Kubernetes cluster. Setup and administration of a -Kubernetes cluster is described in the [Cluster Admin Guide](cluster-admin-guide.md). -The developer guide is for anyone wanting to either write code which directly accesses the -kubernetes API, or to contribute directly to the kubernetes project. - -## Primary concepts - -* **Overview** ([overview.md](overview.md)): A brief overview - of Kubernetes concepts. - -* **Nodes** ([node.md](node.md)): A node is a worker machine in Kubernetes. - -* **Pods** ([pods.md](pods.md)): A pod is a tightly-coupled group of containers - with shared volumes. - -* **The Life of a Pod** ([pod-states.md](pod-states.md)): - Covers the intersection of pod states, the PodStatus type, the life-cycle - of a pod, events, restart policies, and replication controllers. 
- -* **Replication Controllers** ([replication-controller.md](replication-controller.md)): - A replication controller ensures that a specified number of pod "replicas" are - running at any one time. - -* **Services** ([services.md](services.md)): A Kubernetes service is an abstraction - which defines a logical set of pods and a policy by which to access them. - -* **Volumes** ([volumes.md](volumes.md)): A Volume is a directory, possibly with some - data in it, which is accessible to a Container. - -* **Labels** ([labels.md](labels.md)): Labels are key/value pairs that are - attached to objects, such as pods. Labels can be used to organize and to - select subsets of objects. - -* **Secrets** ([secrets.md](secrets.md)): A Secret stores sensitive data - (e.g. ssh keys, passwords) separately from the Pods that use them, protecting - the sensitive data from proliferation by tools that process pods. - -* **Accessing the API and other cluster services via a Proxy** [accessing-the-cluster.md](../docs/accessing-the-cluster.md) - -* **API Overview** ([api.md](api.md)): Pointers to API documentation on various topics - and explanation of Kubernetes's approaches to API changes and API versioning. - -* **Kubernetes Web Interface** ([ui.md](ui.md)): Accessing the Kubernetes - web user interface. - -* **Kubectl Command Line Interface** ([kubectl.md](kubectl.md)): - The `kubectl` command line reference. - -* **Sharing Cluster Access** ([sharing-clusters.md](sharing-clusters.md)): - How to share client credentials for a kubernetes cluster. - -* **Roadmap** ([roadmap.md](roadmap.md)): The set of supported use cases, features, - docs, and patterns that are required before Kubernetes 1.0. - -* **Glossary** ([glossary.md](glossary.md)): Terms and concepts. - -## Further reading - - -* **Annotations** ([annotations.md](annotations.md)): Attaching - arbitrary non-identifying metadata. - -* **Kubernetes Container Environment** ([container-environment.md](container-environment.md)): - Describes the environment for Kubelet managed containers on a Kubernetes - node. - -* **DNS Integration with SkyDNS** ([dns.md](dns.md)): - Resolving a DNS name directly to a Kubernetes service. - -* **Identifiers** ([identifiers.md](identifiers.md)): Names and UIDs - explained. - -* **Images** ([images.md](images.md)): Information about container images - and private registries. - -* **Logging** ([logging.md](logging.md)): Pointers to logging info. - -* **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different - projects, teams, or customers to share a kubernetes cluster. - -* **Networking** ([networking.md](networking.md)): Pod networking overview. - -* **The Kubernetes Resource Model** ([resources.md](resources.md)): - Provides resource information such as size, type, and quantity to assist in - assigning Kubernetes resources appropriately. - -* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/). - -* Frequently asked questions are answered on this project's [wiki](https://github.com/GoogleCloudPlatform/kubernetes/wiki). 
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/user-guide.md?pixel)]() diff --git a/release-0.19.0/docs/versioning.md b/release-0.19.0/docs/versioning.md deleted file mode 100644 index de882759d30..00000000000 --- a/release-0.19.0/docs/versioning.md +++ /dev/null @@ -1,51 +0,0 @@ -# Kubernetes API and Release Versioning - -Legend: - -* **Kube <major>.<minor>.<patch>** refers to the version of Kubernetes that is released. This versions all components: apiserver, kubelet, kubectl, etc. -* **API vX[betaY]** refers to the version of the HTTP API. - -## Release Timeline - -### Minor version timeline - -* Kube 1.0.0 -* Kube 1.0.x: We create a 1.0-patch branch and backport critical bugs and security issues to it. Patch releases occur as needed. -* Kube 1.1-alpha1: Cut from HEAD, smoke tested and released two weeks after Kube 1.0's release. Roughly every two weeks a new alpha is released from HEAD. The timeline is flexible; for example, if there is a critical bugfix, a new alpha can be released ahead of schedule. (This applies to the beta and rc releases as well.) -* Kube 1.1-beta1: When HEAD is feature complete, we create a 1.1-snapshot branch and release it as a beta. (The 1.1-snapshot branch may be created earlier if something that definitely won't be in 1.1 needs to be merged to HEAD.) This should occur 6-8 weeks after Kube 1.0. Development continues at HEAD and only fixes are backported to 1.1-snapshot. -* Kube 1.1-rc1: Released from 1.1-snapshot when it is considered stable and ready for testing. Most users should be able to upgrade to this version in production. -* Kube 1.1: Final release. Should occur between 3 and 4 months after 1.0. - -### Major version timeline - -There is no mandated timeline for major versions. They only occur when we need to start the clock on deprecating features. A given major version should be the latest major version for at least one year from its original release date. - -## Release versions as related to API versions - -Here is an example major release cycle: - -* **Kube 1.0 should have API v1 without v1beta\* API versions** - * The last version of Kube before 1.0 (e.g. 0.14 or whatever it is) will have the stable v1 API. This enables you to migrate all your objects off of the beta API versions of the API and allows us to remove those beta API versions in Kube 1.0 with no effect. There will be tooling to help you detect and migrate any v1beta\* data versions or calls to v1 before you do the upgrade. -* **Kube 1.x may have API v2beta*** - * The first incarnation of a new (backwards-incompatible) API in HEAD is v2beta1. By default this will be unregistered in apiserver, so it can change freely. Once it is available by default in apiserver (which may not happen for several minor releases), it cannot change ever again because we serialize objects in versioned form, and we always need to be able to deserialize any objects that are saved in etcd, even between alpha versions. If further changes to v2beta1 need to be made, v2beta2 is created, and so on, in subsequent 1.x versions. -* **Kube 1.y (where y is the last version of the 1.x series) must have final API v2** - * Before Kube 2.0 is cut, API v2 must be released in 1.x. 
This enables two things: (1) users can upgrade to API v2 when running Kube 1.x and then switch over to Kube 2.x transparently, and (2) in the Kube 2.0 release itself we can cleanup and remove all API v2beta\* versions because no one should have v2beta\* objects left in their database. As mentioned above, tooling will exist to make sure there are no calls or references to a given API version anywhere inside someone's kube installation before someone upgrades. - * Kube 2.0 must include the v1 API, but Kube 3.0 must include the v2 API only. It *may* include the v1 API as well if the burden is not high - this will be determined on a per-major-version basis. - -## Rationale for API v2 being complete before v2.0's release - -It may seem a bit strange to complete the v2 API before v2.0 is released, but *adding* a v2 API is not a breaking change. *Removing* the v2beta\* APIs *is* a breaking change, which is what necessitates the major version bump. There are other ways to do this, but having the major release be the fresh start of that release's API without the baggage of its beta versions seems most intuitive out of the available options. - -# Upgrades - -* Users can upgrade from any Kube 1.x release to any other Kube 1.x release as a rolling upgrade across their cluster. (Rolling upgrade means being able to upgrade the master first, then one node at a time. See #4855 for details.) -* No hard breaking changes over version boundaries. - * For example, if a user is at Kube 1.x, we may require them to upgrade to Kube 1.x+y before upgrading to Kube 2.x. In others words, an upgrade across major versions (e.g. Kube 1.x to Kube 2.x) should effectively be a no-op and as graceful as an upgrade from Kube 1.x to Kube 1.x+1. But you can require someone to go from 1.x to 1.x+y before they go to 2.x. - -There is a separate question of how to track the capabilities of a kubelet to facilitate rolling upgrades. That is not addressed here. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/versioning.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/versioning.md?pixel)]() diff --git a/release-0.19.0/docs/volumes.md b/release-0.19.0/docs/volumes.md deleted file mode 100644 index 7059de0d613..00000000000 --- a/release-0.19.0/docs/volumes.md +++ /dev/null @@ -1,96 +0,0 @@ -# Volumes -This document describes the current state of Volumes in kubernetes. Familiarity with [pods](./pods.md) is suggested. - -A Volume is a directory, possibly with some data in it, which is accessible to a Container. Kubernetes Volumes are similar to but not the same as [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/). - -A Pod specifies which Volumes its containers need in its [spec.volumes](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod) property. - -A process in a Container sees a filesystem view composed from two sources: a single Docker image and zero or more Volumes. A [Docker image](https://docs.docker.com/userguide/dockerimages/) is at the root of the file hierarchy. Any Volumes are mounted at points on the Docker image; Volumes do not mount on other Volumes and do not have hard links to other Volumes. Each container in the Pod independently specifies where on its image to mount each Volume. This is specified in each container's VolumeMounts property. 
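To make the relationship between `spec.volumes` and each container's `volumeMounts` concrete, here is a minimal sketch of a pod that declares one volume and mounts it into one container. The pod name, volume name, and mount path are invented for illustration; the volume is an EmptyDir, which is described in the next section:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical name, for illustration only
spec:
  volumes:                     # the Pod declares the volume once...
    - name: scratch
      emptyDir: {}
  containers:
    - name: app
      image: kubernetes/pause
      volumeMounts:            # ...and each container says where to mount it
        - name: scratch
          mountPath: /scratch
```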
-
-## Resources
-
-The storage media (Disk, SSD, or memory) of a volume is determined by the media of the filesystem holding the kubelet root dir (typically `/var/lib/kubelet`).
-There is no limit on how much space an EmptyDir or HostPath volume can consume, and there is no isolation between containers or between pods.
-
-In the future, we expect that EmptyDir and HostPath volumes will be able to request a certain amount of space using a [resource](./resources.md) specification, and to select the type of media to use, for clusters that have several media types.
-
-## Types of Volumes
-
-Kubernetes currently supports multiple types of Volumes. The community welcomes additional contributions.
-
-### EmptyDir
-
-An EmptyDir volume is created when a Pod is bound to a Node and is initially empty when the first Container command starts. Containers in the same pod can all read and write the same files in the EmptyDir. When a Pod is unbound, the data in the EmptyDir is deleted forever.
-
-Some uses for an EmptyDir are:
-  - scratch space, such as for a disk-based mergesort or checkpointing a long computation.
-  - a directory that a content-manager container fills with data while a webserver container serves the data.
-
-Currently, the user cannot control what kind of media is used for an EmptyDir. If the Kubelet is configured to use a disk drive, then all EmptyDir volumes will be created on that disk drive. In the future, it is expected that Pods will be able to control whether the EmptyDir is on a disk drive, SSD, or tmpfs.
-
-### HostPath
-
-A Volume with a HostPath property allows access to files on the current node.
-
-Some uses for a HostPath are:
-  - running a container that needs access to Docker internals; use a HostPath of /var/lib/docker.
-  - running cAdvisor in a container; use a HostPath of /dev/cgroups.
-
-Watch out when using this type of volume, because:
-  - pods with identical configuration (such as created from a podTemplate) may behave differently on different nodes due to different files on different nodes.
-  - when Kubernetes adds resource-aware scheduling, as is planned, it will not be able to account for resources used by a HostPath.
-
-### GCEPersistentDisk
-
-__Important: You must create a PD using ```gcloud``` or the GCE API before you can use it.__
-
-A Volume with a GCEPersistentDisk property allows access to files on a Google Compute Engine (GCE)
-[Persistent Disk](http://cloud.google.com/compute/docs/disks).
-
-There are some restrictions when using a GCEPersistentDisk:
-  - the nodes (which the kubelet runs on) need to be GCE VMs
-  - those VMs need to be in the same GCE project and zone as the PD
-  - avoid creating multiple pods that use the same Volume if any of them mount it read/write:
-    - if a pod P already mounts a volume read/write, and a second pod Q attempts to use the volume, Q will fail regardless of whether it tries to mount it read-only or read/write.
-    - if a pod P already mounts a volume read-only, and a second pod Q attempts to use the volume read/write, Q will fail.
-  - replication controllers with replicas > 1 can only be created for pods that use read-only mounts.
-
-#### Creating a PD
-
-Before you can use a GCE PD with a pod, you need to create it.
- -```sh -gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk -``` - -#### GCE PD Example configuration: -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: testpd -spec: - containers: - - image: kubernetes/pause - name: testcontainer - volumeMounts: - - mountPath: /testpd - name: testvolume - volumes: - - name: testvolume - # This GCE PD must already exist. - gcePersistentDisk: - pdName: test - fsType: ext4 -``` -### NFS - -Kubernetes NFS volumes allow an existing NFS share to be made available to containers within a pod. - -See the [NFS Pod examples](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/nfs/) section for more details. -For example, [nfs-web-pod.yaml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/nfs/nfs-web-pod.yaml) demonstrates how to specify the usage of an NFS volume within a pod. -In this example one can see that a `volumeMount` called "nfs" is being mounted onto `/var/www/html` in the container "web". -The volume "nfs" is defined as type `nfs`, with the NFS server serving from `nfs-server.default.kube.local` and exporting directory `/` as the share. -The mount being created in this example is not read only. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/volumes.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/docs/volumes.md?pixel)]() diff --git a/release-0.19.0/examples/README.md b/release-0.19.0/examples/README.md deleted file mode 100644 index 850c4b624f7..00000000000 --- a/release-0.19.0/examples/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# Examples - -This directory contains a number of different examples of how to run applications with Kubernetes. - -**Note** -This documentation is current for 0.19.0. - -Examples for previous releases is available in their respective branches: - * [v0.18.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.18/examples) - * [v0.17.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.17/examples) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/README.md?pixel)]() diff --git a/release-0.19.0/examples/cassandra/README.md b/release-0.19.0/examples/cassandra/README.md deleted file mode 100644 index a61aa5a8193..00000000000 --- a/release-0.19.0/examples/cassandra/README.md +++ /dev/null @@ -1,271 +0,0 @@ -## Cloud Native Deployments of Cassandra using Kubernetes - -The following document describes the development of a _cloud native_ [Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Cassandra ```SeedProvider``` is used to enable Cassandra to dynamically discover new Cassandra nodes as they join the cluster. - -This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_. - -### Prerequisites -This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. 
Please see the [getting started](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform. - -### A note for the impatient -This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end. - -### Simple Single Pod Cassandra Node -In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. In this simple case, we define a single container running Cassandra for our pod: - -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: cassandra - name: cassandra -spec: - containers: - - args: - - /run.sh - resources: - limits: - cpu: "1" - image: kubernetes/cassandra:v2 - name: cassandra - ports: - - name: cql - containerPort: 9042 - - name: thrift - containerPort: 9160 - volumeMounts: - - name: data - mountPath: /cassandra_data - env: - - name: MAX_HEAP_SIZE - value: 512M - - name: HEAP_NEWSIZE - value: 100M - - name: KUBERNETES_API_PROTOCOL - value: http - volumes: - - name: data - emptyDir: {} -``` - -There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later) - -You may also note that we are setting some Cassandra parameters (```MAX_HEAP_SIZE``` and ```HEAP_NEWSIZE```). We also tell Kubernetes that the container exposes both the ```CQL``` and ```Thrift``` API ports. Finally, we tell the cluster manager that we need 1 cpu (1 core). - -Given this configuration, we can create the pod as follows - -```sh -$ kubectl create -f cassandra.yaml -``` - -After a few moments, you should be able to see the pod running: - -```sh -$ kubectl get pods cassandra -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -cassandra 10.244.3.3 kubernetes-minion-sft2/104.197.42.181 name=cassandra Running 21 seconds - cassandra kubernetes/cassandra:v2 Running 3 seconds -``` - - -### Adding a Cassandra Service -In Kubernetes a _[Service](../../docs/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is the way that we use initially use Services with Cassandra. - -Here is the service description: -```yaml -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: cassandra - name: cassandra -spec: - ports: - - port: 9042 - targetPort: 9042 - selector: - name: cassandra -``` - -The important thing to note here is the ```selector```. 
It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service. - -Create this service as follows: -```sh -$ kubectl create -f cassandra-service.yaml -``` - -Once the service is created, you can query it's endpoints: -```sh -$ kubectl get endpoints cassandra -o yaml -apiVersion: v1beta3 -kind: Endpoints -metadata: - creationTimestamp: 2015-04-23T17:21:27Z - name: cassandra - namespace: default - resourceVersion: "857" - selfLink: /api/v1beta3/namespaces/default/endpoints/cassandra - uid: 2c7d36bf-e9dd-11e4-a7ed-42010af011dd -subsets: -- addresses: - - IP: 10.244.3.3 - targetRef: - kind: Pod - name: cassandra - namespace: default - resourceVersion: "769" - uid: d185872c-e9dc-11e4-a7ed-42010af011dd - ports: - - port: 9042 - protocol: TCP - -``` - -You can see that the _Service_ has found the pod we created in step one. - -### Adding replicated nodes -Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, scalable Cassandra cluster. - -In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of it's set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with it's desired state. - -Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Cassandra Pod. - -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: cassandra - name: cassandra -spec: - replicas: 1 - selector: - name: cassandra - template: - metadata: - labels: - name: cassandra - spec: - containers: - - command: - - /run.sh - resources: - limits: - cpu: 1 - env: - - name: MAX_HEAP_SIZE - key: MAX_HEAP_SIZE - value: 512M - - name: HEAP_NEWSIZE - key: HEAP_NEWSIZE - value: 100M - image: "kubernetes/cassandra:v2" - name: cassandra - ports: - - containerPort: 9042 - name: cql - - containerPort: 9160 - name: thrift - volumeMounts: - - mountPath: /cassandra_data - name: data - volumes: - - name: data - emptyDir: {} -``` - -The bulk of the replication controller config is actually identical to the Cassandra pod declaration above, it simply gives the controller a recipe to use when creating new pods. The other parts are the ```replicaSelector``` which contains the controller's selector query, and the ```replicas``` parameter which specifies the desired number of replicas, in this case 1. - -Create this controller: - -```sh -$ kubectl create -f cassandra-controller.yaml -``` - -Now this is actually not that interesting, since we haven't actually done anything new. Now it will get interesting. 
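(Optional, and not part of the original walkthrough: before scaling, you can confirm that the controller adopted the existing ```cassandra``` pod rather than starting a duplicate. Both of the following should report a single pod and a replica count of 1.)

```sh
$ kubectl get rc cassandra
$ kubectl get pods -l name=cassandra
```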
- -Let's scale our cluster to 2: -```sh -$ kubectl scale rc cassandra --replicas=2 -``` - -Now if you list the pods in your cluster, you should see two cassandra pods: - -```sh -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -cassandra 10.244.3.3 kubernetes-minion-sft2/104.197.42.181 name=cassandra Running 7 minutes - cassandra kubernetes/cassandra:v2 Running 7 minutes -cassandra-gnhk8 10.244.0.5 kubernetes-minion-dqz3/104.197.2.71 name=cassandra Running About a minute - cassandra kubernetes/cassandra:v2 Running 51 seconds - -``` - -Notice that one of the pods has the human readable name ```cassandra``` that you specified in your config before, and one has a random string, since it was named by the replication controller. - -To prove that this all works, you can use the ```nodetool``` command to examine the status of the cluster, for example: - -```sh -$ ssh 104.197.42.181 -$ docker exec nodetool status -Datacenter: datacenter1 -======================= -Status=Up/Down -|/ State=Normal/Leaving/Joining/Moving --- Address Load Tokens Owns (effective) Host ID Rack -UN 10.244.0.5 74.09 KB 256 100.0% 86feda0f-f070-4a5b-bda1-2eeb0ad08b77 rack1 -UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e7e80dce2b rack1 -``` - -Now let's scale our cluster to 4 nodes: -```sh -$ kubectl scale rc cassandra --replicas=4 -``` - -Examining the status again: -```sh -$ docker exec nodetool status -Datacenter: datacenter1 -======================= -Status=Up/Down -|/ State=Normal/Leaving/Joining/Moving --- Address Load Tokens Owns (effective) Host ID Rack -UN 10.244.2.3 57.61 KB 256 49.1% 9d560d8e-dafb-4a88-8e2f-f554379c21c3 rack1 -UN 10.244.1.7 41.1 KB 256 50.2% 68b8cc9c-2b76-44a4-b033-31402a77b839 rack1 -UN 10.244.0.5 74.09 KB 256 49.7% 86feda0f-f070-4a5b-bda1-2eeb0ad08b77 rack1 -UN 10.244.3.3 51.28 KB 256 51.0% dafe3154-1d67-42e1-ac1d-78e7e80dce2b rack1 -``` - -### tl; dr; -For those of you who are impatient, here is the summary of the commands we ran in this tutorial. - -```sh -# create a single cassandra node -kubectl create -f cassandra.yaml - -# create a service to track all cassandra nodes -kubectl create -f cassandra-service.yaml - -# create a replication controller to replicate cassandra nodes -kubectl create -f cassandra-controller.yaml - -# scale up to 2 nodes -kubectl scale rc cassandra --replicas=2 - -# validate the cluster -docker exec nodetool status - -# scale up to 4 nodes -kubectl scale rc cassandra --replicas=4 -``` - -### Seed Provider Source - -See -[here](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/cassandra/java/src/io/k8s/cassandra/KubernetesSeedProvider.java). 
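The ```KubernetesSeedProvider``` described above finds its peers by asking the Kubernetes API for the same endpoints object we inspected earlier with ```kubectl get endpoints cassandra```. A rough sketch of the equivalent REST call is below; ```$APISERVER``` is a placeholder for however you reach the API server with valid credentials, and is an assumption for illustration rather than a description of how the provider itself authenticates from inside a pod:

```sh
# The "addresses" in the response are the pod IPs that Cassandra uses as gossip seeds.
curl $APISERVER/api/v1beta3/namespaces/default/endpoints/cassandra
```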
- -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/cassandra/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/cassandra/README.md?pixel)]() diff --git a/release-0.19.0/examples/cassandra/cassandra-controller.yaml b/release-0.19.0/examples/cassandra/cassandra-controller.yaml deleted file mode 100644 index 1e10c503222..00000000000 --- a/release-0.19.0/examples/cassandra/cassandra-controller.yaml +++ /dev/null @@ -1,39 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: cassandra - name: cassandra -spec: - replicas: 1 - selector: - name: cassandra - template: - metadata: - labels: - name: cassandra - spec: - containers: - - command: - - /run.sh - resources: - limits: - cpu: 1 - env: - - name: MAX_HEAP_SIZE - value: 512M - - name: HEAP_NEWSIZE - value: 100M - image: gcr.io/google_containers/cassandra:v3 - name: cassandra - ports: - - containerPort: 9042 - name: cql - - containerPort: 9160 - name: thrift - volumeMounts: - - mountPath: /cassandra_data - name: data - volumes: - - name: data - emptyDir: {} diff --git a/release-0.19.0/examples/cassandra/cassandra-service.yaml b/release-0.19.0/examples/cassandra/cassandra-service.yaml deleted file mode 100644 index 580c0a85551..00000000000 --- a/release-0.19.0/examples/cassandra/cassandra-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: cassandra - name: cassandra -spec: - ports: - - port: 9042 - targetPort: 9042 - selector: - name: cassandra diff --git a/release-0.19.0/examples/cassandra/cassandra.yaml b/release-0.19.0/examples/cassandra/cassandra.yaml deleted file mode 100644 index 5240899cf47..00000000000 --- a/release-0.19.0/examples/cassandra/cassandra.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: cassandra - name: cassandra -spec: - containers: - - args: - - /run.sh - resources: - limits: - cpu: "1" - image: gcr.io/google_containers/cassandra:v3 - name: cassandra - ports: - - name: cql - containerPort: 9042 - - name: thrift - containerPort: 9160 - volumeMounts: - - name: data - mountPath: /cassandra_data - env: - - name: MAX_HEAP_SIZE - value: 512M - - name: HEAP_NEWSIZE - value: 100M - volumes: - - name: data - emptyDir: {} diff --git a/release-0.19.0/examples/cassandra/image/Dockerfile b/release-0.19.0/examples/cassandra/image/Dockerfile deleted file mode 100644 index 5e8c92c213e..00000000000 --- a/release-0.19.0/examples/cassandra/image/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -FROM google/debian:wheezy - -COPY cassandra.list /etc/apt/sources.list.d/cassandra.list - -RUN gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D -RUN gpg --export --armor F758CE318D77295D | apt-key add - - -RUN gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00 -RUN gpg --export --armor 2B5C1B00 | apt-key add - - -RUN gpg --keyserver pgp.mit.edu --recv-keys 0353B12C -RUN gpg --export --armor 0353B12C | apt-key add - - -RUN apt-get update -RUN apt-get -qq -y install cassandra - -COPY cassandra.yaml /etc/cassandra/cassandra.yaml -COPY run.sh /run.sh -COPY kubernetes-cassandra.jar /kubernetes-cassandra.jar -RUN chmod a+x /run.sh - -CMD /run.sh diff --git a/release-0.19.0/examples/cassandra/image/cassandra.list b/release-0.19.0/examples/cassandra/image/cassandra.list deleted file mode 100644 index 02e06f2d1ea..00000000000 --- a/release-0.19.0/examples/cassandra/image/cassandra.list +++ /dev/null @@ -1,3 +0,0 @@ -deb 
http://www.apache.org/dist/cassandra/debian 21x main -deb-src http://www.apache.org/dist/cassandra/debian 21x main - diff --git a/release-0.19.0/examples/cassandra/image/cassandra.yaml b/release-0.19.0/examples/cassandra/image/cassandra.yaml deleted file mode 100644 index b1543f2405b..00000000000 --- a/release-0.19.0/examples/cassandra/image/cassandra.yaml +++ /dev/null @@ -1,764 +0,0 @@ -# Cassandra storage config YAML - -# NOTE: -# See http://wiki.apache.org/cassandra/StorageConfiguration for -# full explanations of configuration directives -# /NOTE - -# The name of the cluster. This is mainly used to prevent machines in -# one logical cluster from joining another. -cluster_name: 'Test Cluster' - -# This defines the number of tokens randomly assigned to this node on the ring -# The more tokens, relative to other nodes, the larger the proportion of data -# that this node will store. You probably want all nodes to have the same number -# of tokens assuming they have equal hardware capability. -# -# If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility, -# and will use the initial_token as described below. -# -# Specifying initial_token will override this setting on the node's initial start, -# on subsequent starts, this setting will apply even if initial token is set. -# -# If you already have a cluster with 1 token per node, and wish to migrate to -# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations -num_tokens: 256 - -# initial_token allows you to specify tokens manually. While you can use # it with -# vnodes (num_tokens > 1, above) -- in which case you should provide a -# comma-separated list -- it's primarily used when adding nodes # to legacy clusters -# that do not have vnodes enabled. -# initial_token: - -# See http://wiki.apache.org/cassandra/HintedHandoff -# May either be "true" or "false" to enable globally, or contain a list -# of data centers to enable per-datacenter. -# hinted_handoff_enabled: DC1,DC2 -hinted_handoff_enabled: true -# this defines the maximum amount of time a dead host will have hints -# generated. After it has been dead this long, new hints for it will not be -# created until it has been seen alive and gone down again. -max_hint_window_in_ms: 10800000 # 3 hours -# Maximum throttle in KBs per second, per delivery thread. This will be -# reduced proportionally to the number of nodes in the cluster. (If there -# are two nodes in the cluster, each delivery thread will use the maximum -# rate; if there are three, each will throttle to half of the maximum, -# since we expect two nodes to be delivering hints simultaneously.) -hinted_handoff_throttle_in_kb: 1024 -# Number of threads with which to deliver hints; -# Consider increasing this number when you have multi-dc deployments, since -# cross-dc handoff tends to be slower -max_hints_delivery_threads: 2 - -# Maximum throttle in KBs per second, total. This will be -# reduced proportionally to the number of nodes in the cluster. -batchlog_replay_throttle_in_kb: 1024 - -# Authentication backend, implementing IAuthenticator; used to identify users -# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator, -# PasswordAuthenticator}. -# -# - AllowAllAuthenticator performs no checks - set it to disable authentication. -# - PasswordAuthenticator relies on username/password pairs to authenticate -# users. It keeps usernames and hashed passwords in system_auth.credentials table. 
-# Please increase system_auth keyspace replication factor if you use this authenticator. -authenticator: AllowAllAuthenticator - -# Authorization backend, implementing IAuthorizer; used to limit access/provide permissions -# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer, -# CassandraAuthorizer}. -# -# - AllowAllAuthorizer allows any action to any user - set it to disable authorization. -# - CassandraAuthorizer stores permissions in system_auth.permissions table. Please -# increase system_auth keyspace replication factor if you use this authorizer. -authorizer: AllowAllAuthorizer - -# Validity period for permissions cache (fetching permissions can be an -# expensive operation depending on the authorizer, CassandraAuthorizer is -# one example). Defaults to 2000, set to 0 to disable. -# Will be disabled automatically for AllowAllAuthorizer. -permissions_validity_in_ms: 2000 - -# The partitioner is responsible for distributing groups of rows (by -# partition key) across nodes in the cluster. You should leave this -# alone for new clusters. The partitioner can NOT be changed without -# reloading all data, so when upgrading you should set this to the -# same partitioner you were already using. -# -# Besides Murmur3Partitioner, partitioners included for backwards -# compatibility include RandomPartitioner, ByteOrderedPartitioner, and -# OrderPreservingPartitioner. -# -partitioner: org.apache.cassandra.dht.Murmur3Partitioner - -# Directories where Cassandra should store data on disk. Cassandra -# will spread data evenly across them, subject to the granularity of -# the configured compaction strategy. -# If not set, the default directory is $CASSANDRA_HOME/data/data. -data_file_directories: - - /cassandra_data/data - -# commit log. when running on magnetic HDD, this should be a -# separate spindle than the data directories. -# If not set, the default directory is $CASSANDRA_HOME/data/commitlog. -commitlog_directory: /cassandra_data/commitlog - -# policy for data disk failures: -# die: shut down gossip and Thrift and kill the JVM for any fs errors or -# single-sstable errors, so the node can be replaced. -# stop_paranoid: shut down gossip and Thrift even for single-sstable errors. -# stop: shut down gossip and Thrift, leaving the node effectively dead, but -# can still be inspected via JMX. -# best_effort: stop using the failed disk and respond to requests based on -# remaining available sstables. This means you WILL see obsolete -# data at CL.ONE! -# ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra -disk_failure_policy: stop - -# policy for commit disk failures: -# die: shut down gossip and Thrift and kill the JVM, so the node can be replaced. -# stop: shut down gossip and Thrift, leaving the node effectively dead, but -# can still be inspected via JMX. -# stop_commit: shutdown the commit log, letting writes collect but -# continuing to service reads, as in pre-2.0.5 Cassandra -# ignore: ignore fatal errors and let the batches fail -commit_failure_policy: stop - -# Maximum size of the key cache in memory. -# -# Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the -# minimum, sometimes more. The key cache is fairly tiny for the amount of -# time it saves, so it's worthwhile to use it at large numbers. -# The row cache saves even more time, but must contain the entire row, -# so it is extremely space-intensive. It's best to only use the -# row cache if you have hot rows or static rows. 
-# -# NOTE: if you reduce the size, you may not get you hottest keys loaded on startup. -# -# Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache. -key_cache_size_in_mb: - -# Duration in seconds after which Cassandra should -# save the key cache. Caches are saved to saved_caches_directory as -# specified in this configuration file. -# -# Saved caches greatly improve cold-start speeds, and is relatively cheap in -# terms of I/O for the key cache. Row cache saving is much more expensive and -# has limited use. -# -# Default is 14400 or 4 hours. -key_cache_save_period: 14400 - -# Number of keys from the key cache to save -# Disabled by default, meaning all keys are going to be saved -# key_cache_keys_to_save: 100 - -# Maximum size of the row cache in memory. -# NOTE: if you reduce the size, you may not get you hottest keys loaded on startup. -# -# Default value is 0, to disable row caching. -row_cache_size_in_mb: 0 - -# Duration in seconds after which Cassandra should -# save the row cache. Caches are saved to saved_caches_directory as specified -# in this configuration file. -# -# Saved caches greatly improve cold-start speeds, and is relatively cheap in -# terms of I/O for the key cache. Row cache saving is much more expensive and -# has limited use. -# -# Default is 0 to disable saving the row cache. -row_cache_save_period: 0 - -# Number of keys from the row cache to save -# Disabled by default, meaning all keys are going to be saved -# row_cache_keys_to_save: 100 - -# Maximum size of the counter cache in memory. -# -# Counter cache helps to reduce counter locks' contention for hot counter cells. -# In case of RF = 1 a counter cache hit will cause Cassandra to skip the read before -# write entirely. With RF > 1 a counter cache hit will still help to reduce the duration -# of the lock hold, helping with hot counter cell updates, but will not allow skipping -# the read entirely. Only the local (clock, count) tuple of a counter cell is kept -# in memory, not the whole counter, so it's relatively cheap. -# -# NOTE: if you reduce the size, you may not get you hottest keys loaded on startup. -# -# Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache. -# NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache. -counter_cache_size_in_mb: - -# Duration in seconds after which Cassandra should -# save the counter cache (keys only). Caches are saved to saved_caches_directory as -# specified in this configuration file. -# -# Default is 7200 or 2 hours. -counter_cache_save_period: 7200 - -# Number of keys from the counter cache to save -# Disabled by default, meaning all keys are going to be saved -# counter_cache_keys_to_save: 100 - -# The off-heap memory allocator. Affects storage engine metadata as -# well as caches. Experiments show that JEMAlloc saves some memory -# than the native GCC allocator (i.e., JEMalloc is more -# fragmentation-resistant). -# -# Supported values are: NativeAllocator, JEMallocAllocator -# -# If you intend to use JEMallocAllocator you have to install JEMalloc as library and -# modify cassandra-env.sh as directed in the file. -# -# Defaults to NativeAllocator -# memory_allocator: NativeAllocator - -# saved caches -# If not set, the default directory is $CASSANDRA_HOME/data/saved_caches. -saved_caches_directory: /var/lib/cassandra/saved_caches - -# commitlog_sync may be either "periodic" or "batch." 
-# When in batch mode, Cassandra won't ack writes until the commit log -# has been fsynced to disk. It will wait up to -# commitlog_sync_batch_window_in_ms milliseconds for other writes, before -# performing the sync. -# -# commitlog_sync: batch -# commitlog_sync_batch_window_in_ms: 50 -# -# the other option is "periodic" where writes may be acked immediately -# and the CommitLog is simply synced every commitlog_sync_period_in_ms -# milliseconds. commitlog_periodic_queue_size allows 1024*(CPU cores) pending -# entries on the commitlog queue by default. If you are writing very large -# blobs, you should reduce that; 16*cores works reasonably well for 1MB blobs. -# It should be at least as large as the concurrent_writes setting. -commitlog_sync: periodic -commitlog_sync_period_in_ms: 10000 -# commitlog_periodic_queue_size: - -# The size of the individual commitlog file segments. A commitlog -# segment may be archived, deleted, or recycled once all the data -# in it (potentially from each columnfamily in the system) has been -# flushed to sstables. -# -# The default size is 32, which is almost always fine, but if you are -# archiving commitlog segments (see commitlog_archiving.properties), -# then you probably want a finer granularity of archiving; 8 or 16 MB -# is reasonable. -commitlog_segment_size_in_mb: 32 - -# any class that implements the SeedProvider interface and has a -# constructor that takes a Map of parameters will do. -seed_provider: - # Addresses of hosts that are deemed contact points. - # Cassandra nodes use this list of hosts to find each other and learn - # the topology of the ring. You must change this if you are running - # multiple nodes! - - class_name: io.k8s.cassandra.KubernetesSeedProvider - parameters: - # seeds is actually a comma-delimited list of addresses. - # Ex: ",," - - seeds: "%%ip%%" - -# For workloads with more data than can fit in memory, Cassandra's -# bottleneck will be reads that need to fetch data from -# disk. "concurrent_reads" should be set to (16 * number_of_drives) in -# order to allow the operations to enqueue low enough in the stack -# that the OS and drives can reorder them. Same applies to -# "concurrent_counter_writes", since counter writes read the current -# values before incrementing and writing them back. -# -# On the other hand, since writes are almost never IO bound, the ideal -# number of "concurrent_writes" is dependent on the number of cores in -# your system; (8 * number_of_cores) is a good rule of thumb. -concurrent_reads: 32 -concurrent_writes: 32 -concurrent_counter_writes: 32 - -# Total memory to use for sstable-reading buffers. Defaults to -# the smaller of 1/4 of heap or 512MB. -# file_cache_size_in_mb: 512 - -# Total permitted memory to use for memtables. Cassandra will stop -# accepting writes when the limit is exceeded until a flush completes, -# and will trigger a flush based on memtable_cleanup_threshold -# If omitted, Cassandra will set both to 1/4 the size of the heap. -# memtable_heap_space_in_mb: 2048 -# memtable_offheap_space_in_mb: 2048 - -# Ratio of occupied non-flushing memtable size to total permitted size -# that will trigger a flush of the largest memtable. Lager mct will -# mean larger flushes and hence less compaction, but also less concurrent -# flush activity which can make it difficult to keep your disks fed -# under heavy write load. 
-# -# memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1) -# memtable_cleanup_threshold: 0.11 - -# Specify the way Cassandra allocates and manages memtable memory. -# Options are: -# heap_buffers: on heap nio buffers -# offheap_buffers: off heap (direct) nio buffers -# offheap_objects: native memory, eliminating nio buffer heap overhead -memtable_allocation_type: heap_buffers - -# Total space to use for commitlogs. Since commitlog segments are -# mmapped, and hence use up address space, the default size is 32 -# on 32-bit JVMs, and 8192 on 64-bit JVMs. -# -# If space gets above this value (it will round up to the next nearest -# segment multiple), Cassandra will flush every dirty CF in the oldest -# segment and remove it. So a small total commitlog space will tend -# to cause more flush activity on less-active columnfamilies. -# commitlog_total_space_in_mb: 8192 - -# This sets the amount of memtable flush writer threads. These will -# be blocked by disk io, and each one will hold a memtable in memory -# while blocked. -# -# memtable_flush_writers defaults to the smaller of (number of disks, -# number of cores), with a minimum of 2 and a maximum of 8. -# -# If your data directories are backed by SSD, you should increase this -# to the number of cores. -#memtable_flush_writers: 8 - -# A fixed memory pool size in MB for for SSTable index summaries. If left -# empty, this will default to 5% of the heap size. If the memory usage of -# all index summaries exceeds this limit, SSTables with low read rates will -# shrink their index summaries in order to meet this limit. However, this -# is a best-effort process. In extreme conditions Cassandra may need to use -# more than this amount of memory. -index_summary_capacity_in_mb: - -# How frequently index summaries should be resampled. This is done -# periodically to redistribute memory from the fixed-size pool to sstables -# proportional their recent read rates. Setting to -1 will disable this -# process, leaving existing index summaries at their current sampling level. -index_summary_resize_interval_in_minutes: 60 - -# Whether to, when doing sequential writing, fsync() at intervals in -# order to force the operating system to flush the dirty -# buffers. Enable this to avoid sudden dirty buffer flushing from -# impacting read latencies. Almost always a good idea on SSDs; not -# necessarily on platters. -trickle_fsync: false -trickle_fsync_interval_in_kb: 10240 - -# TCP port, for commands and data -storage_port: 7000 - -# SSL port, for encrypted communication. Unused unless enabled in -# encryption_options -ssl_storage_port: 7001 - -# Address or interface to bind to and tell other Cassandra nodes to connect to. -# You _must_ change this if you want multiple nodes to be able to communicate! -# -# Set listen_address OR listen_interface, not both. Interfaces must correspond -# to a single address, IP aliasing is not supported. -# -# Leaving it blank leaves it up to InetAddress.getLocalHost(). This -# will always do the Right Thing _if_ the node is properly configured -# (hostname, name resolution, etc), and the Right Thing is to use the -# address associated with the hostname (it might not be). -# -# Setting listen_address to 0.0.0.0 is always wrong. 
-listen_address: %%ip%% -# listen_interface: eth0 - -# Address to broadcast to other Cassandra nodes -# Leaving this blank will set it to the same value as listen_address -# broadcast_address: 1.2.3.4 - -# Internode authentication backend, implementing IInternodeAuthenticator; -# used to allow/disallow connections from peer nodes. -# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator - -# Whether to start the native transport server. -# Please note that the address on which the native transport is bound is the -# same as the rpc_address. The port however is different and specified below. -start_native_transport: true -# port for the CQL native transport to listen for clients on -native_transport_port: 9042 -# The maximum threads for handling requests when the native transport is used. -# This is similar to rpc_max_threads though the default differs slightly (and -# there is no native_transport_min_threads, idle threads will always be stopped -# after 30 seconds). -# native_transport_max_threads: 128 -# -# The maximum size of allowed frame. Frame (requests) larger than this will -# be rejected as invalid. The default is 256MB. -# native_transport_max_frame_size_in_mb: 256 - -# Whether to start the thrift rpc server. -start_rpc: true - -# The address or interface to bind the Thrift RPC service and native transport -# server to. -# -# Set rpc_address OR rpc_interface, not both. Interfaces must correspond -# to a single address, IP aliasing is not supported. -# -# Leaving rpc_address blank has the same effect as on listen_address -# (i.e. it will be based on the configured hostname of the node). -# -# Note that unlike listen_address, you can specify 0.0.0.0, but you must also -# set broadcast_rpc_address to a value other than 0.0.0.0. -rpc_address: %%ip%% -# rpc_interface: eth1 - -# port for Thrift to listen for clients on -rpc_port: 9160 - -# RPC address to broadcast to drivers and other Cassandra nodes. This cannot -# be set to 0.0.0.0. If left blank, this will be set to the value of -# rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must -# be set. -# broadcast_rpc_address: 1.2.3.4 - -# enable or disable keepalive on rpc/native connections -rpc_keepalive: true - -# Cassandra provides two out-of-the-box options for the RPC Server: -# -# sync -> One thread per thrift connection. For a very large number of clients, memory -# will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size -# per thread, and that will correspond to your use of virtual memory (but physical memory -# may be limited depending on use of stack space). -# -# hsha -> Stands for "half synchronous, half asynchronous." All thrift clients are handled -# asynchronously using a small number of threads that does not vary with the amount -# of thrift clients (and thus scales well to many clients). The rpc requests are still -# synchronous (one thread per active request). If hsha is selected then it is essential -# that rpc_max_threads is changed from the default value of unlimited. -# -# The default is sync because on Windows hsha is about 30% slower. On Linux, -# sync/hsha performance is about the same, with hsha of course using less memory. -# -# Alternatively, can provide your own RPC server by providing the fully-qualified class name -# of an o.a.c.t.TServerFactory that can create an instance of it. -rpc_server_type: sync - -# Uncomment rpc_min|max_thread to set request pool size limits. 
-# -# Regardless of your choice of RPC server (see above), the number of maximum requests in the -# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync -# RPC server, it also dictates the number of clients that can be connected at all). -# -# The default is unlimited and thus provides no protection against clients overwhelming the server. You are -# encouraged to set a maximum that makes sense for you in production, but do keep in mind that -# rpc_max_threads represents the maximum number of client requests this server may execute concurrently. -# -# rpc_min_threads: 16 -# rpc_max_threads: 2048 - -# uncomment to set socket buffer sizes on rpc connections -# rpc_send_buff_size_in_bytes: -# rpc_recv_buff_size_in_bytes: - -# Uncomment to set socket buffer size for internode communication -# Note that when setting this, the buffer size is limited by net.core.wmem_max -# and when not setting it it is defined by net.ipv4.tcp_wmem -# See: -# /proc/sys/net/core/wmem_max -# /proc/sys/net/core/rmem_max -# /proc/sys/net/ipv4/tcp_wmem -# /proc/sys/net/ipv4/tcp_wmem -# and: man tcp -# internode_send_buff_size_in_bytes: -# internode_recv_buff_size_in_bytes: - -# Frame size for thrift (maximum message length). -thrift_framed_transport_size_in_mb: 15 - -# Set to true to have Cassandra create a hard link to each sstable -# flushed or streamed locally in a backups/ subdirectory of the -# keyspace data. Removing these links is the operator's -# responsibility. -incremental_backups: false - -# Whether or not to take a snapshot before each compaction. Be -# careful using this option, since Cassandra won't clean up the -# snapshots for you. Mostly useful if you're paranoid when there -# is a data format change. -snapshot_before_compaction: false - -# Whether or not a snapshot is taken of the data before keyspace truncation -# or dropping of column families. The STRONGLY advised default of true -# should be used to provide data safety. If you set this flag to false, you will -# lose data on truncation or drop. -auto_snapshot: true - -# When executing a scan, within or across a partition, we need to keep the -# tombstones seen in memory so we can return them to the coordinator, which -# will use them to make sure other replicas also know about the deleted rows. -# With workloads that generate a lot of tombstones, this can cause performance -# problems and even exaust the server heap. -# (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets) -# Adjust the thresholds here if you understand the dangers and want to -# scan more tombstones anyway. These thresholds may also be adjusted at runtime -# using the StorageService mbean. -tombstone_warn_threshold: 1000 -tombstone_failure_threshold: 100000 - -# Granularity of the collation index of rows within a partition. -# Increase if your rows are large, or if you have a very large -# number of rows per partition. The competing goals are these: -# 1) a smaller granularity means more index entries are generated -# and looking up rows withing the partition by collation column -# is faster -# 2) but, Cassandra will keep the collation index in memory for hot -# rows (as part of the key cache), so a larger granularity means -# you can cache more hot rows -column_index_size_in_kb: 64 - - -# Log WARN on any batch size exceeding this value. 5kb per batch by default. -# Caution should be taken on increasing the size of this threshold as it can lead to node instability. 
-batch_size_warn_threshold_in_kb: 5 - -# Number of simultaneous compactions to allow, NOT including -# validation "compactions" for anti-entropy repair. Simultaneous -# compactions can help preserve read performance in a mixed read/write -# workload, by mitigating the tendency of small sstables to accumulate -# during a single long running compactions. The default is usually -# fine and if you experience problems with compaction running too -# slowly or too fast, you should look at -# compaction_throughput_mb_per_sec first. -# -# concurrent_compactors defaults to the smaller of (number of disks, -# number of cores), with a minimum of 2 and a maximum of 8. -# -# If your data directories are backed by SSD, you should increase this -# to the number of cores. -#concurrent_compactors: 1 - -# Throttles compaction to the given total throughput across the entire -# system. The faster you insert data, the faster you need to compact in -# order to keep the sstable count down, but in general, setting this to -# 16 to 32 times the rate you are inserting data is more than sufficient. -# Setting this to 0 disables throttling. Note that this account for all types -# of compaction, including validation compaction. -compaction_throughput_mb_per_sec: 16 - -# When compacting, the replacement sstable(s) can be opened before they -# are completely written, and used in place of the prior sstables for -# any range that has been written. This helps to smoothly transfer reads -# between the sstables, reducing page cache churn and keeping hot rows hot -sstable_preemptive_open_interval_in_mb: 50 - -# Throttles all outbound streaming file transfers on this node to the -# given total throughput in Mbps. This is necessary because Cassandra does -# mostly sequential IO when streaming data during bootstrap or repair, which -# can lead to saturating the network connection and degrading rpc performance. -# When unset, the default is 200 Mbps or 25 MB/s. -# stream_throughput_outbound_megabits_per_sec: 200 - -# Throttles all streaming file transfer between the datacenters, -# this setting allows users to throttle inter dc stream throughput in addition -# to throttling all network stream traffic as configured with -# stream_throughput_outbound_megabits_per_sec -# inter_dc_stream_throughput_outbound_megabits_per_sec: - -# How long the coordinator should wait for read operations to complete -read_request_timeout_in_ms: 5000 -# How long the coordinator should wait for seq or index scans to complete -range_request_timeout_in_ms: 10000 -# How long the coordinator should wait for writes to complete -write_request_timeout_in_ms: 2000 -# How long the coordinator should wait for counter writes to complete -counter_write_request_timeout_in_ms: 5000 -# How long a coordinator should continue to retry a CAS operation -# that contends with other proposals for the same row -cas_contention_timeout_in_ms: 1000 -# How long the coordinator should wait for truncates to complete -# (This can be much longer, because unless auto_snapshot is disabled -# we need to flush first so we can snapshot before removing the data.) -truncate_request_timeout_in_ms: 60000 -# The default timeout for other, miscellaneous operations -request_timeout_in_ms: 10000 - -# Enable operation timeout information exchange between nodes to accurately -# measure request timeouts. 
If disabled, replicas will assume that requests -# were forwarded to them instantly by the coordinator, which means that -# under overload conditions we will waste that much extra time processing -# already-timed-out requests. -# -# Warning: before enabling this property make sure to ntp is installed -# and the times are synchronized between the nodes. -cross_node_timeout: false - -# Enable socket timeout for streaming operation. -# When a timeout occurs during streaming, streaming is retried from the start -# of the current file. This _can_ involve re-streaming an important amount of -# data, so you should avoid setting the value too low. -# Default value is 0, which never timeout streams. -# streaming_socket_timeout_in_ms: 0 - -# phi value that must be reached for a host to be marked down. -# most users should never need to adjust this. -# phi_convict_threshold: 8 - -# endpoint_snitch -- Set this to a class that implements -# IEndpointSnitch. The snitch has two functions: -# - it teaches Cassandra enough about your network topology to route -# requests efficiently -# - it allows Cassandra to spread replicas around your cluster to avoid -# correlated failures. It does this by grouping machines into -# "datacenters" and "racks." Cassandra will do its best not to have -# more than one replica on the same "rack" (which may not actually -# be a physical location) -# -# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER, -# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS -# ARE PLACED. -# -# Out of the box, Cassandra provides -# - SimpleSnitch: -# Treats Strategy order as proximity. This can improve cache -# locality when disabling read repair. Only appropriate for -# single-datacenter deployments. -# - GossipingPropertyFileSnitch -# This should be your go-to snitch for production use. The rack -# and datacenter for the local node are defined in -# cassandra-rackdc.properties and propagated to other nodes via -# gossip. If cassandra-topology.properties exists, it is used as a -# fallback, allowing migration from the PropertyFileSnitch. -# - PropertyFileSnitch: -# Proximity is determined by rack and data center, which are -# explicitly configured in cassandra-topology.properties. -# - Ec2Snitch: -# Appropriate for EC2 deployments in a single Region. Loads Region -# and Availability Zone information from the EC2 API. The Region is -# treated as the datacenter, and the Availability Zone as the rack. -# Only private IPs are used, so this will not work across multiple -# Regions. -# - Ec2MultiRegionSnitch: -# Uses public IPs as broadcast_address to allow cross-region -# connectivity. (Thus, you should set seed addresses to the public -# IP as well.) You will need to open the storage_port or -# ssl_storage_port on the public IP firewall. (For intra-Region -# traffic, Cassandra will switch to the private IP after -# establishing a connection.) -# - RackInferringSnitch: -# Proximity is determined by rack and data center, which are -# assumed to correspond to the 3rd and 2nd octet of each node's IP -# address, respectively. Unless this happens to match your -# deployment conventions, this is best used as an example of -# writing a custom Snitch class and is provided in that spirit. -# -# You can use a custom Snitch by setting this to the full class name -# of the snitch, which will be assumed to be on your classpath. 
-endpoint_snitch: SimpleSnitch - -# controls how often to perform the more expensive part of host score -# calculation -dynamic_snitch_update_interval_in_ms: 100 -# controls how often to reset all host scores, allowing a bad host to -# possibly recover -dynamic_snitch_reset_interval_in_ms: 600000 -# if set greater than zero and read_repair_chance is < 1.0, this will allow -# 'pinning' of replicas to hosts in order to increase cache capacity. -# The badness threshold will control how much worse the pinned host has to be -# before the dynamic snitch will prefer other replicas over it. This is -# expressed as a double which represents a percentage. Thus, a value of -# 0.2 means Cassandra would continue to prefer the static snitch values -# until the pinned host was 20% worse than the fastest. -dynamic_snitch_badness_threshold: 0.1 - -# request_scheduler -- Set this to a class that implements -# RequestScheduler, which will schedule incoming client requests -# according to the specific policy. This is useful for multi-tenancy -# with a single Cassandra cluster. -# NOTE: This is specifically for requests from the client and does -# not affect inter node communication. -# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place -# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of -# client requests to a node with a separate queue for each -# request_scheduler_id. The scheduler is further customized by -# request_scheduler_options as described below. -request_scheduler: org.apache.cassandra.scheduler.NoScheduler - -# Scheduler Options vary based on the type of scheduler -# NoScheduler - Has no options -# RoundRobin -# - throttle_limit -- The throttle_limit is the number of in-flight -# requests per client. Requests beyond -# that limit are queued up until -# running requests can complete. -# The value of 80 here is twice the number of -# concurrent_reads + concurrent_writes. -# - default_weight -- default_weight is optional and allows for -# overriding the default which is 1. -# - weights -- Weights are optional and will default to 1 or the -# overridden default_weight. The weight translates into how -# many requests are handled during each turn of the -# RoundRobin, based on the scheduler id. -# -# request_scheduler_options: -# throttle_limit: 80 -# default_weight: 5 -# weights: -# Keyspace1: 1 -# Keyspace2: 5 - -# request_scheduler_id -- An identifier based on which to perform -# the request scheduling. Currently the only valid option is keyspace. -# request_scheduler_id: keyspace - -# Enable or disable inter-node encryption -# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that -# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher -# suite for authentication, key exchange and encryption of the actual data transfers. -# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode. -# NOTE: No custom encryption options are enabled at the moment -# The available internode options are : all, none, dc, rack -# -# If set to dc cassandra will encrypt the traffic between the DCs -# If set to rack cassandra will encrypt the traffic between the racks -# -# The passwords used in these options must match the passwords used when generating -# the keystore and truststore. 
For instructions on generating these files, see: -# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore -# -server_encryption_options: - internode_encryption: none - keystore: conf/.keystore - keystore_password: cassandra - truststore: conf/.truststore - truststore_password: cassandra - # More advanced defaults below: - # protocol: TLS - # algorithm: SunX509 - # store_type: JKS - # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] - # require_client_auth: false - -# enable or disable client/server encryption. -client_encryption_options: - enabled: false - keystore: conf/.keystore - keystore_password: cassandra - # require_client_auth: false - # Set trustore and truststore_password if require_client_auth is true - # truststore: conf/.truststore - # truststore_password: cassandra - # More advanced defaults below: - # protocol: TLS - # algorithm: SunX509 - # store_type: JKS - # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] - -# internode_compression controls whether traffic between nodes is -# compressed. -# can be: all - all traffic is compressed -# dc - traffic between different datacenters is compressed -# none - nothing is compressed. -internode_compression: all - -# Enable or disable tcp_nodelay for inter-dc communication. -# Disabling it will result in larger (but fewer) network packets being sent, -# reducing overhead from the TCP protocol itself, at the cost of increasing -# latency if you block for cross-datacenter responses. -inter_dc_tcp_nodelay: false diff --git a/release-0.19.0/examples/cassandra/image/kubernetes-cassandra.jar b/release-0.19.0/examples/cassandra/image/kubernetes-cassandra.jar deleted file mode 100644 index 93f492965b7..00000000000 Binary files a/release-0.19.0/examples/cassandra/image/kubernetes-cassandra.jar and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/image/run.sh b/release-0.19.0/examples/cassandra/image/run.sh deleted file mode 100644 index 4ca7babd163..00000000000 --- a/release-0.19.0/examples/cassandra/image/run.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
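# The commands below substitute this pod's IP address (as reported by
# `hostname -I`) for the %%ip%% placeholder in /etc/cassandra/cassandra.yaml,
# put the Kubernetes seed-provider jar on Cassandra's classpath, and then run
# Cassandra in the foreground so the container stays bound to the process.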
- -perl -pi -e "s/%%ip%%/$(hostname -I)/g" /etc/cassandra/cassandra.yaml -export CLASSPATH=/kubernetes-cassandra.jar -cassandra -f diff --git a/release-0.19.0/examples/cassandra/java/pom.xml b/release-0.19.0/examples/cassandra/java/pom.xml deleted file mode 100644 index 0df1e54675d..00000000000 --- a/release-0.19.0/examples/cassandra/java/pom.xml +++ /dev/null @@ -1,47 +0,0 @@ - - 4.0.0 - io.k8s.cassandra - kubernetes-cassandra - 0.0.3 - - src - - - maven-compiler-plugin - 2.3.2 - - 1.7 - 1.7 - - - - - - - junit - junit - 3.8.1 - test - - - org.slf4j - slf4j-log4j12 - 1.7.5 - - - org.codehaus.jackson - jackson-core-asl - 1.6.3 - - - org.codehaus.jackson - jackson-mapper-asl - 1.6.3 - - - org.apache.cassandra - cassandra-all - 2.0.11 - - - diff --git a/release-0.19.0/examples/cassandra/java/src/io/k8s/cassandra/KubernetesSeedProvider.java b/release-0.19.0/examples/cassandra/java/src/io/k8s/cassandra/KubernetesSeedProvider.java deleted file mode 100644 index 338c7f7e082..00000000000 --- a/release-0.19.0/examples/cassandra/java/src/io/k8s/cassandra/KubernetesSeedProvider.java +++ /dev/null @@ -1,149 +0,0 @@ -package io.k8s.cassandra; - -import java.io.IOException; -import java.nio.file.Files; -import java.nio.file.Path; -import java.nio.file.Paths; -import java.net.InetAddress; -import java.net.UnknownHostException; -import java.net.URL; -import java.net.URLConnection; -import java.security.cert.X509Certificate; -import java.security.KeyManagementException; -import java.security.NoSuchAlgorithmException; -import java.security.SecureRandom; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -import javax.net.ssl.HostnameVerifier; -import javax.net.ssl.HttpsURLConnection; -import javax.net.ssl.SSLContext; -import javax.net.ssl.SSLSession; -import javax.net.ssl.TrustManager; -import javax.net.ssl.X509TrustManager; - -import org.codehaus.jackson.JsonNode; -import org.codehaus.jackson.annotate.JsonIgnoreProperties; -import org.codehaus.jackson.map.ObjectMapper; -import org.apache.cassandra.locator.SeedProvider; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -public class KubernetesSeedProvider implements SeedProvider { - - @JsonIgnoreProperties(ignoreUnknown = true) - static class Address { - public String IP; - } - - @JsonIgnoreProperties(ignoreUnknown = true) - static class Subset { - public List
addresses; - } - - @JsonIgnoreProperties(ignoreUnknown = true) - static class Endpoints { - public List subsets; - } - - private static String getEnvOrDefault(String var, String def) { - String val = System.getenv(var); - if (val == null) { - val = def; - } - return val; - } - - private static String getServiceAccountToken() throws IOException { - String file = "/var/run/secrets/kubernetes.io/serviceaccount/token"; - return new String(Files.readAllBytes(Paths.get(file))); - } - - private static final Logger logger = LoggerFactory.getLogger(KubernetesSeedProvider.class); - - private List defaultSeeds; - private TrustManager[] trustAll; - private HostnameVerifier trustAllHosts; - - public KubernetesSeedProvider(Map params) { - // Taken from SimpleSeedProvider.java - // These are used as a fallback, if we get nothing from k8s. - String[] hosts = params.get("seeds").split(",", -1); - defaultSeeds = new ArrayList(hosts.length); - for (String host : hosts) - { - try { - defaultSeeds.add(InetAddress.getByName(host.trim())); - } - catch (UnknownHostException ex) - { - // not fatal... DD will bark if there end up being zero seeds. - logger.warn("Seed provider couldn't lookup host " + host); - } - } - // TODO: Load the CA cert when it is available on all platforms. - trustAll = new TrustManager[] { - new X509TrustManager() { - public void checkServerTrusted(X509Certificate[] certs, String authType) {} - public void checkClientTrusted(X509Certificate[] certs, String authType) {} - public X509Certificate[] getAcceptedIssuers() { return null; } - } - }; - trustAllHosts = new HostnameVerifier() { - public boolean verify(String hostname, SSLSession session) { - return true; - } - }; - } - - public List getSeeds() { - List list = new ArrayList(); - String host = "https://kubernetes.default.cluster.local"; - String serviceName = getEnvOrDefault("CASSANDRA_SERVICE", "cassandra"); - String path = "/api/v1beta3/namespaces/default/endpoints/"; - try { - String token = getServiceAccountToken(); - - SSLContext ctx = SSLContext.getInstance("SSL"); - ctx.init(null, trustAll, new SecureRandom()); - - URL url = new URL(host + path + serviceName); - HttpsURLConnection conn = (HttpsURLConnection)url.openConnection(); - - // TODO: Remove this once the CA cert is propogated everywhere, and replace - // with loading the CA cert. - conn.setSSLSocketFactory(ctx.getSocketFactory()); - conn.setHostnameVerifier(trustAllHosts); - - conn.addRequestProperty("Authorization", "Bearer " + token); - ObjectMapper mapper = new ObjectMapper(); - Endpoints endpoints = mapper.readValue(conn.getInputStream(), Endpoints.class); - if (endpoints != null) { - // Here is a problem point, endpoints.subsets can be null in first node cases. - if (endpoints.subsets != null && !endpoints.subsets.isEmpty()){ - for (Subset subset : endpoints.subsets) { - for (Address address : subset.addresses) { - list.add(InetAddress.getByName(address.IP)); - } - } - } - } - } catch (IOException | NoSuchAlgorithmException | KeyManagementException ex) { - logger.warn("Request to kubernetes apiserver failed", ex); - } - if (list.size() == 0) { - // If we got nothing, we might be the first instance, in that case - // fall back on the seeds that were passed in cassandra.yaml. 
- return defaultSeeds; - } - return list; - } - - // Simple main to test the implementation - public static void main(String[] args) { - SeedProvider provider = new KubernetesSeedProvider(new HashMap()); - System.out.println(provider.getSeeds()); - } -} diff --git a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$1.class b/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$1.class deleted file mode 100644 index 411292e8647..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$1.class and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$2.class b/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$2.class deleted file mode 100644 index 58bd50d0328..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$2.class and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Address.class b/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Address.class deleted file mode 100644 index 6efe8f5efdf..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Address.class and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Endpoints.class b/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Endpoints.class deleted file mode 100644 index 7755bfbf39c..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Endpoints.class and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Subset.class b/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Subset.class deleted file mode 100644 index bdbca660c9b..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider$Subset.class and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider.class b/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider.class deleted file mode 100644 index 1a415bbb07b..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/classes/io/k8s/cassandra/KubernetesSeedProvider.class and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/kubernetes-cassandra-0.0.2.jar b/release-0.19.0/examples/cassandra/java/target/kubernetes-cassandra-0.0.2.jar deleted file mode 100644 index 8fac473fe85..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/kubernetes-cassandra-0.0.2.jar and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/kubernetes-cassandra-0.0.3.jar b/release-0.19.0/examples/cassandra/java/target/kubernetes-cassandra-0.0.3.jar deleted file mode 100644 index 93f492965b7..00000000000 Binary files a/release-0.19.0/examples/cassandra/java/target/kubernetes-cassandra-0.0.3.jar and /dev/null differ diff --git a/release-0.19.0/examples/cassandra/java/target/maven-archiver/pom.properties 
b/release-0.19.0/examples/cassandra/java/target/maven-archiver/pom.properties deleted file mode 100644 index 2e9bdc0c907..00000000000 --- a/release-0.19.0/examples/cassandra/java/target/maven-archiver/pom.properties +++ /dev/null @@ -1,5 +0,0 @@ -#Generated by Maven -#Sat May 16 02:26:42 BST 2015 -version=0.0.3 -groupId=io.k8s.cassandra -artifactId=kubernetes-cassandra diff --git a/release-0.19.0/examples/celery-rabbitmq/README.md b/release-0.19.0/examples/celery-rabbitmq/README.md deleted file mode 100644 index 1fcf94ee332..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/README.md +++ /dev/null @@ -1,238 +0,0 @@ -# Example: Distributed task queues with Celery, RabbitMQ and Flower - -## Introduction - -Celery is an asynchronous task queue based on distributed message passing. It is used to create execution units (i.e. tasks) which are then executed on one or more worker nodes, either synchronously or asynchronously. - -Celery is implemented in Python. - -Since Celery is based on message passing, it requires some middleware (to handle translation of the message between sender and receiver) called a _message broker_. RabbitMQ is a message broker often used in conjunction with Celery. - -This example will show you how to use Kubernetes to set up a very basic distributed task queue using Celery as the task queue and RabbitMQ as the message broker. It will also show you how to set up a Flower-based front end to monitor the tasks. - -## Goal - -At the end of the example, we will have: - -* Three pods: - * A Celery task queue - * A RabbitMQ message broker - * A Flower frontend -* A service that provides access to the message broker -* A basic celery task that can be passed to the worker node - - -## Prerequisites - -You should already have turned up a Kubernetes cluster. To get the most of this example, ensure that Kubernetes will create more than one minion (e.g. by setting your `NUM_MINIONS` environment variable to 2 or more). - - -## Step 1: Start the RabbitMQ service - -The Celery task queue will need to communicate with the RabbitMQ broker. RabbitMQ will eventually appear on a separate pod, but since pods are ephemeral we need a service that can transparently route requests to RabbitMQ. - -Use the file [`examples/celery-rabbitmq/rabbitmq-service.yaml`](rabbitmq-service.yaml): - -```yaml -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: rabbitmq - name: rabbitmq-service -spec: - ports: - - port: 5672 - protocol: TCP - targetPort: 5672 - selector: - app: taskQueue - component: rabbitmq -``` - -To start the service, run: - -```shell -$ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml -``` - -This service allows other pods to connect to the rabbitmq. To them, it will be seen as available on port 5672, although the service is routing the traffic to the container (also via port 5672). 
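If you want to confirm that the service is reachable from inside the cluster before starting the Celery worker, a short check from any pod will do. This is only an illustrative sketch: it assumes the `pika` client library is available in the image, and it reads the broker address from the `RABBITMQ_SERVICE_SERVICE_HOST` environment variable that Kubernetes injects for the service (the same variable used in Step 3 below).

```python
import os

import pika

# Address of the rabbitmq-service, injected by Kubernetes into pods that
# start after the service exists; fall back to localhost for local testing.
broker_host = os.environ.get('RABBITMQ_SERVICE_SERVICE_HOST', 'localhost')

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host=broker_host, port=5672))
channel = connection.channel()
channel.queue_declare(queue='connectivity-check')
print('Connected to RabbitMQ at %s:5672' % broker_host)
connection.close()
```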
- - -## Step 2: Fire up RabbitMQ - -A RabbitMQ broker can be turned up using the file [`examples/celery-rabbitmq/rabbitmq-controller.yaml`](rabbitmq-controller.yaml): - -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: rabbitmq - name: rabbitmq-controller -spec: - replicas: 1 - selector: - component: rabbitmq - template: - metadata: - labels: - app: taskQueue - component: rabbitmq - spec: - containers: - - image: rabbitmq - name: rabbitmq - ports: - - containerPort: 5672 - protocol: TCP - resources: - limits: - cpu: 100m -``` - -Running `$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml` brings up a replication controller that ensures one pod exists which is running a RabbitMQ instance. - -Note that bringing up the pod includes pulling down a docker image, which may take a few moments. This applies to all other pods in this example. - - -## Step 3: Fire up Celery - -Bringing up the celery worker is done by running `$ kubectl create -f examples/celery-rabbitmq/celery-controller.yaml`, which contains this: - -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: celery - name: celery-controller -spec: - replicas: 1 - selector: - component: celery - template: - metadata: - labels: - app: taskQueue - component: celery - spec: - containers: - - image: endocode/celery-app-add - name: celery - ports: - - containerPort: 5672 - protocol: TCP - resources: - limits: - cpu: 100m -``` - -There are several things to point out here... - -Like the RabbitMQ controller, this controller ensures that there is always a pod is running a Celery worker instance. The celery-app-add Docker image is an extension of the standard Celery image. This is the Dockerfile: - -``` -FROM library/celery - -ADD celery_conf.py /data/celery_conf.py -ADD run_tasks.py /data/run_tasks.py -ADD run.sh /usr/local/bin/run.sh - -ENV C_FORCE_ROOT 1 - -CMD ["/bin/bash", "/usr/local/bin/run.sh"] -``` - -The celery\_conf.py contains the definition of a simple Celery task that adds two numbers. This last line starts the Celery worker. - -**NOTE:** `ENV C_FORCE_ROOT 1` forces Celery to be run as the root user, which is *not* recommended in production! - -The celery\_conf.py file contains the following: - -```python -import os - -from celery import Celery - -# Get Kubernetes-provided address of the broker service -broker_service_host = os.environ.get('RABBITMQ_SERVICE_SERVICE_HOST') - -app = Celery('tasks', broker='amqp://guest@%s//' % broker_service_host, backend='amqp') - -@app.task -def add(x, y): - return x + y -``` - -Assuming you're already familiar with how Celery works, everything here should be familiar, except perhaps the part `os.environ.get('RABBITMQ_SERVICE_SERVICE_HOST')`. This environment variable contains the IP address of the RabbitMQ service we created in step 1. Kubernetes automatically provides this environment variable to all containers which have the same app label as that defined in the RabbitMQ service (in this case "taskQueue"). In the Python code above, this has the effect of automatically filling in the broker address when the pod is started. - -The second python script (run\_tasks.py) periodically executes the `add()` task every 5 seconds with a couple of random numbers. - -The question now is, how do you see what's going on? - - -## Step 4: Put a frontend in place - -Flower is a web-based tool for monitoring and administrating Celery clusters. 
By connecting to the node that contains Celery, you can see the behaviour of all the workers and their tasks in real-time. - -To bring up the frontend, run this command `$ kubectl create -f examples/celery-rabbitmq/flower-controller.yaml`. This controller is defined as so: - -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: flower - name: flower-controller -spec: - replicas: 1 - selector: - component: flower - template: - metadata: - labels: - app: taskQueue - component: flower - spec: - containers: - - image: endocode/flower - name: flower - ports: - - containerPort: 5555 - hostPort: 5555 - protocol: TCP - resources: - limits: - cpu: 100m -``` - -This will bring up a new pod with Flower installed and port 5555 (Flower's default port) exposed. This image uses the following command to start Flower: - -```sh -flower --broker=amqp://guest:guest@${RABBITMQ_SERVICE_SERVICE_HOST:localhost}:5672// -``` - -Again, it uses the Kubernetes-provided environment variable to obtain the address of the RabbitMQ service. - -Once all pods are up and running, running `kubectl get pods` will display something like this: - -``` -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -celery-controller-h3x9k 10.246.1.11 celery endocode/celery-app-add 10.245.1.3/10.245.1.3 app=taskQueue,name=celery Running -flower-controller-cegta 10.246.2.17 flower endocode/flower 10.245.1.4/10.245.1.4 app=taskQueue,name=flower Running -kube-dns-fplln 10.246.1.3 etcd quay.io/coreos/etcd:latest 10.245.1.3/10.245.1.3 k8s-app=kube-dns,kubernetes.io/cluster-service=true Running - kube2sky kubernetes/kube2sky:1.0 - skydns kubernetes/skydns:2014-12-23-001 -rabbitmq-controller-pjzb3 10.246.2.16 rabbitmq library/rabbitmq 10.245.1.4/10.245.1.4 app=taskQueue,name=rabbitmq Running - -``` - -Now you know on which host Flower is running (in this case, 10.245.1.4), you can open your browser and enter the address (e.g. `http://10.245.1.4:5555`. If you click on the tab called "Tasks", you should see an ever-growing list of tasks called "celery_conf.add" which the run\_tasks.py script is dispatching. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/celery-rabbitmq/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/celery-rabbitmq/README.md?pixel)]() diff --git a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/Dockerfile b/release-0.19.0/examples/celery-rabbitmq/celery-app-add/Dockerfile deleted file mode 100644 index f507e0df647..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/Dockerfile +++ /dev/null @@ -1,9 +0,0 @@ -FROM library/celery - -ADD celery_conf.py /data/celery_conf.py -ADD run_tasks.py /data/run_tasks.py -ADD run.sh /usr/local/bin/run.sh - -ENV C_FORCE_ROOT 1 - -CMD ["/bin/bash", "/usr/local/bin/run.sh"] diff --git a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/celery_conf.py b/release-0.19.0/examples/celery-rabbitmq/celery-app-add/celery_conf.py deleted file mode 100644 index 237028cae3b..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/celery_conf.py +++ /dev/null @@ -1,29 +0,0 @@ -#!/usr/bin/env python - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os - -from celery import Celery - -# Get Kubernetes-provided address of the broker service -broker_service_host = os.environ.get('RABBITMQ_SERVICE_SERVICE_HOST') - -app = Celery('tasks', broker='amqp://guest@%s//' % broker_service_host, backend='amqp') - -@app.task -def add(x, y): - return x + y - diff --git a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/run.sh b/release-0.19.0/examples/celery-rabbitmq/celery-app-add/run.sh deleted file mode 100644 index 9f1c5435b3a..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/run.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Run the celery worker -/usr/local/bin/celery -A celery_conf worker -f /data/celery.log & - -# Start firing periodic tasks automatically -python /data/run_tasks.py diff --git a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/run_tasks.py b/release-0.19.0/examples/celery-rabbitmq/celery-app-add/run_tasks.py deleted file mode 100644 index e07afc5010e..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/celery-app-add/run_tasks.py +++ /dev/null @@ -1,29 +0,0 @@ -#!/usr/bin/env python - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
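# Dispatch an add(x, y) task with random operands every five seconds; after
# the short wait, fetch the result from the backend if it is already ready.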
- -import random -import syslog -import time - -from celery_conf import add - -while True: - x = random.randint(1, 10) - y = random.randint(1, 10) - res = add.delay(x, y) - time.sleep(5) - if res.ready(): - res.get() diff --git a/release-0.19.0/examples/celery-rabbitmq/celery-controller.yaml b/release-0.19.0/examples/celery-rabbitmq/celery-controller.yaml deleted file mode 100644 index 4fc7004dbd7..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/celery-controller.yaml +++ /dev/null @@ -1,25 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: celery - name: celery-controller -spec: - replicas: 1 - selector: - component: celery - template: - metadata: - labels: - app: taskQueue - component: celery - spec: - containers: - - image: endocode/celery-app-add - name: celery - ports: - - containerPort: 5672 - protocol: TCP - resources: - limits: - cpu: 100m diff --git a/release-0.19.0/examples/celery-rabbitmq/flower-controller.yaml b/release-0.19.0/examples/celery-rabbitmq/flower-controller.yaml deleted file mode 100644 index 16481b9727e..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/flower-controller.yaml +++ /dev/null @@ -1,26 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: flower - name: flower-controller -spec: - replicas: 1 - selector: - component: flower - template: - metadata: - labels: - app: taskQueue - component: flower - spec: - containers: - - image: endocode/flower - name: flower - ports: - - containerPort: 5555 - hostPort: 5555 - protocol: TCP - resources: - limits: - cpu: 100m diff --git a/release-0.19.0/examples/celery-rabbitmq/flower/Dockerfile b/release-0.19.0/examples/celery-rabbitmq/flower/Dockerfile deleted file mode 100644 index 387ce3b98fd..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/flower/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -FROM ubuntu:trusty - -# update the package repository and install python pip -RUN apt-get -y update && apt-get -y install python-dev python-pip - -# install flower -RUN pip install flower - -# Make sure we expose port 5555 so that we can connect to it -EXPOSE 5555 - -ADD run_flower.sh /usr/local/bin/run_flower.sh - -# Running flower -CMD ["/bin/bash", "/usr/local/bin/run_flower.sh"] diff --git a/release-0.19.0/examples/celery-rabbitmq/flower/run_flower.sh b/release-0.19.0/examples/celery-rabbitmq/flower/run_flower.sh deleted file mode 100644 index ce20727c96c..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/flower/run_flower.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
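# Start Flower against the RabbitMQ broker address that Kubernetes injects
# via RABBITMQ_SERVICE_SERVICE_HOST. (Note: a shell default of localhost
# would normally be spelled ${RABBITMQ_SERVICE_SERVICE_HOST:-localhost}.)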
- -flower --broker=amqp://guest:guest@${RABBITMQ_SERVICE_SERVICE_HOST:localhost}:5672// diff --git a/release-0.19.0/examples/celery-rabbitmq/rabbitmq-controller.yaml b/release-0.19.0/examples/celery-rabbitmq/rabbitmq-controller.yaml deleted file mode 100644 index 4bd7f608541..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/rabbitmq-controller.yaml +++ /dev/null @@ -1,25 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: rabbitmq - name: rabbitmq-controller -spec: - replicas: 1 - selector: - component: rabbitmq - template: - metadata: - labels: - app: taskQueue - component: rabbitmq - spec: - containers: - - image: rabbitmq - name: rabbitmq - ports: - - containerPort: 5672 - protocol: TCP - resources: - limits: - cpu: 100m diff --git a/release-0.19.0/examples/celery-rabbitmq/rabbitmq-service.yaml b/release-0.19.0/examples/celery-rabbitmq/rabbitmq-service.yaml deleted file mode 100644 index 80bbb1a21d4..00000000000 --- a/release-0.19.0/examples/celery-rabbitmq/rabbitmq-service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: rabbitmq - name: rabbitmq-service -spec: - ports: - - port: 5672 - protocol: TCP - targetPort: 5672 - selector: - app: taskQueue - component: rabbitmq diff --git a/release-0.19.0/examples/cluster-dns/README.md b/release-0.19.0/examples/cluster-dns/README.md deleted file mode 100644 index be19e17fea8..00000000000 --- a/release-0.19.0/examples/cluster-dns/README.md +++ /dev/null @@ -1,144 +0,0 @@ -## Kubernetes DNS example - -This is a toy example demonstrating how to use kubernetes DNS. - -### Step Zero: Prerequisites - -This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides). Make sure DNS is enabled in your setup, see [DNS doc](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns). - -```shell -$ cd kubernetes -$ hack/dev-build-and-up.sh -``` - -### Step One: Create two namespaces - -We'll see how cluster DNS works across multiple [namespaces](../../docs/namespaces.md), first we need to create two namespaces: - -```shell -$ kubectl create -f examples/cluster-dns/namespace-dev.yaml -$ kubectl create -f examples/cluster-dns/namespace-prod.yaml -``` - -Now list all namespaces: - -```shell -$ kubectl get namespaces -NAME LABELS STATUS -default Active -development name=development Active -production name=production Active -``` - -For kubectl client to work with each namespace, we define two contexts: - -```shell -$ kubectl config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME} -$ kubectl config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME} -``` - -### Step Two: Create backend replication controller in each namespace - -Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server [replication controller](../../docs/replication-controller.md) in each namespace. 
- -```shell -$ kubectl config use-context dev -$ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml -``` - -Once that's up you can list the pod in the cluster: - -```shell -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -dns-backend dns-backend ddysher/dns-backend name=dns-backend 1 -``` - -Now repeat the above commands to create a replication controller in prod namespace: - -```shell -$ kubectl config use-context prod -$ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -dns-backend dns-backend ddysher/dns-backend name=dns-backend 1 -``` - -### Step Three: Create backend service - -Use the file [`examples/cluster-dns/dns-backend-service.yaml`](dns-backend-service.yaml) to create -a [service](../../docs/services.md) for the backend server. - -```shell -$ kubectl config use-context dev -$ kubectl create -f examples/cluster-dns/dns-backend-service.yaml -``` - -Once that's up you can list the service in the cluster: - -```shell -$ kubectl get service dns-backend -NAME LABELS SELECTOR IP(S) PORT(S) -dns-backend name=dns-backend 10.0.236.129 8000/TCP -``` - -Again, repeat the same process for prod namespace: - -```shell -$ kubectl config use-context prod -$ kubectl create -f examples/cluster-dns/dns-backend-service.yaml -$ kubectl get service dns-backend -NAME LABELS SELECTOR IP(S) PORT(S) -dns-backend name=dns-backend 10.0.35.246 8000/TCP -``` - -### Step Four: Create client pod in one namespace - -Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client [pod](../../docs/pods.md) in dev namespace. The client pod will make a connection to backend and exit. Specifically, it tries to connect to address `http://dns-backend.development.kubernetes.local:8000`. - -```shell -$ kubectl config use-context dev -$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml -``` - -Once that's up you can list the pod in the cluster: - -```shell -$ kubectl get pods dns-frontend -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -dns-frontend 10.244.2.9 kubernetes-minion-sswf/104.154.55.211 name=dns-frontend Running 3 seconds - dns-frontend ddysher/dns-frontend Running 2 seconds -``` - -Wait until the pod succeeds, then we can see the output from the client pod: - -```shell -$ kubectl log dns-frontend -2015-05-07T20:13:54.147664936Z 10.0.236.129 -2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.kubernetes.local:8000 -2015-05-07T20:13:54.147733438Z -2015-05-07T20:13:54.147738295Z Hello World! -``` - -Please refer to the [source code](./images/frontend/client.py) about the logs. First line prints out the ip address associated with the service in dev namespace; remaining lines print out our request and server response. If we switch to prod namespace with the same pod config, we'll see the same result, i.e. dns will resolve across namespace. - -```shell -$ kubectl config use-context prod -$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml -$ kubectl log dns-frontend -2015-05-07T20:13:54.147664936Z 10.0.236.129 -2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.kubernetes.local:8000 -2015-05-07T20:13:54.147733438Z -2015-05-07T20:13:54.147738295Z Hello World! -``` - - -#### Note about default namespace - -If you prefer not using namespace, then all your services can be addressed using `default` namespace, e.g. 
`http://dns-backend.default.kubernetes.local:8000`, or shorthand version `http://dns-backend:8000` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/cluster-dns/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/cluster-dns/README.md?pixel)]() diff --git a/release-0.19.0/examples/cluster-dns/dns-backend-rc.yaml b/release-0.19.0/examples/cluster-dns/dns-backend-rc.yaml deleted file mode 100644 index 34530a5865a..00000000000 --- a/release-0.19.0/examples/cluster-dns/dns-backend-rc.yaml +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: dns-backend - labels: - name: dns-backend -spec: - replicas: 1 - selector: - name: dns-backend - template: - metadata: - labels: - name: dns-backend - spec: - containers: - - name: dns-backend - image: ddysher/dns-backend - ports: - - name: backend-port - containerPort: 8000 - protocol: tcp diff --git a/release-0.19.0/examples/cluster-dns/dns-backend-service.yaml b/release-0.19.0/examples/cluster-dns/dns-backend-service.yaml deleted file mode 100644 index 09077855e18..00000000000 --- a/release-0.19.0/examples/cluster-dns/dns-backend-service.yaml +++ /dev/null @@ -1,9 +0,0 @@ -kind: Service -apiVersion: v1beta3 -metadata: - name: dns-backend -spec: - ports: - - port: 8000 - selector: - name: dns-backend diff --git a/release-0.19.0/examples/cluster-dns/dns-frontend-pod.yaml b/release-0.19.0/examples/cluster-dns/dns-frontend-pod.yaml deleted file mode 100644 index fee1c81a374..00000000000 --- a/release-0.19.0/examples/cluster-dns/dns-frontend-pod.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: dns-frontend - labels: - name: dns-frontend -spec: - containers: - - name: dns-frontend - image: ddysher/dns-frontend - command: - - python - - client.py - - http://dns-backend.development.kubernetes.local:8000 - imagePullPolicy: Always - restartPolicy: Never diff --git a/release-0.19.0/examples/cluster-dns/images/backend/Dockerfile b/release-0.19.0/examples/cluster-dns/images/backend/Dockerfile deleted file mode 100644 index 915a2d19020..00000000000 --- a/release-0.19.0/examples/cluster-dns/images/backend/Dockerfile +++ /dev/null @@ -1,6 +0,0 @@ -FROM python:2.7 - -COPY . /dns-backend -WORKDIR /dns-backend - -CMD ["python", "server.py"] diff --git a/release-0.19.0/examples/cluster-dns/images/backend/server.py b/release-0.19.0/examples/cluster-dns/images/backend/server.py deleted file mode 100644 index fdb8edfac67..00000000000 --- a/release-0.19.0/examples/cluster-dns/images/backend/server.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer - -PORT_NUMBER = 8000 - -# This class will handles any incoming request. 
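# Every GET receives a 200 response with the body "Hello World!", which is
# the reply the dns-frontend client prints in the walkthrough above.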
-class HTTPHandler(BaseHTTPRequestHandler): - # Handler for the GET requests - def do_GET(self): - self.send_response(200) - self.send_header('Content-type','text/html') - self.end_headers() - self.wfile.write("Hello World!") - -try: - # Create a web server and define the handler to manage the incoming request. - server = HTTPServer(('', PORT_NUMBER), HTTPHandler) - print 'Started httpserver on port ' , PORT_NUMBER - server.serve_forever() -except KeyboardInterrupt: - print '^C received, shutting down the web server' - server.socket.close() diff --git a/release-0.19.0/examples/cluster-dns/images/frontend/Dockerfile b/release-0.19.0/examples/cluster-dns/images/frontend/Dockerfile deleted file mode 100644 index 6046b7e1afb..00000000000 --- a/release-0.19.0/examples/cluster-dns/images/frontend/Dockerfile +++ /dev/null @@ -1,8 +0,0 @@ -FROM python:2.7 - -RUN pip install requests - -COPY . /dns-frontend -WORKDIR /dns-frontend - -CMD ["python", "client.py"] diff --git a/release-0.19.0/examples/cluster-dns/images/frontend/client.py b/release-0.19.0/examples/cluster-dns/images/frontend/client.py deleted file mode 100644 index cbb27644936..00000000000 --- a/release-0.19.0/examples/cluster-dns/images/frontend/client.py +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env python - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import argparse -import requests -import socket - -from urlparse import urlparse - - -def CheckServiceAddress(address): - hostname = urlparse(address).hostname - service_address = socket.gethostbyname(hostname) - print service_address - - -def GetServerResponse(address): - print 'Send request to:', address - response = requests.get(address) - print response - print response.content - - -def Main(): - parser = argparse.ArgumentParser() - parser.add_argument('address') - args = parser.parse_args() - CheckServiceAddress(args.address) - GetServerResponse(args.address) - - -if __name__ == "__main__": - Main() diff --git a/release-0.19.0/examples/cluster-dns/namespace-dev.yaml b/release-0.19.0/examples/cluster-dns/namespace-dev.yaml deleted file mode 100644 index 492eddb9f4a..00000000000 --- a/release-0.19.0/examples/cluster-dns/namespace-dev.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: v1beta3 -kind: Namespace -metadata: - name: "development" - labels: - name: "development" diff --git a/release-0.19.0/examples/cluster-dns/namespace-prod.yaml b/release-0.19.0/examples/cluster-dns/namespace-prod.yaml deleted file mode 100644 index 7cd820ca9ad..00000000000 --- a/release-0.19.0/examples/cluster-dns/namespace-prod.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: v1beta3 -kind: Namespace -metadata: - name: "production" - labels: - name: "production" diff --git a/release-0.19.0/examples/doc.go b/release-0.19.0/examples/doc.go deleted file mode 100644 index d976f88e65a..00000000000 --- a/release-0.19.0/examples/doc.go +++ /dev/null @@ -1,18 +0,0 @@ -/* -Copyright 2014 The Kubernetes Authors All rights reserved. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Examples contains sample applications for trying out the concepts in Kubernetes. -package examples diff --git a/release-0.19.0/examples/downward-api/README.md b/release-0.19.0/examples/downward-api/README.md deleted file mode 100644 index 84956f4ef7c..00000000000 --- a/release-0.19.0/examples/downward-api/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Downward API example - -Following this example, you will create a pod with a containers that consumes the pod's name and -namespace using the downward API. - -## Step Zero: Prerequisites - -This example assumes you have a Kubernetes cluster installed and running, and that you have -installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting -started](../../docs/getting-started-guides) for installation instructions for your platform. - -## Step One: Create the pod - -Containers consume the downward API using environment variables. The downward API allows -containers to be injected with the name and namespace of the pod the container is in. - -Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the -downward API. - -```shell -$ kubectl create -f examples/downward-api/dapi-pod.yaml -``` - -### Examine the logs - -This pod runs the `env` command in a container that consumes the downward API. 
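In your own images the same values arrive as ordinary environment variables, so no client library is needed to read them. A minimal sketch (the variable names follow the `dapi-pod.yaml` manifest used in this example):

```python
import os

# POD_NAME and POD_NAMESPACE are populated by the downward API entries in
# the pod manifest; they are plain environment variables inside the container.
print('POD_NAME=%s' % os.environ.get('POD_NAME'))
print('POD_NAMESPACE=%s' % os.environ.get('POD_NAMESPACE'))
```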
You can grep -through the pod logs to see that the pod was injected with the correct values: - -```shell -$ kubectl log dapi-test-pod | grep POD_ -2015-04-30T20:22:18.568024817Z POD_NAME=dapi-test-pod -2015-04-30T20:22:18.568087688Z POD_NAMESPACE=default -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/downward-api/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/downward-api/README.md?pixel)]() diff --git a/release-0.19.0/examples/downward-api/dapi-pod.yaml b/release-0.19.0/examples/downward-api/dapi-pod.yaml deleted file mode 100644 index 09e8bbe8c17..00000000000 --- a/release-0.19.0/examples/downward-api/dapi-pod.yaml +++ /dev/null @@ -1,19 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: dapi-test-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - restartPolicy: Never diff --git a/release-0.19.0/examples/elasticsearch/Dockerfile b/release-0.19.0/examples/elasticsearch/Dockerfile deleted file mode 100644 index fd47488abcc..00000000000 --- a/release-0.19.0/examples/elasticsearch/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM java:7-jre - -RUN apt-get update && \ - apt-get install -y curl && \ - apt-get clean - -RUN cd / && \ - curl -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz && \ - tar xf elasticsearch-1.5.2.tar.gz && \ - rm elasticsearch-1.5.2.tar.gz - -COPY elasticsearch.yml /elasticsearch-1.5.2/config/elasticsearch.yml -COPY run.sh / -COPY elasticsearch_discovery / - -EXPOSE 9200 9300 - -CMD ["/run.sh"] \ No newline at end of file diff --git a/release-0.19.0/examples/elasticsearch/Makefile b/release-0.19.0/examples/elasticsearch/Makefile deleted file mode 100644 index ae1794e6b70..00000000000 --- a/release-0.19.0/examples/elasticsearch/Makefile +++ /dev/null @@ -1,14 +0,0 @@ -.PHONY: elasticsearch_discovery build push all - -TAG = 1.0 - -build: - docker build -t kubernetes/elasticsearch:$(TAG) . - -push: - docker push kubernetes/elasticsearch:$(TAG) - -elasticsearch_discovery: - go build elasticsearch_discovery.go - -all: elasticsearch_discovery build push diff --git a/release-0.19.0/examples/elasticsearch/README.md b/release-0.19.0/examples/elasticsearch/README.md deleted file mode 100644 index 5743be293e7..00000000000 --- a/release-0.19.0/examples/elasticsearch/README.md +++ /dev/null @@ -1,324 +0,0 @@ -# Elasticsearch for Kubernetes - -This directory contains the source for a Docker image that creates an instance -of [Elasticsearch](https://www.elastic.co/products/elasticsearch) 1.5.2 which can -be used to automatically form clusters when used -with [replication controllers](../../docs/replication-controller.md). This will not work with the library Elasticsearch image -because multicast discovery will not find the other pod IPs needed to form a cluster. This -image detects other Elasticsearch [pods](../../docs/pods.md) running in a specified [namespace](../../docs/namespaces.md) with a given -label selector. The detected instances are used to form a list of peer hosts which -are used as part of the unicast discovery mechansim for Elasticsearch. 
The detection -of the peer nodes is done by a program which communicates with the Kubernetes API -server to get a list of matching Elasticsearch pods. To enable authenticated -communication this image needs a [secret](../../docs/secrets.md) to be mounted at `/etc/apiserver-secret` -with the basic authentication username and password. - -Here is an example replication controller specification that creates 4 instances of Elasticsearch which is in the file -[music-rc.yaml](music-rc.yaml). -``` -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: music-db - namespace: mytunes - name: music-db -spec: - replicas: 4 - selector: - name: music-db - template: - metadata: - labels: - name: music-db - spec: - containers: - - name: es - image: kubernetes/elasticsearch:1.0 - env: - - name: "CLUSTER_NAME" - value: "mytunes-db" - - name: "SELECTOR" - value: "name=music-db" - - name: "NAMESPACE" - value: "mytunes" - ports: - - name: es - containerPort: 9200 - - name: es-transport - containerPort: 9300 - volumeMounts: - - name: apiserver-secret - mountPath: /etc/apiserver-secret - readOnly: true - volumes: - - name: apiserver-secret - secret: - secretName: apiserver-secret -``` -The `CLUSTER_NAME` variable gives a name to the cluster and allows multiple separate clusters to -exist in the same namespace. -The `SELECTOR` variable should be set to a label query that identifies the Elasticsearch -nodes that should participate in this cluster. For our example we specify `name=music-db` to -match all pods that have the label `name` set to the value `music-db`. -The `NAMESPACE` variable identifies the namespace -to be used to search for Elasticsearch pods and this should be the same as the namespace specified -for the replication controller (in this case `mytunes`). - -Before creating pods with the replication controller a secret containing the bearer authentication token -should be set up. A template is provided in the file [apiserver-secret.yaml](apiserver-secret.yaml): -``` -apiVersion: v1beta3 -kind: Secret -metadata: - name: apiserver-secret - namespace: NAMESPACE -data: - token: "TOKEN" - -``` -Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the basic64 encoded -versions of the bearer token reported by `kubectl config view` e.g. -``` -$ kubectl config view -... -- name: kubernetes-logging_kubernetes-basic-auth -... - token: yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 - ... -$ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64 -eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK= - -``` -resulting in the file: -``` -apiVersion: v1beta3 -kind: Secret -metadata: - name: apiserver-secret - namespace: mytunes -data: - token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=" - -``` -which can be used to create the secret in your namespace: -``` -kubectl create -f apiserver-secret.yaml --namespace=mytunes -secrets/apiserver-secret - -``` -Now you are ready to create the replication controller which will then create the pods: -``` -$ kubectl create -f music-rc.yaml --namespace=mytunes -replicationcontrollers/music-db - -``` -It's also useful to have a [service](../../docs/services.md) with an external load balancer for accessing the Elasticsearch -cluster which can be found in the file [music-service.yaml](music-service.yaml). 
-``` -apiVersion: v1beta3 -kind: Service -metadata: - name: music-server - namespace: mytunes - labels: - name: music-db -spec: - selector: - name: music-db - ports: - - name: db - port: 9200 - targetPort: es - createExternalLoadBalancer: true -``` -Let's create the service with an external load balancer: -``` -$ kubectl create -f music-service.yaml --namespace=mytunes -services/music-server - -``` -Let's see what we've got: -``` -$ kubectl get pods,rc,services,secrets --namespace=mytunes - -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -music-db-0fwsu 10.244.2.48 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 6 minutes - es kubernetes/elasticsearch:1.0 Running 29 seconds -music-db-5pc2e 10.244.0.24 kubernetes-minion-3c8c/146.148.41.184 name=music-db Running 6 minutes - es kubernetes/elasticsearch:1.0 Running 6 minutes -music-db-bjqmv 10.244.3.31 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 6 minutes - es kubernetes/elasticsearch:1.0 Running 19 seconds -music-db-swtrs 10.244.1.37 kubernetes-minion-f9dw/130.211.159.230 name=music-db Running 6 minutes - es kubernetes/elasticsearch:1.0 Running 6 minutes -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -music-db es kubernetes/elasticsearch:1.0 name=music-db 4 -NAME LABELS SELECTOR IP(S) PORT(S) -music-server name=music-db name=music-db 10.0.138.61 9200/TCP - 104.197.12.157 -NAME TYPE DATA -apiserver-secret Opaque 2 -``` -This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for GCE) we can make queries via the service which will be fielded by the matching Elasticsearch pods. -``` -$ curl 104.197.12.157:9200 -{ - "status" : 200, - "name" : "Warpath", - "cluster_name" : "mytunes-db", - "version" : { - "number" : "1.5.2", - "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c", - "build_timestamp" : "2015-04-27T09:21:06Z", - "build_snapshot" : false, - "lucene_version" : "4.10.4" - }, - "tagline" : "You Know, for Search" -} -$ curl 104.197.12.157:9200 -{ - "status" : 200, - "name" : "Callisto", - "cluster_name" : "mytunes-db", - "version" : { - "number" : "1.5.2", - "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c", - "build_timestamp" : "2015-04-27T09:21:06Z", - "build_snapshot" : false, - "lucene_version" : "4.10.4" - }, - "tagline" : "You Know, for Search" -} -``` -We can query the nodes to confirm that an Elasticsearch cluster has been formed. -``` -$ curl 104.197.12.157:9200/_nodes?pretty=true -{ - "cluster_name" : "mytunes-db", - "nodes" : { - "u-KrvywFQmyaH5BulSclsA" : { - "name" : "Jonas Harrow", -... - "discovery" : { - "zen" : { - "ping" : { - "unicast" : { - "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ] - }, -... - "name" : "Warpath", -... - "discovery" : { - "zen" : { - "ping" : { - "unicast" : { - "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ] - }, -... - "name" : "Callisto", -... - "discovery" : { - "zen" : { - "ping" : { - "unicast" : { - "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ] - }, -... - "name" : "Vapor", -... - "discovery" : { - "zen" : { - "ping" : { - "unicast" : { - "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ] -... 
-``` -Let's ramp up the number of Elasticsearch nodes from 4 to 10: -``` -$ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytunes -scaled -$ kubectl get pods --namespace=mytunes -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -music-db-0fwsu 10.244.2.48 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 33 minutes - es kubernetes/elasticsearch:1.0 Running 26 minutes -music-db-2erje 10.244.2.50 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 48 seconds - es kubernetes/elasticsearch:1.0 Running 46 seconds -music-db-5pc2e 10.244.0.24 kubernetes-minion-3c8c/146.148.41.184 name=music-db Running 33 minutes - es kubernetes/elasticsearch:1.0 Running 32 minutes -music-db-8rkvp 10.244.3.33 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 48 seconds - es kubernetes/elasticsearch:1.0 Running 46 seconds -music-db-bjqmv 10.244.3.31 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 33 minutes - es kubernetes/elasticsearch:1.0 Running 26 minutes -music-db-efc46 10.244.2.49 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 48 seconds - es kubernetes/elasticsearch:1.0 Running 46 seconds -music-db-fhqyg 10.244.0.25 kubernetes-minion-3c8c/146.148.41.184 name=music-db Running 48 seconds - es kubernetes/elasticsearch:1.0 Running 47 seconds -music-db-guxe4 10.244.3.32 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 48 seconds - es kubernetes/elasticsearch:1.0 Running 46 seconds -music-db-pbiq1 10.244.1.38 kubernetes-minion-f9dw/130.211.159.230 name=music-db Running 48 seconds - es kubernetes/elasticsearch:1.0 Running 47 seconds -music-db-swtrs 10.244.1.37 kubernetes-minion-f9dw/130.211.159.230 name=music-db Running 33 minutes - es kubernetes/elasticsearch:1.0 Running 32 minutes - -``` -Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster: -``` -$ curl 104.197.12.157:9200/_nodes?pretty=true | grep name -"cluster_name" : "mytunes-db", - "name" : "Killraven", - "name" : "Killraven", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Tefral the Surveyor", - "name" : "Tefral the Surveyor", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Jonas Harrow", - "name" : "Jonas Harrow", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Warpath", - "name" : "Warpath", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Brute I", - "name" : "Brute I", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Callisto", - "name" : "Callisto", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Vapor", - "name" : "Vapor", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Timeslip", - "name" : "Timeslip", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Magik", - "name" : "Magik", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - "name" : "Brother Voodoo", - "name" : "Brother Voodoo", - "name" : "mytunes-db" - "vm_name" : "OpenJDK 64-Bit Server VM", - "name" : "eth0", - -``` - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/elasticsearch/README.md?pixel)]() - - 
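For reference, the peer-detection step described at the start of this example boils down to an authenticated, label-filtered pod list against the apiserver. The sketch below only illustrates that idea and is not the discovery program shipped in the image: the query-parameter and field names follow later Kubernetes API conventions, and the token path assumes the `apiserver-secret` is mounted at `/etc/apiserver-secret` as shown above.

```python
import os

import requests

def discover_peer_ips():
    # Same knobs as the music-rc.yaml manifest above.
    namespace = os.environ.get('NAMESPACE', 'mytunes')
    selector = os.environ.get('SELECTOR', 'name=music-db')
    apiserver = 'https://kubernetes.default.cluster.local'

    # Token from the mounted secret; the exact file name is an assumption.
    with open('/etc/apiserver-secret/token') as f:
        token = f.read().strip()

    resp = requests.get(
        '%s/api/v1beta3/namespaces/%s/pods' % (apiserver, namespace),
        headers={'Authorization': 'Bearer ' + token},
        params={'labelSelector': selector},  # parameter name is an assumption
        verify=False)  # mirrors the example's skipping of CA verification
    resp.raise_for_status()

    # Collect the IPs of the matching pods; these become unicast discovery hosts.
    return [pod['status']['podIP']
            for pod in resp.json().get('items', [])
            if pod.get('status', {}).get('podIP')]

if __name__ == '__main__':
    print(discover_peer_ips())
```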
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/elasticsearch/README.md?pixel)]() diff --git a/release-0.19.0/examples/elasticsearch/apiserver-secret.yaml b/release-0.19.0/examples/elasticsearch/apiserver-secret.yaml deleted file mode 100644 index 1d0c8522005..00000000000 --- a/release-0.19.0/examples/elasticsearch/apiserver-secret.yaml +++ /dev/null @@ -1,8 +0,0 @@ -apiVersion: v1beta3 -kind: Secret -metadata: - name: apiserver-secret - namespace: NAMESPACE -data: - token: "TOKEN" - diff --git a/release-0.19.0/examples/elasticsearch/elasticsearch.yml b/release-0.19.0/examples/elasticsearch/elasticsearch.yml deleted file mode 100644 index ff0237a2eb2..00000000000 --- a/release-0.19.0/examples/elasticsearch/elasticsearch.yml +++ /dev/null @@ -1,385 +0,0 @@ -##################### Elasticsearch Configuration Example ##################### - -# This file contains an overview of various configuration settings, -# targeted at operations staff. Application developers should -# consult the guide at . -# -# The installation procedure is covered at -# . -# -# Elasticsearch comes with reasonable defaults for most settings, -# so you can try it out without bothering with configuration. -# -# Most of the time, these defaults are just fine for running a production -# cluster. If you're fine-tuning your cluster, or wondering about the -# effect of certain configuration option, please _do ask_ on the -# mailing list or IRC channel [http://elasticsearch.org/community]. - -# Any element in the configuration can be replaced with environment variables -# by placing them in ${...} notation. For example: -# -#node.rack: ${RACK_ENV_VAR} - -# For information on supported formats and syntax for the config file, see -# - - -################################### Cluster ################################### - -# Cluster name identifies your cluster for auto-discovery. If you're running -# multiple clusters on the same network, make sure you're using unique names. -# -cluster.name: ${CLUSTER_NAME} - - -#################################### Node ##################################### - -# Node names are generated dynamically on startup, so you're relieved -# from configuring them manually. You can tie this node to a specific name: -# -#node.name: "Franz Kafka" - -# Every node can be configured to allow or deny being eligible as the master, -# and to allow or deny to store the data. -# -# Allow this node to be eligible as a master node (enabled by default): -# -node.master: ${NODE_MASTER} -# -# Allow this node to store data (enabled by default): -# -node.data: ${NODE_DATA} - -# You can exploit these settings to design advanced cluster topologies. -# -# 1. You want this node to never become a master node, only to hold data. -# This will be the "workhorse" of your cluster. -# -#node.master: false -#node.data: true -# -# 2. You want this node to only serve as a master: to not store any data and -# to have free resources. This will be the "coordinator" of your cluster. -# -#node.master: true -#node.data: false -# -# 3. You want this node to be neither master nor data node, but -# to act as a "search load balancer" (fetching data from nodes, -# aggregating results, etc.) -# -#node.master: false -#node.data: false - -# Use the Cluster Health API [http://localhost:9200/_cluster/health], the -# Node Info API [http://localhost:9200/_nodes] or GUI tools -# such as , -# , -# and -# to inspect the cluster state. 
- -# A node can have generic attributes associated with it, which can later be used -# for customized shard allocation filtering, or allocation awareness. An attribute -# is a simple key value pair, similar to node.key: value, here is an example: -# -#node.rack: rack314 - -# By default, multiple nodes are allowed to start from the same installation location -# to disable it, set the following: -#node.max_local_storage_nodes: 1 - - -#################################### Index #################################### - -# You can set a number of options (such as shard/replica options, mapping -# or analyzer definitions, translog settings, ...) for indices globally, -# in this file. -# -# Note, that it makes more sense to configure index settings specifically for -# a certain index, either when creating it or by using the index templates API. -# -# See and -# -# for more information. - -# Set the number of shards (splits) of an index (5 by default): -# -#index.number_of_shards: 5 - -# Set the number of replicas (additional copies) of an index (1 by default): -# -#index.number_of_replicas: 1 - -# Note, that for development on a local machine, with small indices, it usually -# makes sense to "disable" the distributed features: -# -#index.number_of_shards: 1 -#index.number_of_replicas: 0 - -# These settings directly affect the performance of index and search operations -# in your cluster. Assuming you have enough machines to hold shards and -# replicas, the rule of thumb is: -# -# 1. Having more *shards* enhances the _indexing_ performance and allows to -# _distribute_ a big index across machines. -# 2. Having more *replicas* enhances the _search_ performance and improves the -# cluster _availability_. -# -# The "number_of_shards" is a one-time setting for an index. -# -# The "number_of_replicas" can be increased or decreased anytime, -# by using the Index Update Settings API. -# -# Elasticsearch takes care about load balancing, relocating, gathering the -# results from nodes, etc. Experiment with different settings to fine-tune -# your setup. - -# Use the Index Status API () to inspect -# the index status. - - -#################################### Paths #################################### - -# Path to directory containing configuration (this file and logging.yml): -# -#path.conf: /path/to/conf - -# Path to directory where to store index data allocated for this node. -# -#path.data: /path/to/data -# -# Can optionally include more than one location, causing data to be striped across -# the locations (a la RAID 0) on a file level, favouring locations with most free -# space on creation. For example: -# -#path.data: /path/to/data1,/path/to/data2 - -# Path to temporary files: -# -#path.work: /path/to/work - -# Path to log files: -# -#path.logs: /path/to/logs - -# Path to where plugins are installed: -# -#path.plugins: /path/to/plugins - - -#################################### Plugin ################################### - -# If a plugin listed here is not installed for current node, the node will not start. -# -#plugin.mandatory: mapper-attachments,lang-groovy - - -################################### Memory #################################### - -# Elasticsearch performs poorly when JVM starts swapping: you should ensure that -# it _never_ swaps. 
-# -# Set this property to true to lock the memory: -# -#bootstrap.mlockall: true - -# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set -# to the same value, and that the machine has enough memory to allocate -# for Elasticsearch, leaving enough memory for the operating system itself. -# -# You should also make sure that the Elasticsearch process is allowed to lock -# the memory, eg. by using `ulimit -l unlimited`. - - -############################## Network And HTTP ############################### - -# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens -# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node -# communication. (the range means that if the port is busy, it will automatically -# try the next port). - -# Set the bind address specifically (IPv4 or IPv6): -# -#network.bind_host: 192.168.0.1 - -# Set the address other nodes will use to communicate with this node. If not -# set, it is automatically derived. It must point to an actual IP address. -# -#network.publish_host: 192.168.0.1 - -# Set both 'bind_host' and 'publish_host': -# -#network.host: 192.168.0.1 - -# Set a custom port for the node to node communication (9300 by default): -# -transport.tcp.port: ${TRANSPORT_PORT} - -# Enable compression for all communication between nodes (disabled by default): -# -#transport.tcp.compress: true - -# Set a custom port to listen for HTTP traffic: -# -http.port: ${HTTP_PORT} - -# Set a custom allowed content length: -# -#http.max_content_length: 100mb - -# Disable HTTP completely: -# -#http.enabled: false - - -################################### Gateway ################################### - -# The gateway allows for persisting the cluster state between full cluster -# restarts. Every change to the state (such as adding an index) will be stored -# in the gateway, and when the cluster starts up for the first time, -# it will read its state from the gateway. - -# There are several types of gateway implementations. For more information, see -# . - -# The default gateway type is the "local" gateway (recommended): -# -#gateway.type: local - -# Settings below control how and when to start the initial recovery process on -# a full cluster restart (to reuse as much local data as possible when using shared -# gateway). - -# Allow recovery process after N nodes in a cluster are up: -# -#gateway.recover_after_nodes: 1 - -# Set the timeout to initiate the recovery process, once the N nodes -# from previous setting are up (accepts time value): -# -#gateway.recover_after_time: 5m - -# Set how many nodes are expected in this cluster. Once these N nodes -# are up (and recover_after_nodes is met), begin recovery process immediately -# (without waiting for recover_after_time to expire): -# -#gateway.expected_nodes: 2 - - -############################# Recovery Throttling ############################# - -# These settings allow to control the process of shards allocation between -# nodes during initial recovery, replica allocation, rebalancing, -# or when adding and removing nodes. - -# Set the number of concurrent recoveries happening on a node: -# -# 1. During the initial recovery -# -#cluster.routing.allocation.node_initial_primaries_recoveries: 4 -# -# 2. During adding/removing nodes, rebalancing, etc -# -#cluster.routing.allocation.node_concurrent_recoveries: 2 - -# Set to throttle throughput when recovering (eg. 
100mb, by default 20mb): -# -#indices.recovery.max_bytes_per_sec: 20mb - -# Set to limit the number of open concurrent streams when -# recovering a shard from a peer: -# -#indices.recovery.concurrent_streams: 5 - - -################################## Discovery ################################## - -# Discovery infrastructure ensures nodes can be found within a cluster -# and master node is elected. Multicast discovery is the default. - -# Set to ensure a node sees N other master eligible nodes to be considered -# operational within the cluster. This should be set to a quorum/majority of -# the master-eligible nodes in the cluster. -# -#discovery.zen.minimum_master_nodes: 1 - -# Set the time to wait for ping responses from other nodes when discovering. -# Set this option to a higher value on a slow or congested network -# to minimize discovery failures: -# -#discovery.zen.ping.timeout: 3s - -# For more information, see -# - -# Unicast discovery allows to explicitly control which nodes will be used -# to discover the cluster. It can be used when multicast is not present, -# or to restrict the cluster communication-wise. -# -# 1. Disable multicast discovery (enabled by default): -# -discovery.zen.ping.multicast.enabled: ${MULTICAST} -# -# 2. Configure an initial list of master nodes in the cluster -# to perform discovery when new nodes (master or data) are started: -# -#discovery.zen.ping.unicast.hosts: ${UNICAST_HOSTS} - -# EC2 discovery allows to use AWS EC2 API in order to perform discovery. -# -# You have to install the cloud-aws plugin for enabling the EC2 discovery. -# -# For more information, see -# -# -# See -# for a step-by-step tutorial. - -# GCE discovery allows to use Google Compute Engine API in order to perform discovery. -# -# You have to install the cloud-gce plugin for enabling the GCE discovery. -# -# For more information, see . - -# Azure discovery allows to use Azure API in order to perform discovery. -# -# You have to install the cloud-azure plugin for enabling the Azure discovery. -# -# For more information, see . - -################################## Slow Log ################################## - -# Shard level query and fetch threshold logging. - -#index.search.slowlog.threshold.query.warn: 10s -#index.search.slowlog.threshold.query.info: 5s -#index.search.slowlog.threshold.query.debug: 2s -#index.search.slowlog.threshold.query.trace: 500ms - -#index.search.slowlog.threshold.fetch.warn: 1s -#index.search.slowlog.threshold.fetch.info: 800ms -#index.search.slowlog.threshold.fetch.debug: 500ms -#index.search.slowlog.threshold.fetch.trace: 200ms - -#index.indexing.slowlog.threshold.index.warn: 10s -#index.indexing.slowlog.threshold.index.info: 5s -#index.indexing.slowlog.threshold.index.debug: 2s -#index.indexing.slowlog.threshold.index.trace: 500ms - -################################## GC Logging ################################ - -#monitor.jvm.gc.young.warn: 1000ms -#monitor.jvm.gc.young.info: 700ms -#monitor.jvm.gc.young.debug: 400ms - -#monitor.jvm.gc.old.warn: 10s -#monitor.jvm.gc.old.info: 5s -#monitor.jvm.gc.old.debug: 2s - -################################## Security ################################ - -# Uncomment if you want to enable JSONP as a valid return transport on the -# http server. With this enabled, it may pose a security risk, so disabling -# it unless you need it is recommended (it is disabled by default). 
-# -#http.jsonp.enable: true diff --git a/release-0.19.0/examples/elasticsearch/elasticsearch_discovery.go b/release-0.19.0/examples/elasticsearch/elasticsearch_discovery.go deleted file mode 100644 index 100ba01260c..00000000000 --- a/release-0.19.0/examples/elasticsearch/elasticsearch_discovery.go +++ /dev/null @@ -1,97 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package main - -import ( - "flag" - "fmt" - "os" - "strings" - "time" - - "github.com/GoogleCloudPlatform/kubernetes/pkg/api" - "github.com/GoogleCloudPlatform/kubernetes/pkg/client" - "github.com/GoogleCloudPlatform/kubernetes/pkg/fields" - "github.com/GoogleCloudPlatform/kubernetes/pkg/labels" - "github.com/golang/glog" -) - -var ( - token = flag.String("token", "", "Bearer token for authentication to the API server.") - server = flag.String("server", "", "The address and port of the Kubernetes API server") - namespace = flag.String("namespace", api.NamespaceDefault, "The namespace containing Elasticsearch pods") - selector = flag.String("selector", "", "Selector (label query) for selecting Elasticsearch pods") -) - -func main() { - flag.Parse() - glog.Info("Elasticsearch discovery") - apiServer := *server - if apiServer == "" { - kubernetesService := os.Getenv("KUBERNETES_SERVICE_HOST") - if kubernetesService == "" { - glog.Fatalf("Please specify the Kubernetes server with --server") - } - apiServer = fmt.Sprintf("https://%s:%s", kubernetesService, os.Getenv("KUBERNETES_SERVICE_PORT")) - } - - glog.Infof("Server: %s", apiServer) - glog.Infof("Namespace: %q", *namespace) - glog.Infof("selector: %q", *selector) - - config := client.Config{ - Host: apiServer, - BearerToken: *token, - Insecure: true, - } - - c, err := client.New(&config) - if err != nil { - glog.Fatalf("Failed to make client: %v", err) - } - - l, err := labels.Parse(*selector) - if err != nil { - glog.Fatalf("Failed to parse selector %q: %v", *selector, err) - } - pods, err := c.Pods(*namespace).List(l, fields.Everything()) - if err != nil { - glog.Fatalf("Failed to list pods: %v", err) - } - - glog.Infof("Elasticsearch pods in namespace %s with selector %q", *namespace, *selector) - podIPs := []string{} - for i := range pods.Items { - p := &pods.Items[i] - for attempt := 0; attempt < 10; attempt++ { - glog.Infof("%d: %s PodIP: %s", i, p.Name, p.Status.PodIP) - if p.Status.PodIP != "" { - podIPs = append(podIPs, fmt.Sprintf(`"%s"`, p.Status.PodIP)) - break - } - time.Sleep(1 * time.Second) - p, err = c.Pods(*namespace).Get(p.Name) - if err != nil { - glog.Warningf("Failed to get pod %s: %v", p.Name, err) - } - } - if p.Status.PodIP == "" { - glog.Warningf("Failed to obtain PodIP for %s", p.Name) - } - } - fmt.Printf("discovery.zen.ping.unicast.hosts: [%s]\n", strings.Join(podIPs, ", ")) -} diff --git a/release-0.19.0/examples/elasticsearch/music-rc.yaml b/release-0.19.0/examples/elasticsearch/music-rc.yaml deleted file mode 100644 index eec1e9accce..00000000000 --- 
a/release-0.19.0/examples/elasticsearch/music-rc.yaml +++ /dev/null @@ -1,39 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: music-db - namespace: mytunes - name: music-db -spec: - replicas: 4 - selector: - name: music-db - template: - metadata: - labels: - name: music-db - spec: - containers: - - name: es - image: kubernetes/elasticsearch:1.0 - env: - - name: "CLUSTER_NAME" - value: "mytunes-db" - - name: "SELECTOR" - value: "name=music-db" - - name: "NAMESPACE" - value: "mytunes" - ports: - - name: es - containerPort: 9200 - - name: es-transport - containerPort: 9300 - volumeMounts: - - name: apiserver-secret - mountPath: /etc/apiserver-secret - readOnly: true - volumes: - - name: apiserver-secret - secret: - secretName: apiserver-secret diff --git a/release-0.19.0/examples/elasticsearch/music-service.yaml b/release-0.19.0/examples/elasticsearch/music-service.yaml deleted file mode 100644 index 3dc45fae440..00000000000 --- a/release-0.19.0/examples/elasticsearch/music-service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - name: music-server - namespace: mytunes - labels: - name: music-db -spec: - selector: - name: music-db - ports: - - name: db - port: 9200 - targetPort: es - createExternalLoadBalancer: true diff --git a/release-0.19.0/examples/elasticsearch/run.sh b/release-0.19.0/examples/elasticsearch/run.sh deleted file mode 100755 index 2b0447e3bcc..00000000000 --- a/release-0.19.0/examples/elasticsearch/run.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -export CLUSTER_NAME=${CLUSTER_NAME:-elasticsearch-default} -export NODE_MASTER=${NODE_MASTER:-true} -export NODE_DATA=${NODE_DATA:-true} -export MULTICAST=${MULTICAST:-false} -readonly TOKEN=$(cat /etc/apiserver-secret/token) -/elasticsearch_discovery --namespace="${NAMESPACE}" --token="${TOKEN}" --selector="${SELECTOR}" >> /elasticsearch-1.5.2/config/elasticsearch.yml -export HTTP_PORT=${HTTP_PORT:-9200} -export TRANSPORT_PORT=${TRANSPORT_PORT:-9300} -/elasticsearch-1.5.2/bin/elasticsearch diff --git a/release-0.19.0/examples/environment-guide/README.md b/release-0.19.0/examples/environment-guide/README.md deleted file mode 100644 index 6d985709119..00000000000 --- a/release-0.19.0/examples/environment-guide/README.md +++ /dev/null @@ -1,95 +0,0 @@ -Environment Guide Example -========================= -This example demonstrates running pods, replication controllers, and -services. It shows two types of pods: frontend and backend, with -services on top of both. Accessing the frontend pod will return -environment information about itself, and a backend pod that it has -accessed through the service. The goal is to illuminate the -environment metadata available to running containers inside the -Kubernetes cluster. The documentation for the kubernetes environment -is [here](/docs/container-environment.md). 
- -![Diagram](diagram.png) - -Prerequisites -------------- -This example assumes that you have a Kubernetes cluster installed and -running, and that you have installed the `kubectl` command line tool -somewhere in your path. Please see the [getting -started](/docs/getting-started-guides) for installation instructions -for your platform. - -Optional: Build your own containers ------------------------------------ -The code for the containers is under -[containers/](containers) - -Get everything running ----------------------- - - kubectl create -f ./backend-rc.yaml - kubectl create -f ./backend-srv.yaml - kubectl create -f ./show-rc.yaml - kubectl create -f ./show-srv.yaml - -Query the service ------------------ -Use `kubectl describe service show-srv` to determine the public IP of -your service. - -> Note: If your platform does not support external load balancers, - you'll need to open the proper port and direct traffic to the - internal IP shown for the frontend service with the above command - -Run `curl :80` to query the service. You should get -something like this back: - -``` -Pod Name: show-rc-xxu6i -Pod Namespace: default -USER_VAR: important information - -Kubenertes environment variables -BACKEND_SRV_SERVICE_HOST = 10.147.252.185 -BACKEND_SRV_SERVICE_PORT = 5000 -KUBERNETES_RO_SERVICE_HOST = 10.147.240.1 -KUBERNETES_RO_SERVICE_PORT = 80 -KUBERNETES_SERVICE_HOST = 10.147.240.2 -KUBERNETES_SERVICE_PORT = 443 -KUBE_DNS_SERVICE_HOST = 10.147.240.10 -KUBE_DNS_SERVICE_PORT = 53 - -Found backend ip: 10.147.252.185 port: 5000 -Response from backend -Backend Container -Backend Pod Name: backend-rc-6qiya -Backend Namespace: default -``` - -First the frontend pod's information is printed. The pod name and -[namespace](/docs/design/namespaces.md) are retreived from the -[Downward API](/docs/downward_api.md). Next, `USER_VAR` is the name of -an environment variable set in the [pod -definition](show-rc.yaml). Then, the dynamic kubernetes environment -variables are scanned and printed. These are used to find the backend -service, named `backend-srv`. Finally, the frontend pod queries the -backend service and prints the information returned. Again the backend -pod returns its own pod name and namespace. - -Try running the `curl` command a few times, and notice what -changes. Ex: `watch -n 1 curl -s ` Firstly, the frontend service -is directing your request to different frontend pods each time. The -frontend pods are always contacting the backend through the backend -service. This results in a different backend pod servicing each -request as well. 
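To see that load balancing more systematically than with repeated `curl` calls, a small Go sketch like the following can poll the frontend service and tally which pod answered each request; the external IP placeholder stands for whatever `kubectl describe service show-srv` reported for your cluster:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Replace with the external IP of the show-srv service.
	url := "http://<external-ip>:80/"

	served := map[string]int{}
	for i := 0; i < 20; i++ {
		resp, err := http.Get(url)
		if err != nil {
			log.Fatalf("request failed: %v", err)
		}
		scanner := bufio.NewScanner(resp.Body)
		for scanner.Scan() {
			line := scanner.Text()
			// The frontend prints a "Pod Name: <name>" line; use it to identify the serving pod.
			if strings.HasPrefix(line, "Pod Name:") {
				name := strings.TrimSpace(strings.TrimPrefix(line, "Pod Name:"))
				served[name]++
			}
		}
		resp.Body.Close()
	}
	for pod, count := range served {
		fmt.Printf("%s served %d requests\n", pod, count)
	}
}
```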
- -Cleanup -------- - kubectl delete rc,service -l type=show-type - kubectl delete rc,service -l type=backend-type - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/environment-guide/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/environment-guide/README.md?pixel)]() diff --git a/release-0.19.0/examples/environment-guide/backend-rc.yaml b/release-0.19.0/examples/environment-guide/backend-rc.yaml deleted file mode 100644 index 6c57b95dac9..00000000000 --- a/release-0.19.0/examples/environment-guide/backend-rc.yaml +++ /dev/null @@ -1,30 +0,0 @@ ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: backend-rc - labels: - type: backend-type -spec: - replicas: 3 - template: - metadata: - labels: - type: backend-type - spec: - containers: - - name: backend-container - image: gcr.io/google-samples/env-backend:1.1 - imagePullPolicy: Always - ports: - - containerPort: 5000 - protocol: TCP - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace diff --git a/release-0.19.0/examples/environment-guide/backend-srv.yaml b/release-0.19.0/examples/environment-guide/backend-srv.yaml deleted file mode 100644 index 7083b37bf88..00000000000 --- a/release-0.19.0/examples/environment-guide/backend-srv.yaml +++ /dev/null @@ -1,13 +0,0 @@ ---- -apiVersion: v1 -kind: Service -metadata: - name: backend-srv - labels: - type: backend-type -spec: - ports: - - port: 5000 - protocol: TCP - selector: - type: backend-type diff --git a/release-0.19.0/examples/environment-guide/containers/README.md b/release-0.19.0/examples/environment-guide/containers/README.md deleted file mode 100644 index 8ab18ef83eb..00000000000 --- a/release-0.19.0/examples/environment-guide/containers/README.md +++ /dev/null @@ -1,26 +0,0 @@ -Building --------- -For each container, the build steps are the same. The examples below -are for the `show` container. Replace `show` with `backend` for the -backend container. - -GCR ---- - docker build -t gcr.io//show . - gcloud preview docker push gcr.io//show - -Docker Hub ----------- - docker build -t /show . - docker push /show - -Change Pod Definitions ----------------------- -Edit both `show-rc.yaml` and `backend-rc.yaml` and replace the -specified `image:` with the one that you built. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/environment-guide/containers/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/environment-guide/containers/README.md?pixel)]() diff --git a/release-0.19.0/examples/environment-guide/containers/backend/Dockerfile b/release-0.19.0/examples/environment-guide/containers/backend/Dockerfile deleted file mode 100644 index 3fa58ff7abe..00000000000 --- a/release-0.19.0/examples/environment-guide/containers/backend/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM golang:onbuild -EXPOSE 8080 diff --git a/release-0.19.0/examples/environment-guide/containers/backend/backend.go b/release-0.19.0/examples/environment-guide/containers/backend/backend.go deleted file mode 100644 index b4edf75ff5d..00000000000 --- a/release-0.19.0/examples/environment-guide/containers/backend/backend.go +++ /dev/null @@ -1,37 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors All rights reserved. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package main - -import ( - "fmt" - "log" - "net/http" - "os" -) - -func printInfo(resp http.ResponseWriter, req *http.Request) { - name := os.Getenv("POD_NAME") - namespace := os.Getenv("POD_NAMESPACE") - fmt.Fprintf(resp, "Backend Container\n") - fmt.Fprintf(resp, "Backend Pod Name: %v\n", name) - fmt.Fprintf(resp, "Backend Namespace: %v\n", namespace) -} - -func main() { - http.HandleFunc("/", printInfo) - log.Fatal(http.ListenAndServe(":5000", nil)) -} diff --git a/release-0.19.0/examples/environment-guide/containers/show/Dockerfile b/release-0.19.0/examples/environment-guide/containers/show/Dockerfile deleted file mode 100644 index 3fa58ff7abe..00000000000 --- a/release-0.19.0/examples/environment-guide/containers/show/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM golang:onbuild -EXPOSE 8080 diff --git a/release-0.19.0/examples/environment-guide/containers/show/show.go b/release-0.19.0/examples/environment-guide/containers/show/show.go deleted file mode 100644 index 56bd988b400..00000000000 --- a/release-0.19.0/examples/environment-guide/containers/show/show.go +++ /dev/null @@ -1,95 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package main - -import ( - "fmt" - "io" - "log" - "net/http" - "os" - "sort" - "strings" -) - -func getKubeEnv() (map[string]string, error) { - environS := os.Environ() - environ := make(map[string]string) - for _, val := range environS { - split := strings.Split(val, "=") - if len(split) != 2 { - return environ, fmt.Errorf("Some weird env vars") - } - environ[split[0]] = split[1] - } - for key := range environ { - if !(strings.HasSuffix(key, "_SERVICE_HOST") || - strings.HasSuffix(key, "_SERVICE_PORT")) { - delete(environ, key) - } - } - return environ, nil -} - -func printInfo(resp http.ResponseWriter, req *http.Request) { - kubeVars, err := getKubeEnv() - if err != nil { - http.Error(resp, err.Error(), http.StatusInternalServerError) - return - } - - backendHost := os.Getenv("BACKEND_SRV_SERVICE_HOST") - backendPort := os.Getenv("BACKEND_SRV_SERVICE_PORT") - backendRsp, backendErr := http.Get(fmt.Sprintf( - "http://%v:%v/", - backendHost, - backendPort)) - if backendErr == nil { - defer backendRsp.Body.Close() - } - - name := os.Getenv("POD_NAME") - namespace := os.Getenv("POD_NAMESPACE") - fmt.Fprintf(resp, "Pod Name: %v \n", name) - fmt.Fprintf(resp, "Pod Namespace: %v \n", namespace) - - envvar := os.Getenv("USER_VAR") - fmt.Fprintf(resp, "USER_VAR: %v \n", envvar) - - fmt.Fprintf(resp, "\nKubenertes environment variables\n") - var keys []string - for key := range kubeVars { - keys = append(keys, key) - } - sort.Strings(keys) - for _, key := range keys { - fmt.Fprintf(resp, "%v = %v \n", key, kubeVars[key]) - } - - fmt.Fprintf(resp, "\nFound backend ip: %v port: %v\n", backendHost, backendPort) - if backendErr == nil { - fmt.Fprintf(resp, "Response from backend\n") - io.Copy(resp, backendRsp.Body) - } else { - fmt.Fprintf(resp, "Error from backend: %v", backendErr.Error()) - } -} - -func main() { - http.HandleFunc("/", printInfo) - log.Fatal(http.ListenAndServe(":8080", nil)) -} diff --git a/release-0.19.0/examples/environment-guide/diagram.png b/release-0.19.0/examples/environment-guide/diagram.png deleted file mode 100644 index dd5d1551631..00000000000 Binary files a/release-0.19.0/examples/environment-guide/diagram.png and /dev/null differ diff --git a/release-0.19.0/examples/environment-guide/show-rc.yaml b/release-0.19.0/examples/environment-guide/show-rc.yaml deleted file mode 100644 index 4de94c06ca3..00000000000 --- a/release-0.19.0/examples/environment-guide/show-rc.yaml +++ /dev/null @@ -1,32 +0,0 @@ ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: show-rc - labels: - type: show-type -spec: - replicas: 3 - template: - metadata: - labels: - type: show-type - spec: - containers: - - name: show-container - image: gcr.io/google-samples/env-show:1.1 - imagePullPolicy: Always - ports: - - containerPort: 8080 - protocol: TCP - env: - - name: USER_VAR - value: important information - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace diff --git a/release-0.19.0/examples/environment-guide/show-srv.yaml b/release-0.19.0/examples/environment-guide/show-srv.yaml deleted file mode 100644 index 25a2d7473e0..00000000000 --- a/release-0.19.0/examples/environment-guide/show-srv.yaml +++ /dev/null @@ -1,15 +0,0 @@ ---- -apiVersion: v1 -kind: Service -metadata: - name: show-srv - labels: - type: show-type -spec: - type: LoadBalancer - ports: - - port: 80 - protocol: TCP - targetPort: 8080 - selector: - type: show-type diff --git 
a/release-0.19.0/examples/examples_test.go b/release-0.19.0/examples/examples_test.go deleted file mode 100644 index 103d28a6c18..00000000000 --- a/release-0.19.0/examples/examples_test.go +++ /dev/null @@ -1,438 +0,0 @@ -/* -Copyright 2014 The Kubernetes Authors All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package examples_test - -import ( - "fmt" - "io/ioutil" - "os" - "path/filepath" - "regexp" - "strings" - "testing" - - "github.com/GoogleCloudPlatform/kubernetes/pkg/api" - "github.com/GoogleCloudPlatform/kubernetes/pkg/api/latest" - "github.com/GoogleCloudPlatform/kubernetes/pkg/api/validation" - "github.com/GoogleCloudPlatform/kubernetes/pkg/capabilities" - "github.com/GoogleCloudPlatform/kubernetes/pkg/runtime" - "github.com/GoogleCloudPlatform/kubernetes/pkg/util/yaml" - "github.com/golang/glog" -) - -func validateObject(obj runtime.Object) (errors []error) { - switch t := obj.(type) { - case *api.ReplicationController: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateReplicationController(t) - case *api.ReplicationControllerList: - for i := range t.Items { - errors = append(errors, validateObject(&t.Items[i])...) - } - case *api.Service: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateService(t) - case *api.ServiceList: - for i := range t.Items { - errors = append(errors, validateObject(&t.Items[i])...) - } - case *api.Pod: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidatePod(t) - case *api.PodList: - for i := range t.Items { - errors = append(errors, validateObject(&t.Items[i])...) 
- } - case *api.PersistentVolume: - errors = validation.ValidatePersistentVolume(t) - case *api.PersistentVolumeClaim: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidatePersistentVolumeClaim(t) - case *api.PodTemplate: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidatePodTemplate(t) - case *api.Endpoints: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateEndpoints(t) - case *api.Namespace: - errors = validation.ValidateNamespace(t) - case *api.Secret: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateSecret(t) - case *api.LimitRange: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateLimitRange(t) - case *api.ResourceQuota: - if t.Namespace == "" { - t.Namespace = api.NamespaceDefault - } - errors = validation.ValidateResourceQuota(t) - default: - return []error{fmt.Errorf("no validation defined for %#v", obj)} - } - return errors -} - -func walkJSONFiles(inDir string, fn func(name, path string, data []byte)) error { - return filepath.Walk(inDir, func(path string, info os.FileInfo, err error) error { - if err != nil { - return err - } - - if info.IsDir() && path != inDir { - return filepath.SkipDir - } - - file := filepath.Base(path) - if ext := filepath.Ext(file); ext == ".json" || ext == ".yaml" { - glog.Infof("Testing %s", path) - data, err := ioutil.ReadFile(path) - if err != nil { - return err - } - name := strings.TrimSuffix(file, ext) - - if ext == ".yaml" { - out, err := yaml.ToJSON(data) - if err != nil { - return fmt.Errorf("%s: %v", path, err) - } - data = out - } - - fn(name, path, data) - } - return nil - }) -} - -func TestExampleObjectSchemas(t *testing.T) { - cases := map[string]map[string]runtime.Object{ - "../cmd/integration": { - "v1beta3-controller": &api.ReplicationController{}, - "v1-controller": &api.ReplicationController{}, - }, - "../examples/guestbook": { - "frontend-controller": &api.ReplicationController{}, - "redis-slave-controller": &api.ReplicationController{}, - "redis-master-controller": &api.ReplicationController{}, - "frontend-service": &api.Service{}, - "redis-master-service": &api.Service{}, - "redis-slave-service": &api.Service{}, - }, - "../examples/guestbook-go": { - "guestbook-controller": &api.ReplicationController{}, - "redis-slave-controller": &api.ReplicationController{}, - "redis-master-controller": &api.ReplicationController{}, - "guestbook-service": &api.Service{}, - "redis-master-service": &api.Service{}, - "redis-slave-service": &api.Service{}, - }, - "../examples/walkthrough": { - "pod1": &api.Pod{}, - "pod2": &api.Pod{}, - "pod-with-http-healthcheck": &api.Pod{}, - "service": &api.Service{}, - "replication-controller": &api.ReplicationController{}, - "podtemplate": &api.PodTemplate{}, - }, - "../examples/update-demo": { - "kitten-rc": &api.ReplicationController{}, - "nautilus-rc": &api.ReplicationController{}, - }, - "../examples/persistent-volumes/volumes": { - "local-01": &api.PersistentVolume{}, - "local-02": &api.PersistentVolume{}, - "gce": &api.PersistentVolume{}, - "nfs": &api.PersistentVolume{}, - }, - "../examples/persistent-volumes/claims": { - "claim-01": &api.PersistentVolumeClaim{}, - "claim-02": &api.PersistentVolumeClaim{}, - "claim-03": &api.PersistentVolumeClaim{}, - }, - "../examples/persistent-volumes/simpletest": { - "namespace": &api.Namespace{}, - "pod": &api.Pod{}, - "service": 
&api.Service{}, - }, - "../examples/iscsi": { - "iscsi": &api.Pod{}, - }, - "../examples/glusterfs": { - "glusterfs-pod": &api.Pod{}, - "glusterfs-endpoints": &api.Endpoints{}, - }, - "../examples/liveness": { - "exec-liveness": &api.Pod{}, - "http-liveness": &api.Pod{}, - }, - "../examples": { - "pod": &api.Pod{}, - "replication": &api.ReplicationController{}, - }, - "../examples/rbd/secret": { - "ceph-secret": &api.Secret{}, - }, - "../examples/rbd/v1beta3": { - "rbd": &api.Pod{}, - "rbd-with-secret": &api.Pod{}, - }, - "../examples/cassandra": { - "cassandra-controller": &api.ReplicationController{}, - "cassandra-service": &api.Service{}, - "cassandra": &api.Pod{}, - }, - "../examples/celery-rabbitmq": { - "celery-controller": &api.ReplicationController{}, - "flower-controller": &api.ReplicationController{}, - "rabbitmq-controller": &api.ReplicationController{}, - "rabbitmq-service": &api.Service{}, - }, - "../examples/cluster-dns": { - "dns-backend-rc": &api.ReplicationController{}, - "dns-backend-service": &api.Service{}, - "dns-frontend-pod": &api.Pod{}, - "namespace-dev": &api.Namespace{}, - "namespace-prod": &api.Namespace{}, - }, - "../examples/downward-api": { - "dapi-pod": &api.Pod{}, - }, - "../examples/elasticsearch": { - "apiserver-secret": nil, - "music-rc": &api.ReplicationController{}, - "music-service": &api.Service{}, - }, - "../examples/explorer": { - "pod": &api.Pod{}, - }, - "../examples/hazelcast": { - "hazelcast-controller": &api.ReplicationController{}, - "hazelcast-service": &api.Service{}, - }, - "../examples/kubernetes-namespaces": { - "namespace-dev": &api.Namespace{}, - "namespace-prod": &api.Namespace{}, - }, - "../examples/limitrange": { - "invalid-pod": &api.Pod{}, - "limit-range": &api.LimitRange{}, - "valid-pod": &api.Pod{}, - }, - "../examples/logging-demo": { - "synthetic_0_25lps": &api.Pod{}, - "synthetic_10lps": &api.Pod{}, - }, - "../examples/meteor": { - "meteor-controller": &api.ReplicationController{}, - "meteor-service": &api.Service{}, - "mongo-pod": &api.Pod{}, - "mongo-service": &api.Service{}, - }, - "../examples/mysql-wordpress-pd": { - "mysql-service": &api.Service{}, - "mysql": &api.Pod{}, - "wordpress-service": &api.Service{}, - "wordpress": &api.Pod{}, - }, - "../examples/nfs": { - "nfs-server-pod": &api.Pod{}, - "nfs-server-service": &api.Service{}, - "nfs-web-pod": &api.Pod{}, - }, - "../examples/node-selection": { - "pod": &api.Pod{}, - }, - "../examples/openshift-origin": { - "openshift-controller": &api.ReplicationController{}, - "openshift-service": &api.Service{}, - }, - "../examples/phabricator": { - "authenticator-controller": &api.ReplicationController{}, - "phabricator-controller": &api.ReplicationController{}, - "phabricator-service": &api.Service{}, - }, - "../examples/redis": { - "redis-controller": &api.ReplicationController{}, - "redis-master": &api.Pod{}, - "redis-proxy": &api.Pod{}, - "redis-sentinel-controller": &api.ReplicationController{}, - "redis-sentinel-service": &api.Service{}, - }, - "../examples/resourcequota": { - "namespace": &api.Namespace{}, - "limits": &api.LimitRange{}, - "quota": &api.ResourceQuota{}, - }, - "../examples/rethinkdb": { - "admin-pod": &api.Pod{}, - "admin-service": &api.Service{}, - "driver-service": &api.Service{}, - "rc": &api.ReplicationController{}, - }, - "../examples/secrets": { - "secret-pod": &api.Pod{}, - "secret": &api.Secret{}, - }, - "../examples/spark": { - "spark-master-service": &api.Service{}, - "spark-master": &api.Pod{}, - "spark-worker-controller": 
&api.ReplicationController{}, - }, - "../examples/storm": { - "storm-nimbus-service": &api.Service{}, - "storm-nimbus": &api.Pod{}, - "storm-worker-controller": &api.ReplicationController{}, - "zookeeper-service": &api.Service{}, - "zookeeper": &api.Pod{}, - }, - } - - capabilities.SetForTests(capabilities.Capabilities{ - AllowPrivileged: true, - }) - - for path, expected := range cases { - tested := 0 - err := walkJSONFiles(path, func(name, path string, data []byte) { - expectedType, found := expected[name] - if !found { - t.Errorf("%s: %s does not have a test case defined", path, name) - return - } - tested++ - if expectedType == nil { - t.Logf("skipping : %s/%s\n", path, name) - return - } - if err := latest.Codec.DecodeInto(data, expectedType); err != nil { - t.Errorf("%s did not decode correctly: %v\n%s", path, err, string(data)) - return - } - if errors := validateObject(expectedType); len(errors) > 0 { - t.Errorf("%s did not validate correctly: %v", path, errors) - } - }) - if err != nil { - t.Errorf("Expected no error, Got %v", err) - } - if tested != len(expected) { - t.Errorf("Expected %d examples, Got %d", len(expected), tested) - } - } -} - -// This regex is tricky, but it works. For future me, here is the decode: -// -// Flags: (?ms) = multiline match, allow . to match \n -// 1) Look for a line that starts with ``` (a markdown code block) -// 2) (?: ... ) = non-capturing group -// 3) (P) = capture group as "name" -// 4) Look for #1 followed by either: -// 4a) "yaml" followed by any word-characters followed by a newline (e.g. ```yamlfoo\n) -// 4b) "any word-characters followed by a newline (e.g. ```json\n) -// 5) Look for either: -// 5a) #4a followed by one or more characters (non-greedy) -// 5b) #4b followed by { followed by one or more characters (non-greedy) followed by } -// 6) Look for #5 followed by a newline followed by ``` (end of the code block) -// -// This could probably be simplified, but is already too delicate. Before any -// real changes, we should have a testscase that just tests this regex. 
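A minimal, self-contained test along those lines might look like the sketch below; the pattern is written out locally with the `type` and `content` capture groups described in the decode above, so the test stands on its own:

```go
package examples_test

import (
	"regexp"
	"testing"
)

// A self-contained copy of the sample regexp with its named groups spelled out.
var sampleRegexpSketch = regexp.MustCompile("(?ms)^```(?:(?P<type>yaml)\\w*\\n(?P<content>.+?)|\\w*\\n(?P<content>\\{.+?\\}))\\n^```")

func TestSampleRegexpSketch(t *testing.T) {
	doc := "Intro text.\n\n```yaml\napiVersion: v1beta3\nkind: Pod\n```\n"
	match := sampleRegexpSketch.FindStringSubmatch(doc)
	if match == nil {
		t.Fatal("expected the yaml block to match")
	}
	var subtype, content string
	for i, name := range sampleRegexpSketch.SubexpNames() {
		if name == "type" {
			subtype = match[i]
		}
		if name == "content" && match[i] != "" {
			content = match[i]
		}
	}
	if subtype != "yaml" {
		t.Errorf("expected type %q, got %q", "yaml", subtype)
	}
	if content != "apiVersion: v1beta3\nkind: Pod" {
		t.Errorf("unexpected content: %q", content)
	}
}
```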
-var sampleRegexp = regexp.MustCompile("(?ms)^```(?:(?Pyaml)\\w*\\n(?P.+?)|\\w*\\n(?P\\{.+?\\}))\\n^```") -var subsetRegexp = regexp.MustCompile("(?ms)\\.{3}") - -func TestReadme(t *testing.T) { - paths := []struct { - file string - expectedType []runtime.Object - }{ - {"../README.md", []runtime.Object{&api.Pod{}}}, - {"../examples/walkthrough/README.md", []runtime.Object{&api.Pod{}}}, - {"../examples/iscsi/README.md", []runtime.Object{&api.Pod{}}}, - {"../examples/simple-yaml.md", []runtime.Object{&api.Pod{}, &api.ReplicationController{}}}, - } - - for _, path := range paths { - data, err := ioutil.ReadFile(path.file) - if err != nil { - t.Errorf("Unable to read file %s: %v", path, err) - continue - } - - matches := sampleRegexp.FindAllStringSubmatch(string(data), -1) - if matches == nil { - continue - } - ix := 0 - for _, match := range matches { - var content, subtype string - for i, name := range sampleRegexp.SubexpNames() { - if name == "type" { - subtype = match[i] - } - if name == "content" && match[i] != "" { - content = match[i] - } - } - if subtype == "yaml" && subsetRegexp.FindString(content) != "" { - t.Logf("skipping (%s): \n%s", subtype, content) - continue - } - - var expectedType runtime.Object - if len(path.expectedType) == 1 { - expectedType = path.expectedType[0] - } else { - expectedType = path.expectedType[ix] - ix++ - } - json, err := yaml.ToJSON([]byte(content)) - if err != nil { - t.Errorf("%s could not be converted to JSON: %v\n%s", path, err, string(content)) - } - if err := latest.Codec.DecodeInto(json, expectedType); err != nil { - t.Errorf("%s did not decode correctly: %v\n%s", path, err, string(content)) - continue - } - if errors := validateObject(expectedType); len(errors) > 0 { - t.Errorf("%s did not validate correctly: %v", path, errors) - } - _, err = latest.Codec.Encode(expectedType) - if err != nil { - t.Errorf("Could not encode object: %v", err) - continue - } - } - } -} diff --git a/release-0.19.0/examples/explorer/Dockerfile b/release-0.19.0/examples/explorer/Dockerfile deleted file mode 100644 index e6545402f20..00000000000 --- a/release-0.19.0/examples/explorer/Dockerfile +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright 2015 The Kubernetes Authors. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -FROM scratch -MAINTAINER Daniel Smith -ADD explorer explorer -ADD README.md README.md -EXPOSE 8080 -ENTRYPOINT ["/explorer"] diff --git a/release-0.19.0/examples/explorer/Makefile b/release-0.19.0/examples/explorer/Makefile deleted file mode 100644 index bbccac4e36b..00000000000 --- a/release-0.19.0/examples/explorer/Makefile +++ /dev/null @@ -1,16 +0,0 @@ -all: push - -# Keep this one version ahead, so no one accidentally blows away the latest published version. -TAG = 1.1 - -explorer: explorer.go - CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' ./explorer.go - -container: explorer - docker build -t gcr.io/google_containers/explorer:$(TAG) . 
- -push: container - gcloud preview docker push gcr.io/google_containers/explorer:$(TAG) - -clean: - rm -f explorer diff --git a/release-0.19.0/examples/explorer/README.md b/release-0.19.0/examples/explorer/README.md deleted file mode 100644 index dac1f3b73dc..00000000000 --- a/release-0.19.0/examples/explorer/README.md +++ /dev/null @@ -1,133 +0,0 @@ -### explorer - -Explorer is a little container for examining the runtime environment kubernetes produces for your pods. - -The intended use is to substitute gcr.io/google_containers/explorer for your intended container, and then visit it via the proxy. - -Currently, you can look at: - * The environment variables to make sure kubernetes is doing what you expect. - * The filesystem to make sure the mounted volumes and files are also what you expect. - * Perform DNS lookups, to see how DNS works. - -`pod.json` is supplied as an example. You can control the port it serves on with the -port flag. - -Example from command line (the DNS lookup looks better from a web browser): -``` -$ kubectl create -f pod.json -$ kubectl proxy & -Starting to serve on localhost:8001 - -$ curl localhost:8001/api/v1beta3/proxy/namespaces/default/pods/explorer:8080/vars/ -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -HOSTNAME=explorer -KIBANA_LOGGING_PORT_5601_TCP_PORT=5601 -KUBERNETES_SERVICE_HOST=10.0.0.2 -MONITORING_GRAFANA_PORT_80_TCP_PROTO=tcp -MONITORING_INFLUXDB_UI_PORT_80_TCP_PROTO=tcp -KIBANA_LOGGING_SERVICE_PORT=5601 -MONITORING_HEAPSTER_PORT_80_TCP_PORT=80 -MONITORING_INFLUXDB_UI_PORT_80_TCP_PORT=80 -KIBANA_LOGGING_SERVICE_HOST=10.0.204.206 -KIBANA_LOGGING_PORT_5601_TCP=tcp://10.0.204.206:5601 -KUBERNETES_PORT=tcp://10.0.0.2:443 -MONITORING_INFLUXDB_PORT=tcp://10.0.2.30:80 -MONITORING_INFLUXDB_PORT_80_TCP_PROTO=tcp -MONITORING_INFLUXDB_UI_PORT=tcp://10.0.36.78:80 -KUBE_DNS_PORT_53_UDP=udp://10.0.0.10:53 -MONITORING_INFLUXDB_SERVICE_HOST=10.0.2.30 -ELASTICSEARCH_LOGGING_PORT=tcp://10.0.48.200:9200 -ELASTICSEARCH_LOGGING_PORT_9200_TCP_PORT=9200 -KUBERNETES_PORT_443_TCP=tcp://10.0.0.2:443 -ELASTICSEARCH_LOGGING_PORT_9200_TCP_PROTO=tcp -KIBANA_LOGGING_PORT_5601_TCP_ADDR=10.0.204.206 -KUBE_DNS_PORT_53_UDP_ADDR=10.0.0.10 -MONITORING_HEAPSTER_PORT_80_TCP_PROTO=tcp -MONITORING_INFLUXDB_PORT_80_TCP_ADDR=10.0.2.30 -KIBANA_LOGGING_PORT=tcp://10.0.204.206:5601 -MONITORING_GRAFANA_SERVICE_PORT=80 -MONITORING_HEAPSTER_SERVICE_PORT=80 -MONITORING_HEAPSTER_PORT_80_TCP=tcp://10.0.150.238:80 -ELASTICSEARCH_LOGGING_PORT_9200_TCP=tcp://10.0.48.200:9200 -ELASTICSEARCH_LOGGING_PORT_9200_TCP_ADDR=10.0.48.200 -MONITORING_GRAFANA_PORT_80_TCP_PORT=80 -MONITORING_HEAPSTER_PORT=tcp://10.0.150.238:80 -MONITORING_INFLUXDB_PORT_80_TCP=tcp://10.0.2.30:80 -KUBE_DNS_SERVICE_PORT=53 -KUBE_DNS_PORT_53_UDP_PORT=53 -MONITORING_GRAFANA_PORT_80_TCP_ADDR=10.0.100.174 -MONITORING_INFLUXDB_UI_SERVICE_HOST=10.0.36.78 -KIBANA_LOGGING_PORT_5601_TCP_PROTO=tcp -MONITORING_GRAFANA_PORT=tcp://10.0.100.174:80 -MONITORING_INFLUXDB_UI_PORT_80_TCP_ADDR=10.0.36.78 -KUBE_DNS_SERVICE_HOST=10.0.0.10 -KUBERNETES_PORT_443_TCP_PORT=443 -MONITORING_HEAPSTER_PORT_80_TCP_ADDR=10.0.150.238 -MONITORING_INFLUXDB_UI_SERVICE_PORT=80 -KUBE_DNS_PORT=udp://10.0.0.10:53 -ELASTICSEARCH_LOGGING_SERVICE_HOST=10.0.48.200 -KUBERNETES_SERVICE_PORT=443 -MONITORING_HEAPSTER_SERVICE_HOST=10.0.150.238 -MONITORING_INFLUXDB_SERVICE_PORT=80 -MONITORING_INFLUXDB_PORT_80_TCP_PORT=80 -KUBE_DNS_PORT_53_UDP_PROTO=udp -MONITORING_GRAFANA_PORT_80_TCP=tcp://10.0.100.174:80 -ELASTICSEARCH_LOGGING_SERVICE_PORT=9200 
-MONITORING_GRAFANA_SERVICE_HOST=10.0.100.174 -MONITORING_INFLUXDB_UI_PORT_80_TCP=tcp://10.0.36.78:80 -KUBERNETES_PORT_443_TCP_PROTO=tcp -KUBERNETES_PORT_443_TCP_ADDR=10.0.0.2 -HOME=/ - -$ curl localhost:8001/api/v1beta3/proxy/namespaces/default/pods/explorer:8080/fs/ -mount/ -var/ -.dockerenv -etc/ -dev/ -proc/ -.dockerinit -sys/ -README.md -explorer - -$ curl localhost:8001/api/v1beta3/proxy/namespaces/default/pods/explorer:8080/dns?q=elasticsearch-logging - -
-LookupNS(elasticsearch-logging):
-Result: ([]*net.NS)<nil>
-Error: <*>lookup elasticsearch-logging: no such host
-
-LookupTXT(elasticsearch-logging):
-Result: ([]string)<nil>
-Error: <*>lookup elasticsearch-logging: no such host
-
-LookupSRV("", "", elasticsearch-logging):
-cname: elasticsearch-logging.default.cluster.local.
-Result: ([]*net.SRV)[<*>{Target:(string)elasticsearch-logging.default.cluster.local. Port:(uint16)9200 Priority:(uint16)10 Weight:(uint16)100}]
-Error: <nil>
-
-LookupHost(elasticsearch-logging):
-Result: ([]string)[10.0.60.245]
-Error: <nil>
-
-LookupIP(elasticsearch-logging):
-Result: ([]net.IP)[10.0.60.245]
-Error: <nil>
-
-LookupMX(elasticsearch-logging):
-Result: ([]*net.MX)<nil>
-Error: <*>lookup elasticsearch-logging: no such host
-
- - -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/explorer/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/explorer/README.md?pixel)]() diff --git a/release-0.19.0/examples/explorer/explorer.go b/release-0.19.0/examples/explorer/explorer.go deleted file mode 100644 index e10dfc925c9..00000000000 --- a/release-0.19.0/examples/explorer/explorer.go +++ /dev/null @@ -1,122 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// A tiny web server for viewing the environment kubernetes creates for your -// containers. It exposes the filesystem and environment variables via http -// server. -package main - -import ( - "flag" - "fmt" - "log" - "net" - "net/http" - "os" - - "github.com/davecgh/go-spew/spew" -) - -var ( - port = flag.Int("port", 8080, "Port number to serve at.") -) - -func main() { - flag.Parse() - hostname, err := os.Hostname() - if err != nil { - log.Fatalf("Error getting hostname: %v", err) - } - - links := []struct { - link, desc string - }{ - {"/fs/", "Complete file system as seen by this container."}, - {"/vars/", "Environment variables as seen by this container."}, - {"/hostname/", "Hostname as seen by this container."}, - {"/dns?q=google.com", "Explore DNS records seen by this container."}, - {"/quit", "Cause this container to exit."}, - } - - http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, " Kubernetes environment explorer

") - for _, v := range links { - fmt.Fprintf(w, `%v: %v
`, v.link, v.link, v.desc) - } - }) - - http.Handle("/fs/", http.StripPrefix("/fs/", http.FileServer(http.Dir("/")))) - http.HandleFunc("/vars/", func(w http.ResponseWriter, r *http.Request) { - for _, v := range os.Environ() { - fmt.Fprintf(w, "%v\n", v) - } - }) - http.HandleFunc("/hostname/", func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, hostname) - }) - http.HandleFunc("/quit", func(w http.ResponseWriter, r *http.Request) { - os.Exit(0) - }) - http.HandleFunc("/dns", dns) - - go log.Fatal(http.ListenAndServe(fmt.Sprintf("0.0.0.0:%d", *port), nil)) - - select {} -} - -func dns(w http.ResponseWriter, r *http.Request) { - q := r.URL.Query().Get("q") - // Note that the below is NOT safe from input attacks, but that's OK - // because this is just for debugging. - fmt.Fprintf(w, ` -
-<html><body>
-<form action="/dns">
-<input name="q" type="text" value="%v"></input>
-<button type="submit">Do lookup</button>
-</form>
-<pre>`, q)
-		res, err := net.LookupNS(q)
-		spew.Fprintf(w, "LookupNS(%v):\nResult: %#v\nError: %v\n\n", q, res, err)
-	}
-	{
-		res, err := net.LookupTXT(q)
-		spew.Fprintf(w, "LookupTXT(%v):\nResult: %#v\nError: %v\n\n", q, res, err)
-	}
-	{
-		cname, res, err := net.LookupSRV("", "", q)
-		spew.Fprintf(w, `LookupSRV("", "", %v):
-cname: %v
-Result: %#v
-Error: %v
-
-`, q, cname, res, err)
-	}
-	{
-		res, err := net.LookupHost(q)
-		spew.Fprintf(w, "LookupHost(%v):\nResult: %#v\nError: %v\n\n", q, res, err)
-	}
-	{
-		res, err := net.LookupIP(q)
-		spew.Fprintf(w, "LookupIP(%v):\nResult: %#v\nError: %v\n\n", q, res, err)
-	}
-	{
-		res, err := net.LookupMX(q)
-		spew.Fprintf(w, "LookupMX(%v):\nResult: %#v\nError: %v\n\n", q, res, err)
-	}
-	fmt.Fprintf(w, `
- -`) -} diff --git a/release-0.19.0/examples/explorer/pod.json b/release-0.19.0/examples/explorer/pod.json deleted file mode 100644 index 99e68332255..00000000000 --- a/release-0.19.0/examples/explorer/pod.json +++ /dev/null @@ -1,36 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "explorer" - }, - "spec": { - "containers": [ - { - "name": "explorer", - "image": "gcr.io/google_containers/explorer:1.0", - "args": [ - "-port=8080" - ], - "ports": [ - { - "containerPort": 8080, - "protocol": "TCP" - } - ], - "volumeMounts": [ - { - "name": "test-volume", - "mountPath": "/mount/test-volume" - } - ] - } - ], - "volumes": [ - { - "name": "test-volume", - "emptyDir": {} - } - ] - } -} diff --git a/release-0.19.0/examples/glusterfs/README.md b/release-0.19.0/examples/glusterfs/README.md deleted file mode 100644 index 47d758f46c1..00000000000 --- a/release-0.19.0/examples/glusterfs/README.md +++ /dev/null @@ -1,89 +0,0 @@ -## Glusterfs - -[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers use Glusterfs volumes. - -The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes. - -### Prerequisites - -Set up Glusterfs server cluster; install Glusterfs client package on the Kubernetes nodes. ([Guide](https://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers)) - -### Create endpoints -Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json), - -``` - "addresses": [ - { - "IP": "10.240.106.152" - } - ], - "ports": [ - { - "port": 1, - "protocol": "TCP" - } - ] - -``` -The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field. - -Create the endpoints, -```shell -$ kubectl create -f examples/glusterfs/glusterfs-endpoints.json -``` - -You can verify that the endpoints are successfully created by running -```shell -$ kubect get endpoints -NAME ENDPOINTS -glusterfs-cluster 10.240.106.152:1,10.240.79.157:1 -``` - -### Create a POD - -The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration. - -```js -{ - "name": "glusterfsvol", - "glusterfs": { - "endpoints": "glusterfs-cluster", - "path": "kube_vol", - "readOnly": true - } -} -``` - -The parameters are explained as the followings. - -- **endpoints** is endpoints name that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storm, it will randomly pick one from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected. -- **path** is the Glusterfs volume name. -- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite. 
- -Create a pod that has a container using Glusterfs volume, -```shell -$ kubectl create -f examples/glusterfs/glusterfs-pod.json -``` -You can verify that the pod is running: - -```shell -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -glusterfs 10.244.2.13 kubernetes-minion-151f/23.236.54.97 Running About a minute - glusterfs kubernetes/pause Running About a minute - -``` - -You may ssh to the host and run 'mount' to see if the Glusterfs volume is mounted, -```shell -$ mount | grep kube_vol -10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) -``` - -You may also run `docker ps` on the host to see the actual container. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/glusterfs/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/glusterfs/README.md?pixel)]() diff --git a/release-0.19.0/examples/glusterfs/glusterfs-endpoints.json b/release-0.19.0/examples/glusterfs/glusterfs-endpoints.json deleted file mode 100644 index 4c5d649e14a..00000000000 --- a/release-0.19.0/examples/glusterfs/glusterfs-endpoints.json +++ /dev/null @@ -1,35 +0,0 @@ -{ - "kind": "Endpoints", - "apiVersion": "v1beta3", - "metadata": { - "name": "glusterfs-cluster" - }, - "subsets": [ - { - "addresses": [ - { - "IP": "10.240.106.152" - } - ], - "ports": [ - { - "port": 1, - "protocol": "TCP" - } - ] - }, - { - "addresses": [ - { - "IP": "10.240.79.157" - } - ], - "ports": [ - { - "port": 1, - "protocol": "TCP" - } - ] - } - ] -} diff --git a/release-0.19.0/examples/glusterfs/glusterfs-pod.json b/release-0.19.0/examples/glusterfs/glusterfs-pod.json deleted file mode 100644 index 664a35dc0fa..00000000000 --- a/release-0.19.0/examples/glusterfs/glusterfs-pod.json +++ /dev/null @@ -1,32 +0,0 @@ -{ - "apiVersion": "v1beta3", - "id": "glusterfs", - "kind": "Pod", - "metadata": { - "name": "glusterfs" - }, - "spec": { - "containers": [ - { - "name": "glusterfs", - "image": "kubernetes/pause", - "volumeMounts": [ - { - "mountPath": "/mnt/glusterfs", - "name": "glusterfsvol" - } - ] - } - ], - "volumes": [ - { - "name": "glusterfsvol", - "glusterfs": { - "endpoints": "glusterfs-cluster", - "path": "kube_vol", - "readOnly": true - } - } - ] - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/guestbook-go/README.md b/release-0.19.0/examples/guestbook-go/README.md deleted file mode 100644 index 1c1b5e1af9e..00000000000 --- a/release-0.19.0/examples/guestbook-go/README.md +++ /dev/null @@ -1,212 +0,0 @@ -## GuestBook example - -This example shows how to build a simple multi-tier web application using Kubernetes and Docker. It consists of a web frontend, a redis master for storage and a replicated set of redis slaves. - -### Step Zero: Prerequisites - -This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides): - -```shell -$ cd kubernetes -$ hack/dev-build-and-up.sh -``` - -### Step One: Turn up the redis master. - -Use the file `examples/guestbook-go/redis-master-controller.json` to create a [replication controller](../../docs/replication-controller.md) which manages a single [pod](../../docs/pods.md). The pod runs a redis key-value server in a container. 
Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so the pod will benefit from self-healing mechanism in kubernetes. - -Create the redis master replication controller in your Kubernetes cluster using the `kubectl` CLI: - -```shell -$ kubectl create -f examples/guestbook-go/redis-master-controller.json -``` - -Once that's up you can list the replication controllers in the cluster: -```shell -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -redis-master-controller redis-master gurpartap/redis name=redis,role=master 1 -``` - -List pods in cluster to verify the master is running. You'll see a single redis master pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds). - -```shell -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -redis-master-y06lj 10.244.3.4 kubernetes-minion-bz1p/104.154.61.231 name=redis,role=master Running 8 seconds - redis-master gurpartap/redis Running 3 seconds -``` - -If you ssh to that machine, you can run `docker ps` to see the actual pod: - -```shell -me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-bz1p - -me@kubernetes-minion-3:~$ sudo docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS -d5c458dabe50 gurpartap/redis:latest "/usr/local/bin/redi 5 minutes ago Up 5 minutes -``` - -(Note that initial `docker pull` may take a few minutes, depending on network conditions.) - -### Step Two: Turn up the master service. -A Kubernetes '[service](../../docs/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS. Services find the containers to load balance based on pod labels. - -The pod that you created in Step One has the label `name=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service. Use the file `examples/guestbook-go/redis-master-service.json` to create the service in the `kubectl` cli: - -```shell -$ kubectl create -f examples/guestbook-go/redis-master-service.json - -$ kubectl get services -NAME LABELS SELECTOR IP(S) PORT(S) -redis-master name=redis,role=master name=redis,role=master 10.0.11.173 6379/TCP -``` - -This will cause all new pods to see the redis master apparently running on $REDIS_MASTER_SERVICE_HOST at port 6379, or running on 'redis-master:6379'. Once created, the service proxy on each node is configured to set up a proxy on the specified port (in this case port 6379). - -### Step Three: Turn up the replicated slave pods. -Although the redis master is a single pod, the redis read slaves are a 'replicated' pod. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod. - -Use the file `examples/guestbook-go/redis-slave-controller.json` to create the replication controller: - -```shell -$ kubectl create -f examples/guestbook-go/redis-slave-controller.json - -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -redis-master redis-master gurpartap/redis name=redis,role=master 1 -redis-slave redis-slave gurpartap/redis name=redis,role=slave 2 -``` - -The redis slave configures itself by looking for the redis-master service name:port pair. 
In particular, the redis slave is started with the following command: - -```shell -redis-server --slaveof redis-master 6379 -``` - -Once that's up you can list the pods in the cluster, to verify that the master and slaves are running: - -```shell -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -redis-master-y06lj 10.244.3.4 kubernetes-minion-bz1p/104.154.61.231 name=redis,role=master Running 5 minutes - redis-master gurpartap/redis Running 5 minutes -redis-slave-3psic 10.244.0.4 kubernetes-minion-mluf/104.197.10.10 name=redis,role=slave Running 38 seconds - redis-slave gurpartap/redis Running 33 seconds -redis-slave-qtigf 10.244.2.4 kubernetes-minion-rcgd/130.211.122.180 name=redis,role=slave Running 38 seconds - redis-slave gurpartap/redis Running 36 seconds -``` - -You will see a single redis master pod and two redis slave pods. - -### Step Four: Create the redis slave service. - -Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the slave service provides transparent load balancing to clients. The service specification for the slaves is in `examples/guestbook-go/redis-slave-service.json` - -This time the selector for the service is `name=redis,role=slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself--as we've done here--to make it easy to locate them later. - -Now that you have created the service specification, create it in your cluster with the `kubectl` CLI: - -```shell -$ kubectl create -f examples/guestbook-go/redis-slave-service.json - -$ kubectl get services -NAME LABELS SELECTOR IP(S) PORT(S) -redis-master name=redis,role=master name=redis,role=master 10.0.11.173 6379/TCP -redis-slave name=redis,role=slave name=redis,role=slave 10.0.234.24 6379/TCP -``` - -### Step Five: Create the guestbook pod. - -This is a simple Go net/http ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. It exposes a simple JSON interface, and serves a jQuery-Ajax based UX. Like the redis read slaves it is a replicated service instantiated by a replication controller. - -The pod is described in the file `examples/guestbook-go/guestbook-controller.json`. 
Using this file, you can turn up your guestbook with: - -```shell -$ kubectl create -f examples/guestbook-go/guestbook-controller.json - -$ kubectl get replicationControllers -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -guestbook guestbook kubernetes/guestbook:v2 name=guestbook 3 -redis-master redis-master gurpartap/redis name=redis,role=master 1 -redis-slave redis-slave gurpartap/redis name=redis,role=slave 2 -``` - -Once that's up (it may take ten to thirty seconds to create the pods) you can list the pods in the cluster, to verify that the master, slaves and guestbook frontends are running: - -```shell -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -guestbook-1xzms 10.244.1.6 kubernetes-minion-q6w5/23.236.54.97 name=guestbook Running 40 seconds - guestbook kubernetes/guestbook:v2 Running 35 seconds -guestbook-9ksu4 10.244.0.5 kubernetes-minion-mluf/104.197.10.10 name=guestbook Running 40 seconds - guestbook kubernetes/guestbook:v2 Running 34 seconds -guestbook-lycwm 10.244.1.7 kubernetes-minion-q6w5/23.236.54.97 name=guestbook Running 40 seconds - guestbook kubernetes/guestbook:v2 Running 35 seconds -redis-master-y06lj 10.244.3.4 kubernetes-minion-bz1p/104.154.61.231 name=redis,role=master Running 8 minutes - redis-master gurpartap/redis Running 8 minutes -redis-slave-3psic 10.244.0.4 kubernetes-minion-mluf/104.197.10.10 name=redis,role=slave Running 3 minutes - redis-slave gurpartap/redis Running 3 minutes -redis-slave-qtigf 10.244.2.4 kubernetes-minion-rcgd/130.211.122.180 name=redis,role=slave Running 3 minutes - redis-slave gurpartap/redis Running 3 minutes -``` - -You will see a single redis master pod, two redis slaves, and three guestbook pods. - -### Step Six: Create the guestbook service. - -Just like the others, you want a service to group your guestbook pods. The service specification for the guestbook is in `examples/guestbook-go/guestbook-service.json`. There's a twist this time - because we want it to be externally visible, we set the `createExternalLoadBalancer` flag on the service. - -```shell -$ kubectl create -f examples/guestbook-go/guestbook-service.json - -$ kubectl get services -NAME LABELS SELECTOR IP(S) PORT(S) -guestbook name=guestbook name=guestbook 10.0.114.109 3000/TCP -redis-master name=redis,role=master name=redis,role=master 10.0.11.173 6379/TCP -redis-slave name=redis,role=slave name=redis,role=slave 10.0.234.24 6379/TCP -``` - -To play with the service itself, find the external IP of the load balancer: - -```shell -$ kubectl get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}' -104.154.63.66$ -``` -and then visit port 3000 of that IP address e.g. `http://104.154.63.66:3000`. - -**NOTE:** You may need to open the firewall for port 3000 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`: - -```shell -$ gcloud compute firewall-rules create --allow=tcp:3000 --target-tags=kubernetes-minion kubernetes-minion-3000 -``` - -If you are running Kubernetes locally, you can just visit http://localhost:3000 -For details about limiting traffic to specific sources, see the [GCE firewall documentation][gce-firewall-docs]. - -[cloud-console]: https://console.developer.google.com -[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls - -### Step Seven: Cleanup - -You should delete the service which will remove any associated resources that were created e.g. 
load balancers, forwarding rules and target pools. All the resources (replication controllers and service) can be deleted with a single command: -```shell -$ kubectl delete -f examples/guestbook-go -guestbook-controller -guestbook -redis-master-controller -redis-master -redis-slave-controller -redis-slave -``` - -To turn down a Kubernetes cluster: - -```shell -$ cluster/kube-down.sh -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/guestbook-go/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/guestbook-go/README.md?pixel)]() diff --git a/release-0.19.0/examples/guestbook-go/guestbook-controller.json b/release-0.19.0/examples/guestbook-go/guestbook-controller.json deleted file mode 100644 index bcea604bd54..00000000000 --- a/release-0.19.0/examples/guestbook-go/guestbook-controller.json +++ /dev/null @@ -1,38 +0,0 @@ -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"guestbook", - "labels":{ - "name":"guestbook" - } - }, - "spec":{ - "replicas":3, - "selector":{ - "name":"guestbook" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"guestbook" - } - }, - "spec":{ - "containers":[ - { - "image":"kubernetes/guestbook:v2", - "name":"guestbook", - "ports":[ - { - "name":"http-server", - "containerPort":3000, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/guestbook-go/guestbook-service.json b/release-0.19.0/examples/guestbook-go/guestbook-service.json deleted file mode 100644 index 3359efee25a..00000000000 --- a/release-0.19.0/examples/guestbook-go/guestbook-service.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"guestbook", - "labels":{ - "name":"guestbook" - } - }, - "spec":{ - "createExternalLoadBalancer": true, - "ports": [ - { - "port":3000, - "targetPort":"http-server", - "protocol":"TCP" - } - ], - "selector":{ - "name":"guestbook" - } - } -} diff --git a/release-0.19.0/examples/guestbook-go/redis-master-controller.json b/release-0.19.0/examples/guestbook-go/redis-master-controller.json deleted file mode 100644 index 2ca918e7398..00000000000 --- a/release-0.19.0/examples/guestbook-go/redis-master-controller.json +++ /dev/null @@ -1,42 +0,0 @@ -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "id":"redis-master", - "metadata":{ - "name":"redis-master", - "labels":{ - "name":"redis", - "role":"master" - } - }, - "spec":{ - "replicas":1, - "selector":{ - "name":"redis", - "role":"master" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"redis", - "role":"master" - } - }, - "spec":{ - "containers":[ - { - "name":"redis-master", - "image":"gurpartap/redis", - "ports":[ - { - "name":"redis-server", - "containerPort":6379, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/guestbook-go/redis-master-service.json b/release-0.19.0/examples/guestbook-go/redis-master-service.json deleted file mode 100644 index 5aed7d9ff84..00000000000 --- a/release-0.19.0/examples/guestbook-go/redis-master-service.json +++ /dev/null @@ -1,24 +0,0 @@ -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-master", - "labels":{ - "name":"redis", - "role":"master" - } - }, - "spec":{ - "ports": [ - { - "port":6379, - "targetPort":"redis-server", - "protocol":"TCP" - } - ], - "selector":{ - "name":"redis", - "role":"master" - } - } -} diff --git 
a/release-0.19.0/examples/guestbook-go/redis-slave-controller.json b/release-0.19.0/examples/guestbook-go/redis-slave-controller.json deleted file mode 100644 index 6fabb700889..00000000000 --- a/release-0.19.0/examples/guestbook-go/redis-slave-controller.json +++ /dev/null @@ -1,47 +0,0 @@ -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "id":"redis-slave", - "metadata":{ - "name":"redis-slave", - "labels":{ - "name":"redis", - "role":"slave" - } - }, - "spec":{ - "replicas":2, - "selector":{ - "name":"redis", - "role":"slave" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"redis", - "role":"slave" - } - }, - "spec":{ - "containers":[ - { - "name":"redis-slave", - "image":"gurpartap/redis", - "command":[ - "sh", - "-c", - "redis-server /etc/redis/redis.conf --slaveof redis-master 6379" - ], - "ports":[ - { - "name":"redis-server", - "containerPort":6379, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/guestbook-go/redis-slave-service.json b/release-0.19.0/examples/guestbook-go/redis-slave-service.json deleted file mode 100644 index 2eb1fb4ad04..00000000000 --- a/release-0.19.0/examples/guestbook-go/redis-slave-service.json +++ /dev/null @@ -1,24 +0,0 @@ -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-slave", - "labels":{ - "name":"redis", - "role":"slave" - } - }, - "spec":{ - "ports": [ - { - "port":6379, - "targetPort":"redis-server", - "protocol":"TCP" - } - ], - "selector":{ - "name":"redis", - "role":"slave" - } - } -} diff --git a/release-0.19.0/examples/guestbook/README.md b/release-0.19.0/examples/guestbook/README.md deleted file mode 100644 index 644465add99..00000000000 --- a/release-0.19.0/examples/guestbook/README.md +++ /dev/null @@ -1,549 +0,0 @@ -## GuestBook example - -This example shows how to build a simple, multi-tier web application using Kubernetes and Docker. - -The example consists of: -- A web frontend -- A redis master (for storage and a replicated set of redis slaves) - -The web front end interacts with the redis master via javascript redis API calls. - -### Step Zero: Prerequisites - -This example requires a kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started. - -### Step One: Fire up the redis master - -Note: This redis-master is *not* highly available. Making it highly available would be a very interesting, but intricate exercise - redis doesn't actually support multi-master deployments at the time of this writing, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on. - -Use (or just create) the file `examples/guestbook/redis-master-controller.json` which describes a single [pod](../../docs/pods.md) running a redis key-value server in a container: - -Note that, although the redis server runs just with a single replica, we use [replication controller](../../docs/replication-controller.md) to enforce that exactly one pod keeps running (e.g. in a event of node going down, the replication controller will ensure that the redis master gets restarted on a healthy node). This could result in data loss. 
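If you want to see this self-healing behaviour for yourself later, one way (illustrative commands only, to be run after the controller defined below has been created) is to delete the master pod and watch the controller replace it:

```shell
# Delete whatever pod currently matches the controller's selector...
$ kubectl delete pods -l name=redis-master
# ...then list again: a new redis-master pod, with a new name, should appear shortly.
$ kubectl get pods -l name=redis-master
```

Because this Redis has no persistent volume attached, the replacement pod starts with an empty dataset, which is the data loss mentioned above.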
- - -```js -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-master", - "labels":{ - "name":"redis-master" - } - }, - "spec":{ - "replicas":1, - "selector":{ - "name":"redis-master" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"redis-master" - } - }, - "spec":{ - "containers":[ - { - "name":"master", - "image":"redis", - "ports":[ - { - "containerPort":6379, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} -``` - -Now, create the redis pod in your Kubernetes cluster by running: - -```shell -$ kubectl create -f examples/guestbook/redis-master-controller.json - -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -redis-master master redis name=redis-master 1 -``` - -Once that's up you can list the pods in the cluster, to verify that the master is running: - -```shell -$ kubectl get pods -``` - -You'll see all kubernetes components, most importantly the redis master pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds): - -```shell -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -redis-master-controller-gb50a 10.244.3.7 master redis kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-master Running -``` - -If you ssh to that machine, you can run `docker ps` to see the actual pod: - -```shell -me@workstation$ gcloud compute ssh kubernetes-minion-7agi - -me@kubernetes-minion-7agi:~$ sudo docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -0ffef9649265 redis:latest "redis-server /etc/r About a minute ago Up About a minute k8s_redis-master.767aef46_redis-master-controller-gb50a.default.api_4530d7b3-ae5d-11e4-bf77-42010af0d719_579ee964 -``` - -(Note that initial `docker pull` may take a few minutes, depending on network conditions. The pods will be reported as pending while the image is being downloaded.) - -### Step Two: Fire up the master service -A Kubernetes '[service](../../docs/services.md)' is a named load balancer that proxies traffic to *one or more* containers. This is done using the *labels* metadata which we defined in the redis-master pod above. As mentioned, in redis there is only one master, but we nevertheless still want to create a service for it. Why? Because it gives us a deterministic way to route to the single master using an elastic IP. - -The services in a Kubernetes cluster are discoverable inside other containers via environment variables. - -Services find the containers to load balance based on pod labels. - -The pod that you created in Step One has the label `name=redis-master`. The selector field of the service determines *which pods will receive the traffic* sent to the service, and the port and targetPort information defines what port the service proxy will run at. - -Use the file `examples/guestbook/redis-master-service.json`: - -```js -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-master", - "labels":{ - "name":"redis-master" - } - }, - "spec":{ - "ports": [ - { - "port":6379, - "targetPort":6379, - "protocol":"TCP" - } - ], - "selector":{ - "name":"redis-master" - } - } -} -``` - -to create the service by running: - -```shell -$ kubectl create -f examples/guestbook/redis-master-service.json -redis-master - -$ kubectl get services -NAME LABELS SELECTOR IP PORT -redis-master name=redis-master name=redis-master 10.0.246.242 6379 -``` - -This will cause all pods to see the redis master apparently running on :6379. 
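For example, a container in any pod created after this service exists will see the service address injected through the standard Kubernetes service environment variables (the values below are illustrative and correspond to the service IP shown above):

```shell
# Run inside a pod that was started after the redis-master service was created:
echo $REDIS_MASTER_SERVICE_HOST   # e.g. 10.0.246.242
echo $REDIS_MASTER_SERVICE_PORT   # e.g. 6379
```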
The traffic flow from slaves to masters can be described in two steps, like so. - -- A *redis slave* will connect to "port" on the *redis master service* -- Traffic will be forwarded from the service "port" (on the service node) to the *targetPort* on the pod which (a node the service listens to). - -Thus, once created, the service proxy on each minion is configured to set up a proxy on the specified port (in this case port 6379). - -### Step Three: Fire up the replicated slave pods -Although the redis master is a single pod, the redis read slaves are a 'replicated' pod. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod. The replication controller will automatically launch new pods if the number of replicas falls (this is quite easy - and fun - to test, just kill the docker processes for your pods at will and watch them come back online on a new node shortly thereafter). - -Use the file `examples/guestbook/redis-slave-controller.json`, which looks like this: - -```js -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-slave", - "labels":{ - "name":"redis-slave" - } - }, - "spec":{ - "replicas":2, - "selector":{ - "name":"redis-slave" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"redis-slave" - } - }, - "spec":{ - "containers":[ - { - "name":"slave", - "image":"kubernetes/redis-slave:v2", - "ports":[ - { - "containerPort":6379, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} -``` - -to create the replication controller by running: - -```shell -$ kubectl create -f examples/guestbook/redis-slave-controller.json -redis-slave-controller - -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -redis-master master redis name=redis-master 1 -redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2 -``` - -Once that's up you can list the pods in the cluster, to verify that the master and slaves are running: - -```shell -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -redis-master-controller-gb50a 10.244.3.7 master redis kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-master Running -redis-slave-controller-182tv 10.244.3.6 slave kubernetes/redis-slave:v2 kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-slave Running -redis-slave-controller-zwk1b 10.244.2.8 slave kubernetes/redis-slave:v2 kubernetes-minion-3vxa.c.hazel-mote-834.internal/104.154.54.6 name=redis-slave Running -``` - -You will see a single redis master pod and two redis slave pods. - -### Step Four: Create the redis slave service - -Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the slave service provides transparent load balancing to web app clients. - -The service specification for the slaves is in `examples/guestbook/redis-slave-service.json`: - -```js -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-slave", - "labels":{ - "name":"redis-slave" - } - }, - "spec":{ - "ports": [ - { - "port":6379, - "targetPort":6379, - "protocol":"TCP" - } - ], - "selector":{ - "name":"redis-slave" - } - } -} -``` - -This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself as we've done here to make it easy to locate them with the `kubectl get services -l "label=value"` command. 
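For instance, since the service defined here carries the label `name: redis-slave`, a selector query like the following (run after the service has been created in the next step) would single it out:

```shell
$ kubectl get services -l name=redis-slave
NAME          LABELS             SELECTOR           IP            PORT
redis-slave   name=redis-slave   name=redis-slave   10.0.72.62    6379
```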
- -Now that you have created the service specification, create it in your cluster by running: - -```shell -$ kubectl create -f examples/guestbook/redis-slave-service.json -redis-slave - -$ kubectl get services -NAME LABELS SELECTOR IP PORT -redis-master name=redis-master name=redis-master 10.0.246.242 6379 -redis-slave name=redis-slave name=redis-slave 10.0.72.62 6379 -``` - -### Step Five: Create the frontend pod - -This is a simple PHP server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. It exposes a simple AJAX interface, and serves an angular-based UX. Like the redis read slaves it is a replicated service instantiated by a replication controller. - -It can now leverage writes to the load balancing redis-slaves, which can be highly replicated. - -The pod is described in the file `examples/guestbook/frontend-controller.json`: - -```js -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"frontend", - "labels":{ - "name":"frontend" - } - }, - "spec":{ - "replicas":3, - "selector":{ - "name":"frontend" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"frontend" - } - }, - "spec":{ - "containers":[ - { - "name":"php-redis", - "image":"kubernetes/example-guestbook-php-redis:v2", - "ports":[ - { - "containerPort":80, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} -``` - -Using this file, you can turn up your frontend with: - -```shell -$ kubectl create -f examples/guestbook/frontend-controller.json -frontend-controller - -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3 -redis-master master redis name=redis-master 1 -redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2 -``` - -Once that's up (it may take ten to thirty seconds to create the pods) you can list the pods in the cluster, to verify that the master, slaves and frontends are running: - -```shell -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -frontend-5m1zc 10.244.1.131 php-redis kubernetes/example-guestbook-php-redis:v2 kubernetes-minion-3vxa.c.hazel-mote-834.internal/146.148.71.71 app=frontend,name=frontend,uses=redis-slave,redis-master Running -frontend-ckn42 10.244.2.134 php-redis kubernetes/example-guestbook-php-redis:v2 kubernetes-minion-by92.c.hazel-mote-834.internal/104.154.54.6 app=frontend,name=frontend,uses=redis-slave,redis-master Running -frontend-v5drx 10.244.0.128 php-redis kubernetes/example-guestbook-php-redis:v2 kubernetes-minion-wilb.c.hazel-mote-834.internal/23.236.61.63 app=frontend,name=frontend,uses=redis-slave,redis-master Running -redis-master-gb50a 10.244.3.7 master redis kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-master Running -redis-slave-182tv 10.244.3.6 slave kubernetes/redis-slave:v2 kubernetes-minion-7agi.c.hazel-mote-834.internal/104.154.54.203 name=redis-slave Running -redis-slave-zwk1b 10.244.2.8 slave kubernetes/redis-slave:v2 kubernetes-minion-3vxa.c.hazel-mote-834.internal/104.154.54.6 name=redis-slave Running -``` - -You will see a single redis master pod, two redis slaves, and three frontend pods. 
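To inspect one of the frontends more closely, you can, for example, pull its container logs. Substitute a pod name from your own listing for the illustrative name used here:

```shell
$ kubectl logs frontend-5m1zc php-redis
```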
- -The code for the PHP service looks like this: - -```php - 'tcp', - 'host' => 'redis-master', - 'port' => 6379, - ]); - - $client->set($_GET['key'], $_GET['value']); - print('{"message": "Updated"}'); - } else { - $client = new Predis\Client([ - 'scheme' => 'tcp', - 'host' => 'redis-slave', - 'port' => 6379, - ]); - - $value = $client->get($_GET['key']); - print('{"data": "' . $value . '"}'); - } -} else { - phpinfo(); -} ?> -``` - -### Step Six: Create the guestbook service. - -Just like the others, you want a service to group your frontend pods. -The service is described in the file `examples/guestbook/frontend-service.json`: - -```js -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"frontend", - "labels":{ - "name":"frontend" - } - }, - "spec":{ - "ports": [ - { - "port":80, - "targetPort":80, - "protocol":"TCP" - } - ], - "selector":{ - "name":"frontend" - } - } -} -``` - -When `createExternalLoadBalancer` is specified `"createExternalLoadBalancer":true`, it takes some time for an external IP to show up in `kubectl get services` output. -There should eventually be an internal (10.x.x.x) and an external address assigned to the frontend service. -If running a single node local setup, or single VM, you don't need `createExternalLoadBalancer`, nor do you need `publicIPs`. -Read the *Accessing the guestbook site externally* section below for details and set 10.11.22.33 accordingly (for now, you can -delete these parameters or run this - either way it won't hurt anything to have both parameters the way they are). - -```shell -$ kubectl create -f examples/guestbook/frontend-service.json -frontend - -$ kubectl get services -NAME LABELS SELECTOR IP PORT -frontend name=frontend name=frontend 10.0.93.211 8000 -redis-master name=redis-master name=redis-master 10.0.246.242 6379 -redis-slave name=redis-slave name=redis-slave 10.0.72.62 6379 -``` - -### A few Google Container Engine specifics for playing around with the services. - -In GCE, `kubectl` automatically creates forwarding rule for services with `createExternalLoadBalancer`. - -```shell -$ gcloud compute forwarding-rules list -NAME REGION IP_ADDRESS IP_PROTOCOL TARGET -frontend us-central1 130.211.188.51 TCP us-central1/targetPools/frontend -``` - -You can grab the external IP of the load balancer associated with that rule and visit `http://130.211.188.51:80`. - -In GCE, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`: - -```shell -$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-minion kubernetes-minion-80 -``` - -For GCE details about limiting traffic to specific sources, see the [GCE firewall documentation][gce-firewall-docs]. - -[cloud-console]: https://console.developer.google.com -[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls - -### Accessing the guestbook site externally - -The pods that we have set up are reachable through the frontend service, but you'll notice that 10.0.93.211 (the IP of the frontend service) is unavailable from outside of kubernetes. -Of course, if you are running kubernetes minions locally, this isn't such a big problem - the port binding will allow you to reach the guestbook website at localhost:80... but the beloved **localhost** solution obviously doesn't work in any real world scenario. 
- -Unless you have access to the `createExternalLoadBalancer` feature (cloud provider specific), you will want to set up a **publicIP on a node**, so that the service can be accessed from outside of the internal kubernetes network. This is quite easy. You simply look at your list of kubelet IP addresses, and update the service file to include a `publicIPs` string, which is mapped to an IP address of any number of your existing kubelets. This will allow all your kubelets to act as external entry points to the service (translation: this will allow you to browse the guestbook site at your kubelet IP address from your browser). - -If you are more advanced in the ops arena, note you can manually get the service IP from looking at the output of `kubectl get pods,services`, and modify your firewall using standard tools and services (firewalld, iptables, selinux) which you are already familar with. - -And of course, finally, if you are running Kubernetes locally, you can just visit http://localhost:80. - -### Step Seven: Cleanup - -If you are in a live kubernetes cluster, you can just kill the pods, using a script such as this (obviously, read through it and make sure you understand it before running it blindly, as it will kill several pods automatically for you). - -```shell -### First, kill services and controllers. -kubectl stop -f examples/guestbook/redis-master-controller.json -kubectl stop -f examples/guestbook/redis-slave-controller.json -kubectl stop -f examples/guestbook/frontend-controller.json -kubectl delete -f examples/guestbook/redis-master-service.json -kubectl delete -f examples/guestbook/redis-slave-service.json -kubectl delete -f examples/guestbook/frontend-service.json -``` - -To completely tear down a Kubernetes cluster, if you ran this from source, you can use - -```shell -$ cluster/kube-down.sh -``` - -### Troubleshooting - -the Guestbook example can fail for a variety of reasons, which makes it an effective test. Lets test the web app simply using *curl*, so we can see whats going on. - -Before we proceed, what are some setup idiosyncracies that might cause the app to fail (or, appear to fail, when merely you have a *cold start* issue. - -- running kubernetes from HEAD, in which case, there may be subtle bugs in the kubernetes core component interactions. -- running kubernetes with security turned on, in such a way that containers are restricted from doing their job. -- starting the kubernetes and not allowing enough time for all services and pods to come online, before doing testing. - - - -To post a message (Note that this call *overwrites* the messages field), so it will be reset to just one entry. - -``` -curl "localhost:8000/index.php?cmd=set&key=messages&value=jay_sais_hi" -``` - -And, to get messages afterwards... - -``` -curl "localhost:8000/index.php?cmd=get&key=messages" -``` - -1) When the *Web page hasn't come up yet*: - -When you go to localhost:8000, you might not see the page at all. Testing it with curl... -```shell - ==> default: curl: (56) Recv failure: Connection reset by peer -``` -This means the web frontend isn't up yet. Specifically, the "reset by peer" message is occurring because you are trying to access the *right port*, but *nothing is bound* to that port yet. Wait a while, possibly about 2 minutes or more, depending on your set up. Also, run a *watch* on docker ps, to see if containers are cycling on and off or not starting. 
- -```shell -$> watch -n 1 docker ps -``` - -If you run this on a node to which the frontend is assigned, you will eventually see the frontend container turn on. At that point, this basic error will likely go away. - -2) *Temporarily, while waiting for the app to come up*, you might see a few of these: - -```shell -==> default:
-==> default: Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Error while reading line from the server [tcp://10.254.168.69:6379]' in /vendor/predis/predis/lib/Predis/Connection/AbstractConnection.php:141 -``` - -The fix, just go get some coffee. When you come back, there is a good chance the service endpoint will eventually be up. If not, make sure its running and that the redis master / slave docker logs show something like this. - -```shell -$> docker logs 26af6bd5ac12 -... -[9] 20 Feb 23:47:51.015 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. -[9] 20 Feb 23:47:51.015 * The server is now ready to accept connections on port 6379 -[9] 20 Feb 23:47:52.005 * Connecting to MASTER 10.254.168.69:6379 -[9] 20 Feb 23:47:52.005 * MASTER <-> SLAVE sync started -``` - -3) *When security issues cause redis writes to fail* you may have to run *docker logs* on the redis containers: - -```shell -==> default: Fatal error: Uncaught exception 'Predis\ServerException' with message 'MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.' in /vendor/predis/predis/lib/Predis/Client.php:282" -``` -The fix is to setup SE Linux properly (don't just turn it off). Remember that you can also rebuild this entire app from scratch, using the dockerfiles, and modify while redeploying. Reach out on the mailing list if you need help doing so! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/guestbook/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/guestbook/README.md?pixel)]() diff --git a/release-0.19.0/examples/guestbook/frontend-controller.json b/release-0.19.0/examples/guestbook/frontend-controller.json deleted file mode 100644 index 8b8119b94cb..00000000000 --- a/release-0.19.0/examples/guestbook/frontend-controller.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"frontend", - "labels":{ - "name":"frontend" - } - }, - "spec":{ - "replicas":3, - "selector":{ - "name":"frontend" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"frontend" - } - }, - "spec":{ - "containers":[ - { - "name":"php-redis", - "image":"kubernetes/example-guestbook-php-redis:v2", - "ports":[ - { - "containerPort":80, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/guestbook/frontend-service.json b/release-0.19.0/examples/guestbook/frontend-service.json deleted file mode 100644 index 07e81f9942b..00000000000 --- a/release-0.19.0/examples/guestbook/frontend-service.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"frontend", - "labels":{ - "name":"frontend" - } - }, - "spec":{ - "ports": [ - { - "port":80, - "targetPort":80, - "protocol":"TCP" - } - ], - "selector":{ - "name":"frontend" - } - } -} diff --git a/release-0.19.0/examples/guestbook/php-redis/Dockerfile b/release-0.19.0/examples/guestbook/php-redis/Dockerfile deleted file mode 100644 index 3cf7c2cfa20..00000000000 --- a/release-0.19.0/examples/guestbook/php-redis/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM brendanburns/php - -ADD index.php /var/www/index.php -ADD controllers.js /var/www/controllers.js -ADD index.html 
/var/www/index.html - -CMD /run.sh diff --git a/release-0.19.0/examples/guestbook/php-redis/controllers.js b/release-0.19.0/examples/guestbook/php-redis/controllers.js deleted file mode 100644 index 1ea5bdce18f..00000000000 --- a/release-0.19.0/examples/guestbook/php-redis/controllers.js +++ /dev/null @@ -1,29 +0,0 @@ -var redisApp = angular.module('redis', ['ui.bootstrap']); - -/** - * Constructor - */ -function RedisController() {} - -RedisController.prototype.onRedis = function() { - this.scope_.messages.push(this.scope_.msg); - this.scope_.msg = ""; - var value = this.scope_.messages.join(); - this.http_.get("/index.php?cmd=set&key=messages&value=" + value) - .success(angular.bind(this, function(data) { - this.scope_.redisResponse = "Updated."; - })); -}; - -redisApp.controller('RedisCtrl', function ($scope, $http, $location) { - $scope.controller = new RedisController(); - $scope.controller.scope_ = $scope; - $scope.controller.location_ = $location; - $scope.controller.http_ = $http; - - $scope.controller.http_.get("/index.php?cmd=get&key=messages") - .success(function(data) { - console.log(data); - $scope.messages = data.data.split(","); - }); -}); diff --git a/release-0.19.0/examples/guestbook/php-redis/index.html b/release-0.19.0/examples/guestbook/php-redis/index.html deleted file mode 100644 index 81328b4fcd8..00000000000 --- a/release-0.19.0/examples/guestbook/php-redis/index.html +++ /dev/null @@ -1,25 +0,0 @@ - - - Guestbook - - - - - - -
- [index.html omitted: a minimal Guestbook page that loads controllers.js and renders the Angular-bound {{msg}} entries]
- - diff --git a/release-0.19.0/examples/guestbook/php-redis/index.php b/release-0.19.0/examples/guestbook/php-redis/index.php deleted file mode 100644 index 18bff077579..00000000000 --- a/release-0.19.0/examples/guestbook/php-redis/index.php +++ /dev/null @@ -1,33 +0,0 @@ - 'tcp', - 'host' => 'redis-master', - 'port' => 6379, - ]); - - $client->set($_GET['key'], $_GET['value']); - print('{"message": "Updated"}'); - } else { - $client = new Predis\Client([ - 'scheme' => 'tcp', - 'host' => 'redis-slave', - 'port' => 6379, - ]); - - $value = $client->get($_GET['key']); - print('{"data": "' . $value . '"}'); - } -} else { - phpinfo(); -} ?> diff --git a/release-0.19.0/examples/guestbook/redis-master-controller.json b/release-0.19.0/examples/guestbook/redis-master-controller.json deleted file mode 100644 index add8ba79904..00000000000 --- a/release-0.19.0/examples/guestbook/redis-master-controller.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-master", - "labels":{ - "name":"redis-master" - } - }, - "spec":{ - "replicas":1, - "selector":{ - "name":"redis-master" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"redis-master" - } - }, - "spec":{ - "containers":[ - { - "name":"master", - "image":"redis", - "ports":[ - { - "containerPort":6379, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/guestbook/redis-master-service.json b/release-0.19.0/examples/guestbook/redis-master-service.json deleted file mode 100644 index 101d9ea965c..00000000000 --- a/release-0.19.0/examples/guestbook/redis-master-service.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-master", - "labels":{ - "name":"redis-master" - } - }, - "spec":{ - "ports": [ - { - "port":6379, - "targetPort":6379, - "protocol":"TCP" - } - ], - "selector":{ - "name":"redis-master" - } - } -} diff --git a/release-0.19.0/examples/guestbook/redis-slave-controller.json b/release-0.19.0/examples/guestbook/redis-slave-controller.json deleted file mode 100644 index 4a668fe091b..00000000000 --- a/release-0.19.0/examples/guestbook/redis-slave-controller.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "kind":"ReplicationController", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-slave", - "labels":{ - "name":"redis-slave" - } - }, - "spec":{ - "replicas":2, - "selector":{ - "name":"redis-slave" - }, - "template":{ - "metadata":{ - "labels":{ - "name":"redis-slave" - } - }, - "spec":{ - "containers":[ - { - "name":"slave", - "image":"kubernetes/redis-slave:v2", - "ports":[ - { - "containerPort":6379, - "protocol":"TCP" - } - ] - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/guestbook/redis-slave-service.json b/release-0.19.0/examples/guestbook/redis-slave-service.json deleted file mode 100644 index 2b866b6f94a..00000000000 --- a/release-0.19.0/examples/guestbook/redis-slave-service.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "kind":"Service", - "apiVersion":"v1beta3", - "metadata":{ - "name":"redis-slave", - "labels":{ - "name":"redis-slave" - } - }, - "spec":{ - "ports": [ - { - "port":6379, - "targetPort":6379, - "protocol":"TCP" - } - ], - "selector":{ - "name":"redis-slave" - } - } -} diff --git a/release-0.19.0/examples/guestbook/redis-slave/Dockerfile b/release-0.19.0/examples/guestbook/redis-slave/Dockerfile deleted file mode 100644 index 8167438bbea..00000000000 --- a/release-0.19.0/examples/guestbook/redis-slave/Dockerfile +++ /dev/null @@ 
-1,7 +0,0 @@ -FROM redis - -ADD run.sh /run.sh - -RUN chmod a+x /run.sh - -CMD /run.sh diff --git a/release-0.19.0/examples/guestbook/redis-slave/run.sh b/release-0.19.0/examples/guestbook/redis-slave/run.sh deleted file mode 100755 index bf48f27c015..00000000000 --- a/release-0.19.0/examples/guestbook/redis-slave/run.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -redis-server --slaveof redis-master 6379 diff --git a/release-0.19.0/examples/hazelcast/Dockerfile b/release-0.19.0/examples/hazelcast/Dockerfile deleted file mode 100644 index 55963290c1a..00000000000 --- a/release-0.19.0/examples/hazelcast/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM quay.io/pires/docker-jre:8u45-2 - -MAINTAINER Paulo Pires - -EXPOSE 5701 - -RUN \ - curl -Lskj https://github.com/pires/hazelcast-kubernetes-bootstrapper/releases/download/0.3.1/hazelcast-kubernetes-bootstrapper-0.3.1.jar \ - -o /bootstrapper.jar - -CMD java -jar /bootstrapper.jar diff --git a/release-0.19.0/examples/hazelcast/README.md b/release-0.19.0/examples/hazelcast/README.md deleted file mode 100644 index b8836d0b80a..00000000000 --- a/release-0.19.0/examples/hazelcast/README.md +++ /dev/null @@ -1,214 +0,0 @@ -## Cloud Native Deployments of Hazelcast using Kubernetes - -The following document describes the development of a _cloud native_ [Hazelcast](http://hazelcast.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Hazelcast ```bootstrapper``` is used to enable Hazelcast to dynamically discover Hazelcast nodes that have already joined the cluster. - -Any topology changes are communicated and handled by Hazelcast nodes themselves. - -This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_. - -### Prerequisites -This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform. - -### A note for the impatient -This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end. - -### Sources - -Source is freely available at: -* Hazelcast Discovery - https://github.com/pires/hazelcast-kubernetes-bootstrapper -* Dockerfile - https://github.com/pires/hazelcast-kubernetes -* Docker Trusted Build - https://registry.hub.docker.com/u/pires/hazelcast-k8s - -### Simple Single Pod Hazelcast Node -In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/pods.md). 
A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. - -In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition. - - -### Adding a Hazelcast Service -In Kubernetes a _[Service](../../docs/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods. - -Here is the service description: -```yaml -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: hazelcast - name: hazelcast -spec: - ports: - - port: 5701 - targetPort: 5701 - selector: - name: hazelcast -``` - -The important thing to note here is the `selector`. It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service. - -Create this service as follows: -```sh -$ kubectl create -f hazelcast-service.yaml -``` - -### Adding replicated nodes -The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster. - -In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of it's set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with it's desired state. - -Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Hazelcast Pod. - -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: hazelcast - name: hazelcast -spec: - replicas: 1 - selector: - name: hazelcast - template: - metadata: - labels: - name: hazelcast - spec: - containers: - - resources: - limits: - cpu: 1 - image: quay.io/pires/hazelcast-kubernetes:0.3.1 - name: hazelcast - env: - - name: "DNS_DOMAIN" - value: "cluster.local" - ports: - - containerPort: 5701 - name: hazelcast -``` - -There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.3.1`. This is a `busybox` installation with JRE 8. However it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps an Hazelcast instance accordingle. The `HazelcastDiscoveryController` discovers the Kubernetes API Server using the built in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later). - -You may also note that we tell Kubernetes that the container exposes the `hazelcast` port. Finally, we tell the cluster manager that we need 1 cpu core. 
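A rough manual equivalent of what the discovery controller does is to ask the cluster for every pod carrying the `name: hazelcast` label (an illustrative query only; the bootstrapper queries the API server directly from inside its pod rather than going through `kubectl`):

```sh
$ kubectl get pods -l name=hazelcast
```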
- -The bulk of the replication controller config is actually identical to the Hazelcast pod declaration above, it simply gives the controller a recipe to use when creating new pods. The other parts are the `selector` which contains the controller's selector query, and the `replicas` parameter which specifies the desired number of replicas, in this case 1. - -Last but not least, we set `DNS_DOMAIN` environment variable according to your Kubernetes clusters DNS configuration. - -Create this controller: - -```sh -$ kubectl create -f hazelcast-controller.yaml -``` - -After the controller provisions successfully the pod, you can query the service endpoints: -```sh -$ kubectl get endpoints hazelcast -o yaml -apiVersion: v1beta3 -kind: Endpoints -metadata: - creationTimestamp: 2015-05-04T17:43:40Z - labels: - name: hazelcast - name: hazelcast - namespace: default - resourceVersion: "120480" - selfLink: /api/v1beta3/namespaces/default/endpoints/hazelcast - uid: 19a22aa9-f285-11e4-b38f-42010af0bbf9 -subsets: -- addresses: - - IP: 10.245.2.68 - targetRef: - kind: Pod - name: hazelcast - namespace: default - resourceVersion: "120479" - uid: d7238173-f283-11e4-b38f-42010af0bbf9 - ports: - - port: 5701 - protocol: TCP -``` - -You can see that the _Service_ has found the pod created by the replication controller. - -Now it gets even more interesting. - -Let's scale our cluster to 2 pods: -```sh -$ kubectl scale rc hazelcast --replicas=2 -``` - -Now if you list the pods in your cluster, you should see two hazelcast pods: - -```sh -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -hazelcast-pkyzd 10.244.90.3 e2e-test-minion-vj7k/104.197.8.214 name=hazelcast Running 14 seconds - hazelcast quay.io/pires/hazelcast-kubernetes:0.3.1 Running 2 seconds -hazelcast-ulkws 10.244.66.2 e2e-test-minion-2x1f/146.148.62.37 name=hazelcast Running 7 seconds - hazelcast quay.io/pires/hazelcast-kubernetes:0.3.1 Running 6 seconds -``` - -To prove that this all works, you can use the `log` command to examine the logs of one pod, for example: - -```sh -$ kubectl log hazelcast-ulkws hazelcast -2015-05-09 22:06:20.016 INFO 5 --- [ main] com.github.pires.hazelcast.Application : Starting Application v0.2-SNAPSHOT on hazelcast-enyli with PID 5 (/bootstrapper.jar started by root in /) -2015-05-09 22:06:20.071 INFO 5 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@5424f110: startup date [Sat May 09 22:06:20 GMT 2015]; root of context hierarchy -2015-05-09 22:06:21.511 INFO 5 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup -2015-05-09 22:06:21.549 INFO 5 --- [ main] c.g.p.h.HazelcastDiscoveryController : Asking k8s registry at https://kubernetes.default.cluster.local.. -2015-05-09 22:06:22.031 INFO 5 --- [ main] c.g.p.h.HazelcastDiscoveryController : Found 2 pods running Hazelcast. -2015-05-09 22:06:22.176 INFO 5 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.4.2] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [10.244.90.3, 10.244.66.2] -2015-05-09 22:06:22.177 INFO 5 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.4.2] Prefer IPv4 stack is true. 
-2015-05-09 22:06:22.189 INFO 5 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.4.2] Picked Address[10.244.66.2]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true -2015-05-09 22:06:22.642 INFO 5 --- [ main] com.hazelcast.spi.OperationService : [10.244.66.2]:5701 [someGroup] [3.4.2] Backpressure is disabled -2015-05-09 22:06:22.647 INFO 5 --- [ main] c.h.spi.impl.BasicOperationScheduler : [10.244.66.2]:5701 [someGroup] [3.4.2] Starting with 2 generic operation threads and 2 partition operation threads. -2015-05-09 22:06:22.796 INFO 5 --- [ main] com.hazelcast.system : [10.244.66.2]:5701 [someGroup] [3.4.2] Hazelcast 3.4.2 (20150326 - f6349a4) starting at Address[10.244.66.2]:5701 -2015-05-09 22:06:22.798 INFO 5 --- [ main] com.hazelcast.system : [10.244.66.2]:5701 [someGroup] [3.4.2] Copyright (C) 2008-2014 Hazelcast.com -2015-05-09 22:06:22.800 INFO 5 --- [ main] com.hazelcast.instance.Node : [10.244.66.2]:5701 [someGroup] [3.4.2] Creating TcpIpJoiner -2015-05-09 22:06:22.801 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTING -2015-05-09 22:06:23.108 INFO 5 --- [cached.thread-2] com.hazelcast.nio.tcp.SocketConnector : [10.244.66.2]:5701 [someGroup] [3.4.2] Connecting to /10.244.90.3:5701, timeout: 0, bind-any: true -2015-05-09 22:06:23.182 INFO 5 --- [cached.thread-2] c.h.nio.tcp.TcpIpConnectionManager : [10.244.66.2]:5701 [someGroup] [3.4.2] Established socket connection between /10.244.66.2:48051 and 10.244.90.3/10.244.90.3:5701 -2015-05-09 22:06:29.158 INFO 5 --- [ration.thread-1] com.hazelcast.cluster.ClusterService : [10.244.66.2]:5701 [someGroup] [3.4.2] - -Members [2] { - Member [10.244.90.3]:5701 - Member [10.244.66.2]:5701 this -} - -2015-05-09 22:06:31.177 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTED -``` - -Now let's scale our cluster to 4 nodes: -```sh -$ kubectl scale rc hazelcast --replicas=4 -``` - -Examine the status again by checking a node’s log and you should see the 4 members connected. - -### tl; dr; -For those of you who are impatient, here is the summary of the commands we ran in this tutorial. 
- -```sh -# create a service to track all hazelcast nodes -kubectl create -f hazelcast-service.yaml - -# create a replication controller to replicate hazelcast nodes -kubectl create -f hazelcast-controller.yaml - -# scale up to 2 nodes -kubectl scale rc hazelcast --replicas=2 - -# scale up to 4 nodes -kubectl scale rc hazelcast --replicas=4 -``` - -### Hazelcast Discovery Source - -See [here](https://github.com/pires/hazelcast-kubernetes-bootstrapper/blob/master/src/main/java/com/github/pires/hazelcast/HazelcastDiscoveryController.java) - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/hazelcast/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/hazelcast/README.md?pixel)]() diff --git a/release-0.19.0/examples/hazelcast/hazelcast-controller.yaml b/release-0.19.0/examples/hazelcast/hazelcast-controller.yaml deleted file mode 100644 index 86496ef665f..00000000000 --- a/release-0.19.0/examples/hazelcast/hazelcast-controller.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: hazelcast - name: hazelcast -spec: - replicas: 1 - selector: - name: hazelcast - template: - metadata: - labels: - name: hazelcast - spec: - containers: - - resources: - limits: - cpu: 1 - image: quay.io/pires/hazelcast-kubernetes:0.3.1 - name: hazelcast - env: - - name: "DNS_DOMAIN" - value: "cluster.local" - ports: - - containerPort: 5701 - name: hazelcast diff --git a/release-0.19.0/examples/hazelcast/hazelcast-service.yaml b/release-0.19.0/examples/hazelcast/hazelcast-service.yaml deleted file mode 100644 index 1ea5a121209..00000000000 --- a/release-0.19.0/examples/hazelcast/hazelcast-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: hazelcast - name: hazelcast -spec: - ports: - - port: 5701 - targetPort: 5701 - selector: - name: hazelcast diff --git a/release-0.19.0/examples/iscsi/README.md b/release-0.19.0/examples/iscsi/README.md deleted file mode 100644 index 97731de8849..00000000000 --- a/release-0.19.0/examples/iscsi/README.md +++ /dev/null @@ -1,65 +0,0 @@ -## Step 1. Setting up iSCSI target and iSCSI initiator -**Setup A.** On Fedora 21 nodes - -If you use Fedora 21 on Kubernetes node, then first install iSCSI initiator on the node: - - # yum -y install iscsi-initiator-utils - - -then edit */etc/iscsi/initiatorname.iscsi* and */etc/iscsi/iscsid.conf* to match your iSCSI target configuration. - -I mostly followed these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2) to setup iSCSI initiator and these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi) to setup iSCSI target. - -**Setup B.** On Unbuntu 12.04 and Debian 7 nodes on GCE - -GCE does not provide preconfigured Fedora 21 image, so I set up the iSCSI target on a preconfigured Ubuntu 12.04 image, mostly following these [instructions](http://www.server-world.info/en/note?os=Ubuntu_12.04&p=iscsi). My Kubernetes cluster on GCE was running Debian 7 images, so I followed these [instructions](http://www.server-world.info/en/note?os=Debian_7.0&p=iscsi&f=2) to set up the iSCSI initiator. - -##Step 2. Creating the pod with iSCSI persistent storage -Once you have installed iSCSI initiator and new Kubernetes, you can create a pod based on my example *iscsi.json*. 
In the pod JSON, you need to provide the *targetPortal* (the iSCSI target's **IP** address and *port*, if not the default port 3260), the target's *iqn*, the *lun*, the filesystem type (*fsType*) that has been created on the LUN, and a *readOnly* boolean.

Once your pod definition is ready, create it from the Kubernetes master:

```console
kubectl create -f your_new_pod.json
```

Here is my command and output:

```console
# kubectl create -f examples/iscsi/iscsi.json
# kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
iscsipd 10.244.3.14 kubernetes-minion-bz1p/104.154.61.231 Running About an hour
 iscsipd-rw kubernetes/pause Running About an hour
 iscsipd-ro kubernetes/pause Running About an hour
```

On the Kubernetes node, the mount output shows the two iSCSI volumes:

```console
# mount |grep kub
/dev/sdb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/iscsi/10.240.205.13:3260-iqn-iqn.2014-12.world.server:storage.target1-lun-0 type ext4 (ro,relatime,data=ordered)
/dev/sdb on /var/lib/kubelet/pods/e36158ce-f8d8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/iscsi/iscsi/10.240.205.13:3260-iqn-iqn.2014-12.world.server:storage.target1-lun-1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdc on /var/lib/kubelet/pods/e36158ce-f8d8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-rw type xfs (rw,relatime,attr2,inode64,noquota)
```

If you ssh to that machine, you can run `docker ps` to see the actual pod.

```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc051196e7af kubernetes/pause:latest "/pause" About an hour ago Up About an hour k8s_iscsipd-rw.ff2d2e9f_iscsipd_default_e36158ce-f8d8-11e4-9ae7-42010af01964_26f3a457
8aa981443cf4 kubernetes/pause:latest "/pause" About an hour ago Up About an hour k8s_iscsipd-ro.d7752e8f_iscsipd_default_e36158ce-f8d8-11e4-9ae7-42010af01964_4939633d
```

Run *docker inspect* and you will find that the containers mounted the host directory into their */mnt/iscsipd* directory.
-```console -# docker inspect --format '{{index .Volumes "/mnt/iscsipd"}}' cc051196e7af -/var/lib/kubelet/pods/75e0af2b-f8e8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-rw -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/iscsi/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/iscsi/README.md?pixel)]() diff --git a/release-0.19.0/examples/iscsi/iscsi.json b/release-0.19.0/examples/iscsi/iscsi.json deleted file mode 100644 index 439832b8049..00000000000 --- a/release-0.19.0/examples/iscsi/iscsi.json +++ /dev/null @@ -1,53 +0,0 @@ -{ - "apiVersion": "v1beta3", - "kind": "Pod", - "metadata": { - "name": "iscsipd" - }, - "spec": { - "containers": [ - { - "name": "iscsipd-ro", - "image": "kubernetes/pause", - "volumeMounts": [ - { - "mountPath": "/mnt/iscsipd", - "name": "iscsipd-ro" - } - ] - }, - { - "name": "iscsipd-rw", - "image": "kubernetes/pause", - "volumeMounts": [ - { - "mountPath": "/mnt/iscsipd", - "name": "iscsipd-rw" - } - ] - } - ], - "volumes": [ - { - "name": "iscsipd-ro", - "iscsi": { - "targetPortal": "10.16.154.81:3260", - "iqn": "iqn.2014-12.world.server:storage.target01", - "lun": 0, - "fsType": "ext4", - "readOnly": true - } - }, - { - "name": "iscsipd-rw", - "iscsi": { - "targetPortal": "10.16.154.81:3260", - "iqn": "iqn.2014-12.world.server:storage.target01", - "lun": 1, - "fsType": "xfs", - "readOnly": false - } - } - ] - } -} diff --git a/release-0.19.0/examples/k8petstore/README.md b/release-0.19.0/examples/k8petstore/README.md deleted file mode 100644 index 541cdc41b61..00000000000 --- a/release-0.19.0/examples/k8petstore/README.md +++ /dev/null @@ -1,117 +0,0 @@ -## Welcome to k8PetStore - -This is a follow up to the [Guestbook Example](../guestbook/README.md)'s [Go implementation](../guestbook-go/). - -- It leverages the same components (redis, Go REST API) as the guestbook application -- It comes with visualizations for graphing whats happening in Redis transactions, along with commandline printouts of transaction throughput -- It is hackable : you can build all images from the files is in this repository (With the exception of the data generator, which is apache bigtop). -- It generates massive load using a semantically rich, realistic transaction simulator for petstores - -This application will run a web server which returns REDIS records for a petstore application. -It is meant to simulate and test high load on kubernetes or any other docker based system. - -If you are new to kubernetes, and you haven't run guestbook yet, - -you might want to stop here and go back and run guestbook app first. - -The guestbook tutorial will teach you a lot about the basics of kubernetes, and we've tried not to be redundant here. - -## Architecture of this SOA - -A diagram of the overall architecture of this application can be seen in [arch.dot](arch.dot) (you can paste the contents in any graphviz viewer, including online ones such as http://sandbox.kidstrythisathome.com/erdos/. - -## Docker image dependencies - -Reading this section is optional, only if you want to rebuild everything from scratch. - -This project depends on three docker images which you can build for yourself and save -in your dockerhub "dockerhub-name". - -Since these images are already published under other parties like redis, jayunit100, and so on, -so you don't need to build the images to run the app. 
-

If you do want to build the images, you will need to build and push the images in this repository.

For a list of those images, see the `build-and-push` shell script - it builds and pushes all the images for you; just modify the dockerhub user name in it accordingly.

## Get started with the WEBAPP

The web app is written in Go, and borrowed from the original Guestbook example by Brendan Burns.

We have extended it to do some error reporting, persisting of JSON petstore transactions (not much different than guestbook entries), and supporting of additional REST calls, like LLEN, which returns the total # of transactions in the database.

To work on the app, just cd to the `dev` directory, and follow the instructions. You can easily edit it on your local machine by installing redis and go. Then you can use the `Vagrantfile` in this top level directory to launch a minimal version of the app in pure docker containers.

If that is all working, you can finally run `k8petstore.sh` in any kubernetes cluster, and run the app at scale.

## Set up the data generator (optional)

The web front end provides users an interface for watching pet store transactions in real time as they occur.

To generate those transactions, you can use the bigpetstore data generator. Alternatively, you could just write a shell script which calls "curl localhost:3000/k8petstore/rpush/blahblahblah" over and over again :). But that's not nearly as fun, and it's not a good test of a real world scenario where payloads scale and have lots of information content.

Similarly, you can locally run and test the data generator code, which is Java based; you can pull it down directly from apache bigtop.

Directions for that are here : https://github.com/apache/bigtop/tree/master/bigtop-bigpetstore/bigpetstore-transaction-queue

You will likely want to check out the branch 2b2392bf135e9f1256bd0b930f05ae5aef8bbdcb, which is the exact commit which the current k8petstore was tested against.

## Now what?

Once you have done the above 3 steps, you have a working, from-source, locally runnable version of the k8petstore app. Now we can try to run it in kubernetes.

## Hacking, testing, benchmarking

Once the app is running, you can go to publicIP:3000 (the `PUBLIC_IP` parameter in the script). In your browser, you should see a chart and the k8petstore title page, as well as an indicator of transaction throughput, and so on. You can modify the HTML pages, add new REST paths to the Go app, and so on.

## Running in kubernetes

Now that you are done hacking around on the app, you can run it in kubernetes. To do this, you will want to rebuild the docker images (most likely just the Go web-server app; you are less likely to need to change the other images). Then you will push those images to dockerhub.

Now, how to run the entire application in kubernetes?

To simplify running this application, we have a single file, k8petstore.sh, which writes out json files on to disk. This allows us to have dynamic parameters, without needing to worry about managing multiple json files.

You might want to change it to point to your customized Go image, if you chose to modify things, or to change the number of data generators (more generators will create more load on the redis master).

So, to run this app in kubernetes, simply run [The all in one k8petstore.sh shell script](k8petstore.sh).
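As a rough sketch, an invocation might look like the following (the argument order and defaults are taken from the script itself, which is reproduced later in this example; substitute values for your own cluster):

```sh
# args: kubectl binary, image version, public IP, seconds to measure throughput,
#       web servers, load generators, redis slaves, run tests (1 = yes), namespace
./k8petstore.sh kubectl r.2.8.19 10.1.4.89 1000 1 1 1 1 k8petstore
```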
- -Note that there are a few , self explanatory parameters to set at the top of it. - -Most importantly, the Public IPs parameter, so that you can checkout the web ui (at $PUBLIC_IP:3000), which will show a plot and read outs of transaction throughput. - -## Future - -In the future, we plan to add cassandra support. Redis is a fabulous in memory data store, but it is not meant for truly available and resilient storage. - -Thus we plan to add another tier of queueing, which empties the REDIS transactions into a cassandra store which persists. - -## Questions - -For questions on running this app, you can ask on the google containers group (freenode ~ google-containers@googlegroups.com or #google-containers on IRC) - -For questions about bigpetstore, and how the data is generated, ask on the apache bigtop mailing list. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/k8petstore/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/k8petstore/README.md?pixel)]() diff --git a/release-0.19.0/examples/k8petstore/Vagrantfile b/release-0.19.0/examples/k8petstore/Vagrantfile deleted file mode 100644 index a96af767b65..00000000000 --- a/release-0.19.0/examples/k8petstore/Vagrantfile +++ /dev/null @@ -1,37 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -require 'fileutils' - -#$fes = 1 -#$rslavess = 1 - -Vagrant.configure("2") do |config| - - config.vm.define "rmaster" do |rm| - rm.vm.provider "docker" do |d| - d.vagrant_vagrantfile = "./dev/hosts/Vagrantfile" - d.build_dir = "redis-master" - d.name = "rmaster" - d.create_args = ["--privileged=true", "-m", "1g"] - #d.ports = [ "6379:6379" ] - d.remains_running = true - end - end - - config.vm.define "frontend" do |fe| - fe.vm.provider "docker" do |d| - d.vagrant_vagrantfile = "./dev/hosts/Vagrantfile" - d.build_dir = "web-server" - d.name = "web-server" - d.create_args = ["--privileged=true"] - d.remains_running = true - d.create_args = d.create_args << "--link" << "rmaster:rmaster" - d.ports = ["3000:3000"] - d.env = {"REDISMASTER_SERVICE_HOST"=>"rmaster","REDISMASTER_SERVICE_PORT"=>"6379"} - end - end - - ### Todo , add data generator. - -end diff --git a/release-0.19.0/examples/k8petstore/bps-data-generator/README.md b/release-0.19.0/examples/k8petstore/bps-data-generator/README.md deleted file mode 100644 index 09b18fc9748..00000000000 --- a/release-0.19.0/examples/k8petstore/bps-data-generator/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# How to generate the bps-data-generator container # - -This container is maintained as part of the apache bigtop project. - -To create it, simply - -`git clone https://github.com/apache/bigtop` - -and checkout the last exact version (will be updated periodically). - -`git checkout -b aNewBranch 2b2392bf135e9f1256bd0b930f05ae5aef8bbdcb` - -then, cd to bigtop-bigpetstore/bigpetstore-transaction-queue, and run the docker file, i.e. - -`Docker build -t -i jayunit100/bps-transaction-queue`. 
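Note: the `Docker build` line above appears to have been garbled; the conventional form of that command (keeping the image name from the original, with `.` as the build context) would be something like:

```sh
docker build -t jayunit100/bps-transaction-queue .
```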
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/k8petstore/bps-data-generator/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/k8petstore/bps-data-generator/README.md?pixel)]() diff --git a/release-0.19.0/examples/k8petstore/build-push-containers.sh b/release-0.19.0/examples/k8petstore/build-push-containers.sh deleted file mode 100755 index 7733b6fdd48..00000000000 --- a/release-0.19.0/examples/k8petstore/build-push-containers.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -#K8PetStore version is tied to the redis version. We will add more info to version tag later. -#Change the 'jayunit100' string below to you're own dockerhub name and run this script. -#It will build all the containers for this application and publish them to your dockerhub account -version="r.2.8.19" -docker build -t jayunit100/k8-petstore-redis:$version ./redis/ -docker build -t jayunit100/k8-petstore-redis-master:$version ./redis-master -docker build -t jayunit100/k8-petstore-redis-slave:$version ./redis-slave -docker build -t jayunit100/k8-petstore-web-server:$version ./web-server - -docker push jayunit100/k8-petstore-redis:$version -docker push jayunit100/k8-petstore-redis-master:$version -docker push jayunit100/k8-petstore-redis-slave:$version -docker push jayunit100/k8-petstore-web-server:$version diff --git a/release-0.19.0/examples/k8petstore/dev/README b/release-0.19.0/examples/k8petstore/dev/README deleted file mode 100644 index 3b495ea7034..00000000000 --- a/release-0.19.0/examples/k8petstore/dev/README +++ /dev/null @@ -1,35 +0,0 @@ -### Local development - -1) Install Go - -2) Install Redis - -Now start a local redis instance - -``` -redis-server -``` - -And run the app - -``` -export GOPATH=~/Development/k8hacking/k8petstore/web-server/ -cd $GOPATH/src/main/ -## Now, you're in the local dir to run the app. Go get its depenedencies. -go get -go run PetStoreBook.go -``` - -Once the app works the way you want it to, test it in the vagrant recipe below. This will gaurantee that you're local environment isn't doing something that breaks the containers at the versioning level. - -### Testing - -This folder can be used by anyone interested in building and developing the k8petstore application. - -This is for dev and test. - -`vagrant up` gets you a cluster with the app's core components running. - -You can rename Vagrantfile_atomic to Vagrantfile if you want to try to test in atomic instead. 
- -** Now you can run the code on the kubernetes cluster with reasonable assurance that any problems you run into are not bugs in the code itself :) * diff --git a/release-0.19.0/examples/k8petstore/dev/Vagrantfile b/release-0.19.0/examples/k8petstore/dev/Vagrantfile deleted file mode 100755 index c4f19b2aa4d..00000000000 --- a/release-0.19.0/examples/k8petstore/dev/Vagrantfile +++ /dev/null @@ -1,44 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -require 'fileutils' - -#$fes = 1 -#$rslavess = 1 - -Vagrant.configure("2") do |config| - - config.vm.define "rmaster" do |rm| - rm.vm.provider "docker" do |d| - d.vagrant_vagrantfile = "./hosts/Vagrantfile" - d.build_dir = "../redis-master" - d.name = "rmaster" - d.create_args = ["--privileged=true"] - #d.ports = [ "6379:6379" ] - d.remains_running = true - end - end - - puts "sleep 20 to make sure container is up..." - sleep(20) - puts "resume" - - config.vm.define "frontend" do |fe| - fe.vm.provider "docker" do |d| - d.vagrant_vagrantfile = "./hosts/Vagrantfile" - d.build_dir = "../web-server" - d.name = "web-server" - d.create_args = ["--privileged=true"] - d.remains_running = true - d.create_args = d.create_args << "--link" << "rmaster:rmaster" - d.ports = ["3000:3000"] - d.env = {"REDISMASTER_SERVICE_HOST"=>"rmaster","REDISMASTER_SERVICE_PORT"=>"6379"} - end - end - - - - ### Todo , add data generator. - - -end diff --git a/release-0.19.0/examples/k8petstore/dev/hosts/Vagrantfile b/release-0.19.0/examples/k8petstore/dev/hosts/Vagrantfile deleted file mode 100644 index 72e86d72621..00000000000 --- a/release-0.19.0/examples/k8petstore/dev/hosts/Vagrantfile +++ /dev/null @@ -1,11 +0,0 @@ -VAGRANTFILE_API_VERSION = "2" - -Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| - config.vm.box = "jayunit100/centos7" - config.vm.provision "docker" - config.vm.provision "shell", inline: "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill" - config.vm.provision "shell", inline: "yum install -y git && service firewalld stop && service docker restart" - config.vm.provision "shell", inline: "docker ps -a | awk '{print $1}' | xargs --no-run-if-empty docker rm -f || ls" - config.vm.network :forwarded_port, guest: 3000, host: 3000 - -end diff --git a/release-0.19.0/examples/k8petstore/dev/test.sh b/release-0.19.0/examples/k8petstore/dev/test.sh deleted file mode 100755 index 53d42a8c5b7..00000000000 --- a/release-0.19.0/examples/k8petstore/dev/test.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -## First set up the host VM. That ensures -## we avoid vagrant race conditions. -set -x - -cd hosts/ -echo "note: the VM must be running before you try this" -echo "if not already running, cd to hosts and run vagrant up" -vagrant provision -#echo "removing containers" -#vagrant ssh -c "sudo docker rm -f $(docker ps -a -q)" -cd .. - -## Now spin up the docker containers -## these will run in the ^ host vm above. 
- -vagrant up - -## Finally, curl the length, it should be 3 . - -x=`curl localhost:3000/llen` - -for i in `seq 1 100` do - if [ x$x == "x3" ]; then - echo " passed $3 " - exit 0 - else - echo " FAIL" - fi -done - -exit 1 # if we get here the test obviously failed. diff --git a/release-0.19.0/examples/k8petstore/k8petstore.dot b/release-0.19.0/examples/k8petstore/k8petstore.dot deleted file mode 100644 index 539132fb3aa..00000000000 --- a/release-0.19.0/examples/k8petstore/k8petstore.dot +++ /dev/null @@ -1,9 +0,0 @@ - digraph k8petstore { - - USERS -> publicIP_proxy -> web_server; - bps_data_generator -> web_server [arrowhead = crow, label = "http://$FRONTEND_SERVICE_HOST:3000/rpush/k8petstore/{name..address..,product=..."]; - external -> web_server [arrowhead = crow, label=" http://$FRONTEND_SERVICE_HOST/k8petstore/llen:3000"]; - web_server -> redis_master [label=" RESP : k8petstore, llen"]; - redis_master -> redis_slave [arrowhead = crow] [label="replication (one-way)"]; -} - diff --git a/release-0.19.0/examples/k8petstore/k8petstore.sh b/release-0.19.0/examples/k8petstore/k8petstore.sh deleted file mode 100755 index 5a5393435cf..00000000000 --- a/release-0.19.0/examples/k8petstore/k8petstore.sh +++ /dev/null @@ -1,287 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -echo "WRITING KUBE FILES , will overwrite the jsons, then testing pods. is kube clean ready to go?" - - -#Args below can be overriden when calling from cmd line. -#Just send all the args in order. -#for dev/test you can use: -#kubectl=$GOPATH/src/github.com/GoogleCloudPlatform/kubernetes/cluster/kubectl.sh" -kubectl="kubectl" -VERSION="r.2.8.19" -PUBLIC_IP="10.1.4.89" # ip which we use to access the Web server. -_SECONDS=1000 # number of seconds to measure throughput. -FE="1" # amount of Web server -LG="1" # amount of load generators -SLAVE="1" # amount of redis slaves -TEST="1" # 0 = Dont run tests, 1 = Do run tests. -NS="k8petstore" # namespace - -kubectl="${1:-$kubectl}" -VERSION="${2:-$VERSION}" -PUBLIC_IP="${3:-$PUBLIC_IP}" # ip which we use to access the Web server. -_SECONDS="${4:-$_SECONDS}" # number of seconds to measure throughput. -FE="${5:-$FE}" # amount of Web server -LG="${6:-$LG}" # amount of load generators -SLAVE="${7:-$SLAVE}" # amount of redis slaves -TEST="${8:-$TEST}" # 0 = Dont run tests, 1 = Do run tests. 
-NS="${9:-$NS}" # namespace - -echo "Running w/ args: kubectl $kubectl version $VERSION ip $PUBLIC_IP sec $_SECONDS fe $FE lg $LG slave $SLAVE test $TEST NAMESPACE $NS" -function create { - -cat << EOF > fe-rc.json -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "fectrl", - "labels": {"name": "frontend"} - }, - "spec": { - "replicas": $FE, - "selector": {"name": "frontend"}, - "template": { - "metadata": { - "labels": { - "name": "frontend", - "uses": "redis-master" - } - }, - "spec": { - "containers": [{ - "name": "frontend-go-restapi", - "image": "jayunit100/k8-petstore-web-server:$VERSION" - }] - } - } - } -} -EOF - -cat << EOF > bps-load-gen-rc.json -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "bpsloadgenrc", - "labels": {"name": "bpsLoadGenController"} - }, - "spec": { - "replicas": $LG, - "selector": {"name": "bps"}, - "template": { - "metadata": { - "labels": { - "name": "bps", - "uses": "frontend" - } - }, - "spec": { - "containers": [{ - "name": "bps", - "image": "jayunit100/bigpetstore-load-generator", - "command": ["sh","-c","/opt/PetStoreLoadGenerator-1.0/bin/PetStoreLoadGenerator http://\$FRONTEND_SERVICE_HOST:3000/rpush/k8petstore/ 4 4 1000 123"] - }] - } - } - } -} -EOF - -cat << EOF > fe-s.json -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "frontend", - "labels": { - "name": "frontend" - } - }, - "spec": { - "ports": [{ - "port": 3000 - }], - "publicIPs":["$PUBLIC_IP","10.1.4.89"], - "selector": { - "name": "frontend" - } - } -} -EOF - -cat << EOF > rm.json -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "redismaster", - "labels": { - "name": "redis-master" - } - }, - "spec": { - "containers": [{ - "name": "master", - "image": "jayunit100/k8-petstore-redis-master:$VERSION", - "ports": [{ - "containerPort": 6379 - }] - }] - } -} -EOF - -cat << EOF > rm-s.json -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "redismaster", - "labels": { - "name": "redis-master" - } - }, - "spec": { - "ports": [{ - "port": 6379 - }], - "selector": { - "name": "redis-master" - } - } -} -EOF - -cat << EOF > rs-s.json -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "redisslave", - "labels": { - "name": "redisslave" - } - }, - "spec": { - "ports": [{ - "port": 6379 - }], - "selector": { - "name": "redisslave" - } - } -} -EOF - -cat << EOF > slave-rc.json -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "redissc", - "labels": {"name": "redisslave"} - }, - "spec": { - "replicas": $SLAVE, - "selector": {"name": "redisslave"}, - "template": { - "metadata": { - "labels": { - "name": "redisslave", - "uses": "redis-master" - } - }, - "spec": { - "containers": [{ - "name": "slave", - "image": "jayunit100/k8-petstore-redis-slave:$VERSION", - "ports": [{"containerPort": 6379}] - }] - } - } - } -} -EOF -$kubectl create -f rm.json --namespace=$NS -$kubectl create -f rm-s.json --namespace=$NS -sleep 3 # precaution to prevent fe from spinning up too soon. -$kubectl create -f slave-rc.json --namespace=$NS -$kubectl create -f rs-s.json --namespace=$NS -sleep 3 # see above comment. -$kubectl create -f fe-rc.json --namespace=$NS -$kubectl create -f fe-s.json --namespace=$NS -$kubectl create -f bps-load-gen-rc.json --namespace=$NS -} - -function pollfor { - pass_http=0 - - ### Test HTTP Server comes up. 
- for i in `seq 1 150`; - do - ### Just testing that the front end comes up. Not sure how to test total entries etc... (yet) - echo "Trying curl ... $PUBLIC_IP:3000 , attempt $i . expect a few failures while pulling images... " - curl "$PUBLIC_IP:3000" > result - cat result - cat result | grep -q "k8-bps" - if [ $? -eq 0 ]; then - echo "TEST PASSED after $i tries !" - i=1000 - break - else - echo "the above RESULT didn't contain target string for trial $i" - fi - sleep 3 - done - - if [ $i -eq 1000 ]; then - pass_http=1 - fi - -} - -function tests { - pass_load=0 - - ### Print statistics of db size, every second, until $SECONDS are up. - for i in `seq 1 $_SECONDS`; - do - echo "curl : $PUBLIC_IP:3000 , $i of $_SECONDS" - curr_cnt="`curl "$PUBLIC_IP:3000/llen"`" - ### Write CSV File of # of trials / total transcations. - echo "$i $curr_cnt" >> result - echo "total transactions so far : $curr_cnt" - sleep 1 - done -} - -create - -pollfor - -if [[ $pass_http -eq 1 ]]; then - echo "Passed..." -else - exit 1 -fi - -if [[ $TEST -eq 1 ]]; then - echo "running polling tests now" - tests -fi diff --git a/release-0.19.0/examples/k8petstore/redis-master/Dockerfile b/release-0.19.0/examples/k8petstore/redis-master/Dockerfile deleted file mode 100644 index bd3a67ced04..00000000000 --- a/release-0.19.0/examples/k8petstore/redis-master/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -# -# Redis Dockerfile -# -# https://github.com/dockerfile/redis -# - -# Pull base image. -# -# Just a stub. - -FROM jayunit100/redis:2.8.19 - -ADD etc_redis_redis.conf /etc/redis/redis.conf - -CMD ["redis-server", "/etc/redis/redis.conf"] -# Expose ports. -EXPOSE 6379 diff --git a/release-0.19.0/examples/k8petstore/redis-master/etc_redis_redis.conf b/release-0.19.0/examples/k8petstore/redis-master/etc_redis_redis.conf deleted file mode 100644 index 38b8c701e7a..00000000000 --- a/release-0.19.0/examples/k8petstore/redis-master/etc_redis_redis.conf +++ /dev/null @@ -1,46 +0,0 @@ -pidfile /var/run/redis.pid -port 6379 -tcp-backlog 511 -timeout 0 -tcp-keepalive 0 -loglevel verbose -syslog-enabled yes -databases 1 -save 1 1 -save 900 1 -save 300 10 -save 60 10000 -stop-writes-on-bgsave-error yes -rdbcompression no -rdbchecksum yes -dbfilename dump.rdb -dir /data -slave-serve-stale-data no -slave-read-only yes -repl-disable-tcp-nodelay no -slave-priority 100 -maxmemory -appendonly yes -appendfilename "appendonly.aof" -appendfsync everysec -no-appendfsync-on-rewrite no -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 1 -aof-load-truncated yes -lua-time-limit 5000 -slowlog-log-slower-than 10000 -slowlog-max-len 128 -latency-monitor-threshold 0 -notify-keyspace-events "KEg$lshzxeA" -list-max-ziplist-entries 512 -list-max-ziplist-value 64 -set-max-intset-entries 512 -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 -hll-sparse-max-bytes 3000 -activerehashing yes -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 -hz 10 -aof-rewrite-incremental-fsync yes diff --git a/release-0.19.0/examples/k8petstore/redis-slave/Dockerfile b/release-0.19.0/examples/k8petstore/redis-slave/Dockerfile deleted file mode 100644 index 67952daf116..00000000000 --- a/release-0.19.0/examples/k8petstore/redis-slave/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -# -# Redis Dockerfile -# -# https://github.com/dockerfile/redis -# - -# Pull base image. -# -# Just a stub. 
- -FROM jayunit100/redis:2.8.19 - -ADD run.sh /run.sh - -CMD /run.sh diff --git a/release-0.19.0/examples/k8petstore/redis-slave/etc_redis_redis.conf b/release-0.19.0/examples/k8petstore/redis-slave/etc_redis_redis.conf deleted file mode 100644 index 38b8c701e7a..00000000000 --- a/release-0.19.0/examples/k8petstore/redis-slave/etc_redis_redis.conf +++ /dev/null @@ -1,46 +0,0 @@ -pidfile /var/run/redis.pid -port 6379 -tcp-backlog 511 -timeout 0 -tcp-keepalive 0 -loglevel verbose -syslog-enabled yes -databases 1 -save 1 1 -save 900 1 -save 300 10 -save 60 10000 -stop-writes-on-bgsave-error yes -rdbcompression no -rdbchecksum yes -dbfilename dump.rdb -dir /data -slave-serve-stale-data no -slave-read-only yes -repl-disable-tcp-nodelay no -slave-priority 100 -maxmemory -appendonly yes -appendfilename "appendonly.aof" -appendfsync everysec -no-appendfsync-on-rewrite no -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 1 -aof-load-truncated yes -lua-time-limit 5000 -slowlog-log-slower-than 10000 -slowlog-max-len 128 -latency-monitor-threshold 0 -notify-keyspace-events "KEg$lshzxeA" -list-max-ziplist-entries 512 -list-max-ziplist-value 64 -set-max-intset-entries 512 -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 -hll-sparse-max-bytes 3000 -activerehashing yes -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 -hz 10 -aof-rewrite-incremental-fsync yes diff --git a/release-0.19.0/examples/k8petstore/redis-slave/run.sh b/release-0.19.0/examples/k8petstore/redis-slave/run.sh deleted file mode 100755 index d42c8f261fa..00000000000 --- a/release-0.19.0/examples/k8petstore/redis-slave/run.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -echo "Note, if you get errors below indicate kubernetes env injection could be faliing..." -echo "env vars =" -env -echo "CHECKING ENVS BEFORE STARTUP........" -if [ ! "$REDISMASTER_SERVICE_HOST" ]; then - echo "Need to set REDIS_MASTER_SERVICE_HOST" && exit 1; -fi -if [ ! "$REDISMASTER_PORT" ]; then - echo "Need to set REDIS_MASTER_PORT" && exit 1; -fi - -echo "ENV Vars look good, starting !" - -redis-server --slaveof ${REDISMASTER_SERVICE_HOST:-$SERVICE_HOST} $REDISMASTER_SERVICE_PORT diff --git a/release-0.19.0/examples/k8petstore/redis/Dockerfile b/release-0.19.0/examples/k8petstore/redis/Dockerfile deleted file mode 100644 index 41ac9dcdd44..00000000000 --- a/release-0.19.0/examples/k8petstore/redis/Dockerfile +++ /dev/null @@ -1,45 +0,0 @@ -# -# Redis Dockerfile -# -# https://github.com/dockerfile/redis -# - -# Pull base image. -FROM ubuntu - -# Install Redis. -RUN \ - cd /tmp && \ - # Modify to stay at this version rather then always update. 
- - ################################################################# - ###################### REDIS INSTALL ############################ - wget http://download.redis.io/releases/redis-2.8.19.tar.gz && \ - tar xvzf redis-2.8.19.tar.gz && \ - cd redis-2.8.19 && \ - ################################################################ - ################################################################ - make && \ - make install && \ - cp -f src/redis-sentinel /usr/local/bin && \ - mkdir -p /etc/redis && \ - cp -f *.conf /etc/redis && \ - rm -rf /tmp/redis-stable* && \ - sed -i 's/^\(bind .*\)$/# \1/' /etc/redis/redis.conf && \ - sed -i 's/^\(daemonize .*\)$/# \1/' /etc/redis/redis.conf && \ - sed -i 's/^\(dir .*\)$/# \1\ndir \/data/' /etc/redis/redis.conf && \ - sed -i 's/^\(logfile .*\)$/# \1/' /etc/redis/redis.conf - -# Define mountable directories. -VOLUME ["/data"] - -# Define working directory. -WORKDIR /data - -ADD etc_redis_redis.conf /etc/redis/redis.conf - -# Print redis configs and start. -# CMD "redis-server /etc/redis/redis.conf" - -# Expose ports. -EXPOSE 6379 diff --git a/release-0.19.0/examples/k8petstore/redis/etc_redis_redis.conf b/release-0.19.0/examples/k8petstore/redis/etc_redis_redis.conf deleted file mode 100644 index 38b8c701e7a..00000000000 --- a/release-0.19.0/examples/k8petstore/redis/etc_redis_redis.conf +++ /dev/null @@ -1,46 +0,0 @@ -pidfile /var/run/redis.pid -port 6379 -tcp-backlog 511 -timeout 0 -tcp-keepalive 0 -loglevel verbose -syslog-enabled yes -databases 1 -save 1 1 -save 900 1 -save 300 10 -save 60 10000 -stop-writes-on-bgsave-error yes -rdbcompression no -rdbchecksum yes -dbfilename dump.rdb -dir /data -slave-serve-stale-data no -slave-read-only yes -repl-disable-tcp-nodelay no -slave-priority 100 -maxmemory -appendonly yes -appendfilename "appendonly.aof" -appendfsync everysec -no-appendfsync-on-rewrite no -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 1 -aof-load-truncated yes -lua-time-limit 5000 -slowlog-log-slower-than 10000 -slowlog-max-len 128 -latency-monitor-threshold 0 -notify-keyspace-events "KEg$lshzxeA" -list-max-ziplist-entries 512 -list-max-ziplist-value 64 -set-max-intset-entries 512 -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 -hll-sparse-max-bytes 3000 -activerehashing yes -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 -hz 10 -aof-rewrite-incremental-fsync yes diff --git a/release-0.19.0/examples/k8petstore/web-server/Dockerfile b/release-0.19.0/examples/k8petstore/web-server/Dockerfile deleted file mode 100644 index fe98d81ce26..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM google/golang:latest - -# Add source to gopath. This is defacto required for go apps. -ADD ./src /gopath/src/ -ADD ./static /tmp/static -ADD ./test.sh /opt/test.sh -RUN chmod 777 /opt/test.sh -# $GOPATH/[src/a/b/c] -# go build a/b/c -# go run main - -# So that we can easily run and install -WORKDIR /gopath/src/ - -# Install the code (the executables are in the main dir) This will get the deps also. -RUN go get main -#RUN go build main - -# Expected that you will override this in production kubernetes. 
-ENV STATIC_FILES /tmp/static -CMD /gopath/bin/main diff --git a/release-0.19.0/examples/k8petstore/web-server/PetStoreBook.go b/release-0.19.0/examples/k8petstore/web-server/PetStoreBook.go deleted file mode 100644 index 1c81cef9537..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/PetStoreBook.go +++ /dev/null @@ -1,204 +0,0 @@ -/* -Copyright 2014 The Kubernetes Authors All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package main - -import ( - "encoding/json" - "fmt" - "net/http" - "os" - "strings" - - "github.com/codegangsta/negroni" - "github.com/gorilla/mux" - "github.com/xyproto/simpleredis" -) - -//return the path to static assets (i.e. index.html) -func pathToStaticContents() string { - var static_content = os.Getenv("STATIC_FILES") - // Take a wild guess. This will work in dev environment. - if static_content == "" { - println("*********** WARNING: DIDNT FIND ENV VAR 'STATIC_FILES', guessing your running in dev.") - static_content = "../../static/" - } else { - println("=========== Read ENV 'STATIC_FILES', path to assets : " + static_content) - } - - //Die if no the static files are missing. - _, err := os.Stat(static_content) - if err != nil { - println("*********** os.Stat failed on " + static_content + " This means no static files are available. Dying...") - os.Exit(2) - } - return static_content -} - -func main() { - - var connection = os.Getenv("REDISMASTER_SERVICE_HOST") + ":" + os.Getenv("REDISMASTER_SERVICE_PORT") - - if connection == ":" { - print("WARNING ::: If in kube, this is a failure: Missing env variable REDISMASTER_SERVICE_HOST") - print("WARNING ::: Attempting to connect redis localhost.") - connection = "127.0.0.1:6379" - } else { - print("Found redis master host " + os.Getenv("REDISMASTER_SERVICE_PORT")) - connection = os.Getenv("REDISMASTER_SERVICE_HOST") + ":" + os.Getenv("REDISMASTER_SERVICE_PORT") - } - - println("Now connecting to : " + connection) - /** - * Create a connection pool. ?The pool pointer will otherwise - * not be of any use.?https://gist.github.com/jayunit100/1d00e6d343056401ef00 - */ - pool = simpleredis.NewConnectionPoolHost(connection) - - println("Connection pool established : " + connection) - - defer pool.Close() - - r := mux.NewRouter() - - println("Router created ") - - /** - * Define a REST path. - * - The parameters (key) can be accessed via mux.Vars. - * - The Methods (GET) will be bound to a handler function. - */ - r.Path("/info").Methods("GET").HandlerFunc(InfoHandler) - r.Path("/lrange/{key}").Methods("GET").HandlerFunc(ListRangeHandler) - r.Path("/rpush/{key}/{value}").Methods("GET").HandlerFunc(ListPushHandler) - r.Path("/llen").Methods("GET").HandlerFunc(LLENHandler) - - //for dev environment, the site is one level up... 
- - r.PathPrefix("/").Handler(http.FileServer(http.Dir(pathToStaticContents()))) - - r.Path("/env").Methods("GET").HandlerFunc(EnvHandler) - - list := simpleredis.NewList(pool, "k8petstore") - HandleError(nil, list.Add("jayunit100")) - HandleError(nil, list.Add("tstclaire")) - HandleError(nil, list.Add("rsquared")) - - // Verify that this is 3 on startup. - infoL := HandleError(pool.Get(0).Do("LLEN", "k8petstore")).(int64) - fmt.Printf("\n=========== Starting DB has %d elements \n", infoL) - if infoL < 3 { - print("Not enough entries in DB. something is wrong w/ redis querying") - print(infoL) - panic("Failed ... ") - } - - println("=========== Now launching negroni...this might take a second...") - n := negroni.Classic() - n.UseHandler(r) - n.Run(":3000") - println("Done ! Web app is now running.") - -} - -/** -* the Pool will be populated on startup, -* it will be an instance of a connection pool. -* Hence, we reference its address rather than copying. - */ -var pool *simpleredis.ConnectionPool - -/** -* REST -* input: key -* -* Writes all members to JSON. - */ -func ListRangeHandler(rw http.ResponseWriter, req *http.Request) { - println("ListRangeHandler") - - key := mux.Vars(req)["key"] - - list := simpleredis.NewList(pool, key) - - //members := HandleError(list.GetAll()).([]string) - members := HandleError(list.GetLastN(4)).([]string) - - print(members) - membersJSON := HandleError(json.MarshalIndent(members, "", " ")).([]byte) - - print("RETURN MEMBERS = " + string(membersJSON)) - rw.Write(membersJSON) -} - -func LLENHandler(rw http.ResponseWriter, req *http.Request) { - println("=========== LLEN HANDLER") - - infoL := HandleError(pool.Get(0).Do("LLEN", "k8petstore")).(int64) - fmt.Printf("=========== LLEN is %d ", infoL) - lengthJSON := HandleError(json.MarshalIndent(infoL, "", " ")).([]byte) - fmt.Printf("================ LLEN json is %s", infoL) - - print("RETURN LEN = " + string(lengthJSON)) - rw.Write(lengthJSON) - -} - -func ListPushHandler(rw http.ResponseWriter, req *http.Request) { - println("ListPushHandler") - - /** - * Expect a key and value as input. 
- * - */ - key := mux.Vars(req)["key"] - value := mux.Vars(req)["value"] - - println("New list " + key + " " + value) - list := simpleredis.NewList(pool, key) - HandleError(nil, list.Add(value)) - ListRangeHandler(rw, req) -} - -func InfoHandler(rw http.ResponseWriter, req *http.Request) { - println("InfoHandler") - - info := HandleError(pool.Get(0).Do("INFO")).([]byte) - rw.Write(info) -} - -func EnvHandler(rw http.ResponseWriter, req *http.Request) { - println("EnvHandler") - - environment := make(map[string]string) - for _, item := range os.Environ() { - splits := strings.Split(item, "=") - key := splits[0] - val := strings.Join(splits[1:], "=") - environment[key] = val - } - - envJSON := HandleError(json.MarshalIndent(environment, "", " ")).([]byte) - rw.Write(envJSON) -} - -func HandleError(result interface{}, err error) (r interface{}) { - if err != nil { - print("ERROR : " + err.Error()) - //panic(err) - } - return result -} diff --git a/release-0.19.0/examples/k8petstore/web-server/dump.rdb b/release-0.19.0/examples/k8petstore/web-server/dump.rdb deleted file mode 100644 index d1028f16798..00000000000 Binary files a/release-0.19.0/examples/k8petstore/web-server/dump.rdb and /dev/null differ diff --git a/release-0.19.0/examples/k8petstore/web-server/static/histogram.js b/release-0.19.0/examples/k8petstore/web-server/static/histogram.js deleted file mode 100644 index c9f20203e35..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/static/histogram.js +++ /dev/null @@ -1,39 +0,0 @@ -//var data = [4, 8, 15, 16, 23, 42]; - -function defaults(){ - - Chart.defaults.global.animation = false; - -} - -function f(data2) { - - defaults(); - - // Get context with jQuery - using jQuery's .get() method. - var ctx = $("#myChart").get(0).getContext("2d"); - ctx.width = $(window).width()*1.5; - ctx.width = $(window).height *.5; - - // This will get the first returned node in the jQuery collection. - var myNewChart = new Chart(ctx); - - var data = { - labels: Array.apply(null, Array(data2.length)).map(function (_, i) {return i;}), - datasets: [ - { - label: "My First dataset", - fillColor: "rgba(220,220,220,0.2)", - strokeColor: "rgba(220,220,220,1)", - pointColor: "rgba(220,220,220,1)", - pointStrokeColor: "#fff", - pointHighlightFill: "#fff", - pointHighlightStroke: "rgba(220,220,220,1)", - data: data2 - } - ] - }; - - var myLineChart = new Chart(ctx).Line(data); -} - diff --git a/release-0.19.0/examples/k8petstore/web-server/static/index.html b/release-0.19.0/examples/k8petstore/web-server/static/index.html deleted file mode 100644 index b184ab0e782..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/static/index.html +++ /dev/null @@ -1,47 +0,0 @@ - - - - - - - - - - - - - - ((( - PRODUCTION -))) Guestbook - - - - - - - - - - - - -
-
-

Waiting for database connection...This will get overwritten...

-
-
-
-
-

-

/env - /info

-
-
- -
- - - diff --git a/release-0.19.0/examples/k8petstore/web-server/static/script.js b/release-0.19.0/examples/k8petstore/web-server/static/script.js deleted file mode 100644 index 095d161fdfe..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/static/script.js +++ /dev/null @@ -1,72 +0,0 @@ -$(document).ready(function() { - - var max_trials=1000 - - var headerTitleElement = $("#header h1"); - var entriesElement = $("#k8petstore-entries"); - var hostAddressElement = $("#k8petstore-host-address"); - var currentEntries = [] - - var updateEntryCount = function(data, trial) { - if(currentEntries.length > 1000) - currentEntries.splice(0,100); - //console.info("entry count " + data) ; - currentEntries[trial]=data ; - } - - var updateEntries = function(data) { - entriesElement.empty(); - //console.info("data - > " + Math.random()) - //uncommend for debugging... - //entriesElement.append("

CURRENT TIME : "+ $.now() +"

TOTAL entries : "+ JSON.stringify(currentEntries)+"

") - var c1 = currentEntries[currentEntries.length-1] - var c2 = currentEntries[currentEntries.length-2] - entriesElement.append("

CURRENT TIME : "+ $.now() +"

TOTAL entries : "+ c1 +"
transaction delta " + (c1-c2) +"

") - f(currentEntries); - $.each(data, function(key, val) { - //console.info(key + " -> " +val); - entriesElement.append("

" + key + " " + val.substr(0,50) + val.substr(100,150) + "

"); - }); - - } - - // colors = purple, blue, red, green, yellow - var colors = ["#549", "#18d", "#d31", "#2a4", "#db1"]; - var randomColor = colors[Math.floor(5 * Math.random())]; - ( - function setElementsColor(color) { - headerTitleElement.css("color", color); - }) - - (randomColor); - - hostAddressElement.append(document.URL); - - // Poll every second. - (function fetchGuestbook() { - - // Get JSON by running the query, and append - $.getJSON("lrange/k8petstore").done(updateEntries).always( - function() { - setTimeout(fetchGuestbook, 2000); - }); - })(); - - (function fetchLength(trial) { - $.getJSON("llen").done( - function a(llen1){ - updateEntryCount(llen1, trial) - }).always( - function() { - // This function is run every 2 seconds. - setTimeout( - function(){ - trial+=1 ; - fetchLength(trial); - f(); - }, 5000); - } - ) - })(0); -}); - diff --git a/release-0.19.0/examples/k8petstore/web-server/static/style.css b/release-0.19.0/examples/k8petstore/web-server/static/style.css deleted file mode 100644 index 36852934520..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/static/style.css +++ /dev/null @@ -1,69 +0,0 @@ -body, input { - color: #123; - font-family: "Gill Sans", sans-serif; -} - -div { - overflow: hidden; - padding: 1em 0; - position: relative; - text-align: center; -} - -h1, h2, p, input, a { - font-weight: 300; - margin: 0; -} - -h1 { - color: #BDB76B; - font-size: 3.5em; -} - -h2 { - color: #999; -} - -form { - margin: 0 auto; - max-width: 50em; - text-align: center; -} - -input { - border: 0; - border-radius: 1000px; - box-shadow: inset 0 0 0 2px #BDB76B; - display: inline; - font-size: 1.5em; - margin-bottom: 1em; - outline: none; - padding: .5em 5%; - width: 55%; -} - -form a { - background: #BDB76B; - border: 0; - border-radius: 1000px; - color: #FFF; - font-size: 1.25em; - font-weight: 400; - padding: .75em 2em; - text-decoration: none; - text-transform: uppercase; - white-space: normal; -} - -p { - font-size: 1.5em; - line-height: 1.5; -} -.chart div { - font: 10px sans-serif; - background-color: steelblue; - text-align: right; - padding: 3px; - margin: 1px; - color: white; -} diff --git a/release-0.19.0/examples/k8petstore/web-server/test.sh b/release-0.19.0/examples/k8petstore/web-server/test.sh deleted file mode 100644 index 7b8b0eacd10..00000000000 --- a/release-0.19.0/examples/k8petstore/web-server/test.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -echo "start test of frontend" -curl localhost:3000/llen -curl localhost:3000/llen -curl localhost:3000/llen -curl localhost:3000/llen -curl localhost:3000/llen -curl localhost:3000/llen -x=`curl localhost:3000/llen` -echo "done testing frontend result = $x" diff --git a/release-0.19.0/examples/kubectl-container/.gitignore b/release-0.19.0/examples/kubectl-container/.gitignore deleted file mode 100644 index 50a4a06fd1d..00000000000 --- a/release-0.19.0/examples/kubectl-container/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -kubectl -.tag diff --git a/release-0.19.0/examples/kubectl-container/Dockerfile b/release-0.19.0/examples/kubectl-container/Dockerfile deleted file mode 100644 index d27d3573644..00000000000 --- a/release-0.19.0/examples/kubectl-container/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright 2014 Google Inc. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -FROM scratch -MAINTAINER Daniel Smith -ADD kubectl kubectl -ENTRYPOINT ["/kubectl"] diff --git a/release-0.19.0/examples/kubectl-container/Makefile b/release-0.19.0/examples/kubectl-container/Makefile deleted file mode 100644 index b13b09d2ec4..00000000000 --- a/release-0.19.0/examples/kubectl-container/Makefile +++ /dev/null @@ -1,30 +0,0 @@ -# Use: -# -# `make kubectl` will build kubectl. -# `make tag` will suggest a tag. -# `make container` will build a container-- you must supply a tag. -# `make push` will push the container-- you must supply a tag. - -kubectl: - KUBE_STATIC_OVERRIDES="kubectl" ../../hack/build-go.sh cmd/kubectl; cp ../../_output/local/bin/linux/amd64/kubectl . - -.tag: kubectl - ./kubectl version -c | grep -o 'GitVersion:"[^"]*"' | cut -f 2 -d '"' > .tag - -tag: .tag - @echo "Suggest using TAG=$(shell cat .tag)" - @echo "$$ make container TAG=$(shell cat .tag)" - @echo "or" - @echo "$$ make push TAG=$(shell cat .tag)" - -container: - $(if $(TAG),,$(error TAG is not defined. Use 'make tag' to see a suggestion)) - docker build -t gcr.io/google_containers/kubectl:$(TAG) . - -push: container - $(if $(TAG),,$(error TAG is not defined. Use 'make tag' to see a suggestion)) - gcloud preview docker push gcr.io/google_containers/kubectl:$(TAG) - -clean: - rm -f kubectl - rm -f .tag diff --git a/release-0.19.0/examples/kubectl-container/README.md b/release-0.19.0/examples/kubectl-container/README.md deleted file mode 100644 index 697d1a9699f..00000000000 --- a/release-0.19.0/examples/kubectl-container/README.md +++ /dev/null @@ -1,24 +0,0 @@ -This directory contains a Dockerfile and Makefile for packaging up kubectl into -a container. - -It's not currently automated as part of a release process, so for the moment -this is an example of what to do if you want to package kubectl into a -container/your pod. - -In the future, we may release consistently versioned groups of containers when -we cut a release, in which case the source of gcr.io/google_containers/kubectl -would become that automated process. 
- -```pod.json``` is provided as an example of packaging kubectl as a sidecar -container, and to help you verify that kubectl works correctly in -this configuration. - -A possible reason why you would want to do this is to use ```kubectl proxy``` as -a drop-in replacement for the old no-auth KUBERNETES_RO service. The other -containers in your pod will find the proxy apparently serving on localhost. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/kubectl-container/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/kubectl-container/README.md?pixel)]() diff --git a/release-0.19.0/examples/kubectl-container/pod.json b/release-0.19.0/examples/kubectl-container/pod.json deleted file mode 100644 index 756090862f2..00000000000 --- a/release-0.19.0/examples/kubectl-container/pod.json +++ /dev/null @@ -1,54 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "kubectl-tester" - }, - "spec": { - "containers": [ - { - "name": "bb", - "image": "gcr.io/google_containers/busybox", - "command": [ - "sh", "-c", "sleep 5; wget -O - ${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1beta3/pods/; sleep 10000" - ], - "ports": [ - { - "containerPort": 8080, - "protocol": "TCP" - } - ], - "env": [ - { - "name": "KUBERNETES_RO_SERVICE_HOST", - "value": "127.0.0.1" - }, - { - "name": "KUBERNETES_RO_SERVICE_PORT", - "value": "8001" - } - ], - "volumeMounts": [ - { - "name": "test-volume", - "mountPath": "/mount/test-volume" - } - ] - }, - { - "name": "kubectl", - "image": "gcr.io/google_containers/kubectl:v0.18.0-120-gaeb4ac55ad12b1-dirty", - "imagePullPolicy": "Always", - "args": [ - "proxy", "-p", "8001" - ] - } - ], - "volumes": [ - { - "name": "test-volume", - "emptyDir": {} - } - ] - } -} diff --git a/release-0.19.0/examples/kubernetes-namespaces/README.md b/release-0.19.0/examples/kubernetes-namespaces/README.md deleted file mode 100644 index 8d2bae92696..00000000000 --- a/release-0.19.0/examples/kubernetes-namespaces/README.md +++ /dev/null @@ -1,255 +0,0 @@ -## Kubernetes Namespaces - -Kubernetes _[namespaces](../../docs/namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster. - -It does this by providing the following: - -1. A scope for [Names](../../docs/identifiers.md). -2. A mechanism to attach authorization and policy to a subsection of the cluster. - -Use of multiple namespaces is optional. - -This example demonstrates how to use Kubernetes namespaces to subdivide your cluster. - -### Step Zero: Prerequisites - -This example assumes the following: - -1. You have an [existing Kubernetes cluster](../../docs/getting-started-guides). -2. You have a basic understanding of Kubernetes _[pods](../../docs/pods.md)_, _[services](../../docs/services.md)_, and _[replication controllers](../../docs/replication-controller.md)_. - -### Step One: Understand the default namespace - -By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of pods, -services, and replication controllers used by the cluster. - -Assuming you have a fresh cluster, you can introspect the available namespace's by doing the following: - -```shell -$ kubectl get namespaces -NAME LABELS -default -``` - -### Step Two: Create new namespaces - -For this exercise, we will create two additional Kubernetes namespaces to hold our content. 
- -Let's imagine a scenario where an organization is using a shared Kubernetes cluster for development and production use cases. - -The development team would like to maintain a space in the cluster where they can get a view on the list of pods, services, and replication-controllers -they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources -are relaxed to enable agile development. - -The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of -pods, services, and replication controllers that run the production site. - -One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. - -Let's create two new namespaces to hold our work. - -Use the file [`examples/kubernetes-namespaces/namespace-dev.json`](namespace-dev.json) which describes a development namespace: - -```js -{ - "kind": "Namespace", - "apiVersion": "v1beta3", - "metadata": { - "name": "development", - "labels": { - "name": "development" - } - } -} -``` - -Create the development namespace using kubectl. - -```shell -$ kubectl create -f examples/kubernetes-namespaces/namespace-dev.json -``` - -And then lets create the production namespace using kubectl. - -```shell -$ kubectl create -f examples/kubernetes-namespaces/namespace-prod.json -``` - -To be sure things are right, let's list all of the namespaces in our cluster. - -```shell -$ kubectl get namespaces -NAME LABELS STATUS -default Active -development name=development Active -production name=production Active -``` - - -### Step Three: Create pods in each namespace - -A Kubernetes namespace provides the scope for pods, services, and replication controllers in the cluster. - -Users interacting with one namespace do not see the content in another namespace. - -To demonstrate this, let's spin up a simple replication controller and pod in the development namespace. - -We first check what is the current context: - -```shell -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: REDACTED - server: https://130.211.122.180 - name: lithe-cocoa-92103_kubernetes -contexts: -- context: - cluster: lithe-cocoa-92103_kubernetes - user: lithe-cocoa-92103_kubernetes - name: lithe-cocoa-92103_kubernetes -current-context: lithe-cocoa-92103_kubernetes -kind: Config -preferences: {} -users: -- name: lithe-cocoa-92103_kubernetes - user: - client-certificate-data: REDACTED - client-key-data: REDACTED - token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b -- name: lithe-cocoa-92103_kubernetes-basic-auth - user: - password: h5M0FtUUIflBSdI7 - username: admin -``` - -The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context. - -```shell -$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes -$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes -``` - -The above commands provided two request contexts you can alternate against depending on what namespace you -wish to work against. - -Let's switch to operate in the development namespace. 
- -```shell -$ kubectl config use-context dev -``` - -You can verify your current context by doing the following: - -```shell -$ kubectl config view -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: REDACTED - server: https://130.211.122.180 - name: lithe-cocoa-92103_kubernetes -contexts: -- context: - cluster: lithe-cocoa-92103_kubernetes - namespace: development - user: lithe-cocoa-92103_kubernetes - name: dev -- context: - cluster: lithe-cocoa-92103_kubernetes - user: lithe-cocoa-92103_kubernetes - name: lithe-cocoa-92103_kubernetes -- context: - cluster: lithe-cocoa-92103_kubernetes - namespace: production - user: lithe-cocoa-92103_kubernetes - name: prod -current-context: dev -kind: Config -preferences: {} -users: -- name: lithe-cocoa-92103_kubernetes - user: - client-certificate-data: REDACTED - client-key-data: REDACTED - token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b -- name: lithe-cocoa-92103_kubernetes-basic-auth - user: - password: h5M0FtUUIflBSdI7 - username: admin -``` - -At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. - -Let's create some content. - -```shell -$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 -``` - -We have just created a replication controller whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. - -```shell -kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -snowflake snowflake kubernetes/serve_hostname run=snowflake 2 - -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -snowflake-mbrfi 10.244.2.4 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour - snowflake kubernetes/serve_hostname Running About an hour -snowflake-p78ev 10.244.2.5 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour - snowflake kubernetes/serve_hostname Running About an hour -``` - -And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. - -Let's switch to the production namespace and show how resources in one namespace are hidden from the other. - -```shell -$ kubectl config use-context prod -``` - -The production namespace should be empty. - -```shell -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS - -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -``` - -Production likes to run cattle, so let's create some cattle pods. 
- -```shell -$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 - -$ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -cattle cattle kubernetes/serve_hostname run=cattle 5 - -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -cattle-1kyvj 10.244.0.4 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour - cattle kubernetes/serve_hostname Running About an hour -cattle-kobrk 10.244.1.4 kubernetes-minion-cfs6/104.154.61.231 run=cattle Running About an hour - cattle kubernetes/serve_hostname Running About an hour -cattle-l1v9t 10.244.0.5 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour - cattle kubernetes/serve_hostname Running About an hour -cattle-ne2sj 10.244.3.7 kubernetes-minion-x8gx/104.154.47.83 run=cattle Running About an hour - cattle kubernetes/serve_hostname Running About an hour -cattle-qrk4x 10.244.0.6 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour - cattle kubernetes/serve_hostname -``` - -At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace. - -As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different -authorization rules for each namespace. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/kubernetes-namespaces/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/kubernetes-namespaces/README.md?pixel)]() diff --git a/release-0.19.0/examples/kubernetes-namespaces/namespace-dev.json b/release-0.19.0/examples/kubernetes-namespaces/namespace-dev.json deleted file mode 100644 index 2561e92a38f..00000000000 --- a/release-0.19.0/examples/kubernetes-namespaces/namespace-dev.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "kind": "Namespace", - "apiVersion": "v1beta3", - "metadata": { - "name": "development", - "labels": { - "name": "development" - } - } -} diff --git a/release-0.19.0/examples/kubernetes-namespaces/namespace-prod.json b/release-0.19.0/examples/kubernetes-namespaces/namespace-prod.json deleted file mode 100644 index 149183c94ab..00000000000 --- a/release-0.19.0/examples/kubernetes-namespaces/namespace-prod.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "kind": "Namespace", - "apiVersion": "v1beta3", - "metadata": { - "name": "production", - "labels": { - "name": "production" - } - } -} diff --git a/release-0.19.0/examples/limitrange/README.md b/release-0.19.0/examples/limitrange/README.md deleted file mode 100644 index ea330d924ad..00000000000 --- a/release-0.19.0/examples/limitrange/README.md +++ /dev/null @@ -1,7 +0,0 @@ -Please refer to this [doc](https://github.com/GoogleCloudPlatform/kubernetes/blob/620af168920b773ade28e27211ad684903a1db21/docs/design/admission_control_limit_range.md#kubectl). 
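The example files in this directory can be exercised directly. A minimal walkthrough, assuming the `LimitRanger` admission controller is enabled on the apiserver and that the commands are run from the repository root, might look like this:

```shell
# Create the LimitRange that constrains pod and container resources in the current namespace.
$ kubectl create -f examples/limitrange/limit-range.json

# This pod's limits fall inside the configured min/max and should be admitted.
$ kubectl create -f examples/limitrange/valid-pod.json

# This pod requests less CPU and memory than the configured minimums and should be rejected.
$ kubectl create -f examples/limitrange/invalid-pod.json
```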
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/limitrange/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/limitrange/README.md?pixel)]() diff --git a/release-0.19.0/examples/limitrange/invalid-pod.json b/release-0.19.0/examples/limitrange/invalid-pod.json deleted file mode 100644 index 3c622859f81..00000000000 --- a/release-0.19.0/examples/limitrange/invalid-pod.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "apiVersion":"v1beta3", - "kind": "Pod", - "metadata": { - "name": "invalid-pod", - "labels": { - "name": "invalid-pod" - } - }, - "spec": { - "containers": [{ - "name": "kubernetes-serve-hostname", - "image": "gcr.io/google_containers/serve_hostname", - "resources": { - "limits": { - "cpu": "10m", - "memory": "5Mi" - } - } - }] - } -} diff --git a/release-0.19.0/examples/limitrange/limit-range.json b/release-0.19.0/examples/limitrange/limit-range.json deleted file mode 100644 index c27e9f14fe1..00000000000 --- a/release-0.19.0/examples/limitrange/limit-range.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "apiVersion": "v1beta3", - "kind": "LimitRange", - "metadata": { - "name": "limits" - }, - "spec": { - "limits": [ - { - "type": "Pod", - "max": { - "memory": "1Gi", - "cpu": "2" - }, - "min": { - "memory": "6Mi", - "cpu": "250m" - } - }, - { - "type": "Container", - "max": { - "memory": "1Gi", - "cpu": "2" - }, - "min": { - "memory": "6Mi", - "cpu": "250m" - }, - "default": { - "memory": "6Mi", - "cpu": "250m" - } - } - ] - } -} diff --git a/release-0.19.0/examples/limitrange/valid-pod.json b/release-0.19.0/examples/limitrange/valid-pod.json deleted file mode 100644 index 350a844d2ca..00000000000 --- a/release-0.19.0/examples/limitrange/valid-pod.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "apiVersion":"v1beta3", - "kind": "Pod", - "metadata": { - "name": "valid-pod", - "labels": { - "name": "valid-pod" - } - }, - "spec": { - "containers": [{ - "name": "kubernetes-serve-hostname", - "image": "gcr.io/google_containers/serve_hostname", - "resources": { - "limits": { - "cpu": "1", - "memory": "6Mi" - } - } - }] - } -} diff --git a/release-0.19.0/examples/liveness/README.md b/release-0.19.0/examples/liveness/README.md deleted file mode 100644 index 16689ac0365..00000000000 --- a/release-0.19.0/examples/liveness/README.md +++ /dev/null @@ -1,82 +0,0 @@ -## Overview -This example shows two types of pod health checks: HTTP checks and container execution checks. - -The [exec-liveness.yaml](./exec-liveness.yaml) demonstrates the container execution check. -``` - livenessProbe: - exec: - command: - - cat - - /tmp/health - initialDelaySeconds: 15 - timeoutSeconds: 1 -``` -Kubelet executes the command cat /tmp/health in the container and reports failure if the command returns a non-zero exit code. - -Note that the container removes the /tmp/health file after 10 seconds, -``` -echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600 -``` -so when Kubelet executes the health check 15 seconds (defined by initialDelaySeconds) after the container started, the check would fail. - - -The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check. -``` - livenessProbe: - httpGet: - path: /healthz - port: 8080 - initialDelaySeconds: 15 - timeoutSeconds: 1 -``` -The Kubelet sends a HTTP request to the specified path and port to perform the health check. If you take a look at image/server.go, you will see the server starts to respond with an error code 500 after 10 seconds, so the check fails. 
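For reference, the relevant `/healthz` handler in [image/server.go](image/server.go) (the full source is included with this example) is:

```go
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
	duration := time.Now().Sub(started)
	if duration.Seconds() > 10 {
		// After 10 seconds the server deliberately reports failure.
		w.WriteHeader(500)
		w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
	} else {
		w.WriteHeader(200)
		w.Write([]byte("ok"))
	}
})
```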
- -This [guide](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/k8s201.md#health-checking) has more information on health checks. - -## Get your hands dirty -To show the health check is actually working, first create the pods: -``` -# kubectl create -f exec-liveness.yaml -# cluster/kbuectl.sh create -f http-liveness.yaml -``` - -Check the status of the pods once they are created: -``` -# kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -liveness-exec 10.244.3.7 kubernetes-minion-f08h/130.211.122.180 test=liveness Running 3 seconds - liveness gcr.io/google_containers/busybox Running 2 seconds -liveness-http 10.244.0.8 kubernetes-minion-0bks/104.197.10.10 test=liveness Running 3 seconds - liveness gcr.io/google_containers/liveness Running 2 seconds -``` - -Check the status half a minute later, you will see the termination messages: -``` -# kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -liveness-exec 10.244.3.7 kubernetes-minion-f08h/130.211.122.180 test=liveness Running 34 seconds - liveness gcr.io/google_containers/busybox Running 3 seconds last termination: exit code 137 -liveness-http 10.244.0.8 kubernetes-minion-0bks/104.197.10.10 test=liveness Running 34 seconds - liveness gcr.io/google_containers/liveness Running 13 seconds last termination: exit code 2 -``` -The termination messages indicate that the liveness probes have failed, and the containers have been killed and recreated. - -You can also see the container restart count being incremented by running `kubectl describe`. -``` -# kubectl describe pods liveness-exec | grep "Restart Count" -Restart Count: 8 -``` - -You would also see the killing and creating events at the bottom of the *kubectl describe* output: -``` - Thu, 14 May 2015 15:23:25 -0700 Thu, 14 May 2015 15:23:25 -0700 1 {kubelet kubernetes-minion-0uzf} spec.containers{liveness} killing Killing 88c8b717d8b0940d52743c086b43c3fad0d725a36300b9b5f0ad3a1c8cef2d3e - Thu, 14 May 2015 15:23:25 -0700 Thu, 14 May 2015 15:23:25 -0700 1 {kubelet kubernetes-minion-0uzf} spec.containers{liveness} created Created with docker id b254a9810073f9ee9075bb38ac29a4b063647176ad9eabd9184078ca98a60062 - Thu, 14 May 2015 15:23:25 -0700 Thu, 14 May 2015 15:23:25 -0700 1 {kubelet kubernetes-minion-0uzf} spec.containers{liveness} started Started with docker id b254a9810073f9ee9075bb38ac29a4b063647176ad9eabd9184078ca98a60062 - ... 
-``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/liveness/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/liveness/README.md?pixel)]() diff --git a/release-0.19.0/examples/liveness/exec-liveness.yaml b/release-0.19.0/examples/liveness/exec-liveness.yaml deleted file mode 100644 index b72dac0f595..00000000000 --- a/release-0.19.0/examples/liveness/exec-liveness.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - test: liveness - name: liveness-exec -spec: - containers: - - args: - - /bin/sh - - -c - - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600 - image: gcr.io/google_containers/busybox - livenessProbe: - exec: - command: - - cat - - /tmp/health - initialDelaySeconds: 15 - timeoutSeconds: 1 - name: liveness diff --git a/release-0.19.0/examples/liveness/http-liveness.yaml b/release-0.19.0/examples/liveness/http-liveness.yaml deleted file mode 100644 index 36d3d70caf0..00000000000 --- a/release-0.19.0/examples/liveness/http-liveness.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - test: liveness - name: liveness-http -spec: - containers: - - args: - - /server - image: gcr.io/google_containers/liveness - livenessProbe: - httpGet: - path: /healthz - port: 8080 - initialDelaySeconds: 15 - timeoutSeconds: 1 - name: liveness diff --git a/release-0.19.0/examples/liveness/image/Dockerfile b/release-0.19.0/examples/liveness/image/Dockerfile deleted file mode 100644 index d057ecd309e..00000000000 --- a/release-0.19.0/examples/liveness/image/Dockerfile +++ /dev/null @@ -1,4 +0,0 @@ -FROM scratch - -ADD server /server - diff --git a/release-0.19.0/examples/liveness/image/Makefile b/release-0.19.0/examples/liveness/image/Makefile deleted file mode 100644 index c123ac6df9d..00000000000 --- a/release-0.19.0/examples/liveness/image/Makefile +++ /dev/null @@ -1,13 +0,0 @@ -all: push - -server: server.go - CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' ./server.go - -container: server - docker build -t gcr.io/google_containers/liveness . - -push: container - gcloud preview docker push gcr.io/google_containers/liveness - -clean: - rm -f server diff --git a/release-0.19.0/examples/liveness/image/server.go b/release-0.19.0/examples/liveness/image/server.go deleted file mode 100644 index 26c337e767b..00000000000 --- a/release-0.19.0/examples/liveness/image/server.go +++ /dev/null @@ -1,46 +0,0 @@ -/* -Copyright 2014 The Kubernetes Authors All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// A simple server that is alive for 10 seconds, then reports unhealthy for -// the rest of its (hopefully) short existence. 
-package main - -import ( - "fmt" - "log" - "net/http" - "time" -) - -func main() { - started := time.Now() - http.HandleFunc("/started", func(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(200) - data := (time.Now().Sub(started)).String() - w.Write([]byte(data)) - }) - http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { - duration := time.Now().Sub(started) - if duration.Seconds() > 10 { - w.WriteHeader(500) - w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds()))) - } else { - w.WriteHeader(200) - w.Write([]byte("ok")) - } - }) - log.Fatal(http.ListenAndServe(":8080", nil)) -} diff --git a/release-0.19.0/examples/logging-demo/Makefile b/release-0.19.0/examples/logging-demo/Makefile deleted file mode 100644 index c847f9d6b35..00000000000 --- a/release-0.19.0/examples/logging-demo/Makefile +++ /dev/null @@ -1,34 +0,0 @@ -# Makefile for launching syntheitc logging sources (any platform) -# and for reporting the forwarding rules for the -# Elasticsearch and Kibana pods for the GCE platform. - - -.PHONY: up down logger-up logger-down logger10-up logger10-downget net - -KUBECTL=../../cluster/kubectl.sh - -up: logger-up logger10-up - -down: logger-down logger10-down - - -logger-up: - -${KUBECTL} create -f synthetic_0_25lps.yaml - -logger-down: - -${KUBECTL} delete pods synthetic-logger-0.25lps-pod - -logger10-up: - -${KUBECTL} create -f synthetic_10lps.yaml - -logger10-down: - -${KUBECTL} delete pods synthetic-logger-10lps-pod - -get: - ${KUBECTL} get pods - ${KUBECTL} get replicationControllers - ${KUBECTL} get services - -net: - ${KUBECTL} get services elasticsearch-logging -o json - ${KUBECTL} get services kibana-logging -o json diff --git a/release-0.19.0/examples/logging-demo/README.md b/release-0.19.0/examples/logging-demo/README.md deleted file mode 100644 index 159eb353589..00000000000 --- a/release-0.19.0/examples/logging-demo/README.md +++ /dev/null @@ -1,248 +0,0 @@ -# Elasticsearch/Kibana Logging Demonstration -This directory contains two [pod](../../docs/pods.md) specifications which can be used as synthetic -logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml) -describes a pod that just emits a log message once every 4 seconds: -``` -# This pod specification creates an instance of a synthetic logger. The logger -# is simply a program that writes out the hostname of the pod, a count which increments -# by one on each iteration (to help notice missing log enteries) and the date using -# a long format (RFC-3339) to nano-second precision. This program logs at a frequency -# of 0.25 lines per second. The shellscript program is given directly to bash as -c argument -# and could have been written out as: -# i="0" -# while true -# do -# echo -n "`hostname`: $i: " -# date --rfc-3339 ns -# sleep 4 -# i=$[$i+1] -# done -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: synth-logging-source - name: synthetic-logger-0.25lps-pod -spec: - containers: - - args: - - bash - - -c - - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep - 4; i=$[$i+1]; done' - image: ubuntu:14.04 - name: synth-lgr -``` - -The other YAML file [synthetic_10lps.yaml](synthetic_10lps.yaml) specifies a similar synthetic logger that emits 10 log messages every second. 
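The only substantive difference between the two specifications is the sleep interval in the bash loop; the 10 lps pod runs the same one-liner with `sleep 0.1` instead of `sleep 4` (the full file is in this directory):
```
i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep 0.1; i=$[$i+1]; done
```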
To run both synthetic loggers: -``` -$ make up -../../../kubectl.sh create -f synthetic_0_25lps.yaml -Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_0_25lps.yaml -synthetic-logger-0.25lps-pod -../../../kubectl.sh create -f synthetic_10lps.yaml -Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_10lps.yaml -synthetic-logger-10lps-pod - -``` - -Visiting the Kibana dashboard should make it clear that logs are being collected from the two synthetic loggers: -![Synthetic loggers](synth-logger.png) - -You can report the running pods, [replication controllers](../../docs/replication-controller.md), and [services](../../docs/services.md) with another Makefile rule: -``` -$ make get -../../../kubectl.sh get pods -Running: ../../../../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -elasticsearch-logging-f0smz 10.244.2.3 kubernetes-minion-ilqx/104.197.8.214 kubernetes.io/cluster-service=true,name=elasticsearch-logging Running 5 hours - elasticsearch-logging gcr.io/google_containers/elasticsearch:1.0 Running 5 hours -etcd-server-kubernetes-master kubernetes-master/ Running 5 hours - etcd-container gcr.io/google_containers/etcd:2.0.9 Running 5 hours -fluentd-elasticsearch-kubernetes-minion-7s1y 10.244.0.2 kubernetes-minion-7s1y/23.236.54.97 Running 5 hours - fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours -fluentd-elasticsearch-kubernetes-minion-cfs6 10.244.1.2 kubernetes-minion-cfs6/104.154.61.231 Running 5 hours - fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours -fluentd-elasticsearch-kubernetes-minion-ilqx 10.244.2.2 kubernetes-minion-ilqx/104.197.8.214 Running 5 hours - fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours -fluentd-elasticsearch-kubernetes-minion-x8gx 10.244.3.2 kubernetes-minion-x8gx/104.154.47.83 Running 5 hours - fluentd-elasticsearch gcr.io/google_containers/fluentd-elasticsearch:1.5 Running 5 hours -kibana-logging-cwe0b 10.244.1.3 kubernetes-minion-cfs6/104.154.61.231 kubernetes.io/cluster-service=true,name=kibana-logging Running 5 hours - kibana-logging gcr.io/google_containers/kibana:1.2 Running 5 hours -kube-apiserver-kubernetes-master kubernetes-master/ Running 5 hours - kube-apiserver gcr.io/google_containers/kube-apiserver:f0c332fc2582927ec27d24965572d4b0 Running 5 hours -kube-controller-manager-kubernetes-master kubernetes-master/ Running 5 hours - kube-controller-manager gcr.io/google_containers/kube-controller-manager:6729154dfd4e2a19752bdf9ceff8464c Running 5 hours -kube-dns-swd4n 10.244.3.5 kubernetes-minion-x8gx/104.154.47.83 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 5 hours - kube2sky gcr.io/google_containers/kube2sky:1.2 Running 5 hours - etcd quay.io/coreos/etcd:v2.0.3 Running 5 hours - skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 5 hours -kube-scheduler-kubernetes-master kubernetes-master/ Running 5 hours - kube-scheduler gcr.io/google_containers/kube-scheduler:ec9d2092f754211cc5ab3a5162c05fc1 Running 5 hours -monitoring-heapster-controller-zpjj1 10.244.3.3 kubernetes-minion-x8gx/104.154.47.83 kubernetes.io/cluster-service=true,name=heapster Running 5 hours - heapster gcr.io/google_containers/heapster:v0.10.0 Running 5 hours -monitoring-influx-grafana-controller-dqan4 10.244.3.4 
kubernetes-minion-x8gx/104.154.47.83 kubernetes.io/cluster-service=true,name=influxGrafana Running 5 hours - grafana gcr.io/google_containers/heapster_grafana:v0.6 Running 5 hours - influxdb gcr.io/google_containers/heapster_influxdb:v0.3 Running 5 hours -synthetic-logger-0.25lps-pod 10.244.0.7 kubernetes-minion-7s1y/23.236.54.97 name=synth-logging-source Running 19 minutes - synth-lgr ubuntu:14.04 Running 19 minutes -synthetic-logger-10lps-pod 10.244.3.14 kubernetes-minion-x8gx/104.154.47.83 name=synth-logging-source Running 19 minutes - synth-lgr ubuntu:14.04 Running 19 minutes -../../_output/local/bin/linux/amd64/kubectl get replicationControllers -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -elasticsearch-logging elasticsearch-logging gcr.io/google_containers/elasticsearch:1.0 name=elasticsearch-logging 1 -kibana-logging kibana-logging gcr.io/google_containers/kibana:1.2 name=kibana-logging 1 -kube-dns etcd quay.io/coreos/etcd:v2.0.3 k8s-app=kube-dns 1 - kube2sky gcr.io/google_containers/kube2sky:1.2 - skydns gcr.io/google_containers/skydns:2015-03-11-001 -monitoring-heapster-controller heapster gcr.io/google_containers/heapster:v0.10.0 name=heapster 1 -monitoring-influx-grafana-controller influxdb gcr.io/google_containers/heapster_influxdb:v0.3 name=influxGrafana 1 - grafana gcr.io/google_containers/heapster_grafana:v0.6 -../../_output/local/bin/linux/amd64/kubectl get services -NAME LABELS SELECTOR IP(S) PORT(S) -elasticsearch-logging kubernetes.io/cluster-service=true,name=elasticsearch-logging name=elasticsearch-logging 10.0.251.221 9200/TCP -kibana-logging kubernetes.io/cluster-service=true,name=kibana-logging name=kibana-logging 10.0.188.118 5601/TCP -kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 10.0.0.10 53/UDP -kubernetes component=apiserver,provider=kubernetes 10.0.0.2 443/TCP -monitoring-grafana kubernetes.io/cluster-service=true,name=grafana name=influxGrafana 10.0.254.202 80/TCP -monitoring-heapster kubernetes.io/cluster-service=true,name=heapster name=heapster 10.0.19.214 80/TCP -monitoring-influxdb name=influxGrafana name=influxGrafana 10.0.198.71 80/TCP -monitoring-influxdb-ui name=influxGrafana name=influxGrafana 10.0.109.66 80/TCP -``` - -The `net` rule in the Makefile will report information about the Elasticsearch and Kibana services including the public IP addresses of each service. 
-``` -$ make net -../../../kubectl.sh get services elasticsearch-logging -o json -current-context: "lithe-cocoa-92103_kubernetes" -Running: ../../_output/local/bin/linux/amd64/kubectl get services elasticsearch-logging -o json -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "elasticsearch-logging", - "namespace": "default", - "selfLink": "/api/v1beta3/namespaces/default/services/elasticsearch-logging", - "uid": "9dc7290f-f358-11e4-a58e-42010af09a93", - "resourceVersion": "28", - "creationTimestamp": "2015-05-05T18:57:45Z", - "labels": { - "kubernetes.io/cluster-service": "true", - "name": "elasticsearch-logging" - } - }, - "spec": { - "ports": [ - { - "name": "", - "protocol": "TCP", - "port": 9200, - "targetPort": "es-port" - } - ], - "selector": { - "name": "elasticsearch-logging" - }, - "portalIP": "10.0.251.221", - "sessionAffinity": "None" - }, - "status": {} -} -current-context: "lithe-cocoa-92103_kubernetes" -Running: ../../_output/local/bin/linux/amd64/kubectl get services kibana-logging -o json -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "kibana-logging", - "namespace": "default", - "selfLink": "/api/v1beta3/namespaces/default/services/kibana-logging", - "uid": "9dc6f856-f358-11e4-a58e-42010af09a93", - "resourceVersion": "31", - "creationTimestamp": "2015-05-05T18:57:45Z", - "labels": { - "kubernetes.io/cluster-service": "true", - "name": "kibana-logging" - } - }, - "spec": { - "ports": [ - { - "name": "", - "protocol": "TCP", - "port": 5601, - "targetPort": "kibana-port" - } - ], - "selector": { - "name": "kibana-logging" - }, - "portalIP": "10.0.188.118", - "sessionAffinity": "None" - }, - "status": {} -} -``` -To find the URLs to access the Elasticsearch and Kibana viewer, -``` -$ kubectl cluster-info -Kubernetes master is running at https://130.211.122.180 -elasticsearch-logging is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging -kibana-logging is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kibana-logging -kube-dns is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kube-dns -grafana is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/monitoring-grafana -heapster is running at https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/monitoring-heapster -``` - -To find the user name and password to access the URLs, -``` -$ kubectl config view -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: REDACTED - server: https://130.211.122.180 - name: lithe-cocoa-92103_kubernetes -contexts: -- context: - cluster: lithe-cocoa-92103_kubernetes - user: lithe-cocoa-92103_kubernetes - name: lithe-cocoa-92103_kubernetes -current-context: lithe-cocoa-92103_kubernetes -kind: Config -preferences: {} -users: -- name: lithe-cocoa-92103_kubernetes - user: - client-certificate-data: REDACTED - client-key-data: REDACTED - token: 65rZW78y8HxmXXtSXuUw9DbP4FLjHi4b -- name: lithe-cocoa-92103_kubernetes-basic-auth - user: - password: h5M0FtVXXflBSdI7 - username: admin -``` - -Access the Elasticsearch service at URL `https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging`, use the user name 'admin' and password 'h5M0FtVXXflBSdI7', -``` -{ - "status" : 200, - "name" : "Major Mapleleaf", - "cluster_name" : "kubernetes_logging", - "version" : { - "number" : "1.4.4", - "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512", - 
"build_timestamp" : "2015-02-19T13:05:36Z", - "build_snapshot" : false, - "lucene_version" : "4.10.3" - }, - "tagline" : "You Know, for Search" -} -``` -Visiting the URL `https://130.211.122.180/api/v1beta3/proxy/namespaces/default/services/kibana-logging` should show the Kibana viewer for the logging information stored in the Elasticsearch service. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/logging-demo/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/logging-demo/README.md?pixel)]() diff --git a/release-0.19.0/examples/logging-demo/synth-logger.png b/release-0.19.0/examples/logging-demo/synth-logger.png deleted file mode 100644 index bd19ea3ee41..00000000000 Binary files a/release-0.19.0/examples/logging-demo/synth-logger.png and /dev/null differ diff --git a/release-0.19.0/examples/logging-demo/synthetic_0_25lps.yaml b/release-0.19.0/examples/logging-demo/synthetic_0_25lps.yaml deleted file mode 100644 index 5ff01e52874..00000000000 --- a/release-0.19.0/examples/logging-demo/synthetic_0_25lps.yaml +++ /dev/null @@ -1,29 +0,0 @@ -# This pod specification creates an instance of a synthetic logger. The logger -# is simply a program that writes out the hostname of the pod, a count which increments -# by one on each iteration (to help notice missing log enteries) and the date using -# a long format (RFC-3339) to nano-second precision. This program logs at a frequency -# of 0.25 lines per second. The shellscript program is given directly to bash as -c argument -# and could have been written out as: -# i="0" -# while true -# do -# echo -n "`hostname`: $i: " -# date --rfc-3339 ns -# sleep 4 -# i=$[$i+1] -# done -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: synth-logging-source - name: synthetic-logger-0.25lps-pod -spec: - containers: - - args: - - bash - - -c - - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep - 4; i=$[$i+1]; done' - image: ubuntu:14.04 - name: synth-lgr diff --git a/release-0.19.0/examples/logging-demo/synthetic_10lps.yaml b/release-0.19.0/examples/logging-demo/synthetic_10lps.yaml deleted file mode 100644 index 35f305d260f..00000000000 --- a/release-0.19.0/examples/logging-demo/synthetic_10lps.yaml +++ /dev/null @@ -1,30 +0,0 @@ -# This pod specification creates an instance of a synthetic logger. The logger -# is simply a program that writes out the hostname of the pod, a count which increments -# by one on each iteration (to help notice missing log enteries) and the date using -# a long format (RFC-3339) to nano-second precision. This program logs at a frequency -# of 0.25 lines per second. 
The shellscript program is given directly to bash as -c argument -# and could have been written out as: -# i="0" -# while true -# do -# echo -n "`hostname`: $i: " -# date --rfc-3339 ns -# sleep 4 -# i=$[$i+1] -# done -apiVersion: v1beta3 -kind: Pod -metadata: - creationTimestamp: null - labels: - name: synth-logging-source - name: synthetic-logger-10lps-pod -spec: - containers: - - args: - - bash - - -c - - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep - 0.1; i=$[$i+1]; done' - image: ubuntu:14.04 - name: synth-lgr diff --git a/release-0.19.0/examples/meteor/README.md b/release-0.19.0/examples/meteor/README.md deleted file mode 100644 index 6641943bdfe..00000000000 --- a/release-0.19.0/examples/meteor/README.md +++ /dev/null @@ -1,171 +0,0 @@ -Meteor on Kuberenetes -===================== - -This example shows you how to package and run a -[Meteor](https://www.meteor.com/) app on Kubernetes. - -Build a container for your Meteor app -------------------------------------- - -To be able to run your Meteor app on Kubernetes you need to build a -Docker container for it first. To do that you need to install -[Docker](https://www.docker.com) Once you have that you need to add 2 -files to your existing Meteor project `Dockerfile` and -`.dockerignore`. - -`Dockerfile` should contain the below lines. You should replace the -`ROOT_URL` with the actual hostname of your app. -``` -FROM chees/meteor-kubernetes -ENV ROOT_URL http://myawesomeapp.com -``` - -The `.dockerignore` file should contain the below lines. This tells -Docker to ignore the files on those directories when it's building -your container. -``` -.meteor/local -packages/*/.build* -``` - -You can see an example meteor project already set up at: -[meteor-gke-example](https://github.com/Q42/meteor-gke-example). Feel -free to use this app for this example. - -> Note: The next step will not work if you have added mobile platforms -> to your meteor project. Check with `meteor list-platforms` - -Now you can build your container by running this in -your Meteor project directory: -``` -docker build -t my-meteor . -``` - -Pushing to a registry ---------------------- - -For the [Docker Hub](https://hub.docker.com/), tag your app image with -your username and push to the Hub with the below commands. Replace -`` with your Hub username. -``` -docker tag my-meteor /my-meteor -docker push /my-meteor -``` - -For [Google Container -Registry](https://cloud.google.com/tools/container-registry/), tag -your app image with your project ID, and push to GCR. Replace -`` with your project ID. -``` -docker tag my-meteor gcr.io//my-meteor -gcloud preview docker push gcr.io//my-meteor -``` - -Running -------- - -Now that you have containerized your Meteor app it's time to set up -your cluster. Edit [`meteor-controller.json`](meteor-controller.json) and make sure the `image` -points to the container you just pushed to the Docker Hub or GCR. - -As you may know, Meteor uses MongoDB, and we'll need to provide it a -persistent Kuberetes volume to store its data. See the [volumes -documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md) -for options. We're going to use Google Compute Engine persistent -disks. 
Create the MongoDB disk by running: -``` -gcloud compute disks create --size=200GB mongo-disk -``` - -You also need to format the disk before you can use it: -``` -gcloud compute instances attach-disk --disk=mongo-disk --device-name temp-data kubernetes-master -gcloud compute ssh kubernetes-master --command "sudo mkdir /mnt/tmp && sudo /usr/share/google/safe_format_and_mount /dev/disk/by-id/google-temp-data /mnt/tmp" -gcloud compute instances detach-disk --disk mongo-disk kubernetes-master -``` - -Now you can start Mongo using that disk: -``` -kubectl create -f mongo-pod.json -kubectl create -f mongo-service.json -``` - -Wait until Mongo is started completely and then start up your Meteor app: -``` -kubectl create -f meteor-controller.json -kubectl create -f meteor-service.json -``` - -Note that [`meteor-service.json`](meteor-service.json) creates an external load balancer, so -your app should be available through the IP of that load balancer once -the Meteor pods are started. You can find the IP of your load balancer -by running: -``` -kubectl get services/meteor -o template -t "{{.spec.publicIPs}}" -``` - -You will have to open up port 80 if it's not open yet in your -environment. On GCE, you may run the below command. -``` -gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion -``` - -What is going on? ------------------ - -Firstly, the `FROM chees/meteor-kubernetes` line in your `Dockerfile` -specifies the base image for your Meteor app. The code for that image -is located in the `dockerbase/` subdirectory. Open up the `Dockerfile` -to get an insight of what happens during the `docker build` step. The -image is based on the Node.js official image. It then installs Meteor -and copies in your apps' code. The last line specifies what happens -when your app container is run. -``` -ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js -``` - -Here we can see the MongoDB host and port information being passed -into the Meteor app. The `MONGO_SERVICE...` environment variables are -set by Kubernetes, and point to the service named `mongo` specified in -[`mongo-service.json`](mongo-service.json). See the [environment -documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/container-environment.md) -for more details. - -As you may know, Meteor uses long lasting connections, and requires -_sticky sessions_. With Kubernetes you can scale out your app easily -with session affinity. The [`meteor-service.json`](meteor-service.json) file contains -`"sessionAffinity": "ClientIP"`, which provides this for us. See the -[service -documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#virtual-ips-and-service-proxies) -for more information. - -As mentioned above, the mongo container uses a volume which is mapped -to a persistent disk by Kubernetes. 
In [`mongo-pod.json`](mongo-pod.json) the container -section specifies the volume: -``` - "volumeMounts": [ - { - "name": "mongo-disk", - "mountPath": "/data/db" - } -``` - -The name `mongo-disk` refers to the volume specified outside the -container section: -``` - "volumes": [ - { - "name": "mongo-disk", - "gcePersistentDisk": { - "pdName": "mongo-disk", - "fsType": "ext4" - } - } - ], -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/meteor/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/meteor/README.md?pixel)]() diff --git a/release-0.19.0/examples/meteor/dockerbase/Dockerfile b/release-0.19.0/examples/meteor/dockerbase/Dockerfile deleted file mode 100644 index 8ce633c634b..00000000000 --- a/release-0.19.0/examples/meteor/dockerbase/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM node:0.10 -MAINTAINER Christiaan Hees - -ONBUILD WORKDIR /appsrc -ONBUILD COPY . /appsrc - -ONBUILD RUN curl https://install.meteor.com/ | sh && \ - meteor build ../app --directory --architecture os.linux.x86_64 && \ - rm -rf /appsrc -# TODO rm meteor so it doesn't take space in the image? - -ONBUILD WORKDIR /app/bundle - -ONBUILD RUN (cd programs/server && npm install) -EXPOSE 8080 -CMD [] -ENV PORT 8080 -ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js diff --git a/release-0.19.0/examples/meteor/dockerbase/README.md b/release-0.19.0/examples/meteor/dockerbase/README.md deleted file mode 100644 index a17b773e6ad..00000000000 --- a/release-0.19.0/examples/meteor/dockerbase/README.md +++ /dev/null @@ -1,15 +0,0 @@ -Building the meteor-kubernetes base image ------------------------------------------ - -As a normal user you don't need to do this since the image is already built and pushed to Docker Hub. You can just use it as a base image. See [this example](https://github.com/Q42/meteor-gke-example/blob/master/Dockerfile). - -To build and push the base meteor-kubernetes image: - - docker build -t chees/meteor-kubernetes . 
- docker push chees/meteor-kubernetes - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/meteor/dockerbase/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/meteor/dockerbase/README.md?pixel)]() diff --git a/release-0.19.0/examples/meteor/meteor-controller.json b/release-0.19.0/examples/meteor/meteor-controller.json deleted file mode 100644 index 2935126e03f..00000000000 --- a/release-0.19.0/examples/meteor/meteor-controller.json +++ /dev/null @@ -1,40 +0,0 @@ -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "meteor-controller", - "labels": { - "name": "meteor" - } - }, - "spec": { - "replicas": 2, - "selector": { - "name": "meteor" - }, - "template": { - "metadata": { - "labels": { - "name": "meteor" - } - }, - "spec": { - "containers": [ - { - "name": "meteor", - "image": "chees/meteor-gke-example:latest", - "ports": [ - { - "name": "http-server", - "hostPort": 80, - "containerPort": 8080, - "protocol": "TCP" - } - ], - "resources": {} - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/meteor/meteor-service.json b/release-0.19.0/examples/meteor/meteor-service.json deleted file mode 100644 index e04be7c13f8..00000000000 --- a/release-0.19.0/examples/meteor/meteor-service.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "meteor" - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 80, - "targetPort": "http-server" - } - ], - "selector": { - "name": "meteor" - }, - "createExternalLoadBalancer": true, - "sessionAffinity": "ClientIP" - } -} diff --git a/release-0.19.0/examples/meteor/mongo-pod.json b/release-0.19.0/examples/meteor/mongo-pod.json deleted file mode 100644 index cd7deba68e8..00000000000 --- a/release-0.19.0/examples/meteor/mongo-pod.json +++ /dev/null @@ -1,42 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "mongo", - "labels": { - "name": "mongo", - "role": "mongo" - } - }, - "spec": { - "volumes": [ - { - "name": "mongo-disk", - "gcePersistentDisk": { - "pdName": "mongo-disk", - "fsType": "ext4" - } - } - ], - "containers": [ - { - "name": "mongo", - "image": "mongo:latest", - "ports": [ - { - "name": "mongo", - "containerPort": 27017, - "protocol": "TCP" - } - ], - "resources": {}, - "volumeMounts": [ - { - "name": "mongo-disk", - "mountPath": "/data/db" - } - ] - } - ] - } -} diff --git a/release-0.19.0/examples/meteor/mongo-service.json b/release-0.19.0/examples/meteor/mongo-service.json deleted file mode 100644 index 72e9ed46503..00000000000 --- a/release-0.19.0/examples/meteor/mongo-service.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "mongo", - "labels": { - "name": "mongo" - } - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 27017, - "targetPort": "mongo" - } - ], - "selector": { - "name": "mongo", - "role": "mongo" - } - } -} diff --git a/release-0.19.0/examples/mysql-wordpress-pd/README.md b/release-0.19.0/examples/mysql-wordpress-pd/README.md deleted file mode 100644 index 5362451f6a1..00000000000 --- a/release-0.19.0/examples/mysql-wordpress-pd/README.md +++ /dev/null @@ -1,314 +0,0 @@ - -# Persistent Installation of MySQL and WordPress on Kubernetes - -This example describes how to run a persistent installation of [Wordpress](https://wordpress.org/) using the [volumes](/docs/volumes.md) feature of Kubernetes, and [Google 
Compute Engine](https://cloud.google.com/compute/docs/disks) [persistent disks](/docs/volumes.md#gcepersistentdisk). - -We'll use the [mysql](https://registry.hub.docker.com/_/mysql/) and [wordpress](https://registry.hub.docker.com/_/wordpress/) official [Docker](https://www.docker.com/) images for this installation. (The wordpress image includes an Apache server). - -We'll create two Kubernetes [pods](http://docs.k8s.io/pods.md) to run mysql and wordpress, both with associated persistent disks, then set up a Kubernetes [service](http://docs.k8s.io/services.md) to front each pod. - -This example demonstrates several useful things, including: how to set up and use persistent disks with Kubernetes pods; how to define Kubernetes services to leverage docker-links-compatible service environment variables; and use of an external load balancer to expose the wordpress service externally and make it transparent to the user if the wordpress pod moves to a different cluster node. - -## Install gcloud and start up a Kubernetes cluster - -First, if you have not already done so, [create](https://cloud.google.com/compute/docs/quickstart) a [Google Cloud Platform](https://cloud.google.com/) project, and install the [gcloud SDK](https://cloud.google.com/sdk/). - -Then, set the gcloud default project name to point to the project you want to use for your Kubernetes cluster: - -``` -gcloud config set project -``` - -Next, grab the Kubernetes [release binary](https://github.com/GoogleCloudPlatform/kubernetes/releases) and start up a Kubernetes cluster: -``` -$ cluster/kube-up.sh -``` -where `` is the path to your Kubernetes installation. - -Or, as [described here](http://docs.k8s.io/getting-started-guides/gce.md), you can do this via: -```shell -wget -q -O - https://get.k8s.io | bash -``` -or -```shell -curl -sS https://get.k8s.io | bash -``` - -## Create two persistent disks - -For this WordPress installation, we're going to configure our Kubernetes [pods](http://docs.k8s.io/pods.md) to use [persistent disks](https://cloud.google.com/compute/docs/disks). This means that we can preserve installation state across pod shutdown and re-startup. - -You will need to create the disks in the same [GCE zone](https://cloud.google.com/compute/docs/zones) as the Kubernetes cluster. The `cluster/kube-up.sh` script will create the cluster in the `us-central1-b` zone by default, as seen in the [config-default.sh](/cluster/gce/config-default.sh) file. Replace `$ZONE` below with the appropriate zone. - -Before doing anything else, we'll create the persistent disks that we'll use for the installation: one for the mysql pod, and one for the wordpress pod. -The general series of steps required is as described [here](http://docs.k8s.io/volumes.md), where $DISK_SIZE is specified as, e.g. '500GB'. In future, this process will be more streamlined. - -So for the two disks used in this example, do the following. -First create the mysql disk, setting the disk size to meet your needs: - -```shell -gcloud compute disks create --size=$DISK_SIZE --zone=$ZONE mysql-disk -``` - -Then create the wordpress disk. Note that you may not want as large a disk size for the wordpress code as for the mysql disk. - -```shell -gcloud compute disks create --size=$DISK_SIZE --zone=$ZONE wordpress-disk -``` - -## Start the Mysql Pod and Service - -Now that the persistent disks are defined, the Kubernetes pods can be launched. We'll start with the mysql pod. 
- -### Start the Mysql pod - -First, **edit [`mysql.yaml`](mysql.yaml)**, the mysql pod definition, to use a database password that you specify. -`mysql.yaml` looks like this: - -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - name: mysql - labels: - name: mysql -spec: - containers: - - resources: - limits : - cpu: 1 - image: mysql - name: mysql - env: - - name: MYSQL_ROOT_PASSWORD - # change this - value: yourpassword - ports: - - containerPort: 3306 - name: mysql - volumeMounts: - # name must match the volume name below - - name: mysql-persistent-storage - # mount path within the container - mountPath: /var/lib/mysql - volumes: - - name: mysql-persistent-storage - gcePersistentDisk: - # This GCE PD must already exist. - pdName: mysql-disk - fsType: ext4 - -``` - -Note that we've defined a volume mount for `/var/lib/mysql`, and specified a volume that uses the persistent disk (`mysql-disk`) that you created. -Once you've edited the file to set your database password, create the pod as follows, where `` is the path to your Kubernetes installation: - -```shell -$ kubectl create -f mysql.yaml -``` - -It may take a short period before the new pod reaches the `Running` state. -List all pods to see the status of this new pod and the cluster node that it is running on: - -```shell -$ kubectl get pods -``` - - -#### Check the running pod on the Compute instance - -You can take a look at the logs for a pod by using `kubectl.sh log`. For example: - -```shell -$ kubectl log mysql -``` - -If you want to do deeper troubleshooting, e.g. if it seems a container is not staying up, you can also ssh in to the node that a pod is running on. There, you can run `sudo -s`, then `docker ps -a` to see all the containers. You can then inspect the logs of containers that have exited, via `docker logs `. (You can also find some relevant logs under `/var/log`, e.g. `docker.log` and `kubelet.log`). - -### Start the Mysql service - -We'll define and start a [service](http://docs.k8s.io/services.md) that lets other pods access the mysql database on a known port and host. -We will specifically name the service `mysql`. This will let us leverage the support for [Docker-links-compatible](http://docs.k8s.io/services.md#how-do-they-work) service environment variables when we set up the wordpress pod. The wordpress Docker image expects to be linked to a mysql container named `mysql`, as you can see in the "How to use this image" section on the wordpress docker hub [page](https://registry.hub.docker.com/_/wordpress/). - -So if we label our Kubernetes mysql service `mysql`, the wordpress pod will be able to use the Docker-links-compatible environment variables, defined by Kubernetes, to connect to the database. - -The [`mysql-service.yaml`](mysql-service.yaml) file looks like this: - -```yaml -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: mysql - name: mysql -spec: - ports: - # the port that this service should serve on - - port: 3306 - # label keys and values that must match in order to receive traffic for this service - selector: - name: mysql -``` - -Start the service like this: - -```shell -$ kubectl create -f mysql-service.yaml -``` - -You can see what services are running via: - -```shell -$ kubectl get services -``` - - -## Start the WordPress Pod and Service - -Once the mysql service is up, start the wordpress pod, specified in -[`wordpress.yaml`](wordpress.yaml). Before you start it, **edit `wordpress.yaml`** and **set the database password to be the same as you used in `mysql.yaml`**. 
-Note that this config file also defines a volume, this one using the `wordpress-disk` persistent disk that you created. - -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - name: wordpress - labels: - name: wordpress -spec: - containers: - - image: wordpress - name: wordpress - env: - - name: WORDPRESS_DB_PASSWORD - # change this - must match mysql.yaml password - value: yourpassword - ports: - - containerPort: 80 - name: wordpress - volumeMounts: - # name must match the volume name below - - name: wordpress-persistent-storage - # mount path within the container - mountPath: /var/www/html - volumes: - - name: wordpress-persistent-storage - gcePersistentDisk: - # This GCE PD must already exist. - pdName: wordpress-disk - fsType: ext4 -``` - -Create the pod: - -```shell -$ kubectl create -f wordpress.yaml -``` - -And list the pods to check that the status of the new pod changes -to `Running`. As above, this might take a minute. - -```shell -$ kubectl get pods -``` - -### Start the WordPress service - -Once the wordpress pod is running, start its service, specified by [`wordpress-service.yaml`](wordpress-service.yaml). - -The service config file looks like this: - -```yaml -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: wpfrontend - name: wpfrontend -spec: - createExternalLoadBalancer: true - ports: - # the port that this service should serve on - - port: 80 - # label keys and values that must match in order to receive traffic for this service - selector: - name: wordpress -``` - -Note the `createExternalLoadBalancer` setting. This will set up the wordpress service behind an external IP. -Note also that we've set the service port to 80. We'll return to that shortly. - -Start the service: - -```shell -$ kubectl create -f wordpress-service.yaml -``` - -and see it in the list of services: - -```shell -$ kubectl get services -``` - -Then, find the external IP for your WordPress service by listing the forwarding rules for your project: - -```shell -$ gcloud compute forwarding-rules list -``` - -Look for the rule called `wpfrontend`, which is what we named the wordpress service, and note its IP address. - -## Visit your new WordPress blog - -To access your new installation, you first may need to open up port 80 (the port specified in the wordpress service config) in the firewall for your cluster. You can do this, e.g. via: - -```shell -$ gcloud compute firewall-rules create sample-http --allow tcp:80 -``` - -This will define a firewall rule called `sample-http` that opens port 80 in the default network for your project. - -Now, we can visit the running WordPress app. -Use the external IP that you obtained above, and visit it on port 80: - -``` -http:// -``` - -You should see the familiar WordPress init page. - -## Take down and restart your blog - -Set up your WordPress blog and play around with it a bit. Then, take down its pods and bring them back up again. Because you used persistent disks, your blog state will be preserved. - -If you are just experimenting, you can take down and bring up only the pods: - -```shell -$ kubectl delete -f wordpress.yaml -$ kubectl delete -f mysql.yaml -``` - -When you restart the pods again (using the `create` operation as described above), their services will pick up the new pods based on their labels. - -If you want to shut down the entire app installation, you can delete the services as well. 
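Using the same definition files, that would be:

```shell
$ kubectl delete -f wordpress-service.yaml
$ kubectl delete -f mysql-service.yaml
```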
- -If you are ready to turn down your Kubernetes cluster altogether, run: - -```shell -$ cluster/kube-down.sh -``` - - - - - - - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/mysql-wordpress-pd/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/mysql-wordpress-pd/README.md?pixel)]() diff --git a/release-0.19.0/examples/mysql-wordpress-pd/mysql-service.yaml b/release-0.19.0/examples/mysql-wordpress-pd/mysql-service.yaml deleted file mode 100644 index c8e0c55a18f..00000000000 --- a/release-0.19.0/examples/mysql-wordpress-pd/mysql-service.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: mysql - name: mysql -spec: - ports: - # the port that this service should serve on - - port: 3306 - # label keys and values that must match in order to receive traffic for this service - selector: - name: mysql diff --git a/release-0.19.0/examples/mysql-wordpress-pd/mysql.yaml b/release-0.19.0/examples/mysql-wordpress-pd/mysql.yaml deleted file mode 100644 index b94c5607942..00000000000 --- a/release-0.19.0/examples/mysql-wordpress-pd/mysql.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: mysql - labels: - name: mysql -spec: - containers: - - resources: - limits : - cpu: 0.5 - image: mysql - name: mysql - env: - - name: MYSQL_ROOT_PASSWORD - # change this - value: yourpassword - ports: - - containerPort: 3306 - name: mysql - volumeMounts: - # name must match the volume name below - - name: mysql-persistent-storage - # mount path within the container - mountPath: /var/lib/mysql - volumes: - - name: mysql-persistent-storage - gcePersistentDisk: - # This GCE PD must already exist. - pdName: mysql-disk - fsType: ext4 diff --git a/release-0.19.0/examples/mysql-wordpress-pd/wordpress-service.yaml b/release-0.19.0/examples/mysql-wordpress-pd/wordpress-service.yaml deleted file mode 100644 index 3a8573d1097..00000000000 --- a/release-0.19.0/examples/mysql-wordpress-pd/wordpress-service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: wpfrontend - name: wpfrontend -spec: - createExternalLoadBalancer: true - ports: - # the port that this service should serve on - - port: 80 - # label keys and values that must match in order to receive traffic for this service - selector: - name: wordpress diff --git a/release-0.19.0/examples/mysql-wordpress-pd/wordpress.yaml b/release-0.19.0/examples/mysql-wordpress-pd/wordpress.yaml deleted file mode 100644 index 56230ab3710..00000000000 --- a/release-0.19.0/examples/mysql-wordpress-pd/wordpress.yaml +++ /dev/null @@ -1,28 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: wordpress - labels: - name: wordpress -spec: - containers: - - image: wordpress - name: wordpress - env: - - name: WORDPRESS_DB_PASSWORD - # change this - must match mysql.yaml password - value: yourpassword - ports: - - containerPort: 80 - name: wordpress - volumeMounts: - # name must match the volume name below - - name: wordpress-persistent-storage - # mount path within the container - mountPath: /var/www/html - volumes: - - name: wordpress-persistent-storage - gcePersistentDisk: - # This GCE PD must already exist. 
- pdName: wordpress-disk - fsType: ext4 diff --git a/release-0.19.0/examples/nfs/README.md b/release-0.19.0/examples/nfs/README.md deleted file mode 100644 index 3cecc3089be..00000000000 --- a/release-0.19.0/examples/nfs/README.md +++ /dev/null @@ -1,43 +0,0 @@ -# Example of NFS volume - -See [nfs-web-pod.yaml](nfs-web-pod.yaml) for a quick example, how to use NFS volume -in a pod. - -## Complete setup - -The example below shows how to export a NFS share from a pod and import it -into another one. - -### NFS server part - -Define [NFS server pod](nfs-server-pod.yaml) and -[NFS service](nfs-server-service.yaml): - - $ kubectl create -f nfs-server-pod.yaml - $ kubectl create -f nfs-server-service.yaml - -The server exports `/mnt/data` directory as `/` (fsid=0). The directory contains -dummy `index.html`. Wait until the pod is running! - -### NFS client - -[WEB server pod](nfs-web-pod.yaml) uses the NFS share exported above as a NFS -volume and runs simple web server on it. The pod assumes your DNS is configured -and the NFS service is reachable as `nfs-server.default.kube.local`. Edit the -yaml file to supply another name or directly its IP address (use -`kubectl get services` to get it). - -Define the pod: - - $ kubectl create -f nfs-web-pod.yaml - -Now the pod serves `index.html` from the NFS server: - - $ curl http:/// - Hello World! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/nfs/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/nfs/README.md?pixel)]() diff --git a/release-0.19.0/examples/nfs/exporter/Dockerfile b/release-0.19.0/examples/nfs/exporter/Dockerfile deleted file mode 100644 index 68755ed44b1..00000000000 --- a/release-0.19.0/examples/nfs/exporter/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM fedora:21 -MAINTAINER Jan Safranek -EXPOSE 2049/tcp - -RUN yum -y install nfs-utils && yum clean all - -ADD run_nfs /usr/local/bin/ - -RUN chmod +x /usr/local/bin/run_nfs - -ENTRYPOINT ["/usr/local/bin/run_nfs"] diff --git a/release-0.19.0/examples/nfs/exporter/README.md b/release-0.19.0/examples/nfs/exporter/README.md deleted file mode 100644 index 8266fb81db0..00000000000 --- a/release-0.19.0/examples/nfs/exporter/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# NFS-exporter container - -Inspired by https://github.com/cpuguy83/docker-nfs-server. Rewritten for -Fedora. - -Serves NFS4 exports, defined on command line. At least one export must be defined! - -Usage:: - - docker run -d --name nfs --privileged jsafrane/nfsexporter /path/to/share /path/to/share2 ... - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/nfs/exporter/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/nfs/exporter/README.md?pixel)]() diff --git a/release-0.19.0/examples/nfs/exporter/run_nfs b/release-0.19.0/examples/nfs/exporter/run_nfs deleted file mode 100755 index b6b888e9300..00000000000 --- a/release-0.19.0/examples/nfs/exporter/run_nfs +++ /dev/null @@ -1,72 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -function start() -{ - - # prepare /etc/exports - seq=0 - for i in "$@"; do - echo "$i *(rw,sync,no_root_squash,insecure,fsid=$seq)" >> /etc/exports - seq=$(($seq + 1)) - echo "Serving $i" - done - - # from /lib/systemd/system/proc-fs-nfsd.mount - mount -t nfsd nfds /proc/fs/nfsd - - # from /lib/systemd/system/nfs-config.service - /usr/lib/systemd/scripts/nfs-utils_env.sh - - # from /lib/systemd/system/nfs-mountd.service - . /run/sysconfig/nfs-utils - /usr/sbin/rpc.mountd $RPCMOUNTDARGS - - # from /lib/systemd/system/nfs-server.service - . /run/sysconfig/nfs-utils - /usr/sbin/exportfs -r - /usr/sbin/rpc.nfsd -N 2 -N 3 -V 4 -V 4.1 $RPCNFSDARGS - - echo "NFS started" -} - -function stop() -{ - echo "Stopping NFS" - - # from /lib/systemd/system/nfs-server.service - /usr/sbin/rpc.nfsd 0 - /usr/sbin/exportfs -au - /usr/sbin/exportfs -f - - # from /lib/systemd/system/nfs-mountd.service - kill $( pidof rpc.mountd ) - # from /lib/systemd/system/proc-fs-nfsd.mount - umount /proc/fs/nfsd - - echo > /etc/exports - exit 0 -} - - -trap stop TERM - -start "$@" - -# Ugly hack to do nothing and wait for SIGTERM -while true; do - read -done diff --git a/release-0.19.0/examples/nfs/nfs-data/Dockerfile b/release-0.19.0/examples/nfs/nfs-data/Dockerfile deleted file mode 100644 index 33fd131a5c7..00000000000 --- a/release-0.19.0/examples/nfs/nfs-data/Dockerfile +++ /dev/null @@ -1,5 +0,0 @@ -FROM jsafrane/nfsexporter -MAINTAINER Jan Safranek -ADD index.html /mnt/data/index.html - -ENTRYPOINT ["/usr/local/bin/run_nfs", "/mnt/data"] diff --git a/release-0.19.0/examples/nfs/nfs-data/README.md b/release-0.19.0/examples/nfs/nfs-data/README.md deleted file mode 100644 index df31bb168fe..00000000000 --- a/release-0.19.0/examples/nfs/nfs-data/README.md +++ /dev/null @@ -1,12 +0,0 @@ -# NFS-exporter container with a file - -This container exports /mnt/data with index.html in it via NFSv4. Based on -../exporter. - -Available in dockerhub as -[jsafrane/nfs-data](https://registry.hub.docker.com/u/jsafrane/nfs-data/). - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/nfs/nfs-data/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/nfs/nfs-data/README.md?pixel)]() diff --git a/release-0.19.0/examples/nfs/nfs-data/index.html b/release-0.19.0/examples/nfs/nfs-data/index.html deleted file mode 100644 index cd0875583aa..00000000000 --- a/release-0.19.0/examples/nfs/nfs-data/index.html +++ /dev/null @@ -1 +0,0 @@ -Hello world! 
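If you just want to sanity-check the data image outside of Kubernetes, the same invocation pattern as the plain exporter above should apply; this is only a sketch, since the `/mnt/data` export is already baked into the image's ENTRYPOINT:

    # privileged mode is required for the in-container NFS server, as with the exporter image
    docker run -d --name nfs-data --privileged jsafrane/nfs-data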
diff --git a/release-0.19.0/examples/nfs/nfs-server-pod.yaml b/release-0.19.0/examples/nfs/nfs-server-pod.yaml deleted file mode 100644 index e0bb565e6eb..00000000000 --- a/release-0.19.0/examples/nfs/nfs-server-pod.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: nfs-server - labels: - role: nfs-server -spec: - containers: - - name: nfs-server - image: jsafrane/nfs-data - privileged: true - ports: - - name: nfs - containerPort: 2049 - protocol: tcp diff --git a/release-0.19.0/examples/nfs/nfs-server-service.yaml b/release-0.19.0/examples/nfs/nfs-server-service.yaml deleted file mode 100644 index 634087122ef..00000000000 --- a/release-0.19.0/examples/nfs/nfs-server-service.yaml +++ /dev/null @@ -1,9 +0,0 @@ -kind: Service -apiVersion: v1beta3 -metadata: - name: nfs-server -spec: - ports: - - port: 2049 - selector: - role: nfs-server diff --git a/release-0.19.0/examples/nfs/nfs-web-pod.yaml b/release-0.19.0/examples/nfs/nfs-web-pod.yaml deleted file mode 100644 index 0c897fd910e..00000000000 --- a/release-0.19.0/examples/nfs/nfs-web-pod.yaml +++ /dev/null @@ -1,27 +0,0 @@ -# -# This pod imports nfs-server.default.kube.local:/ into /var/www/html -# - -apiVersion: v1beta3 -kind: Pod -metadata: - name: nfs-web -spec: - containers: - - name: web - image: nginx - ports: - - name: web - containerPort: 80 - protocol: tcp - volumeMounts: - # name must match the volume name below - - name: nfs - mountPath: "/usr/share/nginx/html" - volumes: - - name: nfs - nfs: - # FIXME: use the right hostname - server: nfs-server.default.kube.local - path: "/" - readOnly: false diff --git a/release-0.19.0/examples/node-selection/README.md b/release-0.19.0/examples/node-selection/README.md deleted file mode 100644 index c69ad487314..00000000000 --- a/release-0.19.0/examples/node-selection/README.md +++ /dev/null @@ -1,66 +0,0 @@ -## Node selection example - -This example shows how to assign a pod to a specific node or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances like to ensure that your pod ends up on a machine with an SSD attached to it. - -### Step Zero: Prerequisites - -This example assumes that you have a basic understanding of kubernetes pods and that you have [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes#documentation). - -### Step One: Attach label to the node - -Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to. - -Then, to add a label to the node you've chosen, run `kubectl label nodes =`. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`. - -If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/GoogleCloudPlatform/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node. - -Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](/docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters. 
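To make the step above concrete, here is the example from the prose as a runnable command (the node name and the `disktype=ssd` label are the illustrative values used throughout this guide):

```shell
# attach the example label to the chosen node
kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd
```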
- -You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label. - -### Step Two: Add a nodeSelector field to your pod configuration - -Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config: - -
-apiVersion: v1beta3
-kind: Pod
-metadata:
-  labels:
-    env: test
-  name: nginx
-spec:
-  containers:
-  - image: nginx
-    name: nginx
-
- -Then add a nodeSelector like so: - -
-apiVersion: v1beta3
-kind: Pod
-metadata:
-  labels:
-    env: test
-  name: nginx
-spec:
-  containers:
-  - image: nginx
-    imagePullPolicy: IfNotPresent
-    name: nginx
-  nodeSelector:
-    disktype: ssd
-
- -When you then run `kubectl create -f pod.yaml`, the pod will get scheduled on the node that you attached the label to! You can verify that it worked by running `kubectl get pods` and looking at the "host" that the pod was assigned to. - -### Conclusion - -While this example only covered one node, you can attach labels to as many nodes as you want. Then when you schedule a pod with a nodeSelector, it can be scheduled on any of the nodes that satisfy that nodeSelector. Be careful that it will match at least one node, however, because if it doesn't the pod won't be scheduled at all. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/node-selection/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/node-selection/README.md?pixel)]() diff --git a/release-0.19.0/examples/node-selection/pod.yaml b/release-0.19.0/examples/node-selection/pod.yaml deleted file mode 100644 index 42a6b39e8a2..00000000000 --- a/release-0.19.0/examples/node-selection/pod.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - env: test - name: nginx -spec: - containers: - - image: nginx - imagePullPolicy: IfNotPresent - name: nginx - nodeSelector: - disktype: ssd diff --git a/release-0.19.0/examples/openshift-origin/.gitignore b/release-0.19.0/examples/openshift-origin/.gitignore deleted file mode 100644 index 8dd8c8ed38b..00000000000 --- a/release-0.19.0/examples/openshift-origin/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -config/ -secret.json diff --git a/release-0.19.0/examples/openshift-origin/README.md b/release-0.19.0/examples/openshift-origin/README.md deleted file mode 100644 index 8f98113ca81..00000000000 --- a/release-0.19.0/examples/openshift-origin/README.md +++ /dev/null @@ -1,161 +0,0 @@ -## OpenShift Origin example - -This example shows how to run OpenShift Origin as a pod on an existing Kubernetes cluster. - -OpenShift Origin runs with a rich set of role based policy rules out of the box that requires authentication from users -via certificates. When run as a pod on an existing Kubernetes cluster, it proxies access to the underlying Kubernetes services -to provide security. - -As a result, this example is a complex end-to-end configuration that shows how to configure certificates for a service that runs -on Kubernetes, and requires a number of configuration files to be injected dynamically via a secret volume to the pod. - -### Step 0: Prerequisites - -This example assumes that you have an understanding of Kubernetes and that you have forked the repository. - -OpenShift Origin creates privileged containers when running Docker builds during the source-to-image process. - -If you are using a Salt based KUBERNETES_PROVIDER (**gce**, **vagrant**, **aws**), you should enable the -ability to create privileged containers via the API. - -```shell -$ cd kubernetes -$ vi cluster/saltbase/pillar/privilege.sls - -# If true, allow privileged containers to be created by API -allow_privileged: true -``` - -Now spin up a cluster using your preferred KUBERNETES_PROVIDER - -```shell -$ export KUBERNETES_PROVIDER=gce -$ cluster/kube-up.sh -``` - -Next, let's setup some variables, and create a local folder that will hold generated configuration files. 
- -```shell -$ export OPENSHIFT_EXAMPLE=$(pwd)/examples/openshift-origin -$ export OPENSHIFT_CONFIG=${OPENSHIFT_EXAMPLE}/config -$ mkdir ${OPENSHIFT_CONFIG} -``` - -### Step 1: Export your Kubernetes configuration file for use by OpenShift pod - -OpenShift Origin uses a configuration file to know how to access your Kubernetes cluster with administrative authority. - -``` -$ cluster/kubectl.sh config view --output=yaml --flatten=true --minify=true > ${OPENSHIFT_CONFIG}/kubeconfig -``` - -The output from this command will contain a single file that has all the required information needed to connect to your -Kubernetes cluster that you previously provisioned. This file should be considered sensitive, so do not share this file with -untrusted parties. - -We will later use this file to tell OpenShift how to bootstap its own configuration. - -### Step 2: Create an External Load Balancer to Route Traffic to OpenShift - -An external load balancer is needed to route traffic to our OpenShift master service that will run as a pod on your -Kubernetes cluster. - - -```shell -$ cluster/kubectl.sh create -f $OPENSHIFT_EXAMPLE/openshift-service.yaml -``` - -### Step 3: Generate configuration file for your OpenShift master pod - -The OpenShift master requires a configuration file as input to know how to bootstrap the system. - -In order to build this configuration file, we need to know the public IP address of our external load balancer in order to -build default certificates. - -Grab the public IP address of the service we previously created. - -```shell -$ export PUBLIC_IP=$(cluster/kubectl.sh get services openshift --template="{{ index .status.loadBalancer.ingress 0 \"ip\" }}") -$ echo $PUBLIC_IP -``` - -Ensure you have a valid PUBLIC_IP address before continuing in the example. - -We now need to run a command on your host to generate a proper OpenShift configuration. To do this, we will volume mount the configuration directory that holds your Kubernetes kubeconfig file from the prior step. - -```shell -docker run --privileged -v ${OPENSHIFT_CONFIG}:/config openshift/origin start master --write-config=/config --kubeconfig='/config/kubeconfig' --master='https://localhost:8443' --public-master='https://${PUBLIC_IP}:8443' -``` - -You should now see a number of certificates minted in your configuration directory, as well as a master-config.yaml file that tells the OpenShift master how to execute. In the next step, we will bundle this into a Kubernetes Secret that our OpenShift master pod will consume. - -### Step 4: Bundle the configuration into a Secret - -We now need to bundle the contents of our configuration into a secret for use by our OpenShift master pod. - -OpenShift includes an experimental command to make this easier. - -First, update the ownership for the files previously generated: - -``` -$ sudo -E chown -R ${USER} ${OPENSHIFT_CONFIG} -``` - -Then run the following command to collapse them into a Kubernetes secret. - -```shell -docker run -i -t --privileged -e="OPENSHIFTCONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin ex bundle-secret openshift-config -f /config &> ${OPENSHIFT_EXAMPLE}/secret.json -``` - -Now, lets create the secret in your Kubernetes cluster. - -```shell -$ cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/secret.json -``` - -**NOTE: This secret is secret and should not be shared with untrusted parties.** - -### Step 5: Deploy OpenShift Master - -We are now ready to deploy OpenShift. - -We will deploy a pod that runs the OpenShift master. 
The OpenShift master will delegate to the underlying Kubernetes -system to manage Kubernetes specific resources. For the sake of simplicity, the OpenShift master will run with an embedded etcd to hold OpenShift specific content. This demonstration will evolve in the future to show how to run etcd in a pod so that content is not destroyed if the OpenShift master fails. - -```shell -$ cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/openshift-controller.yaml -``` - -You should now get a pod provisioned whose name begins with openshift. - -```shell -$ cluster/kubectl.sh get pods | grep openshift -$ cluster/kubectl.sh log openshift-t7147 origin -Running: cluster/../cluster/gce/../../cluster/../_output/dockerized/bin/linux/amd64/kubectl log openshift-t7t47 origin -2015-04-30T15:26:00.454146869Z I0430 15:26:00.454005 1 start_master.go:296] Starting an OpenShift master, reachable at 0.0.0.0:8443 (etcd: [https://10.0.27.2:4001]) -2015-04-30T15:26:00.454231211Z I0430 15:26:00.454223 1 start_master.go:297] OpenShift master public address is https://104.197.73.241:8443 -``` - -Depending upon your cloud provider, you may need to open up an external firewall rule for tcp:8443. For GCE, you can run the following: - -```shell -gcloud compute --project "your-project" firewall-rules create "origin" --allow tcp:8443 --network "your-network" --source-ranges "0.0.0.0/0" -``` - -Consult your cloud provider's documentation for more information. - -Open a browser and visit the OpenShift master public address reported in your log. - -You can use the CLI commands by running the following: - -```shell -$ docker run --privileged --entrypoint="/usr/bin/bash" -it -e="OPENSHIFTCONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin -$ osc config use-context public-default -$ osc --help -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/openshift-origin/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/openshift-origin/README.md?pixel)]() diff --git a/release-0.19.0/examples/openshift-origin/cleanup.sh b/release-0.19.0/examples/openshift-origin/cleanup.sh deleted file mode 100755 index abe9dbf7ae3..00000000000 --- a/release-0.19.0/examples/openshift-origin/cleanup.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# Cleans up resources from the example, assumed to be run from Kubernetes repo root - -export OPENSHIFT_EXAMPLE=$(pwd)/examples/openshift-origin -export OPENSHIFT_CONFIG=${OPENSHIFT_EXAMPLE}/config -rm -fr ${OPENSHIFT_CONFIG} -cluster/kubectl.sh delete secrets openshift-config -cluster/kubectl.sh stop rc openshift -cluster/kubectl.sh delete rc openshift -cluster/kubectl.sh delete services openshift diff --git a/release-0.19.0/examples/openshift-origin/create.sh b/release-0.19.0/examples/openshift-origin/create.sh deleted file mode 100755 index 8de6020c476..00000000000 --- a/release-0.19.0/examples/openshift-origin/create.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Creates resources from the example, assumed to be run from Kubernetes repo root -export OPENSHIFT_EXAMPLE=$(pwd)/examples/openshift-origin -export OPENSHIFT_CONFIG=${OPENSHIFT_EXAMPLE}/config -mkdir ${OPENSHIFT_CONFIG} -cluster/kubectl.sh config view --output=yaml --flatten=true --minify=true > ${OPENSHIFT_CONFIG}/kubeconfig -cluster/kubectl.sh create -f $OPENSHIFT_EXAMPLE/openshift-service.yaml -sleep 60 -export PUBLIC_IP=$(cluster/kubectl.sh get services openshift --template="{{ index .status.loadBalancer.ingress 0 \"ip\" }}") -echo "PUBLIC IP: ${PUBLIC_IP}" -docker run --privileged -v ${OPENSHIFT_CONFIG}:/config openshift/origin start master --write-config=/config --kubeconfig=/config/kubeconfig --master=https://localhost:8443 --public-master=https://${PUBLIC_IP}:8443 -sudo -E chown ${USER} -R ${OPENSHIFT_CONFIG} -docker run -i -t --privileged -e="OPENSHIFTCONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin ex bundle-secret openshift-config -f /config &> ${OPENSHIFT_EXAMPLE}/secret.json -cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/secret.json -cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/openshift-controller.yaml -cluster/kubectl.sh get pods | grep openshift diff --git a/release-0.19.0/examples/openshift-origin/openshift-controller.yaml b/release-0.19.0/examples/openshift-origin/openshift-controller.yaml deleted file mode 100644 index 5922254e16d..00000000000 --- a/release-0.19.0/examples/openshift-origin/openshift-controller.yaml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - name: openshift - name: openshift -spec: - replicas: 1 - selector: - name: openshift - template: - metadata: - labels: - name: openshift - spec: - containers: - - args: - - start - - master - - --config=/config/master-config.yaml - image: "openshift/origin" - name: origin - ports: - - containerPort: 8443 - name: openshift - volumeMounts: - - mountPath: /config - name: config - readOnly: true - volumes: - - name: config - secret: - secretName: openshift-config \ No newline at end of file diff --git a/release-0.19.0/examples/openshift-origin/openshift-service.yaml b/release-0.19.0/examples/openshift-origin/openshift-service.yaml deleted file 
mode 100644 index 01540d02bda..00000000000 --- a/release-0.19.0/examples/openshift-origin/openshift-service.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - name: openshift -spec: - ports: - - port: 8443 - name: openshift - targetPort: 8443 - selector: - name: openshift - createExternalLoadBalancer: true diff --git a/release-0.19.0/examples/persistent-volumes/README.md b/release-0.19.0/examples/persistent-volumes/README.md deleted file mode 100644 index eb6ae2a589b..00000000000 --- a/release-0.19.0/examples/persistent-volumes/README.md +++ /dev/null @@ -1,117 +0,0 @@ -# How To Use Persistent Volumes - -The purpose of this guide is to help you become familiar with Kubernetes Persistent Volumes. By the end of the guide, we'll have -nginx serving content from your persistent volume. - -This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running. - -## Provisioning - -A PersistentVolume in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators -must first create storage (create their GCE disks, export their NFS shares, etc.) in order for Kubernetes to mount it. - -PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. ```HostPath``` was included -for ease of development and testing. You'll create a local ```HostPath``` for this example. - -> IMPORTANT! For ```HostPath``` to work, you will need to run a single node cluster. Kubernetes does not -support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the ```HostPath``` resides. - - -``` - -// this will be nginx's webroot -mkdir /tmp/data01 -echo 'I love Kubernetes storage!' > /tmp/data01/index.html - -``` - -PVs are created by posting them to the API server. - -``` - -kubectl create -f examples/persistent-volumes/volumes/local-01.yaml -kubectl get pv - -NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM -pv0001 map[] 10737418240 RWO Available - -``` - -## Requesting storage - -Users of Kubernetes request persistent storage for their pods. They don't know how the underlying cluster is provisioned. -They just know they can rely on their claim to storage and can manage its lifecycle independently from the many pods that may use it. - -Claims must be created in the same namespace as the pods that use them. - -``` - -kubectl create -f examples/persistent-volumes/claims/claim-01.yaml -kubectl get pvc - -NAME LABELS STATUS VOLUME -myclaim-1 map[] - - -# A background process will attempt to match this claim to a volume. -# The eventual state of your claim will look something like this: - -kubectl get pvc - -NAME LABELS STATUS VOLUME -myclaim-1 map[] Bound f5c3a89a-e50a-11e4-972f-80e6500a981e - - -kubectl get pv - -NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM -pv0001 map[] 10737418240 RWO Bound myclaim-1 / 6bef4c40-e50b-11e4-972f-80e6500a981e - -``` - -## Using your claim as a volume - -Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod. 
- -``` - -kubectl create -f examples/persistent-volumes/simpletest/pod.yaml - -kubectl get pods - -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED -mypod 172.17.0.2 myfrontend nginx 127.0.0.1/127.0.0.1 Running 12 minutes - - -kubectl create -f examples/persistent-volumes/simpletest/service.json -kubectl get services - -NAME LABELS SELECTOR IP PORT(S) -frontendservice name=frontendhttp 10.0.0.241 3000/TCP -kubernetes component=apiserver,provider=kubernetes 10.0.0.2 443/TCP - - -``` - -## Next steps - -You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you -need to disable SELinux (setenforce 0). - -``` - -curl 10.0.0.241:3000 -I love Kubernetes storage! - -``` - -Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join -```#google-containers``` on IRC and ask! - -Enjoy! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/persistent-volumes/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/persistent-volumes/README.md?pixel)]() diff --git a/release-0.19.0/examples/persistent-volumes/claims/claim-01.yaml b/release-0.19.0/examples/persistent-volumes/claims/claim-01.yaml deleted file mode 100644 index 3c69d2e1b56..00000000000 --- a/release-0.19.0/examples/persistent-volumes/claims/claim-01.yaml +++ /dev/null @@ -1,10 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1beta3 -metadata: - name: myclaim-1 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 3Gi diff --git a/release-0.19.0/examples/persistent-volumes/claims/claim-02.yaml b/release-0.19.0/examples/persistent-volumes/claims/claim-02.yaml deleted file mode 100644 index 48d48070b22..00000000000 --- a/release-0.19.0/examples/persistent-volumes/claims/claim-02.yaml +++ /dev/null @@ -1,10 +0,0 @@ -kind: PersistentVolumeClaim -apiVersion: v1beta3 -metadata: - name: myclaim-2 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 8Gi diff --git a/release-0.19.0/examples/persistent-volumes/claims/claim-03.json b/release-0.19.0/examples/persistent-volumes/claims/claim-03.json deleted file mode 100644 index b3b0717af09..00000000000 --- a/release-0.19.0/examples/persistent-volumes/claims/claim-03.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1beta3", - "metadata": { - "name": "myclaim-3" - }, "spec": { - "accessModes": [ - "ReadWriteOnce", - "ReadOnlyMany" - ], - "resources": { - "requests": { - "storage": "10G" - } - } - } -} diff --git a/release-0.19.0/examples/persistent-volumes/simpletest/namespace.json b/release-0.19.0/examples/persistent-volumes/simpletest/namespace.json deleted file mode 100644 index c9e7ced5557..00000000000 --- a/release-0.19.0/examples/persistent-volumes/simpletest/namespace.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "kind": "Namespace", - "apiVersion":"v1beta3", - "metadata": { - "name": "myns", - "labels": { - "name": "development" - } - } -} diff --git a/release-0.19.0/examples/persistent-volumes/simpletest/pod.yaml b/release-0.19.0/examples/persistent-volumes/simpletest/pod.yaml deleted file mode 100644 index f7f686c0404..00000000000 --- a/release-0.19.0/examples/persistent-volumes/simpletest/pod.yaml +++ /dev/null @@ -1,20 +0,0 @@ -kind: Pod -apiVersion: v1beta3 -metadata: - name: mypod - labels: - name: frontendhttp -spec: - containers: - - name: myfrontend - image: dockerfile/nginx - ports: - 
- containerPort: 80 - name: "http-server" - volumeMounts: - - mountPath: "/var/www/html" - name: mypd - volumes: - - name: mypd - persistentVolumeClaim: - claimName: myclaim-1 diff --git a/release-0.19.0/examples/persistent-volumes/simpletest/service.json b/release-0.19.0/examples/persistent-volumes/simpletest/service.json deleted file mode 100644 index 1c80f9e5148..00000000000 --- a/release-0.19.0/examples/persistent-volumes/simpletest/service.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "frontendservice" - }, - "spec": { - "ports": [ - { - "protocol": "TCP", - "port": 3000, - "targetPort": "http-server" - } - ], - "selector": { - "name": "frontendhttp" - } - } -} diff --git a/release-0.19.0/examples/persistent-volumes/volumes/gce.yaml b/release-0.19.0/examples/persistent-volumes/volumes/gce.yaml deleted file mode 100644 index 8cc6520327f..00000000000 --- a/release-0.19.0/examples/persistent-volumes/volumes/gce.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: PersistentVolume -apiVersion: v1beta3 -metadata: - name: pv0003 -spec: - capacity: - storage: 10Gi - accessModes: - - ReadWriteOnce - - ReadOnlyMany - gcePersistentDisk: - pdName: "abc123" - fsType: "ext4" diff --git a/release-0.19.0/examples/persistent-volumes/volumes/local-01.yaml b/release-0.19.0/examples/persistent-volumes/volumes/local-01.yaml deleted file mode 100644 index ce0fe9fbbe2..00000000000 --- a/release-0.19.0/examples/persistent-volumes/volumes/local-01.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: PersistentVolume -apiVersion: v1beta3 -metadata: - name: pv0001 - labels: - type: local -spec: - capacity: - storage: 10Gi - accessModes: - - ReadWriteOnce - hostPath: - path: "/tmp/data01" diff --git a/release-0.19.0/examples/persistent-volumes/volumes/local-02.yaml b/release-0.19.0/examples/persistent-volumes/volumes/local-02.yaml deleted file mode 100644 index 4be4c3ce12e..00000000000 --- a/release-0.19.0/examples/persistent-volumes/volumes/local-02.yaml +++ /dev/null @@ -1,14 +0,0 @@ -kind: PersistentVolume -apiVersion: v1beta3 -metadata: - name: pv0002 - labels: - type: local -spec: - capacity: - storage: 8Gi - accessModes: - - ReadWriteOnce - hostPath: - path: "/tmp/data02" - persistentVolumeReclaimPolicy: Recycle diff --git a/release-0.19.0/examples/persistent-volumes/volumes/nfs.yaml b/release-0.19.0/examples/persistent-volumes/volumes/nfs.yaml deleted file mode 100644 index 6e0f911ecb8..00000000000 --- a/release-0.19.0/examples/persistent-volumes/volumes/nfs.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1beta3 -kind: PersistentVolume -metadata: - name: pv0003 -spec: - capacity: - storage: 5Gi - accessModes: - - ReadWriteOnce - nfs: - path: /tmp - server: 172.17.0.2 diff --git a/release-0.19.0/examples/phabricator/README.md b/release-0.19.0/examples/phabricator/README.md deleted file mode 100644 index d252b29fc75..00000000000 --- a/release-0.19.0/examples/phabricator/README.md +++ /dev/null @@ -1,224 +0,0 @@ -## Phabricator example - -This example shows how to build a simple multi-tier web application using Kubernetes and Docker. - -The example combines a web frontend and an external service that provides MySQL database. We use CloudSQL on Google Cloud Platform in this example, but in principle any approach to running MySQL should work. 
- -### Step Zero: Prerequisites - -This example assumes that you have a basic understanding of kubernetes [services](../../docs/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides): - -```shell -$ cd kubernetes -$ hack/dev-build-and-up.sh -``` - -### Step One: Set up Cloud SQL instance - -Follow the [official instructions](https://cloud.google.com/sql/docs/getting-started) to set up Cloud SQL instance. - -In the remaining part of this example we will assume that your instance is named "phabricator-db", has IP 173.194.242.66 and the password is "1234". - -### Step Two: Turn up the phabricator - -To start Phabricator server use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json) which describes a [replication controller](../../docs/replication-controller.md) with a single [pod](../../docs/pods.md) running an Apache server with Phabricator PHP source: - -```js -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "phabricator-controller", - "labels": { - "name": "phabricator" - } - }, - "spec": { - "replicas": 1, - "selector": { - "name": "phabricator" - }, - "template": { - "metadata": { - "labels": { - "name": "phabricator" - } - }, - "spec": { - "containers": [ - { - "name": "phabricator", - "image": "fgrzadkowski/example-php-phabricator", - "ports": [ - { - "name": "http-server", - "containerPort": 80 - } - ] - } - ] - } - } - } -} -``` - -Create the phabricator pod in your Kubernetes cluster by running: - -```shell -$ kubectl create -f examples/phabricator/phabricator-controller.json -``` - -Once that's up you can list the pods in the cluster, to verify that it is running: - -```shell -kubectl get pods -``` - -You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds): - -``` -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -phabricator-controller-02qp4 10.244.1.34 phabricator fgrzadkowski/phabricator kubernetes-minion-2.c.myproject.internal/130.211.141.151 name=phabricator -``` - -If you ssh to that machine, you can run `docker ps` to see the actual pod: - -```shell -me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-2 - -$ sudo docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -54983bc33494 fgrzadkowski/phabricator:latest "/run.sh" 2 hours ago Up 2 hours k8s_phabricator.d6b45054_phabricator-controller-02qp4.default.api_eafb1e53-b6a9-11e4-b1ae-42010af05ea6_01c2c4ca -``` - -(Note that initial `docker pull` may take a few minutes, depending on network conditions. During this time, the `get pods` command will return `Pending` because the container has not yet started ) - -### Step Three: Authenticate phabricator in Cloud SQL - -If you read logs of the phabricator container you will notice the following error message: - -```bash -$ kubectl log phabricator-controller-02qp4 -[...] -Raw MySQL Error: Attempt to connect to root@173.194.252.142 failed with error -#2013: Lost connection to MySQL server at 'reading initial communication -packet', system error: 0. - -``` - -This is because the host on which this container is running is not authorized in Cloud SQL. 
To fix this run: - -```bash -gcloud sql instances patch phabricator-db --authorized-networks 130.211.141.151 -``` - -To automate this process and make sure that a proper host is authorized even if pod is rescheduled to a new machine we need a separate pod that periodically lists pods and authorizes hosts. Use the file [`examples/phabricator/authenticator-controller.json`](authenticator-controller.json): - -```js -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "authenticator-controller", - "labels": { - "name": "authenticator" - } - }, - "spec": { - "replicas": 1, - "selector": { - "name": "authenticator" - }, - "template": { - "metadata": { - "labels": { - "name": "authenticator" - } - }, - "spec": { - "containers": [ - { - "name": "authenticator", - "image": "gcr.io.google_containers/cloudsql-authenticator:v1" - } - ] - } - } - } -} -``` - -To create the pod run: - -```shell -$ kubectl create -f examples/phabricator/authenticator-controller.json -``` - - -### Step Four: Turn up the phabricator service - -A Kubernetes 'service' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via *environment variables*. Services find the containers to load balance based on pod labels. These environment variables are typically referenced in application code, shell scripts, or other places where one node needs to talk to another in a distributed system. You should catch up on [kubernetes services](http://docs.k8s.io/services.md) before proceeding. - -The pod that you created in Step One has the label `name=phabricator`. The selector field of the service determines which pods will receive the traffic sent to the service. Since we are setting up a service for an external application we also need to request external static IP address (otherwise it will be assigned dynamically): - -```shell -$ gcloud compute addresses create phabricator --region us-central1 -Created [https://www.googleapis.com/compute/v1/projects/myproject/regions/us-central1/addresses/phabricator]. -NAME REGION ADDRESS STATUS -phabricator us-central1 107.178.210.6 RESERVED -``` - -Use the file [`examples/phabricator/phabricator-service.json`](phabricator-service.json): - -```js -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "phabricator" - }, - "spec": { - "ports": [ - { - "port": 80, - "targetPort": "http-server" - } - ], - "selector": { - "name": "phabricator" - }, - "createExternalLoadBalancer": true, - "publicIPs": [ - "107.178.210.6" - ] - } -} -``` - -To create the service run: - -```shell -$ kubectl create -f examples/phabricator/phabricator-service.json -phabricator -``` - -Note that it will also create an external load balancer so that we can access it from outside. You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. 
The following command will allow traffic from any source to instances tagged `kubernetes-minion`: - -```shell -$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion -``` - -### Step Six: Cleanup - -To turn down a Kubernetes cluster: - -```shell -$ cluster/kube-down.sh -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/phabricator/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/phabricator/README.md?pixel)]() diff --git a/release-0.19.0/examples/phabricator/authenticator-controller.json b/release-0.19.0/examples/phabricator/authenticator-controller.json deleted file mode 100644 index 1da45113e90..00000000000 --- a/release-0.19.0/examples/phabricator/authenticator-controller.json +++ /dev/null @@ -1,31 +0,0 @@ -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "authenticator-controller", - "labels": { - "name": "authenticator" - } - }, - "spec": { - "replicas": 1, - "selector": { - "name": "authenticator" - }, - "template": { - "metadata": { - "labels": { - "name": "authenticator" - } - }, - "spec": { - "containers": [ - { - "name": "authenticator", - "image": "gcr.io/google_containers/cloudsql-authenticator:v1" - } - ] - } - } - } -} diff --git a/release-0.19.0/examples/phabricator/cloudsql-authenticator/Dockerfile b/release-0.19.0/examples/phabricator/cloudsql-authenticator/Dockerfile deleted file mode 100644 index 50456c0fd49..00000000000 --- a/release-0.19.0/examples/phabricator/cloudsql-authenticator/Dockerfile +++ /dev/null @@ -1,8 +0,0 @@ -FROM google/cloud-sdk - -RUN apt-get update && apt-get install -y curl - -ADD run.sh /run.sh -RUN chmod a+x /*.sh - -CMD ["/run.sh"] diff --git a/release-0.19.0/examples/phabricator/cloudsql-authenticator/run.sh b/release-0.19.0/examples/phabricator/cloudsql-authenticator/run.sh deleted file mode 100755 index e2898c8bf14..00000000000 --- a/release-0.19.0/examples/phabricator/cloudsql-authenticator/run.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# TODO: This loop updates authorized networks even if nothing has changed. It -# should only send updates if something changes. We should be able to do -# this by comparing pod creation time with the last scan time. -while true; do - hostport="https://kubernetes.default.cluster.local" - token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) - path="api/v1beta3/pods" - query="labels=$SELECTOR" - - # TODO: load in the CAS cert when we distributed it on all platforms. 
- ips_json=`curl ${hostport}/${path}?${query} --insecure --header "Authorization: Bearer ${token}" 2>/dev/null | grep hostIP` - ips=`echo $ips_json | cut -d'"' -f 4 | sed 's/,$//'` - echo "Adding IPs $ips" - gcloud sql instances patch $CLOUDSQL_DB --authorized-networks $ips - sleep 10 -done diff --git a/release-0.19.0/examples/phabricator/phabricator-controller.json b/release-0.19.0/examples/phabricator/phabricator-controller.json deleted file mode 100644 index 795f0b24f0d..00000000000 --- a/release-0.19.0/examples/phabricator/phabricator-controller.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "phabricator-controller", - "labels": { - "name": "phabricator" - } - }, - "spec": { - "replicas": 1, - "selector": { - "name": "phabricator" - }, - "template": { - "metadata": { - "labels": { - "name": "phabricator" - } - }, - "spec": { - "containers": [ - { - "name": "phabricator", - "image": "fgrzadkowski/example-php-phabricator", - "ports": [ - { - "name": "http-server", - "containerPort": 80 - } - ] - } - ] - } - } - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/phabricator/phabricator-service.json b/release-0.19.0/examples/phabricator/phabricator-service.json deleted file mode 100644 index 8448d720552..00000000000 --- a/release-0.19.0/examples/phabricator/phabricator-service.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "phabricator" - }, - "spec": { - "ports": [ - { - "port": 80, - "targetPort": "http-server" - } - ], - "selector": { - "name": "phabricator" - }, - "createExternalLoadBalancer": true, - "publicIPs": [ - "107.178.210.6" - ] - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/phabricator/php-phabricator/000-default.conf b/release-0.19.0/examples/phabricator/php-phabricator/000-default.conf deleted file mode 100644 index 2ec64d6879d..00000000000 --- a/release-0.19.0/examples/phabricator/php-phabricator/000-default.conf +++ /dev/null @@ -1,12 +0,0 @@ - - Require all granted - - - - DocumentRoot /home/www-data/phabricator/webroot - - RewriteEngine on - RewriteRule ^/rsrc/(.*) - [L,QSA] - RewriteRule ^/favicon.ico - [L,QSA] - RewriteRule ^(.*)$ /index.php?__path__=$1 [B,L,QSA] - diff --git a/release-0.19.0/examples/phabricator/php-phabricator/Dockerfile b/release-0.19.0/examples/phabricator/php-phabricator/Dockerfile deleted file mode 100644 index 9bf1e0d3620..00000000000 --- a/release-0.19.0/examples/phabricator/php-phabricator/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM ubuntu:14.04 - -# Install all the required packages. -RUN apt-get update -RUN apt-get -y install \ - git apache2 dpkg-dev python-pygments \ - php5 php5-mysql php5-gd php5-dev php5-curl php-apc php5-cli php5-json php5-xhprof -RUN a2enmod rewrite -RUN apt-get source php5 -RUN (cd `ls -1F | grep '^php5-.*/$'`/ext/pcntl && phpize && ./configure && make && sudo make install) - -# Load code source. -RUN mkdir /home/www-data -RUN cd /home/www-data && git clone https://github.com/phacility/libphutil.git -RUN cd /home/www-data && git clone https://github.com/phacility/arcanist.git -RUN cd /home/www-data && git clone https://github.com/phacility/phabricator.git -RUN chown -R www-data /home/www-data -RUN chgrp -R www-data /home/www-data - -ADD 000-default.conf /etc/apache2/sites-available/000-default.conf -ADD run.sh /run.sh -RUN chmod a+x /*.sh - -# Run Apache2. 
-EXPOSE 80 -CMD ["/run.sh"] diff --git a/release-0.19.0/examples/phabricator/php-phabricator/run.sh b/release-0.19.0/examples/phabricator/php-phabricator/run.sh deleted file mode 100755 index abbfff611ba..00000000000 --- a/release-0.19.0/examples/phabricator/php-phabricator/run.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -echo "MySQL host IP ${MYSQL_SERVICE_IP} port ${MYSQL_SERVICE_PORT}." -/home/www-data/phabricator/bin/config set mysql.host $MYSQL_SERVICE_IP -/home/www-data/phabricator/bin/config set mysql.port $MYSQL_SERVICE_PORT -/home/www-data/phabricator/bin/config set mysql.pass $MYSQL_PASSWORD - -echo "Running storage upgrade" -/home/www-data/phabricator/bin/storage --force upgrade || exit 1 - -source /etc/apache2/envvars -echo "Starting Apache2" -apache2 -D FOREGROUND - diff --git a/release-0.19.0/examples/phabricator/setup.sh b/release-0.19.0/examples/phabricator/setup.sh deleted file mode 100755 index 860c93f0896..00000000000 --- a/release-0.19.0/examples/phabricator/setup.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -echo "Create Phabricator replication controller" && kubectl create -f phabricator-controller.json -echo "Create Phabricator service" && kubectl create -f phabricator-service.json -echo "Create Authenticator replication controller" && kubectl create -f authenticator-controller.json -echo "Create firewall rule" && gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion - diff --git a/release-0.19.0/examples/phabricator/teardown.sh b/release-0.19.0/examples/phabricator/teardown.sh deleted file mode 100755 index 884c5f4bddf..00000000000 --- a/release-0.19.0/examples/phabricator/teardown.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -echo "Deleting Authenticator replication controller" && kubectl stop rc authenticator-controller -echo "Deleting Phabricator service" && kubectl delete -f phabricator-service.json -echo "Deleting Phabricator replication controller" && kubectl stop rc phabricator-controller - -echo "Delete firewall rule" && gcloud compute firewall-rules delete -q phabricator-node-80 - diff --git a/release-0.19.0/examples/pod.yaml b/release-0.19.0/examples/pod.yaml deleted file mode 100644 index c5eb91f988a..00000000000 --- a/release-0.19.0/examples/pod.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: nginx - name: nginx - namespace: default -spec: - containers: - - image: nginx - imagePullPolicy: IfNotPresent - name: nginx - ports: - - containerPort: 80 - protocol: TCP - restartPolicy: Always \ No newline at end of file diff --git a/release-0.19.0/examples/rbd/README.md b/release-0.19.0/examples/rbd/README.md deleted file mode 100644 index 2fa256a969a..00000000000 --- a/release-0.19.0/examples/rbd/README.md +++ /dev/null @@ -1,50 +0,0 @@ -# How to Use it? -Install Ceph on the Kubernetes host. For example, on Fedora 21 - - # yum -y install ceph - -If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/rootfs/docker-ceph) - -Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*. - -Once you have installed Ceph and new Kubernetes, you can create a pod based on my examples [rbd.json](v1beta3/rbd.json) [rbd-with-secret.json](v1beta3/rbd-with-secret.json). In the pod JSON, you need to provide the following information. - -- *monitors*: Ceph monitors. -- *pool*: The name of the RADOS pool, if not provided, default *rbd* pool is used. -- *image*: The image name that rbd has created. -- *user*: The RADOS user name. If not provided, default *admin* is used. -- *keyring*: The path to the keyring file. If not provided, default */etc/ceph/keyring* is used. -- *secretName*: The name of the authentication secrets. If provided, *secretName* overrides *keyring*. Note, see below about how to create a secret. -- *fsType*: The filesystem type (ext4, xfs, etc) that formatted on the device. -- *readOnly*: Whether the filesystem is used as readOnly. - -# Use Ceph Authentication Secret - -If Ceph authentication secret is provided, the secret should be first be base64 encoded, then encoded string is placed in a secret yaml. An example yaml is provided [here](secret/ceph-secret.yaml). Then post the secret through ```kubectl``` in the following command. - -```console - # kubectl create -f examples/rbd/secret/ceph-secret.yaml -``` - -# Get started - -Here are my commands: - -```console - # kubectl create -f examples/rbd/v1beta3/rbd.json - # kubectl get pods -``` - -On the Kubernetes host, I got these in mount output - -```console - #mount |grep kub - /dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/kube-image-foo type ext4 (ro,relatime,stripe=4096,data=ordered) - /dev/rbd0 on /var/lib/kubelet/pods/ec2166b4-de07-11e4-aaf5-d4bed9b39058/volumes/kubernetes.io~rbd/rbdpd type ext4 (ro,relatime,stripe=4096,data=ordered) -``` - - If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container. 
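For example, something along these lines should work; the `rbd-rw` filter is only a guess based on the container name in the pod definition above, and `<container-id>` stands for whatever ID `docker ps` prints:

```console
 # docker ps |grep rbd-rw
 # docker inspect <container-id>
```

Look for the rbd-backed mount in the volumes section of the `docker inspect` output.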
- -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/rbd/README.md?pixel)]() - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/rbd/README.md?pixel)]() diff --git a/release-0.19.0/examples/rbd/secret/ceph-secret.yaml b/release-0.19.0/examples/rbd/secret/ceph-secret.yaml deleted file mode 100644 index 1acd1a29ea1..00000000000 --- a/release-0.19.0/examples/rbd/secret/ceph-secret.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: v1beta3 -kind: Secret -metadata: - name: ceph-secret -data: - key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ== diff --git a/release-0.19.0/examples/rbd/v1beta3/rbd-with-secret.json b/release-0.19.0/examples/rbd/v1beta3/rbd-with-secret.json deleted file mode 100644 index 295009d3f4a..00000000000 --- a/release-0.19.0/examples/rbd/v1beta3/rbd-with-secret.json +++ /dev/null @@ -1,42 +0,0 @@ -{ - "apiVersion": "v1beta3", - "id": "rbdpd2", - "kind": "Pod", - "metadata": { - "name": "rbd2" - }, - "spec": { - "containers": [ - { - "name": "rbd-rw", - "image": "kubernetes/pause", - "volumeMounts": [ - { - "mountPath": "/mnt/rbd", - "name": "rbdpd" - } - ] - } - ], - "volumes": [ - { - "name": "rbdpd", - "rbd": { - "monitors": [ - "10.16.154.78:6789", - "10.16.154.82:6789", - "10.16.154.83:6789" - ], - "pool": "kube", - "image": "foo", - "user": "admin", - "secretRef": { - "name": "ceph-secret" - }, - "fsType": "ext4", - "readOnly": true - } - } - ] - } -} diff --git a/release-0.19.0/examples/rbd/v1beta3/rbd.json b/release-0.19.0/examples/rbd/v1beta3/rbd.json deleted file mode 100644 index e704c8dab60..00000000000 --- a/release-0.19.0/examples/rbd/v1beta3/rbd.json +++ /dev/null @@ -1,40 +0,0 @@ -{ - "apiVersion": "v1beta3", - "id": "rbdpd", - "kind": "Pod", - "metadata": { - "name": "rbd" - }, - "spec": { - "containers": [ - { - "name": "rbd-rw", - "image": "kubernetes/pause", - "volumeMounts": [ - { - "mountPath": "/mnt/rbd", - "name": "rbdpd" - } - ] - } - ], - "volumes": [ - { - "name": "rbdpd", - "rbd": { - "monitors": [ - "10.16.154.78:6789", - "10.16.154.82:6789", - "10.16.154.83:6789" - ], - "pool": "kube", - "image": "foo", - "user": "admin", - "keyring": "/etc/ceph/keyring", - "fsType": "ext4", - "readOnly": true - } - } - ] - } -} diff --git a/release-0.19.0/examples/redis/README.md b/release-0.19.0/examples/redis/README.md deleted file mode 100644 index 82ab059805b..00000000000 --- a/release-0.19.0/examples/redis/README.md +++ /dev/null @@ -1,120 +0,0 @@ -## Reliable, Scalable Redis on Kubernetes - -The following document describes the deployment of a reliable, multi-node Redis on Kubernetes. It deploys a master with replicated slaves, as well as replicated redis sentinels which are use for health checking and failover. - -### Prerequisites -This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](../../docs/getting-started-guides) for installation instructions for your platform. - -### A note for the impatient -This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end. - -### Turning up an initial master/sentinel pod. -A [_Pod_](../../docs/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. 
- -We will used the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```. - -Here is the config for the initial master and sentinel pod: [redis-master.yaml](redis-master.yaml) - - -Create this master as follows: -```sh -kubectl create -f examples/redis/redis-master.yaml -``` - -### Turning up a sentinel service -In Kubernetes a [_Service_](../../docs/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. - -In Redis, we will use a Kubernetes Service to provide a discoverable endpoints for the Redis sentinels in the cluster. From the sentinels Redis clients can find the master, and then the slaves and other relevant info for the cluster. This enables new members to join the cluster when failures occur. - -Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml) - -Create this service: -```sh -kubectl create -f examples/redis/redis-sentinel-service.yaml -``` - -### Turning up replicated redis servers -So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying) our Redis service goes away with it. - -In Kubernetes a [_Replication Controller_](../../docs/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of it's set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with it's desired state. - -Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server. Here is the replication controller config: [redis-controller.yaml](redis-controller.yaml) - -The bulk of this controller config is actually identical to the redis-master pod definition above. It forms the template or "cookie cutter" that defines what it means to be a member of this set. - -Create this controller: - -```sh -kubectl create -f examples/redis/redis-controller.yaml -``` - -We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml) - -We create it as follows: -```sh -kubectl create -f examples/redis/redis-sentinel-controller.yaml -``` - -### Scale our replicated pods -Initially creating those pods didn't actually do anything, since we only asked for one sentinel and one redis server, and they already existed, nothing changed. Now we will add more replicas: - -```sh -kubectl scale rc redis --replicas=3 -``` - -```sh -kubectl scale rc redis-sentinel --replicas=3 -``` - -This will create two additional replicas of the redis server and two additional replicas of the redis sentinel. 
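To watch the additional replicas come up, a label-selector query like the following should work (the selectors here are assumed to match the `name` labels used in the two controller configs):

```sh
# list the pods owned by each replication controller
kubectl get pods -l name=redis
kubectl get pods -l name=redis-sentinel
```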
-
-Unlike our original redis-master pod, these pods exist independently, and they use the ```redis-sentinel-service``` that we defined above to discover and join the cluster.
-
-### Delete our manual pod
-The final step in turning up the cluster is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.
-
-Delete the master as follows:
-```sh
-kubectl delete pods redis-master
-```
-
-Now let's take a closer look at what happens after this pod is deleted. There are three things that happen:
-
- 1. The redis replication controller notices that its desired state is 3 replicas, but there are currently only 2 replicas, and so it creates a new redis server to bring the replica count back up to 3.
- 2. The redis-sentinel replication controller likewise notices the missing sentinel, and also creates a new sentinel.
- 3. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.
-
-### Conclusion
-At this point we have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
-
-### tl; dr
-For those of you who are impatient, here is the summary of commands we ran in this tutorial:
-
-```
-# Create a bootstrap master
-kubectl create -f examples/redis/redis-master.yaml
-
-# Create a service to track the sentinels
-kubectl create -f examples/redis/redis-sentinel-service.yaml
-
-# Create a replication controller for redis servers
-kubectl create -f examples/redis/redis-controller.yaml
-
-# Create a replication controller for redis sentinels
-kubectl create -f examples/redis/redis-sentinel-controller.yaml
-
-# Scale both replication controllers
-kubectl scale rc redis --replicas=3
-kubectl scale rc redis-sentinel --replicas=3
-
-# Delete the original master pod
-kubectl delete pods redis-master
-```
-
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/redis/README.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/redis/README.md?pixel)]()
diff --git a/release-0.19.0/examples/redis/image/Dockerfile b/release-0.19.0/examples/redis/image/Dockerfile
deleted file mode 100644
index c770efd8a4b..00000000000
--- a/release-0.19.0/examples/redis/image/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM redis:2.8
-RUN apt-get update
-RUN apt-get install -yy -q python
-
-COPY redis-master.conf /redis-master/redis.conf
-COPY redis-slave.conf /redis-slave/redis.conf
-COPY run.sh /run.sh
-COPY sentinel.py /sentinel.py
-
-CMD [ "/run.sh" ]
-ENTRYPOINT [ "sh", "-c" ]
diff --git a/release-0.19.0/examples/redis/image/redis-master.conf b/release-0.19.0/examples/redis/image/redis-master.conf
deleted file mode 100644
index a514219dcfd..00000000000
--- a/release-0.19.0/examples/redis/image/redis-master.conf
+++ /dev/null
@@ -1,827 +0,0 @@
-# Redis configuration file example
-
-# Note on units: when memory size is needed, it is
possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -################################## INCLUDES ################################### - -# Include one or more other config files here. This is useful if you -# have a standard template that goes to all Redis servers but also need -# to customize a few per-server settings. Include files can include -# other files, so use this wisely. -# -# Notice option "include" won't be rewritten by command "CONFIG REWRITE" -# from admin or Redis Sentinel. Since Redis always uses the last processed -# line as value of a configuration directive, you'd better put includes -# at the beginning of this file to avoid overwriting config change at runtime. -# -# If instead you are interested in using includes to override configuration -# options, it is better to use include as the last line. -# -# include /path/to/local.conf -# include /path/to/other.conf - -################################ GENERAL ##################################### - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize no - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -pidfile /var/run/redis.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6379 - -# TCP listen() backlog. -# -# In high requests-per-second environments you need an high backlog in order -# to avoid slow clients connections issues. Note that the Linux kernel -# will silently truncate it to the value of /proc/sys/net/core/somaxconn so -# make sure to raise both the value of somaxconn and tcp_max_syn_backlog -# in order to get the desired effect. -tcp-backlog 511 - -# By default Redis listens for connections from all the network interfaces -# available on the server. It is possible to listen to just one or multiple -# interfaces using the "bind" configuration directive, followed by one or -# more IP addresses. -# -# Examples: -# -# bind 192.168.1.100 10.0.0.1 -# bind 127.0.0.1 - -# Specify the path for the Unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 700 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. -# -# On Linux, the specified value (in seconds) is the period used to send ACKs. -# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 60 - -# Specify the server verbosity level. 
-# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel notice - -# Specify the log file name. Also the empty string can be used to force -# Redis to log on the standard output. Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile "" - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. -# syslog-enabled no - -# Specify the syslog identity. -# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################ -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving completely by commenting out all "save" lines. -# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -save 900 1 -save 300 10 -save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in a hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# disaster will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usual even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. -rdbchecksum yes - -# The filename where to dump the DB -dbfilename dump.rdb - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. 
-# -# The Append Only File will also be created inside this directory. -# -# Note that you must specify a directory here, not a file name. -dir /redis-master-data - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. A few things to understand ASAP about Redis replication. -# -# 1) Redis replication is asynchronous, but you can configure a master to -# stop accepting writes if it appears to be not connected with at least -# a given number of slaves. -# 2) Redis slaves are able to perform a partial resynchronization with the -# master if the replication link is lost for a relatively small amount of -# time. You may want to configure the replication backlog size (see the next -# sections of this file) with a sensible value depending on your needs. -# 3) Replication is automatic and does not need user intervention. After a -# network partition slaves automatically try to reconnect to masters -# and resynchronize with them. -# -# slaveof - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Replication SYNC strategy: disk or socket. -# -# ------------------------------------------------------- -# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY -# ------------------------------------------------------- -# -# New slaves and reconnecting slaves that are not able to continue the replication -# process just receiving differences, need to do what is called a "full -# synchronization". An RDB file is transmitted from the master to the slaves. -# The transmission can happen in two different ways: -# -# 1) Disk-backed: The Redis master creates a new process that writes the RDB -# file on disk. Later the file is transferred by the parent -# process to the slaves incrementally. 
-# 2) Diskless: The Redis master creates a new process that directly writes the -# RDB file to slave sockets, without touching the disk at all. -# -# With disk-backed replication, while the RDB file is generated, more slaves -# can be queued and served with the RDB file as soon as the current child producing -# the RDB file finishes its work. With diskless replication instead once -# the transfer starts, new slaves arriving will be queued and a new transfer -# will start when the current one terminates. -# -# When diskless replication is used, the master waits a configurable amount of -# time (in seconds) before starting the transfer in the hope that multiple slaves -# will arrive and the transfer can be parallelized. -# -# With slow disks and fast (large bandwidth) networks, diskless replication -# works better. -repl-diskless-sync no - -# When diskless replication is enabled, it is possible to configure the delay -# the server waits in order to spawn the child that trnasfers the RDB via socket -# to the slaves. -# -# This is important since once the transfer starts, it is not possible to serve -# new slaves arriving, that will be queued for the next RDB transfer, so the server -# waits a delay in order to let more slaves arrive. -# -# The delay is specified in seconds, and by default is 5 seconds. To disable -# it entirely just set it to 0 seconds and the transfer will start ASAP. -repl-diskless-sync-delay 5 - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets the replication timeout for: -# -# 1) Bulk transfer I/O during SYNC, from the point of view of slave. -# 2) Master timeout from the point of view of slaves (data, pings). -# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. -# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. -# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# Set the replication backlog size. The backlog is a buffer that accumulates -# slave data when slaves are disconnected for some time, so that when a slave -# wants to reconnect again, often a full resync is not needed, but a partial -# resync is enough, just passing the portion of data the slave missed while -# disconnected. -# -# The bigger the replication backlog, the longer the time the slave can be -# disconnected and later be able to perform a partial resynchronization. -# -# The backlog is only allocated once there is at least a slave connected. -# -# repl-backlog-size 1mb - -# After a master has no longer connected slaves for some time, the backlog -# will be freed. 
The following option configures the amount of seconds that -# need to elapse, starting from the time the last slave disconnected, for -# the backlog buffer to be freed. -# -# A value of 0 means to never release the backlog. -# -# repl-backlog-ttl 3600 - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. -# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one with priority 10, that is the lowest. -# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -# It is possible for a master to stop accepting writes if there are less than -# N slaves connected, having a lag less or equal than M seconds. -# -# The N slaves need to be in "online" state. -# -# The lag in seconds, that must be <= the specified value, is calculated from -# the last ping received from the slave, that is usually sent every second. -# -# This option does not GUARANTEE that N replicas will accept the write, but -# will limit the window of exposure for lost writes in case not enough slaves -# are available, to the specified number of seconds. -# -# For example to require at least 3 slaves with a lag <= 10 seconds use: -# -# min-slaves-to-write 3 -# min-slaves-max-lag 10 -# -# Setting one or the other to 0 disables the feature. -# -# By default min-slaves-to-write is set to 0 (feature disabled) and -# min-slaves-max-lag is set to 10. - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -# requirepass foobared - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. -# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). 
-# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# according to the eviction policy selected (see maxmemory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. -# -# This option is usually useful when using Redis as an LRU cache, or to set -# a hard memory limit for an instance (using the 'noeviction' policy). -# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). -# -# maxmemory - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key according to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are no suitable keys for eviction. -# -# At the date of writing these commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check. For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. 
For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. - -appendonly yes - -# The name of the append only file (default: "appendonly.aof") - -appendfilename "appendonly.aof" - -# The fsync() call tells the Operating System to actually write data on disk -# instead of waiting for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log. Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. -# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. - -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. -# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. 
Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -# An AOF file may be found to be truncated at the end during the Redis -# startup process, when the AOF data gets loaded back into memory. -# This may happen when the system where Redis is running -# crashes, especially when an ext4 filesystem is mounted without the -# data=ordered option (however this can't happen when Redis itself -# crashes or aborts but the operating system still works correctly). -# -# Redis can either exit with an error when this happens, or load as much -# data as possible (the default now) and start if the AOF file is found -# to be truncated at the end. The following option controls this behavior. -# -# If aof-load-truncated is set to yes, a truncated AOF file is loaded and -# the Redis server starts emitting a log to inform the user of the event. -# Otherwise if the option is set to no, the server aborts with an error -# and refuses to start. When the option is set to no, the user requires -# to fix the AOF file using the "redis-check-aof" utility before to restart -# the server. -# -# Note that if the AOF file will be found to be corrupted in the middle -# the server will still exit with an error. This option only applies when -# Redis will try to read more data from the AOF file but not enough bytes -# will be found. -aof-load-truncated yes - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. -# -# When a long running script exceeds the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write command was -# already issued by the script but the user doesn't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). -# -# You can configure the slow log with two parameters: one tells Redis -# what is the execution time, in microseconds, to exceed in order for the -# command to get logged, and the other parameter is the length of the -# slow log. When a new command is logged the oldest one is removed from the -# queue of logged commands. - -# The following time is expressed in microseconds, so 1000000 is equivalent -# to one second. Note that a negative number disables the slow log, while -# a value of zero forces the logging of every command. 
-slowlog-log-slower-than 10000 - -# There is no limit to this length. Just be aware that it will consume memory. -# You can reclaim memory used by the slow log with SLOWLOG RESET. -slowlog-max-len 128 - -################################ LATENCY MONITOR ############################## - -# The Redis latency monitoring subsystem samples different operations -# at runtime in order to collect data related to possible sources of -# latency of a Redis instance. -# -# Via the LATENCY command this information is available to the user that can -# print graphs and obtain reports. -# -# The system only logs operations that were performed in a time equal or -# greater than the amount of milliseconds specified via the -# latency-monitor-threshold configuration directive. When its value is set -# to zero, the latency monitor is turned off. -# -# By default latency monitoring is disabled since it is mostly not needed -# if you don't have latency issues, and collecting data has a performance -# impact, that while very small, can be measured under big load. Latency -# monitoring can easily be enalbed at runtime using the command -# "CONFIG SET latency-monitor-threshold " if needed. -latency-monitor-threshold 0 - -############################# Event notification ############################## - -# Redis can notify Pub/Sub clients about events happening in the key space. -# This feature is documented at http://redis.io/topics/notifications -# -# For instance if keyspace events notification is enabled, and a client -# performs a DEL operation on key "foo" stored in the Database 0, two -# messages will be published via Pub/Sub: -# -# PUBLISH __keyspace@0__:foo del -# PUBLISH __keyevent@0__:del foo -# -# It is possible to select the events that Redis will notify among a set -# of classes. Every class is identified by a single character: -# -# K Keyspace events, published with __keyspace@__ prefix. -# E Keyevent events, published with __keyevent@__ prefix. -# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... -# $ String commands -# l List commands -# s Set commands -# h Hash commands -# z Sorted set commands -# x Expired events (events generated every time a key expires) -# e Evicted events (events generated when a key is evicted for maxmemory) -# A Alias for g$lshzxe, so that the "AKE" string means all the events. -# -# The "notify-keyspace-events" takes as argument a string that is composed -# of zero or multiple characters. The empty string means that notifications -# are disabled. -# -# Example: to enable list and generic events, from the point of view of the -# event name, use: -# -# notify-keyspace-events Elg -# -# Example 2: to get the stream of the expired keys subscribing to channel -# name __keyevent@0__:expired use: -# -# notify-keyspace-events Ex -# -# By default all notifications are disabled because most users don't need -# this feature and the feature has some overhead. Note that if you don't -# specify at least one of K or E, no events will be delivered. -notify-keyspace-events "" - -############################### ADVANCED CONFIG ############################### - -# Hashes are encoded using a memory efficient data structure when they have a -# small number of entries, and the biggest entry does not exceed a given -# threshold. These thresholds can be configured using the following directives. -hash-max-ziplist-entries 512 -hash-max-ziplist-value 64 - -# Similarly to hashes, small lists are also encoded in a special way in order -# to save a lot of space. 
The special representation is only used when -# you are under the following limits: -list-max-ziplist-entries 512 -list-max-ziplist-value 64 - -# Sets have a special encoding in just one case: when a set is composed -# of just strings that happen to be integers in radix 10 in the range -# of 64 bit signed integers. -# The following configuration setting sets the limit in the size of the -# set in order to use this special memory saving encoding. -set-max-intset-entries 512 - -# Similarly to hashes and lists, sorted sets are also specially encoded in -# order to save a lot of space. This encoding is only used when the length and -# elements of a sorted set are below the following limits: -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 - -# HyperLogLog sparse representation bytes limit. The limit includes the -# 16 bytes header. When an HyperLogLog using the sparse representation crosses -# this limit, it is converted into the dense representation. -# -# A value greater than 16000 is totally useless, since at that point the -# dense representation is more memory efficient. -# -# The suggested value is ~ 3000 in order to have the benefits of -# the space efficient encoding without slowing down too much PFADD, -# which is O(N) with the sparse encoding. The value can be raised to -# ~ 10000 when CPU is not a concern, but space is, and the data set is -# composed of many HyperLogLogs with cardinality in the 0 - 15000 range. -hll-sparse-max-bytes 3000 - -# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in -# order to help rehashing the main Redis hash table (the one mapping top-level -# keys to values). The hash table implementation Redis uses (see dict.c) -# performs a lazy rehashing: the more operation you run into a hash table -# that is rehashing, the more rehashing "steps" are performed, so if the -# server is idle the rehashing is never complete and some more memory is used -# by the hash table. -# -# The default is to use this millisecond 10 times every second in order to -# actively rehash the main dictionaries, freeing memory when possible. -# -# If unsure: -# use "activerehashing no" if you have hard latency requirements and it is -# not a good thing in your environment that Redis can reply from time to time -# to queries with 2 milliseconds delay. -# -# use "activerehashing yes" if you don't have such hard requirements but -# want to free memory asap when possible. -activerehashing yes - -# The client output buffer limits can be used to force disconnection of clients -# that are not reading data from the server fast enough for some reason (a -# common reason is that a Pub/Sub client can't consume messages as fast as the -# publisher can produce them). -# -# The limit can be set differently for the three different classes of clients: -# -# normal -> normal clients including MONITOR clients -# slave -> slave clients -# pubsub -> clients subscribed to at least one pubsub channel or pattern -# -# The syntax of every client-output-buffer-limit directive is the following: -# -# client-output-buffer-limit -# -# A client is immediately disconnected once the hard limit is reached, or if -# the soft limit is reached and remains reached for the specified number of -# seconds (continuously). 
-# So for instance if the hard limit is 32 megabytes and the soft limit is -# 16 megabytes / 10 seconds, the client will get disconnected immediately -# if the size of the output buffers reach 32 megabytes, but will also get -# disconnected if the client reaches 16 megabytes and continuously overcomes -# the limit for 10 seconds. -# -# By default normal clients are not limited because they don't receive data -# without asking (in a push way), but just after a request, so only -# asynchronous clients may create a scenario where data is requested faster -# than it can read. -# -# Instead there is a default limit for pubsub and slave clients, since -# subscribers and slaves receive data in a push fashion. -# -# Both the hard or the soft limit can be disabled by setting them to zero. -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 - -# Redis calls an internal function to perform many background tasks, like -# closing connections of clients in timeout, purging expired keys that are -# never requested, and so forth. -# -# Not all tasks are performed with the same frequency, but Redis checks for -# tasks to perform according to the specified "hz" value. -# -# By default "hz" is set to 10. Raising the value will use more CPU when -# Redis is idle, but at the same time will make Redis more responsive when -# there are many keys expiring at the same time, and timeouts may be -# handled with more precision. -# -# The range is between 1 and 500, however a value over 100 is usually not -# a good idea. Most users should use the default of 10 and raise this up to -# 100 only in environments where very low latency is required. -hz 10 - -# When a child rewrites the AOF file, if the following option is enabled -# the file will be fsync-ed every 32 MB of data generated. This is useful -# in order to commit the file to the disk more incrementally and avoid -# big latency spikes. -aof-rewrite-incremental-fsync yes diff --git a/release-0.19.0/examples/redis/image/redis-slave.conf b/release-0.19.0/examples/redis/image/redis-slave.conf deleted file mode 100644 index cb01c10a0e8..00000000000 --- a/release-0.19.0/examples/redis/image/redis-slave.conf +++ /dev/null @@ -1,827 +0,0 @@ -# Redis configuration file example - -# Note on units: when memory size is needed, it is possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -################################## INCLUDES ################################### - -# Include one or more other config files here. This is useful if you -# have a standard template that goes to all Redis servers but also need -# to customize a few per-server settings. Include files can include -# other files, so use this wisely. -# -# Notice option "include" won't be rewritten by command "CONFIG REWRITE" -# from admin or Redis Sentinel. Since Redis always uses the last processed -# line as value of a configuration directive, you'd better put includes -# at the beginning of this file to avoid overwriting config change at runtime. -# -# If instead you are interested in using includes to override configuration -# options, it is better to use include as the last line. 
-# -# include /path/to/local.conf -# include /path/to/other.conf - -################################ GENERAL ##################################### - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize no - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -pidfile /var/run/redis.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6379 - -# TCP listen() backlog. -# -# In high requests-per-second environments you need an high backlog in order -# to avoid slow clients connections issues. Note that the Linux kernel -# will silently truncate it to the value of /proc/sys/net/core/somaxconn so -# make sure to raise both the value of somaxconn and tcp_max_syn_backlog -# in order to get the desired effect. -tcp-backlog 511 - -# By default Redis listens for connections from all the network interfaces -# available on the server. It is possible to listen to just one or multiple -# interfaces using the "bind" configuration directive, followed by one or -# more IP addresses. -# -# Examples: -# -# bind 192.168.1.100 10.0.0.1 -# bind 127.0.0.1 - -# Specify the path for the Unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 700 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. -# -# On Linux, the specified value (in seconds) is the period used to send ACKs. -# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 60 - -# Specify the server verbosity level. -# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel notice - -# Specify the log file name. Also the empty string can be used to force -# Redis to log on the standard output. Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile "" - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. -# syslog-enabled no - -# Specify the syslog identity. -# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. 
The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################ -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving completely by commenting out all "save" lines. -# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -save 900 1 -save 300 10 -save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in a hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# disaster will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usual even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. -rdbchecksum yes - -# The filename where to dump the DB -dbfilename dump.rdb - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. -# -# The Append Only File will also be created inside this directory. -# -# Note that you must specify a directory here, not a file name. -dir "./" - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. A few things to understand ASAP about Redis replication. -# -# 1) Redis replication is asynchronous, but you can configure a master to -# stop accepting writes if it appears to be not connected with at least -# a given number of slaves. -# 2) Redis slaves are able to perform a partial resynchronization with the -# master if the replication link is lost for a relatively small amount of -# time. You may want to configure the replication backlog size (see the next -# sections of this file) with a sensible value depending on your needs. -# 3) Replication is automatic and does not need user intervention. 
After a -# network partition slaves automatically try to reconnect to masters -# and resynchronize with them. -# -slaveof %master-ip% %master-port% - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Replication SYNC strategy: disk or socket. -# -# ------------------------------------------------------- -# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY -# ------------------------------------------------------- -# -# New slaves and reconnecting slaves that are not able to continue the replication -# process just receiving differences, need to do what is called a "full -# synchronization". An RDB file is transmitted from the master to the slaves. -# The transmission can happen in two different ways: -# -# 1) Disk-backed: The Redis master creates a new process that writes the RDB -# file on disk. Later the file is transferred by the parent -# process to the slaves incrementally. -# 2) Diskless: The Redis master creates a new process that directly writes the -# RDB file to slave sockets, without touching the disk at all. -# -# With disk-backed replication, while the RDB file is generated, more slaves -# can be queued and served with the RDB file as soon as the current child producing -# the RDB file finishes its work. With diskless replication instead once -# the transfer starts, new slaves arriving will be queued and a new transfer -# will start when the current one terminates. -# -# When diskless replication is used, the master waits a configurable amount of -# time (in seconds) before starting the transfer in the hope that multiple slaves -# will arrive and the transfer can be parallelized. -# -# With slow disks and fast (large bandwidth) networks, diskless replication -# works better. 
-repl-diskless-sync no - -# When diskless replication is enabled, it is possible to configure the delay -# the server waits in order to spawn the child that trnasfers the RDB via socket -# to the slaves. -# -# This is important since once the transfer starts, it is not possible to serve -# new slaves arriving, that will be queued for the next RDB transfer, so the server -# waits a delay in order to let more slaves arrive. -# -# The delay is specified in seconds, and by default is 5 seconds. To disable -# it entirely just set it to 0 seconds and the transfer will start ASAP. -repl-diskless-sync-delay 5 - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets the replication timeout for: -# -# 1) Bulk transfer I/O during SYNC, from the point of view of slave. -# 2) Master timeout from the point of view of slaves (data, pings). -# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. -# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. -# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# Set the replication backlog size. The backlog is a buffer that accumulates -# slave data when slaves are disconnected for some time, so that when a slave -# wants to reconnect again, often a full resync is not needed, but a partial -# resync is enough, just passing the portion of data the slave missed while -# disconnected. -# -# The bigger the replication backlog, the longer the time the slave can be -# disconnected and later be able to perform a partial resynchronization. -# -# The backlog is only allocated once there is at least a slave connected. -# -# repl-backlog-size 1mb - -# After a master has no longer connected slaves for some time, the backlog -# will be freed. The following option configures the amount of seconds that -# need to elapse, starting from the time the last slave disconnected, for -# the backlog buffer to be freed. -# -# A value of 0 means to never release the backlog. -# -# repl-backlog-ttl 3600 - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. -# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one with priority 10, that is the lowest. 
-# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -# It is possible for a master to stop accepting writes if there are less than -# N slaves connected, having a lag less or equal than M seconds. -# -# The N slaves need to be in "online" state. -# -# The lag in seconds, that must be <= the specified value, is calculated from -# the last ping received from the slave, that is usually sent every second. -# -# This option does not GUARANTEE that N replicas will accept the write, but -# will limit the window of exposure for lost writes in case not enough slaves -# are available, to the specified number of seconds. -# -# For example to require at least 3 slaves with a lag <= 10 seconds use: -# -# min-slaves-to-write 3 -# min-slaves-max-lag 10 -# -# Setting one or the other to 0 disables the feature. -# -# By default min-slaves-to-write is set to 0 (feature disabled) and -# min-slaves-max-lag is set to 10. - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -# requirepass foobared - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. -# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). -# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# according to the eviction policy selected (see maxmemory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. 
-# -# This option is usually useful when using Redis as an LRU cache, or to set -# a hard memory limit for an instance (using the 'noeviction' policy). -# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). -# -# maxmemory - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key according to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are no suitable keys for eviction. -# -# At the date of writing these commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check. For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. 
- -appendonly yes - -# The name of the append only file (default: "appendonly.aof") - -appendfilename "appendonly.aof" - -# The fsync() call tells the Operating System to actually write data on disk -# instead of waiting for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log. Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. -# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. - -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. -# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -# An AOF file may be found to be truncated at the end during the Redis -# startup process, when the AOF data gets loaded back into memory. 
-# This may happen when the system where Redis is running -# crashes, especially when an ext4 filesystem is mounted without the -# data=ordered option (however this can't happen when Redis itself -# crashes or aborts but the operating system still works correctly). -# -# Redis can either exit with an error when this happens, or load as much -# data as possible (the default now) and start if the AOF file is found -# to be truncated at the end. The following option controls this behavior. -# -# If aof-load-truncated is set to yes, a truncated AOF file is loaded and -# the Redis server starts emitting a log to inform the user of the event. -# Otherwise if the option is set to no, the server aborts with an error -# and refuses to start. When the option is set to no, the user requires -# to fix the AOF file using the "redis-check-aof" utility before to restart -# the server. -# -# Note that if the AOF file will be found to be corrupted in the middle -# the server will still exit with an error. This option only applies when -# Redis will try to read more data from the AOF file but not enough bytes -# will be found. -aof-load-truncated yes - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. -# -# When a long running script exceeds the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write command was -# already issued by the script but the user doesn't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). -# -# You can configure the slow log with two parameters: one tells Redis -# what is the execution time, in microseconds, to exceed in order for the -# command to get logged, and the other parameter is the length of the -# slow log. When a new command is logged the oldest one is removed from the -# queue of logged commands. - -# The following time is expressed in microseconds, so 1000000 is equivalent -# to one second. Note that a negative number disables the slow log, while -# a value of zero forces the logging of every command. -slowlog-log-slower-than 10000 - -# There is no limit to this length. Just be aware that it will consume memory. -# You can reclaim memory used by the slow log with SLOWLOG RESET. -slowlog-max-len 128 - -################################ LATENCY MONITOR ############################## - -# The Redis latency monitoring subsystem samples different operations -# at runtime in order to collect data related to possible sources of -# latency of a Redis instance. 
-# -# Via the LATENCY command this information is available to the user that can -# print graphs and obtain reports. -# -# The system only logs operations that were performed in a time equal or -# greater than the amount of milliseconds specified via the -# latency-monitor-threshold configuration directive. When its value is set -# to zero, the latency monitor is turned off. -# -# By default latency monitoring is disabled since it is mostly not needed -# if you don't have latency issues, and collecting data has a performance -# impact, that while very small, can be measured under big load. Latency -# monitoring can easily be enalbed at runtime using the command -# "CONFIG SET latency-monitor-threshold " if needed. -latency-monitor-threshold 0 - -############################# Event notification ############################## - -# Redis can notify Pub/Sub clients about events happening in the key space. -# This feature is documented at http://redis.io/topics/notifications -# -# For instance if keyspace events notification is enabled, and a client -# performs a DEL operation on key "foo" stored in the Database 0, two -# messages will be published via Pub/Sub: -# -# PUBLISH __keyspace@0__:foo del -# PUBLISH __keyevent@0__:del foo -# -# It is possible to select the events that Redis will notify among a set -# of classes. Every class is identified by a single character: -# -# K Keyspace events, published with __keyspace@__ prefix. -# E Keyevent events, published with __keyevent@__ prefix. -# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... -# $ String commands -# l List commands -# s Set commands -# h Hash commands -# z Sorted set commands -# x Expired events (events generated every time a key expires) -# e Evicted events (events generated when a key is evicted for maxmemory) -# A Alias for g$lshzxe, so that the "AKE" string means all the events. -# -# The "notify-keyspace-events" takes as argument a string that is composed -# of zero or multiple characters. The empty string means that notifications -# are disabled. -# -# Example: to enable list and generic events, from the point of view of the -# event name, use: -# -# notify-keyspace-events Elg -# -# Example 2: to get the stream of the expired keys subscribing to channel -# name __keyevent@0__:expired use: -# -# notify-keyspace-events Ex -# -# By default all notifications are disabled because most users don't need -# this feature and the feature has some overhead. Note that if you don't -# specify at least one of K or E, no events will be delivered. -notify-keyspace-events "" - -############################### ADVANCED CONFIG ############################### - -# Hashes are encoded using a memory efficient data structure when they have a -# small number of entries, and the biggest entry does not exceed a given -# threshold. These thresholds can be configured using the following directives. -hash-max-ziplist-entries 512 -hash-max-ziplist-value 64 - -# Similarly to hashes, small lists are also encoded in a special way in order -# to save a lot of space. The special representation is only used when -# you are under the following limits: -list-max-ziplist-entries 512 -list-max-ziplist-value 64 - -# Sets have a special encoding in just one case: when a set is composed -# of just strings that happen to be integers in radix 10 in the range -# of 64 bit signed integers. -# The following configuration setting sets the limit in the size of the -# set in order to use this special memory saving encoding. 
-set-max-intset-entries 512 - -# Similarly to hashes and lists, sorted sets are also specially encoded in -# order to save a lot of space. This encoding is only used when the length and -# elements of a sorted set are below the following limits: -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 - -# HyperLogLog sparse representation bytes limit. The limit includes the -# 16 bytes header. When an HyperLogLog using the sparse representation crosses -# this limit, it is converted into the dense representation. -# -# A value greater than 16000 is totally useless, since at that point the -# dense representation is more memory efficient. -# -# The suggested value is ~ 3000 in order to have the benefits of -# the space efficient encoding without slowing down too much PFADD, -# which is O(N) with the sparse encoding. The value can be raised to -# ~ 10000 when CPU is not a concern, but space is, and the data set is -# composed of many HyperLogLogs with cardinality in the 0 - 15000 range. -hll-sparse-max-bytes 3000 - -# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in -# order to help rehashing the main Redis hash table (the one mapping top-level -# keys to values). The hash table implementation Redis uses (see dict.c) -# performs a lazy rehashing: the more operation you run into a hash table -# that is rehashing, the more rehashing "steps" are performed, so if the -# server is idle the rehashing is never complete and some more memory is used -# by the hash table. -# -# The default is to use this millisecond 10 times every second in order to -# actively rehash the main dictionaries, freeing memory when possible. -# -# If unsure: -# use "activerehashing no" if you have hard latency requirements and it is -# not a good thing in your environment that Redis can reply from time to time -# to queries with 2 milliseconds delay. -# -# use "activerehashing yes" if you don't have such hard requirements but -# want to free memory asap when possible. -activerehashing yes - -# The client output buffer limits can be used to force disconnection of clients -# that are not reading data from the server fast enough for some reason (a -# common reason is that a Pub/Sub client can't consume messages as fast as the -# publisher can produce them). -# -# The limit can be set differently for the three different classes of clients: -# -# normal -> normal clients including MONITOR clients -# slave -> slave clients -# pubsub -> clients subscribed to at least one pubsub channel or pattern -# -# The syntax of every client-output-buffer-limit directive is the following: -# -# client-output-buffer-limit -# -# A client is immediately disconnected once the hard limit is reached, or if -# the soft limit is reached and remains reached for the specified number of -# seconds (continuously). -# So for instance if the hard limit is 32 megabytes and the soft limit is -# 16 megabytes / 10 seconds, the client will get disconnected immediately -# if the size of the output buffers reach 32 megabytes, but will also get -# disconnected if the client reaches 16 megabytes and continuously overcomes -# the limit for 10 seconds. -# -# By default normal clients are not limited because they don't receive data -# without asking (in a push way), but just after a request, so only -# asynchronous clients may create a scenario where data is requested faster -# than it can read. -# -# Instead there is a default limit for pubsub and slave clients, since -# subscribers and slaves receive data in a push fashion. 
-# -# Both the hard or the soft limit can be disabled by setting them to zero. -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 - -# Redis calls an internal function to perform many background tasks, like -# closing connections of clients in timeout, purging expired keys that are -# never requested, and so forth. -# -# Not all tasks are performed with the same frequency, but Redis checks for -# tasks to perform according to the specified "hz" value. -# -# By default "hz" is set to 10. Raising the value will use more CPU when -# Redis is idle, but at the same time will make Redis more responsive when -# there are many keys expiring at the same time, and timeouts may be -# handled with more precision. -# -# The range is between 1 and 500, however a value over 100 is usually not -# a good idea. Most users should use the default of 10 and raise this up to -# 100 only in environments where very low latency is required. -hz 10 - -# When a child rewrites the AOF file, if the following option is enabled -# the file will be fsync-ed every 32 MB of data generated. This is useful -# in order to commit the file to the disk more incrementally and avoid -# big latency spikes. -aof-rewrite-incremental-fsync yes diff --git a/release-0.19.0/examples/redis/image/run.sh b/release-0.19.0/examples/redis/image/run.sh deleted file mode 100755 index 90815a1b81f..00000000000 --- a/release-0.19.0/examples/redis/image/run.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -function launchmaster() { - if [[ ! -e /redis-master-data ]]; then - echo "Redis master data doesn't exist, data won't be persistent!" - mkdir /redis-master-data - fi - redis-server /redis-master/redis.conf -} - -function launchsentinel() { - while true; do - master=$(redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} --csv SENTINEL get-master-addr-by-name mymaster | tr ',' ' ' | cut -d' ' -f1) - if [[ -n ${master} ]]; then - master="${master//\"}" - else - master=$(hostname -i) - fi - - redis-cli -h ${master} INFO - if [[ "$?" == "0" ]]; then - break - fi - echo "Connecting to master failed. Waiting..." - sleep 10 - done - - sentinel_conf=sentinel.conf - - echo "sentinel monitor mymaster ${master} 6379 2" > ${sentinel_conf} - echo "sentinel down-after-milliseconds mymaster 60000" >> ${sentinel_conf} - echo "sentinel failover-timeout mymaster 180000" >> ${sentinel_conf} - echo "sentinel parallel-syncs mymaster 1" >> ${sentinel_conf} - - redis-sentinel ${sentinel_conf} -} - -function launchslave() { - while true; do - master=$(redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} --csv SENTINEL get-master-addr-by-name mymaster | tr ',' ' ' | cut -d' ' -f1) - if [[ -n ${master} ]]; then - master="${master//\"}" - else - echo "Failed to find master." 
- sleep 60 - exit 1 - fi - redis-cli -h ${master} INFO - if [[ "$?" == "0" ]]; then - break - fi - echo "Connecting to master failed. Waiting..." - sleep 10 - done - perl -pi -e "s/%master-ip%/${master}/" /redis-slave/redis.conf - perl -pi -e "s/%master-port%/6379/" /redis-slave/redis.conf - redis-server /redis-slave/redis.conf -} - -if [[ "${MASTER}" == "true" ]]; then - launchmaster - exit 0 -fi - -if [[ "${SENTINEL}" == "true" ]]; then - launchsentinel - exit 0 -fi - -launchslave diff --git a/release-0.19.0/examples/redis/redis-controller.yaml b/release-0.19.0/examples/redis/redis-controller.yaml deleted file mode 100644 index 03f667a9814..00000000000 --- a/release-0.19.0/examples/redis/redis-controller.yaml +++ /dev/null @@ -1,28 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: redis -spec: - replicas: 1 - selector: - name: redis - template: - metadata: - labels: - name: redis - spec: - containers: - - name: redis - image: kubernetes/redis:v1 - ports: - - containerPort: 6379 - resources: - limits: - cpu: "1" - volumeMounts: - - mountPath: /redis-master-data - name: data - volumes: - - name: data - emptyDir: {} - diff --git a/release-0.19.0/examples/redis/redis-master.yaml b/release-0.19.0/examples/redis/redis-master.yaml deleted file mode 100644 index 02abada976d..00000000000 --- a/release-0.19.0/examples/redis/redis-master.yaml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: redis - redis-sentinel: "true" - role: master - name: redis-master -spec: - containers: - - name: master - image: kubernetes/redis:v1 - env: - - name: MASTER - value: "true" - ports: - - containerPort: 6379 - resources: - limits: - cpu: "1" - volumeMounts: - - mountPath: /redis-master-data - name: data - - name: sentinel - image: kubernetes/redis:v1 - env: - - name: SENTINEL - value: "true" - ports: - - containerPort: 26379 - volumes: - - name: data - emptyDir: {} diff --git a/release-0.19.0/examples/redis/redis-proxy.yaml b/release-0.19.0/examples/redis/redis-proxy.yaml deleted file mode 100644 index 2993a45bf10..00000000000 --- a/release-0.19.0/examples/redis/redis-proxy.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - name: redis-proxy - role: proxy - name: redis-proxy -spec: - containers: - - name: proxy - image: kubernetes/redis-proxy:v1 - ports: - - containerPort: 6379 - name: api diff --git a/release-0.19.0/examples/redis/redis-sentinel-controller.yaml b/release-0.19.0/examples/redis/redis-sentinel-controller.yaml deleted file mode 100644 index d75887736fa..00000000000 --- a/release-0.19.0/examples/redis/redis-sentinel-controller.yaml +++ /dev/null @@ -1,23 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: redis-sentinel -spec: - replicas: 1 - selector: - redis-sentinel: "true" - template: - metadata: - labels: - name: redis-sentinel - redis-sentinel: "true" - role: sentinel - spec: - containers: - - name: sentinel - image: kubernetes/redis:v1 - env: - - name: SENTINEL - value: "true" - ports: - - containerPort: 26379 diff --git a/release-0.19.0/examples/redis/redis-sentinel-service.yaml b/release-0.19.0/examples/redis/redis-sentinel-service.yaml deleted file mode 100644 index 7078d182f3f..00000000000 --- a/release-0.19.0/examples/redis/redis-sentinel-service.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - name: sentinel - role: service - name: redis-sentinel -spec: - ports: - - port: 26379 - targetPort: 26379 - selector: 
- redis-sentinel: "true" diff --git a/release-0.19.0/examples/replication.yaml b/release-0.19.0/examples/replication.yaml deleted file mode 100644 index 6692777adf3..00000000000 --- a/release-0.19.0/examples/replication.yaml +++ /dev/null @@ -1,23 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: nginx - namespace: default -spec: - replicas: 3 - selector: - app: nginx - template: - metadata: - name: nginx - labels: - app: nginx - spec: - containers: - - image: nginx - imagePullPolicy: IfNotPresent - name: nginx - ports: - - containerPort: 80 - protocol: TCP - restartPolicy: Always diff --git a/release-0.19.0/examples/resourcequota/README.md b/release-0.19.0/examples/resourcequota/README.md deleted file mode 100644 index 79d4d078af8..00000000000 --- a/release-0.19.0/examples/resourcequota/README.md +++ /dev/null @@ -1,155 +0,0 @@ -Resource Quota -======================================== -This example demonstrates how resource quota and limits can be applied to a Kubernetes namespace. - -This example assumes you have a functional Kubernetes setup. - -Step 1: Create a namespace ------------------------------------------ -This example will work in a custom namespace to demonstrate the concepts involved. - -Let's create a new namespace called quota-example: - -```shell -$ kubectl create -f namespace.yaml -$ kubectl get namespaces -NAME LABELS STATUS -default Active -quota-example Active -``` - -Step 2: Apply a quota to the namespace ------------------------------------------ -By default, a pod will run with unbounded CPU and memory limits. This means that any pod in the -system will be able to consume as much CPU and memory on the node that executes the pod. - -Users may want to restrict how much of the cluster resources a given namespace may consume -across all of its pods in order to manage cluster usage. To do this, a user applies a quota to -a namespace. A quota lets the user set hard limits on the total amount of node resources (cpu, memory) -and API resources (pods, services, etc.) that a namespace may consume. - -Let's create a simple quota in our namespace: - -```shell -$ kubectl create -f quota.yaml --namespace=quota-example -``` - -Once your quota is applied to a namespace, the system will restrict any creation of content -in the namespace until the quota usage has been calculated. This should happen quickly. - -You can describe your current quota usage to see what resources are being consumed in your -namespace. - -``` -$ kubectl describe quota quota --namespace=quota-example -Name: quota -Resource Used Hard --------- ---- ---- -cpu 0m 20 -memory 0m 1Gi -persistentvolumeclaims 0m 10 -pods 0m 10 -replicationcontrollers 0m 20 -resourcequotas 1 1 -secrets 1 10 -services 0m 5 -``` - -Step 3: Applying default resource limits ------------------------------------------ -Pod authors rarely specify resource limits for their pods. - -Since we applied a quota to our project, let's see what happens when an end-user creates a pod that has unbounded -cpu and memory by creating an nginx container. - -To demonstrate, lets create a replication controller that runs nginx: - -```shell -$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -nginx nginx nginx run=nginx 1 -``` - -Now let's look at the pods that were created. - -```shell -$ kubectl get pods --namespace=quota-example -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -``` - -What happened? I have no pods! 
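-
-One quick way to see why is to list the events in the namespace (a sketch; the exact
-output columns vary between releases), before digging into the controller itself:
-
-```shell
-$ kubectl get events --namespace=quota-example
-```
-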
Let's describe the replication controller to get a view of what is happening. - -```shell -kubectl describe rc nginx --namespace=quota-example -Name: nginx -Image(s): nginx -Selector: run=nginx -Labels: run=nginx -Replicas: 0 current / 1 desired -Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed -Events: - FirstSeen LastSeen Count From SubobjectPath Reason Message - Mon, 01 Jun 2015 22:49:31 -0400 Mon, 01 Jun 2015 22:52:22 -0400 7 {replication-controller } failedCreate Error creating: Pod "nginx-" is forbidden: Limited to 1Gi memory, but pod has no specified memory limit -``` - -The Kubernetes API server is rejecting the replication controllers requests to create a pod because our pods -do not specify any memory usage. - -So let's set some default limits for the amount of cpu and memory a pod can consume: - -```shell -$ kubectl create -f limits.yaml --namespace=quota-example -limitranges/limits -$ kubectl describe limits limits --namespace=quota-example -Name: limits -Type Resource Min Max Default ----- -------- --- --- --- -Container cpu - - 100m -Container memory - - 512Mi -``` - -Now any time a pod is created in this namespace, if it has not specified any resource limits, the default -amount of cpu and memory per container will be applied as part of admission control. - -Now that we have applied default limits for our namespace, our replication controller should be able to -create its pods. - -```shell -$ kubectl get pods --namespace=quota-example -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -nginx-t40zm 10.0.0.2 10.245.1.3/10.245.1.3 run=nginx Running 2 minutes - nginx nginx Running 2 minutes -``` - -And if we print out our quota usage in the namespace: - -```shell -kubectl describe quota quota --namespace=quota-example -Name: quota -Resource Used Hard --------- ---- ---- -cpu 100m 20 -memory 536870912 1Gi -persistentvolumeclaims 0m 10 -pods 1 10 -replicationcontrollers 1 20 -resourcequotas 1 1 -secrets 1 10 -services 0m 5 -``` - -You can now see the pod that was created is consuming explicit amounts of resources, and the usage is being -tracked by the Kubernetes system properly. - -Summary ----------------------------- -Actions that consume node resources for cpu and memory can be subject to hard quota limits defined -by the namespace quota. - -Any action that consumes those resources can be tweaked, or can pick up namespace level defaults to -meet your end goal. 
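-
-For example, instead of relying on the LimitRange defaults, a pod can declare its own
-limits up front and be admitted against the quota directly. A minimal sketch in the
-v1beta3 form used by these examples (the pod name here is illustrative):
-
-```yaml
-apiVersion: v1beta3
-kind: Pod
-metadata:
-  name: quota-demo
-  namespace: quota-example
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-    resources:
-      limits:
-        cpu: 100m
-        memory: 512Mi
-```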
- -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/resourcequota/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/resourcequota/README.md?pixel)]() diff --git a/release-0.19.0/examples/resourcequota/limits.yaml b/release-0.19.0/examples/resourcequota/limits.yaml deleted file mode 100644 index edba3d8318c..00000000000 --- a/release-0.19.0/examples/resourcequota/limits.yaml +++ /dev/null @@ -1,10 +0,0 @@ -apiVersion: v1beta3 -kind: LimitRange -metadata: - name: limits -spec: - limits: - - default: - cpu: 100m - memory: 512Mi - type: Container diff --git a/release-0.19.0/examples/resourcequota/namespace.yaml b/release-0.19.0/examples/resourcequota/namespace.yaml deleted file mode 100644 index 93f3dfb8fc8..00000000000 --- a/release-0.19.0/examples/resourcequota/namespace.yaml +++ /dev/null @@ -1,4 +0,0 @@ -apiVersion: v1beta3 -kind: Namespace -metadata: - name: quota-example diff --git a/release-0.19.0/examples/resourcequota/quota.yaml b/release-0.19.0/examples/resourcequota/quota.yaml deleted file mode 100644 index 61493a0167c..00000000000 --- a/release-0.19.0/examples/resourcequota/quota.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1beta3 -kind: ResourceQuota -metadata: - name: quota -spec: - hard: - cpu: "20" - memory: 1Gi - persistentvolumeclaims: "10" - pods: "10" - replicationcontrollers: "20" - resourcequotas: "1" - secrets: "10" - services: "5" diff --git a/release-0.19.0/examples/rethinkdb/README.md b/release-0.19.0/examples/rethinkdb/README.md deleted file mode 100644 index e760648d43e..00000000000 --- a/release-0.19.0/examples/rethinkdb/README.md +++ /dev/null @@ -1,138 +0,0 @@ -RethinkDB Cluster on Kubernetes -============================== - -Setting up a [rethinkdb](http://rethinkdb.com/) cluster on [kubernetes](http://kubernetes.io) - -**Features** - - * Auto configuration cluster by querying info from k8s - * Simple - -Quick start ------------ -**Step 0** - -change the namespace of the current context to "rethinkdb" -``` -$kubectl config view -o template --template='{{index . 
"current-context"}}' | xargs -I {} kubectl config set-context {} --namespace=rethinkdb -``` - -**Step 1** - -antmanler/rethinkdb will discover peer using endpoints provided by kubernetes_ro service, -so first create a service so the following pod can query its endpoint - -```shell -$kubectl create -f driver-service.yaml -``` - -check out: - -```shell -$kubectl get se -NAME LABELS SELECTOR IP(S) PORT(S) -rethinkdb-driver db=influxdb db=rethinkdb 10.0.27.114 28015/TCP -``` - -**Step 2** - -start fist server in cluster - -```shell -$kubectl create -f rc.yaml -``` - -Actually, you can start servers as many as you want at one time, just modify the `replicas` in `rc.ymal` - -check out again: - -```shell -$kubectl get po -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -rethinkdb-rc-1.16.0-6odi0 kubernetes-minion-s59e/ db=rethinkdb,role=replicas Pending 11 seconds - rethinkdb antmanler/rethinkdb:1.16.0 -``` - -**Done!** - - ---- - -Scale ------ - -You can scale up you cluster using `kubectl scale`, and new pod will join to exsits cluster automatically, for example - - -```shell -$kubectl scale rc rethinkdb-rc-1.16.0 --replicas=3 -scaled -$kubectl get po -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE -rethinkdb-rc-1.16.0-6odi0 10.244.3.3 kubernetes-minion-s59e/104.197.79.42 db=rethinkdb,role=replicas Running About a minute - rethinkdb antmanler/rethinkdb:1.16.0 Running About a minute -rethinkdb-rc-1.16.0-e3mxv kubernetes-minion-d7ub/ db=rethinkdb,role=replicas Pending 6 seconds - rethinkdb antmanler/rethinkdb:1.16.0 -rethinkdb-rc-1.16.0-manu6 kubernetes-minion-cybz/ db=rethinkdb,role=replicas Pending 6 seconds - rethinkdb antmanler/rethinkdb:1.16.0 -``` - -Admin ------ - -You need a separate pod (labeled as role:admin) to access Web Admin UI - -```shell -kubectl create -f admin-pod.yaml -kubectl create -f admin-service.yaml -``` - -find the service - -```shell -$kubectl get se -NAME LABELS SELECTOR IP(S) PORT(S) -rethinkdb-admin db=influxdb db=rethinkdb,role=admin 10.0.131.19 8080/TCP - 104.197.19.120 -rethinkdb-driver db=influxdb db=rethinkdb 10.0.27.114 28015/TCP -``` - -We request for an external load balancer in the [admin-service.yaml](admin-service.yaml) file: - -``` -createExternalLoadBalancer: true -``` - -The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case. - -Note that you may need to create a firewall rule to allow the traffic, assuming you are using GCE: -``` -$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080 -``` - -Now you can open a web browser and access to *http://104.197.19.120:8080* to manage your cluster. - - - -**Why not just using pods in replicas?** - -This is because kube-proxy will act as a load balancer and send your traffic to different server, -since the ui is not stateless when playing with Web Admin UI will cause `Connection not open on server` error. - - -- - - - -**BTW** - - * All services and pods are placed under namespace `rethinkdb`. - - * `gen_pod.sh` is using to generate pod templates for my local cluster, -the generated pods which is using `nodeSelector` to force k8s to schedule containers to my designate nodes, for I need to access persistent data on my host dirs. 
Note that one needs to label the node before 'nodeSelector' can work, see this [tutorial](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/node-selection) - - * see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for detail - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/rethinkdb/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/rethinkdb/README.md?pixel)]() diff --git a/release-0.19.0/examples/rethinkdb/admin-pod.yaml b/release-0.19.0/examples/rethinkdb/admin-pod.yaml deleted file mode 100644 index 87cf82c3ce5..00000000000 --- a/release-0.19.0/examples/rethinkdb/admin-pod.yaml +++ /dev/null @@ -1,25 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - db: rethinkdb - role: admin - name: rethinkdb-admin-1.16.0 - namespace: rethinkdb -spec: - containers: - - image: antmanler/rethinkdb:1.16.0 - name: rethinkdb - ports: - - containerPort: 8080 - name: admin-port - - containerPort: 28015 - name: driver-port - - containerPort: 29015 - name: cluster-port - volumeMounts: - - mountPath: /data/rethinkdb_data - name: rethinkdb-storage - volumes: - - name: rethinkdb-storage - emptyDir: {} diff --git a/release-0.19.0/examples/rethinkdb/admin-service.yaml b/release-0.19.0/examples/rethinkdb/admin-service.yaml deleted file mode 100644 index 6820e74eab1..00000000000 --- a/release-0.19.0/examples/rethinkdb/admin-service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - db: influxdb - name: rethinkdb-admin - namespace: rethinkdb -spec: - ports: - - port: 8080 - targetPort: 8080 - createExternalLoadBalancer: true - selector: - db: rethinkdb - role: admin diff --git a/release-0.19.0/examples/rethinkdb/driver-service.yaml b/release-0.19.0/examples/rethinkdb/driver-service.yaml deleted file mode 100644 index 824afac8790..00000000000 --- a/release-0.19.0/examples/rethinkdb/driver-service.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - labels: - db: influxdb - name: rethinkdb-driver - namespace: rethinkdb -spec: - ports: - - port: 28015 - targetPort: 28015 - selector: - db: rethinkdb diff --git a/release-0.19.0/examples/rethinkdb/gen-pod.sh b/release-0.19.0/examples/rethinkdb/gen-pod.sh deleted file mode 100755 index 11681aaedd2..00000000000 --- a/release-0.19.0/examples/rethinkdb/gen-pod.sh +++ /dev/null @@ -1,73 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -o errexit -set -o nounset -set -o pipefail - -: ${VERSION:=1.16.0} - -readonly NAME=${1-} -if [[ -z "${NAME}" ]]; then - echo -e "\033[1;31mName must be specified\033[0m" - exit 1 -fi - -ADMIN="" -if [[ ${NAME} == "admin" ]]; then - ADMIN="role: admin" -fi - -NODE="" -# One needs to label a node with the same key/value pair, -# i.e., 'kubectl label nodes name=${2}' -if [[ ! 
-z "${2-}" ]]; then - NODE="nodeSelector: { name: ${2} }" -fi - -cat << EOF -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - ${ADMIN} - db: rethinkdb - name: rethinkdb-${NAME}-${VERSION} - namespace: rethinkdb -spec: - containers: - - image: antmanler/rethinkdb:${VERSION} - name: rethinkdb - ports: - - containerPort: 8080 - name: admin-port - protocol: TCP - - containerPort: 28015 - name: driver-port - protocol: TCP - - containerPort: 29015 - name: cluster-port - protocol: TCP - volumeMounts: - - mountPath: /data/rethinkdb_data - name: rethinkdb-storage - ${NODE} - restartPolicy: Always - volumes: - - hostPath: - path: /data/db/rethinkdb - name: rethinkdb-storage -EOF diff --git a/release-0.19.0/examples/rethinkdb/image/Dockerfile b/release-0.19.0/examples/rethinkdb/image/Dockerfile deleted file mode 100644 index e4a14508ac6..00000000000 --- a/release-0.19.0/examples/rethinkdb/image/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM rethinkdb:1.16.0 - -MAINTAINER BinZhao - -RUN apt-get update && \ - apt-get install -yq curl && \ - rm -rf /var/cache/apt/* && rm -rf /var/lib/apt/lists/* && \ - curl -L http://stedolan.github.io/jq/download/linux64/jq > /usr/bin/jq && \ - chmod u+x /usr/bin/jq - -COPY ./run.sh /usr/bin/run.sh -RUN chmod u+x /usr/bin/run.sh - -CMD ["/usr/bin/run.sh"] diff --git a/release-0.19.0/examples/rethinkdb/image/run.sh b/release-0.19.0/examples/rethinkdb/image/run.sh deleted file mode 100644 index 34574924481..00000000000 --- a/release-0.19.0/examples/rethinkdb/image/run.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -# Copyright 2015 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -o pipefail - -IP="" -if [[ -n "${KUBERNETES_RO_SERVICE_HOST}" ]]; then - - : ${NAMESPACE:=rethinkdb} - # try to pick up first different ip from endpoints - MYHOST=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/') - URL="${KUBERNETES_RO_SERVICE_HOST}/api/v1beta3/namespaces/${NAMESPACE}/endpoints/rethinkdb-driver" - IP=$(curl -s ${URL} | jq -s -r --arg h "${MYHOST}" '.[0].subsets | .[].addresses | [ .[].IP ] | map(select(. 
!= $h)) | .[0]') || exit 1 - [[ "${IP}" == null ]] && IP="" -fi - -if [[ -n "${IP}" ]]; then - ENDPOINT="${IP}:29015" - echo "Join to ${ENDPOINT}" - exec rethinkdb --bind all --join ${ENDPOINT} -else - echo "Start single instance" - exec rethinkdb --bind all -fi diff --git a/release-0.19.0/examples/rethinkdb/rc.yaml b/release-0.19.0/examples/rethinkdb/rc.yaml deleted file mode 100644 index 558a7c86ad8..00000000000 --- a/release-0.19.0/examples/rethinkdb/rc.yaml +++ /dev/null @@ -1,34 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - labels: - db: rethinkdb - name: rethinkdb-rc-1.16.0 - namespace: rethinkdb -spec: - replicas: 1 - selector: - db: rethinkdb - role: replicas - template: - metadata: - labels: - db: rethinkdb - role: replicas - spec: - containers: - - image: antmanler/rethinkdb:1.16.0 - name: rethinkdb - ports: - - containerPort: 8080 - name: admin-port - - containerPort: 28015 - name: driver-port - - containerPort: 29015 - name: cluster-port - volumeMounts: - - mountPath: /data/rethinkdb_data - name: rethinkdb-storage - volumes: - - name: rethinkdb-storage - emptyDir: {} diff --git a/release-0.19.0/examples/secrets/README.md b/release-0.19.0/examples/secrets/README.md deleted file mode 100644 index 6ff0189821f..00000000000 --- a/release-0.19.0/examples/secrets/README.md +++ /dev/null @@ -1,52 +0,0 @@ -# Secrets example - -Following this example, you will create a secret and a pod that consumes that secret in a volume. - -## Step Zero: Prerequisites - -This example assumes you have a Kubernetes cluster installed and running, and that you have -installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting -started](../../docs/getting-started-guides) for installation instructions for your platform. - -## Step One: Create the secret - -A secret contains a set of named byte arrays. - -Use the [`examples/secrets/secret.yaml`](secret.yaml) file to create a secret: - -```shell -$ kubectl create -f examples/secrets/secret.yaml -``` - -You can use `kubectl` to see information about the secret: - -```shell -$ kubectl get secrets -NAME TYPE DATA -test-secret Opaque 2 -``` - -## Step Two: Create a pod that consumes a secret - -Pods consume secrets in volumes. Now that you have created a secret, you can create a pod that -consumes it. - -Use the [`examples/secrets/secret-pod.yaml`](secret-pod.yaml) file to create a Pod that consumes the secret. 
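-
-As an aside, the values under `data` in [`secret.yaml`](secret.yaml) are just
-base64-encoded bytes. If you want to double-check what the container will see, you can
-decode them locally first (a quick sketch, assuming GNU coreutils `base64`):
-
-```shell
-$ echo "dmFsdWUtMQ0K" | base64 --decode
-value-1
-$ echo -n "value-1" | base64
-dmFsdWUtMQ==
-```
-
-(The file's value `dmFsdWUtMQ0K` decodes to `value-1` followed by a CRLF, which is why
-re-encoding the bare string gives a slightly different result.)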
- -```shell -$ kubectl create -f examples/secrets/secret-pod.yaml -``` - -This pod runs a binary that displays the content of one of the pieces of secret data in the secret -volume: - -```shell -$ kubectl log secret-test-pod -2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1 -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/secrets/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/secrets/README.md?pixel)]() diff --git a/release-0.19.0/examples/secrets/secret-pod.yaml b/release-0.19.0/examples/secrets/secret-pod.yaml deleted file mode 100644 index be401018990..00000000000 --- a/release-0.19.0/examples/secrets/secret-pod.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: secret-test-pod -spec: - containers: - - name: test-container - image: kubernetes/mounttest:0.1 - command: [ "/mt", "--file_content=/etc/secret-volume/data-1" ] - volumeMounts: - # name must match the volume name below - - name: secret-volume - mountPath: /etc/secret-volume - volumes: - - name: secret-volume - secret: - secretName: test-secret - restartPolicy: Never diff --git a/release-0.19.0/examples/secrets/secret.yaml b/release-0.19.0/examples/secrets/secret.yaml deleted file mode 100644 index 463094a6922..00000000000 --- a/release-0.19.0/examples/secrets/secret.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: v1beta3 -kind: Secret -metadata: - name: test-secret -data: - data-1: dmFsdWUtMQ0K - data-2: dmFsdWUtMg0KDQo= diff --git a/release-0.19.0/examples/simple-nginx.md b/release-0.19.0/examples/simple-nginx.md deleted file mode 100644 index 29f291bc930..00000000000 --- a/release-0.19.0/examples/simple-nginx.md +++ /dev/null @@ -1,50 +0,0 @@ -## Running your first containers in Kubernetes - -Ok, you've run one of the [getting started guides](../docs/getting-started-guides/) and you have -successfully turned up a Kubernetes cluster. Now what? This guide will help you get oriented -to Kubernetes and running your first containers on the cluster. - -### Running a container (simple version) - -From this point onwards, it is assumed that `kubectl` is on your path from one of the getting started guides. - -The [`kubectl run`](/docs/kubectl_run.md) line below will create two [nginx](https://registry.hub.docker.com/_/nginx/) [pods](/docs/pods.md) listening on port 80. It will also create a [replication controller](/docs/replication-controller.md) named `my-nginx` to ensure that there are always two pods running. - -```bash -kubectl run my-nginx --image=nginx --replicas=2 --port=80 -``` - -Once the pods are created, you can list them to see what is up and running: -```bash -kubectl get pods -``` - -You can also see the replication controller that was created: -```bash -kubectl get rc -``` - -To stop the two replicated containers, stop the replication controller: -```bash -kubectl stop rc my-nginx -``` - -### Exposing your pods to the internet. -On some platforms (for example Google Compute Engine) the kubectl command can integrate with your cloud provider to add a [public IP address](/docs/services.md#external-services) for the pods, -to do this run: - -```bash -kubectl expose rc my-nginx --port=80 --type=LoadBalancer -``` - -This should print the service that has been created, and map an external IP address to the service. 
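-
-If you want to look the address up again later, you can list the service; a sketch
-(the exact columns vary between releases, but the external IP is shown alongside the
-cluster IP):
-
-```shell
-$ kubectl get services my-nginx
-```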
- -### Next: Configuration files -Most people will eventually want to use declarative configuration files for creating/modifying their applications. A [simplified introduction](simple-yaml.md) -is given in a different document. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/simple-nginx.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/simple-nginx.md?pixel)]() diff --git a/release-0.19.0/examples/simple-yaml.md b/release-0.19.0/examples/simple-yaml.md deleted file mode 100644 index 5817d50d9c2..00000000000 --- a/release-0.19.0/examples/simple-yaml.md +++ /dev/null @@ -1,95 +0,0 @@ -## Getting started with config files. - -In addition to the imperative style commands described [elsewhere](simple-nginx.md), Kubernetes -supports declarative YAML or JSON configuration files. Often times config files are preferable -to imperative commands, since they can be checked into version control and changes to the files -can be code reviewed, producing a more robust, reliable and archival system. - -### Running a container from a pod configuration file - -```bash -cd kubernetes -kubectl create -f pod.yaml -``` - -Where pod.yaml contains something like: - -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - labels: - app: nginx - name: nginx - namespace: default -spec: - containers: - - image: nginx - imagePullPolicy: IfNotPresent - name: nginx - ports: - - containerPort: 80 - protocol: TCP - restartPolicy: Always -``` - -You can see your cluster's pods: - -```bash -kubectl get pods -``` - -and delete the pod you just created: - -```bash -kubectl delete pods nginx -``` - -### Running a replicated set of containers from a configuration file -To run replicated containers, you need a [Replication Controller](../docs/replication-controller.md). -A replication controller is responsible for ensuring that a specific number of pods exist in the -cluster. - -```bash -cd kubernetes -kubectl create -f replication.yaml -``` - -Where ```replication.yaml``` contains: - -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: nginx - namespace: default -spec: - replicas: 3 - selector: - app: nginx - template: - metadata: - name: nginx - labels: - app: nginx - spec: - containers: - - image: nginx - imagePullPolicy: IfNotPresent - name: nginx - ports: - - containerPort: 80 - protocol: TCP - restartPolicy: Always -``` - -To delete the replication controller (and the pods it created): -```bash -kubectl delete rc nginx -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/simple-yaml.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/simple-yaml.md?pixel)]() diff --git a/release-0.19.0/examples/spark/README.md b/release-0.19.0/examples/spark/README.md deleted file mode 100644 index 7a28bdf4e93..00000000000 --- a/release-0.19.0/examples/spark/README.md +++ /dev/null @@ -1,177 +0,0 @@ -# Spark example - -Following this example, you will create a functional [Apache -Spark](http://spark.apache.org/) cluster using Kubernetes and -[Docker](http://docker.io). - -You will setup a Spark master service and a set of -Spark workers using Spark's [standalone mode](http://spark.apache.org/docs/latest/spark-standalone.html). - -For the impatient expert, jump straight to the [tl;dr](#tldr) -section. 
- -### Sources - -Source is freely available at: -* Docker image - https://github.com/mattf/docker-spark -* Docker Trusted Build - https://registry.hub.docker.com/search?q=mattf/spark - -## Step Zero: Prerequisites - -This example assumes you have a Kubernetes cluster installed and -running, and that you have installed the ```kubectl``` command line -tool somewhere in your path. Please see the [getting -started](../../docs/getting-started-guides) for installation -instructions for your platform. - -## Step One: Start your Master service - -The Master [service](../../docs/services.md) is the master (or head) service for a Spark -cluster. - -Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a [pod](../../docs/pods.md) running -the Master service. - -```shell -$ kubectl create -f examples/spark/spark-master.json -``` - -Then, use the `examples/spark/spark-master-service.json` file to -create a logical service endpoint that Spark workers can use to access -the Master pod. - -```shell -$ kubectl create -f examples/spark/spark-master-service.json -``` - -Ensure that the Master service is running and functional. - -### Check to see if Master is running and accessible - -```shell -$ kubectl get pods,services -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -spark-master 192.168.90.14 spark-master mattf/spark-master 172.18.145.8/172.18.145.8 name=spark-master Running -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.254.0.2 443 -spark-master name=spark-master name=spark-master 10.254.125.166 7077 -``` - -Connect to http://192.168.90.14:8080 to see the status of the master. - -```shell -$ links -dump 192.168.90.14:8080 - [IMG] 1.2.1 Spark Master at spark://spark-master:7077 - - * URL: spark://spark-master:7077 - * Workers: 0 - * Cores: 0 Total, 0 Used - * Memory: 0.0 B Total, 0.0 B Used - * Applications: 0 Running, 0 Completed - * Drivers: 0 Running, 0 Completed - * Status: ALIVE -... -``` - -(Pull requests welcome for an alternative that uses the service IP and -port) - -## Step Two: Start your Spark workers - -The Spark workers do the heavy lifting in a Spark cluster. They -provide execution resources and data cache capabilities for your -program. - -The Spark workers need the Master service to be running. - -Use the [`examples/spark/spark-worker-controller.json`](spark-worker-controller.json) file to create a -[replication controller](../../docs/replication-controller.md) that manages the worker pods. - -```shell -$ kubectl create -f examples/spark/spark-worker-controller.json -``` - -### Check to see if the workers are running - -```shell -$ links -dump 192.168.90.14:8080 - [IMG] 1.2.1 Spark Master at spark://spark-master:7077 - - * URL: spark://spark-master:7077 - * Workers: 3 - * Cores: 12 Total, 0 Used - * Memory: 20.4 GB Total, 0.0 B Used - * Applications: 0 Running, 0 Completed - * Drivers: 0 Running, 0 Completed - * Status: ALIVE - - Workers - -Id Address State Cores Memory - 4 (0 6.8 GB -worker-20150318151745-192.168.75.14-46422 192.168.75.14:46422 ALIVE Used) (0.0 B - Used) - 4 (0 6.8 GB -worker-20150318151746-192.168.35.17-53654 192.168.35.17:53654 ALIVE Used) (0.0 B - Used) - 4 (0 6.8 GB -worker-20150318151746-192.168.90.17-50490 192.168.90.17:50490 ALIVE Used) (0.0 B - Used) -... 
-``` - -(Pull requests welcome for an alternative that uses the service IP and -port) - -## Step Three: Do something with the cluster - -```shell -$ kubectl get pods,services -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -spark-master 192.168.90.14 spark-master mattf/spark-master 172.18.145.8/172.18.145.8 name=spark-master Running -spark-worker-controller-51wgg 192.168.75.14 spark-worker mattf/spark-worker 172.18.145.9/172.18.145.9 name=spark-worker,uses=spark-master Running -spark-worker-controller-5v48c 192.168.90.17 spark-worker mattf/spark-worker 172.18.145.8/172.18.145.8 name=spark-worker,uses=spark-master Running -spark-worker-controller-ehq23 192.168.35.17 spark-worker mattf/spark-worker 172.18.145.12/172.18.145.12 name=spark-worker,uses=spark-master Running -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.254.0.2 443 -spark-master name=spark-master name=spark-master 10.254.125.166 7077 - -$ sudo docker run -it mattf/spark-base sh - -sh-4.2# echo "10.254.125.166 spark-master" >> /etc/hosts - -sh-4.2# export SPARK_LOCAL_HOSTNAME=$(hostname -i) - -sh-4.2# MASTER=spark://spark-master:7077 pyspark -Python 2.7.5 (default, Jun 17 2014, 18:11:42) -[GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux2 -Type "help", "copyright", "credits" or "license" for more information. -Welcome to - ____ __ - / __/__ ___ _____/ /__ - _\ \/ _ \/ _ `/ __/ '_/ - /__ / .__/\_,_/_/ /_/\_\ version 1.2.1 - /_/ - -Using Python version 2.7.5 (default, Jun 17 2014 18:11:42) -SparkContext available as sc. ->>> import socket, resource ->>> sc.parallelize(range(1000)).map(lambda x: (socket.gethostname(), resource.getrlimit(resource.RLIMIT_NOFILE))).distinct().collect() -[('spark-worker-controller-ehq23', (1048576, 1048576)), ('spark-worker-controller-5v48c', (1048576, 1048576)), ('spark-worker-controller-51wgg', (1048576, 1048576))] -``` - -## tl;dr - -```kubectl create -f spark-master.json``` - -```kubectl create -f spark-master-service.json``` - -Make sure the Master Pod is running (use: ```kubectl get pods```). 
- -```kubectl create -f spark-worker-controller.json``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/spark/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/spark/README.md?pixel)]() diff --git a/release-0.19.0/examples/spark/spark-master-service.json b/release-0.19.0/examples/spark/spark-master-service.json deleted file mode 100644 index 28e3e8b3881..00000000000 --- a/release-0.19.0/examples/spark/spark-master-service.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "spark-master", - "labels": { - "name": "spark-master" - } - }, - "spec": { - "ports": [ - { - "port": 7077, - "targetPort": 7077 - } - ], - "selector": { - "name": "spark-master" - } - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/spark/spark-master.json b/release-0.19.0/examples/spark/spark-master.json deleted file mode 100644 index 34373f6c674..00000000000 --- a/release-0.19.0/examples/spark/spark-master.json +++ /dev/null @@ -1,28 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "spark-master", - "labels": { - "name": "spark-master" - } - }, - "spec": { - "containers": [ - { - "name": "spark-master", - "image": "mattf/spark-master", - "ports": [ - { - "containerPort": 7077 - } - ], - "resources": { - "limits": { - "cpu": "100m" - } - } - } - ] - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/spark/spark-worker-controller.json b/release-0.19.0/examples/spark/spark-worker-controller.json deleted file mode 100644 index 44eb4882dcc..00000000000 --- a/release-0.19.0/examples/spark/spark-worker-controller.json +++ /dev/null @@ -1,43 +0,0 @@ -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "spark-worker-controller", - "labels": { - "name": "spark-worker" - } - }, - "spec": { - "replicas": 3, - "selector": { - "name": "spark-worker" - }, - "template": { - "metadata": { - "labels": { - "name": "spark-worker", - "uses": "spark-master" - } - }, - "spec": { - "containers": [ - { - "name": "spark-worker", - "image": "mattf/spark-worker", - "ports": [ - { - "hostPort": 8888, - "containerPort": 8888 - } - ], - "resources": { - "limits": { - "cpu": "100m" - } - } - } - ] - } - } - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/storm/README.md b/release-0.19.0/examples/storm/README.md deleted file mode 100644 index 2972dba7668..00000000000 --- a/release-0.19.0/examples/storm/README.md +++ /dev/null @@ -1,174 +0,0 @@ -# Storm example - -Following this example, you will create a functional [Apache -Storm](http://storm.apache.org/) cluster using Kubernetes and -[Docker](http://docker.io). - -You will setup an [Apache ZooKeeper](http://zookeeper.apache.org/) -service, a Storm master service (a.k.a. Nimbus server), and a set of -Storm workers (a.k.a. supervisors). - -For the impatient expert, jump straight to the [tl;dr](#tldr) -section. - -### Sources - -Source is freely available at: -* Docker image - https://github.com/mattf/docker-storm -* Docker Trusted Build - https://registry.hub.docker.com/search?q=mattf/storm - -## Step Zero: Prerequisites - -This example assumes you have a Kubernetes cluster installed and -running, and that you have installed the ```kubectl``` command line -tool somewhere in your path. Please see the [getting -started](../../docs/getting-started-guides) for installation -instructions for your platform. 
- -## Step One: Start your ZooKeeper service - -ZooKeeper is a distributed coordination [service](../../docs/services.md) that Storm uses as a -bootstrap and for state storage. - -Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a [pod](../../docs/pods.md) running -the ZooKeeper service. - -```shell -$ kubectl create -f examples/storm/zookeeper.json -``` - -Then, use the [`examples/storm/zookeeper-service.json`](zookeeper-service.json) file to create a -logical service endpoint that Storm can use to access the ZooKeeper -pod. - -```shell -$ kubectl create -f examples/storm/zookeeper-service.json -``` - -You should make sure the ZooKeeper pod is Running and accessible -before proceeding. - -### Check to see if ZooKeeper is running - -```shell -$ kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -zookeeper 192.168.86.4 zookeeper mattf/zookeeper 172.18.145.8/172.18.145.8 name=zookeeper Running -``` - -### Check to see if ZooKeeper is accessible - -```shell -$ kubectl get services -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.254.0.2 443 -zookeeper name=zookeeper name=zookeeper 10.254.139.141 2181 - -$ echo ruok | nc 10.254.139.141 2181; echo -imok -``` - -## Step Two: Start your Nimbus service - -The Nimbus service is the master (or head) service for a Storm -cluster. It depends on a functional ZooKeeper service. - -Use the [`examples/storm/storm-nimbus.json`](storm-nimbus.json) file to create a pod running -the Nimbus service. - -```shell -$ kubectl create -f examples/storm/storm-nimbus.json -``` - -Then, use the [`examples/storm/storm-nimbus-service.json`](storm-nimbus-service.json) file to -create a logical service endpoint that Storm workers can use to access -the Nimbus pod. - -```shell -$ kubectl create -f examples/storm/storm-nimbus-service.json -``` - -Ensure that the Nimbus service is running and functional. - -### Check to see if Nimbus is running and accessible - -```shell -$ kubectl get services -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.254.0.2 443 -zookeeper name=zookeeper name=zookeeper 10.254.139.141 2181 -nimbus name=nimbus name=nimbus 10.254.115.208 6627 - -$ sudo docker run -it -w /opt/apache-storm mattf/storm-base sh -c '/configure.sh 10.254.139.141 10.254.115.208; ./bin/storm list' -... -No topologies running. -``` - -## Step Three: Start your Storm workers - -The Storm workers (or supervisors) do the heavy lifting in a Storm -cluster. They run your stream processing topologies and are managed by -the Nimbus service. - -The Storm workers need both the ZooKeeper and Nimbus services to be -running. - -Use the [`examples/storm/storm-worker-controller.json`](storm-worker-controller.json) file to create a -[replication controller](../../docs/replication-controller.md) that manages the worker pods. - -```shell -$ kubectl create -f examples/storm/storm-worker-controller.json -``` - -### Check to see if the workers are running - -One way to check on the workers is to get information from the -ZooKeeper service about how many clients it has. 
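A quicker first check is to list the pods created by the replication controller. This is a sketch that assumes kubectl's `-l` selector flag and the `name=storm-worker` label from `storm-worker-controller.json`:

```shell
# All three storm-worker pods should eventually report a Running status
$ kubectl get pods -l name=storm-worker
```

The ZooKeeper client count is the stronger signal, though, since it shows the supervisors have actually registered: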
- -```shell -$ echo stat | nc 10.254.139.141 2181; echo -Zookeeper version: 3.4.6--1, built on 10/23/2014 14:18 GMT -Clients: - /192.168.48.0:44187[0](queued=0,recved=1,sent=0) - /192.168.45.0:39568[1](queued=0,recved=14072,sent=14072) - /192.168.86.1:57591[1](queued=0,recved=34,sent=34) - /192.168.8.0:50375[1](queued=0,recved=34,sent=34) - /192.168.45.0:39576[1](queued=0,recved=34,sent=34) - -Latency min/avg/max: 0/2/2570 -Received: 23199 -Sent: 23198 -Connections: 5 -Outstanding: 0 -Zxid: 0xa39 -Mode: standalone -Node count: 13 -``` - -There should be one client from the Nimbus service and one per -worker. Ideally, you should get ```stat``` output from ZooKeeper -before and after creating the replication controller. - -(Pull requests welcome for alternative ways to validate the workers) - -## tl;dr - -```kubectl create -f zookeeper.json``` - -```kubectl create -f zookeeper-service.json``` - -Make sure the ZooKeeper Pod is running (use: ```kubectl get pods```). - -```kubectl create -f storm-nimbus.json``` - -```kubectl create -f storm-nimbus-service.json``` - -Make sure the Nimbus Pod is running. - -```kubectl create -f storm-worker-controller.json``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storm/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/storm/README.md?pixel)]() diff --git a/release-0.19.0/examples/storm/storm-nimbus-service.json b/release-0.19.0/examples/storm/storm-nimbus-service.json deleted file mode 100644 index e593c10384a..00000000000 --- a/release-0.19.0/examples/storm/storm-nimbus-service.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "nimbus", - "labels": { - "name": "nimbus" - } - }, - "spec": { - "ports": [ - { - "port": 6627, - "targetPort": 6627 - } - ], - "selector": { - "name": "nimbus" - } - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/storm/storm-nimbus.json b/release-0.19.0/examples/storm/storm-nimbus.json deleted file mode 100644 index dd303dc376b..00000000000 --- a/release-0.19.0/examples/storm/storm-nimbus.json +++ /dev/null @@ -1,28 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "nimbus", - "labels": { - "name": "nimbus" - } - }, - "spec": { - "containers": [ - { - "name": "nimbus", - "image": "mattf/storm-nimbus", - "ports": [ - { - "containerPort": 6627 - } - ], - "resources": { - "limits": { - "cpu": "100m" - } - } - } - ] - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/storm/storm-worker-controller.json b/release-0.19.0/examples/storm/storm-worker-controller.json deleted file mode 100644 index 0ab315eccec..00000000000 --- a/release-0.19.0/examples/storm/storm-worker-controller.json +++ /dev/null @@ -1,55 +0,0 @@ -{ - "kind": "ReplicationController", - "apiVersion": "v1beta3", - "metadata": { - "name": "storm-worker-controller", - "labels": { - "name": "storm-worker" - } - }, - "spec": { - "replicas": 3, - "selector": { - "name": "storm-worker" - }, - "template": { - "metadata": { - "labels": { - "name": "storm-worker", - "uses": "nimbus" - } - }, - "spec": { - "containers": [ - { - "name": "storm-worker", - "image": "mattf/storm-worker", - "ports": [ - { - "hostPort": 6700, - "containerPort": 6700 - }, - { - "hostPort": 6701, - "containerPort": 6701 - }, - { - "hostPort": 6702, - "containerPort": 6702 - }, - { - "hostPort": 6703, - "containerPort": 6703 - } - ], - "resources": { - "limits": { 
- "cpu": "200m" - } - } - } - ] - } - } - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/storm/zookeeper-service.json b/release-0.19.0/examples/storm/zookeeper-service.json deleted file mode 100644 index a4166b24a25..00000000000 --- a/release-0.19.0/examples/storm/zookeeper-service.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1beta3", - "metadata": { - "name": "zookeeper", - "labels": { - "name": "zookeeper" - } - }, - "spec": { - "ports": [ - { - "port": 2181, - "targetPort": 2181 - } - ], - "selector": { - "name": "zookeeper" - } - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/storm/zookeeper.json b/release-0.19.0/examples/storm/zookeeper.json deleted file mode 100644 index c2b6dcb531b..00000000000 --- a/release-0.19.0/examples/storm/zookeeper.json +++ /dev/null @@ -1,28 +0,0 @@ -{ - "kind": "Pod", - "apiVersion": "v1beta3", - "metadata": { - "name": "zookeeper", - "labels": { - "name": "zookeeper" - } - }, - "spec": { - "containers": [ - { - "name": "zookeeper", - "image": "mattf/zookeeper", - "ports": [ - { - "containerPort": 2181 - } - ], - "resources": { - "limits": { - "cpu": "100m" - } - } - } - ] - } -} \ No newline at end of file diff --git a/release-0.19.0/examples/update-demo/README.md b/release-0.19.0/examples/update-demo/README.md deleted file mode 100644 index 65fbd16c9a2..00000000000 --- a/release-0.19.0/examples/update-demo/README.md +++ /dev/null @@ -1,121 +0,0 @@ - -# Live update example -This example demonstrates the usage of Kubernetes to perform a live update on a running group of [pods](../../docs/pods.md). - -### Step Zero: Prerequisites - -This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides): - -```bash -$ cd kubernetes -$ ./cluster/kube-up.sh -``` - -### Step One: Turn up the UX for the demo - -You can use bash job control to run this in the background (note that you must use the default port -- 8001 -- for the following demonstration to work properly). This can sometimes spew to the output so you could also run it in a different terminal. - -``` -$ ./kubectl proxy --www=examples/update-demo/local/ & -+ ./kubectl proxy --www=examples/update-demo/local/ -I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001 -``` - -Now visit the the [demo website](http://localhost:8001/static). You won't see anything much quite yet. - -### Step Two: Run the controller -Now we will turn up two replicas of an image. They all serve on internal port 80. - -```bash -$ ./kubectl create -f examples/update-demo/nautilus-rc.yaml -``` - -After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus. - -### Step Three: Try scaling the controller - -Now we will increase the number of replicas from two to four: - -```bash -$ ./kubectl scale rc update-demo-nautilus --replicas=4 -``` - -If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod. - -### Step Four: Update the docker image -We will now update the docker image to serve a different image by doing a rolling update to a new Docker image. 
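Once you kick off the rolling update shown below, it helps to watch the pod list from a second terminal so you can see the nautilus pods being replaced by kitten pods one at a time. This is only a sketch: it uses the standard `watch` utility and assumes kubectl's `-l` label selector plus the `name=update-demo` label that both replication controller templates share:

```bash
# Refresh every two seconds; pods from the old and new controllers share the name=update-demo label
$ watch -n 2 ./kubectl get pods -l name=update-demo
```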
- -```bash -$ ./kubectl rolling-update update-demo-nautilus --update-period=10s -f examples/update-demo/kitten-rc.yaml -``` -The rolling-update command in kubectl will do 2 things: - -1. Create a new [replication controller](../../docs/replication-controller.md) with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`) -2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinnning up new ones to replace them. - -Watch the [demo website](http://localhost:8001/static/index.html), it will update one pod every 10 seconds until all of the pods have the new image. - -### Step Five: Bring down the pods - -```bash -$ ./kubectl stop rc update-demo-kitten -``` - -This will first 'stop' the replication controller by turning the target number of replicas to 0. It'll then delete that controller. - -### Step Six: Cleanup - -To turn down a Kubernetes cluster: - -```bash -$ ./cluster/kube-down.sh -``` - -Kill the proxy running in the background: -After you are done running this demo make sure to kill it: - -```bash -$ jobs -[1]+ Running ./kubectl proxy --www=local/ & -$ kill %1 -[1]+ Terminated: 15 ./kubectl proxy --www=local/ -``` - -### Updating the Docker images - -If you want to build your own docker images, you can set `$DOCKER_HUB_USER` to your Docker user id and run the included shell script. It can take a few minutes to download/upload stuff. - -```bash -$ export DOCKER_HUB_USER=my-docker-id -$ ./examples/update-demo/build-images.sh -``` - -To use your custom docker image in the above examples, you will need to change the image name in `examples/update-demo/nautilus-rc.yaml` and `examples/update-demo/kitten-rc.yaml`. - -### Image Copyright - -Note that the images included here are public domain. - -* [kitten](http://commons.wikimedia.org/wiki/File:Kitten-stare.jpg) -* [nautilus](http://commons.wikimedia.org/wiki/File:Nautilus_pompilius.jpg) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/update-demo/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/update-demo/README.md?pixel)]() diff --git a/release-0.19.0/examples/update-demo/build-images.sh b/release-0.19.0/examples/update-demo/build-images.sh deleted file mode 100755 index 63c0fe92984..00000000000 --- a/release-0.19.0/examples/update-demo/build-images.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -# Copyright 2014 The Kubernetes Authors All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# This script will build and push the images necessary for the demo. 
- -set -o errexit -set -o nounset -set -o pipefail - -DOCKER_HUB_USER=${DOCKER_HUB_USER:-kubernetes} - -set -x - -docker build -t "${DOCKER_HUB_USER}/update-demo:kitten" images/kitten -docker build -t "${DOCKER_HUB_USER}/update-demo:nautilus" images/nautilus - -docker push "${DOCKER_HUB_USER}/update-demo" diff --git a/release-0.19.0/examples/update-demo/images/kitten/Dockerfile b/release-0.19.0/examples/update-demo/images/kitten/Dockerfile deleted file mode 100644 index b053138b352..00000000000 --- a/release-0.19.0/examples/update-demo/images/kitten/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright 2014 Google Inc. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -FROM kubernetes/test-webserver -COPY html/kitten.jpg kitten.jpg -COPY html/data.json data.json diff --git a/release-0.19.0/examples/update-demo/images/kitten/html/data.json b/release-0.19.0/examples/update-demo/images/kitten/html/data.json deleted file mode 100644 index 0be61a42b30..00000000000 --- a/release-0.19.0/examples/update-demo/images/kitten/html/data.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "image": "kitten.jpg" -} diff --git a/release-0.19.0/examples/update-demo/images/kitten/html/kitten.jpg b/release-0.19.0/examples/update-demo/images/kitten/html/kitten.jpg deleted file mode 100644 index a382bf16ace..00000000000 Binary files a/release-0.19.0/examples/update-demo/images/kitten/html/kitten.jpg and /dev/null differ diff --git a/release-0.19.0/examples/update-demo/images/nautilus/Dockerfile b/release-0.19.0/examples/update-demo/images/nautilus/Dockerfile deleted file mode 100644 index 2904a107916..00000000000 --- a/release-0.19.0/examples/update-demo/images/nautilus/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright 2014 Google Inc. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -FROM kubernetes/test-webserver -COPY html/nautilus.jpg nautilus.jpg -COPY html/data.json data.json diff --git a/release-0.19.0/examples/update-demo/images/nautilus/html/data.json b/release-0.19.0/examples/update-demo/images/nautilus/html/data.json deleted file mode 100644 index 2debee09a91..00000000000 --- a/release-0.19.0/examples/update-demo/images/nautilus/html/data.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "image": "nautilus.jpg" -} diff --git a/release-0.19.0/examples/update-demo/images/nautilus/html/nautilus.jpg b/release-0.19.0/examples/update-demo/images/nautilus/html/nautilus.jpg deleted file mode 100644 index 544d2bd471a..00000000000 Binary files a/release-0.19.0/examples/update-demo/images/nautilus/html/nautilus.jpg and /dev/null differ diff --git a/release-0.19.0/examples/update-demo/kitten-rc.yaml b/release-0.19.0/examples/update-demo/kitten-rc.yaml deleted file mode 100644 index 516d5b88d78..00000000000 --- a/release-0.19.0/examples/update-demo/kitten-rc.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: update-demo-kitten -spec: - selector: - name: update-demo - version: kitten - template: - metadata: - labels: - name: update-demo - version: kitten - spec: - containers: - - image: gcr.io/google_containers/update-demo:kitten - name: update-demo - ports: - - containerPort: 80 - protocol: TCP diff --git a/release-0.19.0/examples/update-demo/local/LICENSE.angular b/release-0.19.0/examples/update-demo/local/LICENSE.angular deleted file mode 100644 index 020f87acd2e..00000000000 --- a/release-0.19.0/examples/update-demo/local/LICENSE.angular +++ /dev/null @@ -1,21 +0,0 @@ -The MIT License - -Copyright (c) 2010-2014 Google, Inc. http://angularjs.org - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. diff --git a/release-0.19.0/examples/update-demo/local/angular.min.js b/release-0.19.0/examples/update-demo/local/angular.min.js deleted file mode 100644 index 43f31f67089..00000000000 --- a/release-0.19.0/examples/update-demo/local/angular.min.js +++ /dev/null @@ -1,210 +0,0 @@ -/* - AngularJS v1.2.16 - (c) 2010-2014 Google, Inc. 
http://angularjs.org - License: MIT -*/ -(function(O,U,s){'use strict';function t(b){return function(){var a=arguments[0],c,a="["+(b?b+":":"")+a+"] http://errors.angularjs.org/1.2.16/"+(b?b+"/":"")+a;for(c=1;c").append(b).html();try{return 3===b[0].nodeType?K(c):c.match(/^(<[^>]+>)/)[1].replace(/^<([\w\-]+)/, -function(a,b){return"<"+K(b)})}catch(d){return K(c)}}function Xb(b){try{return decodeURIComponent(b)}catch(a){}}function Yb(b){var a={},c,d;q((b||"").split("&"),function(b){b&&(c=b.split("="),d=Xb(c[0]),B(d)&&(b=B(c[1])?Xb(c[1]):!0,a[d]?M(a[d])?a[d].push(b):a[d]=[a[d],b]:a[d]=b))});return a}function Zb(b){var a=[];q(b,function(b,d){M(b)?q(b,function(b){a.push(za(d,!0)+(!0===b?"":"="+za(b,!0)))}):a.push(za(d,!0)+(!0===b?"":"="+za(b,!0)))});return a.length?a.join("&"):""}function wb(b){return za(b, -!0).replace(/%26/gi,"&").replace(/%3D/gi,"=").replace(/%2B/gi,"+")}function za(b,a){return encodeURIComponent(b).replace(/%40/gi,"@").replace(/%3A/gi,":").replace(/%24/g,"$").replace(/%2C/gi,",").replace(/%20/g,a?"%20":"+")}function Wc(b,a){function c(a){a&&d.push(a)}var d=[b],e,g,f=["ng:app","ng-app","x-ng-app","data-ng-app"],h=/\sng[:\-]app(:\s*([\w\d_]+);?)?\s/;q(f,function(a){f[a]=!0;c(U.getElementById(a));a=a.replace(":","\\:");b.querySelectorAll&&(q(b.querySelectorAll("."+a),c),q(b.querySelectorAll("."+ -a+"\\:"),c),q(b.querySelectorAll("["+a+"]"),c))});q(d,function(a){if(!e){var b=h.exec(" "+a.className+" ");b?(e=a,g=(b[2]||"").replace(/\s+/g,",")):q(a.attributes,function(b){!e&&f[b.name]&&(e=a,g=b.value)})}});e&&a(e,g?[g]:[])}function $b(b,a){var c=function(){b=y(b);if(b.injector()){var c=b[0]===U?"document":ha(b);throw Pa("btstrpd",c);}a=a||[];a.unshift(["$provide",function(a){a.value("$rootElement",b)}]);a.unshift("ng");c=ac(a);c.invoke(["$rootScope","$rootElement","$compile","$injector","$animate", -function(a,b,c,d,e){a.$apply(function(){b.data("$injector",d);c(b)(a)})}]);return c},d=/^NG_DEFER_BOOTSTRAP!/;if(O&&!d.test(O.name))return c();O.name=O.name.replace(d,"");Ea.resumeBootstrap=function(b){q(b,function(b){a.push(b)});c()}}function fb(b,a){a=a||"_";return b.replace(Xc,function(b,d){return(d?a:"")+b.toLowerCase()})}function xb(b,a,c){if(!b)throw Pa("areq",a||"?",c||"required");return b}function Ra(b,a,c){c&&M(b)&&(b=b[b.length-1]);xb(P(b),a,"not a function, got "+(b&&"object"==typeof b? 
-b.constructor.name||"Object":typeof b));return b}function Aa(b,a){if("hasOwnProperty"===b)throw Pa("badname",a);}function bc(b,a,c){if(!a)return b;a=a.split(".");for(var d,e=b,g=a.length,f=0;f "+e[1]+a.replace(le,"<$1>")+e[2]; -d.removeChild(d.firstChild);for(a=e[0];a--;)d=d.lastChild;a=0;for(e=d.childNodes.length;a=S?(c.preventDefault=null,c.stopPropagation=null,c.isDefaultPrevented=null):(delete c.preventDefault,delete c.stopPropagation,delete c.isDefaultPrevented)};c.elem=b;return c}function Ia(b){var a=typeof b,c;"object"==a&&null!==b?"function"==typeof(c=b.$$hashKey)?c=b.$$hashKey():c===s&&(c=b.$$hashKey=bb()):c=b;return a+":"+c}function Va(b){q(b,this.put,this)}function oc(b){var a,c;"function"==typeof b?(a=b.$inject)||(a=[],b.length&&(c=b.toString().replace(oe, -""),c=c.match(pe),q(c[1].split(qe),function(b){b.replace(re,function(b,c,d){a.push(d)})})),b.$inject=a):M(b)?(c=b.length-1,Ra(b[c],"fn"),a=b.slice(0,c)):Ra(b,"fn",!0);return a}function ac(b){function a(a){return function(b,c){if(X(b))q(b,Rb(a));else return a(b,c)}}function c(a,b){Aa(a,"service");if(P(b)||M(b))b=n.instantiate(b);if(!b.$get)throw Wa("pget",a);return m[a+h]=b}function d(a,b){return c(a,{$get:b})}function e(a){var b=[],c,d,g,h;q(a,function(a){if(!k.get(a)){k.put(a,!0);try{if(w(a))for(c= -Sa(a),b=b.concat(e(c.requires)).concat(c._runBlocks),d=c._invokeQueue,g=0,h=d.length;g 4096 bytes)!"));else{if(l.cookie!==da)for(da=l.cookie,d=da.split("; "),Q={},g=0;gk&&this.remove(p.key),b},get:function(a){if(k").parent()[0])});var g=L(a,b,a,c,d,e);ma(a,"ng-scope");return function(b,c,d){xb(b,"scope");var e=c?Ja.clone.call(a):a;q(d,function(a,b){e.data("$"+b+"Controller",a)});d=0;for(var f=e.length;darguments.length&& -(b=a,a=s);D&&(c=lb);return p(a,b,c)}var I,x,v,A,R,H,lb={},da;I=c===g?d:Ub(d,new Hb(y(g),d.$attr));x=I.$$element;if(Q){var T=/^\s*([@=&])(\??)\s*(\w*)\s*$/;f=y(g);H=e.$new(!0);ia&&ia===Q.$$originalDirective?f.data("$isolateScope",H):f.data("$isolateScopeNoTemplate",H);ma(f,"ng-isolate-scope");q(Q.scope,function(a,c){var d=a.match(T)||[],g=d[3]||c,f="?"==d[2],d=d[1],l,m,n,p;H.$$isolateBindings[c]=d+g;switch(d){case "@":I.$observe(g,function(a){H[c]=a});I.$$observers[g].$$scope=e;I[g]&&(H[c]=b(I[g])(e)); -break;case "=":if(f&&!I[g])break;m=r(I[g]);p=m.literal?xa:function(a,b){return a===b};n=m.assign||function(){l=H[c]=m(e);throw ja("nonassign",I[g],Q.name);};l=H[c]=m(e);H.$watch(function(){var a=m(e);p(a,H[c])||(p(a,l)?n(e,a=H[c]):H[c]=a);return l=a},null,m.literal);break;case "&":m=r(I[g]);H[c]=function(a){return m(e,a)};break;default:throw ja("iscp",Q.name,c,a);}})}da=p&&u;L&&q(L,function(a){var b={$scope:a===Q||a.$$isolateScope?H:e,$element:x,$attrs:I,$transclude:da},c;R=a.controller;"@"==R&&(R= -I[a.name]);c=z(R,b);lb[a.name]=c;D||x.data("$"+a.name+"Controller",c);a.controllerAs&&(b.$scope[a.controllerAs]=c)});f=0;for(v=l.length;fG.priority)break;if(V=G.scope)A=A||G,G.templateUrl||(K("new/isolated scope",Q,G,Z),X(V)&&(Q=G));t=G.name;!G.templateUrl&&G.controller&&(V=G.controller,L=L||{},K("'"+t+"' controller",L[t],G,Z),L[t]=G);if(V=G.transclude)E=!0,G.$$tlb||(K("transclusion",T,G,Z),T=G),"element"==V?(D=!0,v=G.priority, -V=H(c,ra,W),Z=d.$$element=y(U.createComment(" "+t+": "+d[t]+" ")),c=Z[0],mb(g,y(ya.call(V,0)),c),Xa=x(V,e,v,f&&f.name,{nonTlbTranscludeDirective:T})):(V=y(Eb(c)).contents(),Z.empty(),Xa=x(V,e));if(G.template)if(K("template",ia,G,Z),ia=G,V=P(G.template)?G.template(Z,d):G.template,V=Y(V),G.replace){f=G;V=Cb.test(V)?y(V):[];c=V[0];if(1!=V.length||1!==c.nodeType)throw 
ja("tplrt",t,"");mb(g,Z,c);S={$attr:{}};V=da(c,[],S);var $=a.splice(N+1,a.length-(N+1));Q&&pc(V);a=a.concat(V).concat($);B(d,S);S=a.length}else Z.html(V); -if(G.templateUrl)K("template",ia,G,Z),ia=G,G.replace&&(f=G),J=C(a.splice(N,a.length-N),Z,d,g,Xa,l,n,{controllerDirectives:L,newIsolateScopeDirective:Q,templateDirective:ia,nonTlbTranscludeDirective:T}),S=a.length;else if(G.compile)try{O=G.compile(Z,d,Xa),P(O)?u(null,O,ra,W):O&&u(O.pre,O.post,ra,W)}catch(aa){m(aa,ha(Z))}G.terminal&&(J.terminal=!0,v=Math.max(v,G.priority))}J.scope=A&&!0===A.scope;J.transclude=E&&Xa;p.hasElementTranscludeDirective=D;return J}function pc(a){for(var b=0,c=a.length;bp.priority)&&-1!=p.restrict.indexOf(g)&&(n&&(p=Tb(p,{$$start:n,$$end:r})),b.push(p),k=p)}catch(F){m(F)}}return k}function B(a,b){var c=b.$attr,d=a.$attr,e=a.$$element;q(a,function(d,e){"$"!=e.charAt(0)&&(b[e]&&(d+=("style"===e?";":" ")+b[e]),a.$set(e,d,!0,c[e]))});q(b,function(b,g){"class"==g?(ma(e,b),a["class"]=(a["class"]? -a["class"]+" ":"")+b):"style"==g?(e.attr("style",e.attr("style")+";"+b),a.style=(a.style?a.style+";":"")+b):"$"==g.charAt(0)||a.hasOwnProperty(g)||(a[g]=b,d[g]=c[g])})}function C(a,b,c,d,e,g,f,l){var k=[],m,r,z=b[0],u=a.shift(),F=D({},u,{templateUrl:null,transclude:null,replace:null,$$originalDirective:u}),x=P(u.templateUrl)?u.templateUrl(b,c):u.templateUrl;b.empty();n.get(v.getTrustedResourceUrl(x),{cache:p}).success(function(n){var p,J;n=Y(n);if(u.replace){n=Cb.test(n)?y(n):[];p=n[0];if(1!=n.length|| -1!==p.nodeType)throw ja("tplrt",u.name,x);n={$attr:{}};mb(d,b,p);var v=da(p,[],n);X(u.scope)&&pc(v);a=v.concat(a);B(c,n)}else p=z,b.html(n);a.unshift(F);m=ia(a,p,c,e,b,u,g,f,l);q(d,function(a,c){a==p&&(d[c]=b[0])});for(r=L(b[0].childNodes,e);k.length;){n=k.shift();J=k.shift();var A=k.shift(),R=k.shift(),v=b[0];if(J!==z){var H=J.className;l.hasElementTranscludeDirective&&u.replace||(v=Eb(p));mb(A,y(J),v);ma(y(v),H)}J=m.transclude?Q(n,m.transclude):R;m(r,n,v,d,J)}k=null}).error(function(a,b,c,d){throw ja("tpload", -d.url);});return function(a,b,c,d,e){k?(k.push(b),k.push(c),k.push(d),k.push(e)):m(r,b,c,d,e)}}function E(a,b){var c=b.priority-a.priority;return 0!==c?c:a.name!==b.name?a.namea.status? 
-b:n.reject(b)}var d={method:"get",transformRequest:e.transformRequest,transformResponse:e.transformResponse},g=function(a){function b(a){var c;q(a,function(b,d){P(b)&&(c=b(),null!=c?a[d]=c:delete a[d])})}var c=e.headers,d=D({},a.headers),g,f,c=D({},c.common,c[K(a.method)]);b(c);b(d);a:for(g in c){a=K(g);for(f in d)if(K(f)===a)continue a;d[g]=c[g]}return d}(a);D(d,a);d.headers=g;d.method=Fa(d.method);(a=Ib(d.url)?b.cookies()[d.xsrfCookieName||e.xsrfCookieName]:s)&&(g[d.xsrfHeaderName||e.xsrfHeaderName]= -a);var f=[function(a){g=a.headers;var b=uc(a.data,tc(g),a.transformRequest);E(a.data)&&q(g,function(a,b){"content-type"===K(b)&&delete g[b]});E(a.withCredentials)&&!E(e.withCredentials)&&(a.withCredentials=e.withCredentials);return z(a,b,g).then(c,c)},s],h=n.when(d);for(q(v,function(a){(a.request||a.requestError)&&f.unshift(a.request,a.requestError);(a.response||a.responseError)&&f.push(a.response,a.responseError)});f.length;){a=f.shift();var k=f.shift(),h=h.then(a,k)}h.success=function(a){h.then(function(b){a(b.data, -b.status,b.headers,d)});return h};h.error=function(a){h.then(null,function(b){a(b.data,b.status,b.headers,d)});return h};return h}function z(b,c,g){function f(a,b,c,e){v&&(200<=a&&300>a?v.put(s,[a,b,sc(c),e]):v.remove(s));l(b,a,c,e);d.$$phase||d.$apply()}function l(a,c,d,e){c=Math.max(c,0);(200<=c&&300>c?p.resolve:p.reject)({data:a,status:c,headers:tc(d),config:b,statusText:e})}function k(){var a=db(r.pendingRequests,b);-1!==a&&r.pendingRequests.splice(a,1)}var p=n.defer(),z=p.promise,v,q,s=u(b.url, -b.params);r.pendingRequests.push(b);z.then(k,k);(b.cache||e.cache)&&(!1!==b.cache&&"GET"==b.method)&&(v=X(b.cache)?b.cache:X(e.cache)?e.cache:F);if(v)if(q=v.get(s),B(q)){if(q.then)return q.then(k,k),q;M(q)?l(q[1],q[0],ba(q[2]),q[3]):l(q,200,{},"OK")}else v.put(s,z);E(q)&&a(b.method,s,c,f,g,b.timeout,b.withCredentials,b.responseType);return z}function u(a,b){if(!b)return a;var c=[];Sc(b,function(a,b){null===a||E(a)||(M(a)||(a=[a]),q(a,function(a){X(a)&&(a=qa(a));c.push(za(b)+"="+za(a))}))});0=S&&(!b.match(/^(get|post|head|put|delete|options)$/i)||!O.XMLHttpRequest))return new O.ActiveXObject("Microsoft.XMLHTTP");if(O.XMLHttpRequest)return new O.XMLHttpRequest;throw t("$httpBackend")("noxhr");}function Ud(){this.$get=["$browser","$window","$document",function(b,a,c){return ve(b,ue,b.defer,a.angular.callbacks,c[0])}]}function ve(b,a,c,d,e){function g(a,b){var c=e.createElement("script"),d=function(){c.onreadystatechange= -c.onload=c.onerror=null;e.body.removeChild(c);b&&b()};c.type="text/javascript";c.src=a;S&&8>=S?c.onreadystatechange=function(){/loaded|complete/.test(c.readyState)&&d()}:c.onload=c.onerror=function(){d()};e.body.appendChild(c);return d}var f=-1;return function(e,l,k,m,n,p,r,z){function u(){v=f;A&&A();x&&x.abort()}function F(a,d,e,g,f){L&&c.cancel(L);A=x=null;0===d&&(d=e?200:"file"==sa(l).protocol?404:0);a(1223===d?204:d,e,g,f||"");b.$$completeOutstandingRequest(C)}var v;b.$$incOutstandingRequestCount(); -l=l||b.url();if("jsonp"==K(e)){var J="_"+(d.counter++).toString(36);d[J]=function(a){d[J].data=a};var A=g(l.replace("JSON_CALLBACK","angular.callbacks."+J),function(){d[J].data?F(m,200,d[J].data):F(m,v||-2);d[J]=Ea.noop})}else{var x=a(e);x.open(e,l,!0);q(n,function(a,b){B(a)&&x.setRequestHeader(b,a)});x.onreadystatechange=function(){if(x&&4==x.readyState){var a=null,b=null;v!==f&&(a=x.getAllResponseHeaders(),b="response"in x?x.response:x.responseText);F(m,v||x.status,b,a,x.statusText||"")}};r&&(x.withCredentials= 
-!0);if(z)try{x.responseType=z}catch(s){if("json"!==z)throw s;}x.send(k||null)}if(0=h&&(n.resolve(r),m(p.$$intervalId),delete e[p.$$intervalId]);z||b.$apply()},f);e[p.$$intervalId]=n;return p}var e={};d.cancel=function(a){return a&&a.$$intervalId in e?(e[a.$$intervalId].reject("canceled"),clearInterval(a.$$intervalId),delete e[a.$$intervalId], -!0):!1};return d}]}function ad(){this.$get=function(){return{id:"en-us",NUMBER_FORMATS:{DECIMAL_SEP:".",GROUP_SEP:",",PATTERNS:[{minInt:1,minFrac:0,maxFrac:3,posPre:"",posSuf:"",negPre:"-",negSuf:"",gSize:3,lgSize:3},{minInt:1,minFrac:2,maxFrac:2,posPre:"\u00a4",posSuf:"",negPre:"(\u00a4",negSuf:")",gSize:3,lgSize:3}],CURRENCY_SYM:"$"},DATETIME_FORMATS:{MONTH:"January February March April May June July August September October November December".split(" "),SHORTMONTH:"Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split(" "), -DAY:"Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),SHORTDAY:"Sun Mon Tue Wed Thu Fri Sat".split(" "),AMPMS:["AM","PM"],medium:"MMM d, y h:mm:ss a","short":"M/d/yy h:mm a",fullDate:"EEEE, MMMM d, y",longDate:"MMMM d, y",mediumDate:"MMM d, y",shortDate:"M/d/yy",mediumTime:"h:mm:ss a",shortTime:"h:mm a"},pluralCat:function(b){return 1===b?"one":"other"}}}}function wc(b){b=b.split("/");for(var a=b.length;a--;)b[a]=wb(b[a]);return b.join("/")}function xc(b,a,c){b=sa(b,c);a.$$protocol= -b.protocol;a.$$host=b.hostname;a.$$port=Y(b.port)||we[b.protocol]||null}function yc(b,a,c){var d="/"!==b.charAt(0);d&&(b="/"+b);b=sa(b,c);a.$$path=decodeURIComponent(d&&"/"===b.pathname.charAt(0)?b.pathname.substring(1):b.pathname);a.$$search=Yb(b.search);a.$$hash=decodeURIComponent(b.hash);a.$$path&&"/"!=a.$$path.charAt(0)&&(a.$$path="/"+a.$$path)}function oa(b,a){if(0===a.indexOf(b))return a.substr(b.length)}function Ya(b){var a=b.indexOf("#");return-1==a?b:b.substr(0,a)}function Jb(b){return b.substr(0, -Ya(b).lastIndexOf("/")+1)}function zc(b,a){this.$$html5=!0;a=a||"";var c=Jb(b);xc(b,this,b);this.$$parse=function(a){var e=oa(c,a);if(!w(e))throw Kb("ipthprfx",a,c);yc(e,this,b);this.$$path||(this.$$path="/");this.$$compose()};this.$$compose=function(){var a=Zb(this.$$search),b=this.$$hash?"#"+wb(this.$$hash):"";this.$$url=wc(this.$$path)+(a?"?"+a:"")+b;this.$$absUrl=c+this.$$url.substr(1)};this.$$rewrite=function(d){var e;if((e=oa(b,d))!==s)return d=e,(e=oa(a,e))!==s?c+(oa("/",e)||e):b+d;if((e=oa(c, -d))!==s)return c+e;if(c==d+"/")return c}}function Lb(b,a){var c=Jb(b);xc(b,this,b);this.$$parse=function(d){var e=oa(b,d)||oa(c,d),e="#"==e.charAt(0)?oa(a,e):this.$$html5?e:"";if(!w(e))throw Kb("ihshprfx",d,a);yc(e,this,b);d=this.$$path;var g=/^\/?.*?:(\/.*)/;0===e.indexOf(b)&&(e=e.replace(b,""));g.exec(e)||(d=(e=g.exec(d))?e[1]:d);this.$$path=d;this.$$compose()};this.$$compose=function(){var c=Zb(this.$$search),e=this.$$hash?"#"+wb(this.$$hash):"";this.$$url=wc(this.$$path)+(c?"?"+c:"")+e;this.$$absUrl= -b+(this.$$url?a+this.$$url:"")};this.$$rewrite=function(a){if(Ya(b)==Ya(a))return a}}function Ac(b,a){this.$$html5=!0;Lb.apply(this,arguments);var c=Jb(b);this.$$rewrite=function(d){var e;if(b==Ya(d))return d;if(e=oa(c,d))return b+a+e;if(c===d+"/")return c}}function nb(b){return function(){return this[b]}}function Bc(b,a){return function(c){if(E(c))return this[b];this[b]=a(c);this.$$compose();return this}}function Vd(){var b="",a=!1;this.hashPrefix=function(a){return B(a)?(b=a,this):b};this.html5Mode= -function(b){return 
B(b)?(a=b,this):a};this.$get=["$rootScope","$browser","$sniffer","$rootElement",function(c,d,e,g){function f(a){c.$broadcast("$locationChangeSuccess",h.absUrl(),a)}var h,l=d.baseHref(),k=d.url();a?(l=k.substring(0,k.indexOf("/",k.indexOf("//")+2))+(l||"/"),e=e.history?zc:Ac):(l=Ya(k),e=Lb);h=new e(l,"#"+b);h.$$parse(h.$$rewrite(k));g.on("click",function(a){if(!a.ctrlKey&&!a.metaKey&&2!=a.which){for(var b=y(a.target);"a"!==K(b[0].nodeName);)if(b[0]===g[0]||!(b=b.parent())[0])return; -var e=b.prop("href");X(e)&&"[object SVGAnimatedString]"===e.toString()&&(e=sa(e.animVal).href);var f=h.$$rewrite(e);e&&(!b.attr("target")&&f&&!a.isDefaultPrevented())&&(a.preventDefault(),f!=d.url()&&(h.$$parse(f),c.$apply(),O.angular["ff-684208-preventDefault"]=!0))}});h.absUrl()!=k&&d.url(h.absUrl(),!0);d.onUrlChange(function(a){h.absUrl()!=a&&(c.$evalAsync(function(){var b=h.absUrl();h.$$parse(a);c.$broadcast("$locationChangeStart",a,b).defaultPrevented?(h.$$parse(b),d.url(b)):f(b)}),c.$$phase|| -c.$digest())});var m=0;c.$watch(function(){var a=d.url(),b=h.$$replace;m&&a==h.absUrl()||(m++,c.$evalAsync(function(){c.$broadcast("$locationChangeStart",h.absUrl(),a).defaultPrevented?h.$$parse(a):(d.url(h.absUrl(),b),f(a))}));h.$$replace=!1;return m});return h}]}function Wd(){var b=!0,a=this;this.debugEnabled=function(a){return B(a)?(b=a,this):b};this.$get=["$window",function(c){function d(a){a instanceof Error&&(a.stack?a=a.message&&-1===a.stack.indexOf(a.message)?"Error: "+a.message+"\n"+a.stack: -a.stack:a.sourceURL&&(a=a.message+"\n"+a.sourceURL+":"+a.line));return a}function e(a){var b=c.console||{},e=b[a]||b.log||C;a=!1;try{a=!!e.apply}catch(l){}return a?function(){var a=[];q(arguments,function(b){a.push(d(b))});return e.apply(b,a)}:function(a,b){e(a,null==b?"":b)}}return{log:e("log"),info:e("info"),warn:e("warn"),error:e("error"),debug:function(){var c=e("debug");return function(){b&&c.apply(a,arguments)}}()}}]}function fa(b,a){if("constructor"===b)throw Ba("isecfld",a);return b}function Za(b, -a){if(b){if(b.constructor===b)throw Ba("isecfn",a);if(b.document&&b.location&&b.alert&&b.setInterval)throw Ba("isecwindow",a);if(b.children&&(b.nodeName||b.prop&&b.attr&&b.find))throw Ba("isecdom",a);}return b}function ob(b,a,c,d,e){e=e||{};a=a.split(".");for(var g,f=0;1e?Cc(d[0],d[1],d[2],d[3],d[4],c,a):function(b,g){var f=0,h;do h=Cc(d[f++],d[f++],d[f++],d[f++],d[f++],c,a)(b,g),g=s,b=h;while(fa)for(b in l++,e)e.hasOwnProperty(b)&&!d.hasOwnProperty(b)&&(q--,delete e[b])}else e!==d&&(e=d,l++);return l},function(){p?(p=!1,b(d,d,c)):b(d,f,c);if(h)if(X(d))if(ab(d)){f=Array(d.length);for(var a=0;as&&(y=4-s,Q[y]||(Q[y]=[]),H=P(d.exp)?"fn: "+(d.exp.name||d.exp.toString()):d.exp,H+="; newVal: "+qa(g)+"; oldVal: "+qa(f),Q[y].push(H));else if(d===c){x=!1;break a}}catch(w){p.$$phase= -null,e(w)}if(!(h=L.$$childHead||L!==this&&L.$$nextSibling))for(;L!==this&&!(h=L.$$nextSibling);)L=L.$parent}while(L=h);if((x||k.length)&&!s--)throw p.$$phase=null,a("infdig",b,qa(Q));}while(x||k.length);for(p.$$phase=null;m.length;)try{m.shift()()}catch(T){e(T)}},$destroy:function(){if(!this.$$destroyed){var a=this.$parent;this.$broadcast("$destroy");this.$$destroyed=!0;this!==p&&(q(this.$$listenerCount,eb(null,m,this)),a.$$childHead==this&&(a.$$childHead=this.$$nextSibling),a.$$childTail==this&& 
-(a.$$childTail=this.$$prevSibling),this.$$prevSibling&&(this.$$prevSibling.$$nextSibling=this.$$nextSibling),this.$$nextSibling&&(this.$$nextSibling.$$prevSibling=this.$$prevSibling),this.$parent=this.$$nextSibling=this.$$prevSibling=this.$$childHead=this.$$childTail=this.$root=null,this.$$listeners={},this.$$watchers=this.$$asyncQueue=this.$$postDigestQueue=[],this.$destroy=this.$digest=this.$apply=C,this.$on=this.$watch=function(){return C})}},$eval:function(a,b){return g(a)(this,b)},$evalAsync:function(a){p.$$phase|| -p.$$asyncQueue.length||f.defer(function(){p.$$asyncQueue.length&&p.$digest()});this.$$asyncQueue.push({scope:this,expression:a})},$$postDigest:function(a){this.$$postDigestQueue.push(a)},$apply:function(a){try{return l("$apply"),this.$eval(a)}catch(b){e(b)}finally{p.$$phase=null;try{p.$digest()}catch(c){throw e(c),c;}}},$on:function(a,b){var c=this.$$listeners[a];c||(this.$$listeners[a]=c=[]);c.push(b);var d=this;do d.$$listenerCount[a]||(d.$$listenerCount[a]=0),d.$$listenerCount[a]++;while(d=d.$parent); -var e=this;return function(){c[db(c,b)]=null;m(e,1,a)}},$emit:function(a,b){var c=[],d,g=this,f=!1,h={name:a,targetScope:g,stopPropagation:function(){f=!0},preventDefault:function(){h.defaultPrevented=!0},defaultPrevented:!1},l=[h].concat(ya.call(arguments,1)),k,m;do{d=g.$$listeners[a]||c;h.currentScope=g;k=0;for(m=d.length;kc.msieDocumentMode)throw ua("iequirks");var e=ba(ga);e.isEnabled=function(){return b};e.trustAs=d.trustAs;e.getTrusted=d.getTrusted;e.valueOf=d.valueOf;b||(e.trustAs=e.getTrusted=function(a,b){return b},e.valueOf=Da);e.parseAs=function(b,c){var d=a(c);return d.literal&&d.constant?d:function(a,c){return e.getTrusted(b, -d(a,c))}};var g=e.parseAs,f=e.getTrusted,h=e.trustAs;q(ga,function(a,b){var c=K(b);e[Ta("parse_as_"+c)]=function(b){return g(a,b)};e[Ta("get_trusted_"+c)]=function(b){return f(a,b)};e[Ta("trust_as_"+c)]=function(b){return h(a,b)}});return e}]}function be(){this.$get=["$window","$document",function(b,a){var c={},d=Y((/android (\d+)/.exec(K((b.navigator||{}).userAgent))||[])[1]),e=/Boxee/i.test((b.navigator||{}).userAgent),g=a[0]||{},f=g.documentMode,h,l=/^(Moz|webkit|O|ms)(?=[A-Z])/,k=g.body&&g.body.style, -m=!1,n=!1;if(k){for(var p in k)if(m=l.exec(p)){h=m[0];h=h.substr(0,1).toUpperCase()+h.substr(1);break}h||(h="WebkitOpacity"in k&&"webkit");m=!!("transition"in k||h+"Transition"in k);n=!!("animation"in k||h+"Animation"in k);!d||m&&n||(m=w(g.body.style.webkitTransition),n=w(g.body.style.webkitAnimation))}return{history:!(!b.history||!b.history.pushState||4>d||e),hashchange:"onhashchange"in b&&(!f||7b;b=Math.abs(b);var f=b+"",h="",l=[],k=!1;if(-1!==f.indexOf("e")){var m=f.match(/([\d\.]+)e(-?)(\d+)/);m&&"-"==m[2]&&m[3]>e+1?f="0":(h=f,k=!0)}if(k)0b)&&(h=b.toFixed(e)); -else{f=(f.split(Nc)[1]||"").length;E(e)&&(e=Math.min(Math.max(a.minFrac,f),a.maxFrac));f=Math.pow(10,e);b=Math.round(b*f)/f;b=(""+b).split(Nc);f=b[0];b=b[1]||"";var m=0,n=a.lgSize,p=a.gSize;if(f.length>=n+p)for(m=f.length-n,k=0;kb&&(d="-",b=-b);for(b=""+b;b.length-c)e+=c;0===e&&-12==c&&(e=12);return Ob(e,a,d)}}function pb(b,a){return function(c,d){var e=c["get"+b](),g=Fa(a?"SHORT"+b:b);return d[g][e]}}function Jc(b){function a(a){var b;if(b=a.match(c)){a=new Date(0);var g=0,f=0,h=b[8]?a.setUTCFullYear:a.setFullYear,l=b[8]?a.setUTCHours:a.setHours;b[9]&&(g=Y(b[9]+b[10]),f=Y(b[9]+b[11])); -h.call(a,Y(b[1]),Y(b[2])-1,Y(b[3]));g=Y(b[4]||0)-g;f=Y(b[5]||0)-f;h=Y(b[6]||0);b=Math.round(1E3*parseFloat("0."+(b[7]||0)));l.call(a,g,f,h,b)}return a}var 
c=/^(\d{4})-?(\d\d)-?(\d\d)(?:T(\d\d)(?::?(\d\d)(?::?(\d\d)(?:\.(\d+))?)?)?(Z|([+-])(\d\d):?(\d\d))?)?$/;return function(c,e){var g="",f=[],h,l;e=e||"mediumDate";e=b.DATETIME_FORMATS[e]||e;w(c)&&(c=Ge.test(c)?Y(c):a(c));vb(c)&&(c=new Date(c));if(!Na(c))return c;for(;e;)(l=He.exec(e))?(f=f.concat(ya.call(l,1)),e=f.pop()):(f.push(e),e=null);q(f,function(a){h= -Ie[a];g+=h?h(c,b.DATETIME_FORMATS):a.replace(/(^'|'$)/g,"").replace(/''/g,"'")});return g}}function Ce(){return function(b){return qa(b,!0)}}function De(){return function(b,a){if(!M(b)&&!w(b))return b;a=Y(a);if(w(b))return a?0<=a?b.slice(0,a):b.slice(a,b.length):"";var c=[],d,e;a>b.length?a=b.length:a<-b.length&&(a=-b.length);0a||37<=a&&40>=a)||m()});if(e.hasEvent("paste"))a.on("paste cut",m)}a.on("change",l);d.$render=function(){a.val(d.$isEmpty(d.$viewValue)? -"":d.$viewValue)};var n=c.ngPattern;n&&((e=n.match(/^\/(.*)\/([gim]*)$/))?(n=RegExp(e[1],e[2]),e=function(a){return pa(d,"pattern",d.$isEmpty(a)||n.test(a),a)}):e=function(c){var e=b.$eval(n);if(!e||!e.test)throw t("ngPattern")("noregexp",n,e,ha(a));return pa(d,"pattern",d.$isEmpty(c)||e.test(c),c)},d.$formatters.push(e),d.$parsers.push(e));if(c.ngMinlength){var p=Y(c.ngMinlength);e=function(a){return pa(d,"minlength",d.$isEmpty(a)||a.length>=p,a)};d.$parsers.push(e);d.$formatters.push(e)}if(c.ngMaxlength){var r= -Y(c.ngMaxlength);e=function(a){return pa(d,"maxlength",d.$isEmpty(a)||a.length<=r,a)};d.$parsers.push(e);d.$formatters.push(e)}}function Pb(b,a){b="ngClass"+b;return["$animate",function(c){function d(a,b){var c=[],d=0;a:for(;dS?function(b){b=b.nodeName?b:b[0];return b.scopeName&&"HTML"!=b.scopeName?Fa(b.scopeName+":"+b.nodeName):b.nodeName}:function(b){return b.nodeName?b.nodeName:b[0].nodeName};var Xc=/[A-Z]/g,$c={full:"1.2.16",major:1,minor:2,dot:16,codeName:"badger-enumeration"},Ua=N.cache={},gb=N.expando="ng-"+(new Date).getTime(), -me=1,Pc=O.document.addEventListener?function(b,a,c){b.addEventListener(a,c,!1)}:function(b,a,c){b.attachEvent("on"+a,c)},Fb=O.document.removeEventListener?function(b,a,c){b.removeEventListener(a,c,!1)}:function(b,a,c){b.detachEvent("on"+a,c)};N._data=function(b){return this.cache[b[this.expando]]||{}};var he=/([\:\-\_]+(.))/g,ie=/^moz([A-Z])/,Bb=t("jqLite"),je=/^<(\w+)\s*\/?>(?:<\/\1>|)$/,Cb=/<|&#?\w+;/,ke=/<([\w:]+)/,le=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/gi,ea= -{option:[1,'"],thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};ea.optgroup=ea.option;ea.tbody=ea.tfoot=ea.colgroup=ea.caption=ea.thead;ea.th=ea.td;var Ja=N.prototype={ready:function(b){function a(){c||(c=!0,b())}var c=!1;"complete"===U.readyState?setTimeout(a):(this.on("DOMContentLoaded",a),N(O).on("load",a))},toString:function(){var b= -[];q(this,function(a){b.push(""+a)});return"["+b.join(", ")+"]"},eq:function(b){return 0<=b?y(this[b]):y(this[this.length+b])},length:0,push:Ke,sort:[].sort,splice:[].splice},kb={};q("multiple selected checked disabled readOnly required open".split(" "),function(b){kb[K(b)]=b});var nc={};q("input select option textarea button form details".split(" "),function(b){nc[Fa(b)]=!0});q({data:jc,inheritedData:jb,scope:function(b){return y(b).data("$scope")||jb(b.parentNode||b,["$isolateScope","$scope"])}, -isolateScope:function(b){return y(b).data("$isolateScope")||y(b).data("$isolateScopeNoTemplate")},controller:kc,injector:function(b){return jb(b,"$injector")},removeAttr:function(b,a){b.removeAttribute(a)},hasClass:Gb,css:function(b,a,c){a=Ta(a);if(B(c))b.style[a]=c;else{var d;8>=S&&(d=b.currentStyle&&b.currentStyle[a],""===d&&(d="auto"));d=d||b.style[a];8>=S&&(d=""===d?s:d);return d}},attr:function(b,a,c){var d=K(a);if(kb[d])if(B(c))c?(b[a]=!0,b.setAttribute(a,d)):(b[a]=!1,b.removeAttribute(d)); -else return b[a]||(b.attributes.getNamedItem(a)||C).specified?d:s;else if(B(c))b.setAttribute(a,c);else if(b.getAttribute)return b=b.getAttribute(a,2),null===b?s:b},prop:function(b,a,c){if(B(c))b[a]=c;else return b[a]},text:function(){function b(b,d){var e=a[b.nodeType];if(E(d))return e?b[e]:"";b[e]=d}var a=[];9>S?(a[1]="innerText",a[3]="nodeValue"):a[1]=a[3]="textContent";b.$dv="";return b}(),val:function(b,a){if(E(a)){if("SELECT"===Ka(b)&&b.multiple){var c=[];q(b.options,function(a){a.selected&& -c.push(a.value||a.text)});return 0===c.length?null:c}return b.value}b.value=a},html:function(b,a){if(E(a))return b.innerHTML;for(var c=0,d=b.childNodes;c":function(a,c,d,e){return d(a,c)>e(a,c)},"<=":function(a,c,d,e){return d(a,c)<=e(a,c)},">=":function(a,c,d,e){return d(a,c)>=e(a,c)},"&&":function(a,c,d,e){return d(a,c)&&e(a,c)},"||":function(a,c,d,e){return d(a,c)||e(a,c)},"&":function(a,c,d,e){return d(a,c)&e(a,c)},"|":function(a,c,d,e){return e(a,c)(a,c,d(a,c))},"!":function(a,c,d){return!d(a,c)}},Ne={n:"\n",f:"\f",r:"\r",t:"\t",v:"\v","'":"'",'"':'"'}, -Nb=function(a){this.options=a};Nb.prototype={constructor:Nb,lex:function(a){this.text=a;this.index=0;this.ch=s;this.lastCh=":";this.tokens=[];var c;for(a=[];this.index=a},isWhitespace:function(a){return" "===a||"\r"===a||"\t"===a||"\n"===a||"\v"===a||"\u00a0"=== -a},isIdent:function(a){return"a"<=a&&"z">=a||"A"<=a&&"Z">=a||"_"===a||"$"===a},isExpOperator:function(a){return"-"===a||"+"===a||this.isNumber(a)},throwError:function(a,c,d){d=d||this.index;c=B(c)?"s "+c+"-"+this.index+" ["+this.text.substring(c,d)+"]":" "+d;throw Ba("lexerr",a,c,this.text);},readNumber:function(){for(var a="",c=this.index;this.index","<=",">="))a=this.binaryFn(a,c.fn,this.relational());return a},additive:function(){for(var a=this.multiplicative(),c;c=this.expect("+","-");)a=this.binaryFn(a,c.fn,this.multiplicative());return a},multiplicative:function(){for(var a=this.unary(),c;c=this.expect("*","/","%");)a=this.binaryFn(a,c.fn,this.unary());return a},unary:function(){var a;return this.expect("+")?this.primary():(a=this.expect("-"))?this.binaryFn($a.ZERO,a.fn, -this.unary()):(a=this.expect("!"))?this.unaryFn(a.fn,this.unary()):this.primary()},fieldAccess:function(a){var 
c=this,d=this.expect().text,e=Dc(d,this.options,this.text);return D(function(c,d,h){return e(h||a(c,d))},{assign:function(e,f,h){return ob(a(e,h),d,f,c.text,c.options)}})},objectIndex:function(a){var c=this,d=this.expression();this.consume("]");return D(function(e,g){var f=a(e,g),h=d(e,g),l;if(!f)return s;(f=Za(f[h],c.text))&&(f.then&&c.options.unwrapPromises)&&(l=f,"$$v"in f||(l.$$v=s,l.then(function(a){l.$$v= -a})),f=f.$$v);return f},{assign:function(e,g,f){var h=d(e,f);return Za(a(e,f),c.text)[h]=g}})},functionCall:function(a,c){var d=[];if(")"!==this.peekToken().text){do d.push(this.expression());while(this.expect(","))}this.consume(")");var e=this;return function(g,f){for(var h=[],l=c?c(g,f):g,k=0;ka.getHours()?c.AMPMS[0]:c.AMPMS[1]},Z:function(a){a=-1*a.getTimezoneOffset();return a=(0<=a?"+":"")+(Ob(Math[0=S&&(c.href||c.name||c.$set("href",""),a.append(U.createComment("IE fix")));if(!c.href&&!c.xlinkHref&&!c.name)return function(a,c){var g="[object SVGAnimatedString]"===wa.call(c.prop("href"))?"xlink:href":"href";c.on("click",function(a){c.attr(g)||a.preventDefault()})}}}),zb={};q(kb,function(a,c){if("multiple"!=a){var d=na("ng-"+c);zb[d]=function(){return{priority:100,link:function(a,g,f){a.$watch(f[d],function(a){f.$set(c,!!a)})}}}}});q(["src", -"srcset","href"],function(a){var c=na("ng-"+a);zb[c]=function(){return{priority:99,link:function(d,e,g){var f=a,h=a;"href"===a&&"[object SVGAnimatedString]"===wa.call(e.prop("href"))&&(h="xlinkHref",g.$attr[h]="xlink:href",f=null);g.$observe(c,function(a){a&&(g.$set(h,a),S&&f&&e.prop(f,g[h]))})}}}});var sb={$addControl:C,$removeControl:C,$setValidity:C,$setDirty:C,$setPristine:C};Oc.$inject=["$element","$attrs","$scope","$animate"];var Qc=function(a){return["$timeout",function(c){return{name:"form", -restrict:a?"EAC":"E",controller:Oc,compile:function(){return{pre:function(a,e,g,f){if(!g.action){var h=function(a){a.preventDefault?a.preventDefault():a.returnValue=!1};Pc(e[0],"submit",h);e.on("$destroy",function(){c(function(){Fb(e[0],"submit",h)},0,!1)})}var l=e.parent().controller("form"),k=g.name||g.ngForm;k&&ob(a,k,f,k);if(l)e.on("$destroy",function(){l.$removeControl(f);k&&ob(a,k,s,k);D(f,sb)})}}}}}]},dd=Qc(),qd=Qc(!0),Oe=/^(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?$/, -Pe=/^[a-z0-9!#$%&'*+/=?^_`{|}~.-]+@[a-z0-9-]+(\.[a-z0-9-]+)*$/i,Qe=/^\s*(\-|\+)?(\d+|(\d*(\.\d*)))\s*$/,Rc={text:ub,number:function(a,c,d,e,g,f){ub(a,c,d,e,g,f);e.$parsers.push(function(a){var c=e.$isEmpty(a);if(c||Qe.test(a))return e.$setValidity("number",!0),""===a?null:c?a:parseFloat(a);e.$setValidity("number",!1);return s});Je(e,"number",c);e.$formatters.push(function(a){return e.$isEmpty(a)?"":""+a});d.min&&(a=function(a){var c=parseFloat(d.min);return pa(e,"min",e.$isEmpty(a)||a>=c,a)},e.$parsers.push(a), -e.$formatters.push(a));d.max&&(a=function(a){var c=parseFloat(d.max);return pa(e,"max",e.$isEmpty(a)||a<=c,a)},e.$parsers.push(a),e.$formatters.push(a));e.$formatters.push(function(a){return pa(e,"number",e.$isEmpty(a)||vb(a),a)})},url:function(a,c,d,e,g,f){ub(a,c,d,e,g,f);a=function(a){return pa(e,"url",e.$isEmpty(a)||Oe.test(a),a)};e.$formatters.push(a);e.$parsers.push(a)},email:function(a,c,d,e,g,f){ub(a,c,d,e,g,f);a=function(a){return pa(e,"email",e.$isEmpty(a)||Pe.test(a),a)};e.$formatters.push(a); 
-e.$parsers.push(a)},radio:function(a,c,d,e){E(d.name)&&c.attr("name",bb());c.on("click",function(){c[0].checked&&a.$apply(function(){e.$setViewValue(d.value)})});e.$render=function(){c[0].checked=d.value==e.$viewValue};d.$observe("value",e.$render)},checkbox:function(a,c,d,e){var g=d.ngTrueValue,f=d.ngFalseValue;w(g)||(g=!0);w(f)||(f=!1);c.on("click",function(){a.$apply(function(){e.$setViewValue(c[0].checked)})});e.$render=function(){c[0].checked=e.$viewValue};e.$isEmpty=function(a){return a!==g}; -e.$formatters.push(function(a){return a===g});e.$parsers.push(function(a){return a?g:f})},hidden:C,button:C,submit:C,reset:C,file:C},dc=["$browser","$sniffer",function(a,c){return{restrict:"E",require:"?ngModel",link:function(d,e,g,f){f&&(Rc[K(g.type)]||Rc.text)(d,e,g,f,c,a)}}}],rb="ng-valid",qb="ng-invalid",La="ng-pristine",tb="ng-dirty",Re=["$scope","$exceptionHandler","$attrs","$element","$parse","$animate",function(a,c,d,e,g,f){function h(a,c){c=c?"-"+fb(c,"-"):"";f.removeClass(e,(a?qb:rb)+c); -f.addClass(e,(a?rb:qb)+c)}this.$modelValue=this.$viewValue=Number.NaN;this.$parsers=[];this.$formatters=[];this.$viewChangeListeners=[];this.$pristine=!0;this.$dirty=!1;this.$valid=!0;this.$invalid=!1;this.$name=d.name;var l=g(d.ngModel),k=l.assign;if(!k)throw t("ngModel")("nonassign",d.ngModel,ha(e));this.$render=C;this.$isEmpty=function(a){return E(a)||""===a||null===a||a!==a};var m=e.inheritedData("$formController")||sb,n=0,p=this.$error={};e.addClass(La);h(!0);this.$setValidity=function(a,c){p[a]!== -!c&&(c?(p[a]&&n--,n||(h(!0),this.$valid=!0,this.$invalid=!1)):(h(!1),this.$invalid=!0,this.$valid=!1,n++),p[a]=!c,h(c,a),m.$setValidity(a,c,this))};this.$setPristine=function(){this.$dirty=!1;this.$pristine=!0;f.removeClass(e,tb);f.addClass(e,La)};this.$setViewValue=function(d){this.$viewValue=d;this.$pristine&&(this.$dirty=!0,this.$pristine=!1,f.removeClass(e,La),f.addClass(e,tb),m.$setDirty());q(this.$parsers,function(a){d=a(d)});this.$modelValue!==d&&(this.$modelValue=d,k(a,d),q(this.$viewChangeListeners, -function(a){try{a()}catch(d){c(d)}}))};var r=this;a.$watch(function(){var c=l(a);if(r.$modelValue!==c){var d=r.$formatters,e=d.length;for(r.$modelValue=c;e--;)c=d[e](c);r.$viewValue!==c&&(r.$viewValue=c,r.$render())}return c})}],Fd=function(){return{require:["ngModel","^?form"],controller:Re,link:function(a,c,d,e){var g=e[0],f=e[1]||sb;f.$addControl(g);a.$on("$destroy",function(){f.$removeControl(g)})}}},Hd=aa({require:"ngModel",link:function(a,c,d,e){e.$viewChangeListeners.push(function(){a.$eval(d.ngChange)})}}), -ec=function(){return{require:"?ngModel",link:function(a,c,d,e){if(e){d.required=!0;var g=function(a){if(d.required&&e.$isEmpty(a))e.$setValidity("required",!1);else return e.$setValidity("required",!0),a};e.$formatters.push(g);e.$parsers.unshift(g);d.$observe("required",function(){g(e.$viewValue)})}}}},Gd=function(){return{require:"ngModel",link:function(a,c,d,e){var g=(a=/\/(.*)\//.exec(d.ngList))&&RegExp(a[1])||d.ngList||",";e.$parsers.push(function(a){if(!E(a)){var c=[];a&&q(a.split(g),function(a){a&& -c.push(ca(a))});return c}});e.$formatters.push(function(a){return M(a)?a.join(", "):s});e.$isEmpty=function(a){return!a||!a.length}}}},Se=/^(true|false|\d+)$/,Id=function(){return{priority:100,compile:function(a,c){return 
Se.test(c.ngValue)?function(a,c,g){g.$set("value",a.$eval(g.ngValue))}:function(a,c,g){a.$watch(g.ngValue,function(a){g.$set("value",a)})}}}},id=va(function(a,c,d){c.addClass("ng-binding").data("$binding",d.ngBind);a.$watch(d.ngBind,function(a){c.text(a==s?"":a)})}),kd=["$interpolate", -function(a){return function(c,d,e){c=a(d.attr(e.$attr.ngBindTemplate));d.addClass("ng-binding").data("$binding",c);e.$observe("ngBindTemplate",function(a){d.text(a)})}}],jd=["$sce","$parse",function(a,c){return function(d,e,g){e.addClass("ng-binding").data("$binding",g.ngBindHtml);var f=c(g.ngBindHtml);d.$watch(function(){return(f(d)||"").toString()},function(c){e.html(a.getTrustedHtml(f(d))||"")})}}],ld=Pb("",!0),nd=Pb("Odd",0),md=Pb("Even",1),od=va({compile:function(a,c){c.$set("ngCloak",s);a.removeClass("ng-cloak")}}), -pd=[function(){return{scope:!0,controller:"@",priority:500}}],fc={};q("click dblclick mousedown mouseup mouseover mouseout mousemove mouseenter mouseleave keydown keyup keypress submit focus blur copy cut paste".split(" "),function(a){var c=na("ng-"+a);fc[c]=["$parse",function(d){return{compile:function(e,g){var f=d(g[c]);return function(c,d,e){d.on(K(a),function(a){c.$apply(function(){f(c,{$event:a})})})}}}}]});var sd=["$animate",function(a){return{transclude:"element",priority:600,terminal:!0,restrict:"A", -$$tlb:!0,link:function(c,d,e,g,f){var h,l,k;c.$watch(e.ngIf,function(g){Qa(g)?l||(l=c.$new(),f(l,function(c){c[c.length++]=U.createComment(" end ngIf: "+e.ngIf+" ");h={clone:c};a.enter(c,d.parent(),d)})):(k&&(k.remove(),k=null),l&&(l.$destroy(),l=null),h&&(k=yb(h.clone),a.leave(k,function(){k=null}),h=null))})}}}],td=["$http","$templateCache","$anchorScroll","$animate","$sce",function(a,c,d,e,g){return{restrict:"ECA",priority:400,terminal:!0,transclude:"element",controller:Ea.noop,compile:function(f, -h){var l=h.ngInclude||h.src,k=h.onload||"",m=h.autoscroll;return function(f,h,q,s,u){var F=0,v,y,A,x=function(){y&&(y.remove(),y=null);v&&(v.$destroy(),v=null);A&&(e.leave(A,function(){y=null}),y=A,A=null)};f.$watch(g.parseAsResourceUrl(l),function(g){var l=function(){!B(m)||m&&!f.$eval(m)||d()},q=++F;g?(a.get(g,{cache:c}).success(function(a){if(q===F){var c=f.$new();s.template=a;a=u(c,function(a){x();e.enter(a,null,h,l)});v=c;A=a;v.$emit("$includeContentLoaded");f.$eval(k)}}).error(function(){q=== -F&&x()}),f.$emit("$includeContentRequested")):(x(),s.template=null)})}}}}],Jd=["$compile",function(a){return{restrict:"ECA",priority:-400,require:"ngInclude",link:function(c,d,e,g){d.html(g.template);a(d.contents())(c)}}}],ud=va({priority:450,compile:function(){return{pre:function(a,c,d){a.$eval(d.ngInit)}}}}),vd=va({terminal:!0,priority:1E3}),wd=["$locale","$interpolate",function(a,c){var d=/{}/g;return{restrict:"EA",link:function(e,g,f){var h=f.count,l=f.$attr.when&&g.attr(f.$attr.when),k=f.offset|| -0,m=e.$eval(l)||{},n={},p=c.startSymbol(),r=c.endSymbol(),s=/^when(Minus)?(.+)$/;q(f,function(a,c){s.test(c)&&(m[K(c.replace("when","").replace("Minus","-"))]=g.attr(f.$attr[c]))});q(m,function(a,e){n[e]=c(a.replace(d,p+h+"-"+k+r))});e.$watch(function(){var c=parseFloat(e.$eval(h));if(isNaN(c))return"";c in m||(c=a.pluralCat(c-k));return n[c](e,g,!0)},function(a){g.text(a)})}}}],xd=["$parse","$animate",function(a,c){var d=t("ngRepeat");return{transclude:"element",priority:1E3,terminal:!0,$$tlb:!0, -link:function(e,g,f,h,l){var k=f.ngRepeat,m=k.match(/^\s*([\s\S]+?)\s+in\s+([\s\S]+?)(?:\s+track\s+by\s+([\s\S]+?))?\s*$/),n,p,r,s,u,F,v={$id:Ia};if(!m)throw 
d("iexp",k);f=m[1];h=m[2];(m=m[3])?(n=a(m),p=function(a,c,d){F&&(v[F]=a);v[u]=c;v.$index=d;return n(e,v)}):(r=function(a,c){return Ia(c)},s=function(a){return a});m=f.match(/^(?:([\$\w]+)|\(([\$\w]+)\s*,\s*([\$\w]+)\))$/);if(!m)throw d("iidexp",f);u=m[3]||m[1];F=m[2];var B={};e.$watchCollection(h,function(a){var f,h,m=g[0],n,v={},H,R,w,C,T,t, -E=[];if(ab(a))T=a,n=p||r;else{n=p||s;T=[];for(w in a)a.hasOwnProperty(w)&&"$"!=w.charAt(0)&&T.push(w);T.sort()}H=T.length;h=E.length=T.length;for(f=0;fA;)z.pop().element.remove()}for(;x.length>I;)x.pop()[0].element.remove()}var k;if(!(k=t.match(d)))throw Te("iexp",t,ha(f));var l=c(k[2]||k[1]),m=k[4]||k[6],n=k[5],p=c(k[3]||""),q= -c(k[2]?k[1]:m),y=c(k[7]),w=k[8]?c(k[8]):null,x=[[{element:f,label:""}]];u&&(a(u)(e),u.removeClass("ng-scope"),u.remove());f.empty();f.on("change",function(){e.$apply(function(){var a,c=y(e)||[],d={},h,k,l,p,t,v,u;if(r)for(k=[],p=0,v=x.length;p@charset "UTF-8";[ng\\:cloak],[ng-cloak],[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,.ng-hide{display:none !important;}ng\\:form{display:block;}.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}'); -//# sourceMappingURL=angular.min.js.map diff --git a/release-0.19.0/examples/update-demo/local/angular.min.js.map b/release-0.19.0/examples/update-demo/local/angular.min.js.map deleted file mode 100644 index 0dddf2aab5d..00000000000 --- a/release-0.19.0/examples/update-demo/local/angular.min.js.map +++ /dev/null @@ -1,8 +0,0 @@ -{ -"version":3, -"file":"angular.min.js", -"lineCount":209, -"mappings":"A;;;;;aAKC,SAAQ,CAACA,CAAD,CAASC,CAAT,CAAmBC,CAAnB,CAA8B,CA8BvCC,QAAAA,EAAAA,CAAAA,CAAAA,CAAAA,CAAAA,MAAAA,SAAAA,EAAAA,CAAAA,IAAAA,EAAAA,SAAAA,CAAAA,CAAAA,CAAAA,CAAAA,CAAAA,CAAAA,EAAAA,GAAAA,EAAAA,CAAAA,CAAAA,CAAAA,CAAAA,GAAAA,CAAAA,EAAAA,EAAAA,CAAAA,CAAAA,uCAAAA,EAAAA,CAAAA,CAAAA,CAAAA,CAAAA,GAAAA,CAAAA,EAAAA,EAAAA,CAAAA,KAAAA,CAAAA,CAAAA,CAAAA,CAAAA,CAAAA,CAAAA,SAAAA,OAAAA,CAAAA,CAAAA,EAAAA,CAAAA,CAAAA,CAAAA,CAAAA,EAAAA,CAAAA,EAAAA,CAAAA,CAAAA,GAAAA,CAAAA,GAAAA,EAAAA,GAAAA,EAAAA,CAAAA,CAAAA,CAAAA,EAAAA,GAAAA,CAAAA,kBAAAA,CAAAA,UAAAA,EAAAA,MAAAA,UAAAA,CAAAA,CAAAA,CAAAA,CAAAA,SAAAA,CAAAA,CAAAA,CAAAA,SAAAA,EAAAA,QAAAA,CAAAA,aAAAA,CAAAA,EAAAA,CAAAA,CAAAA,WAAAA,EAAAA,MAAAA,UAAAA,CAAAA,CAAAA,CAAAA,CAAAA,WAAAA,CAAAA,QAAAA,EAAAA,MAAAA,UAAAA,CAAAA,CAAAA,CAAAA,CAAAA,IAAAA,UAAAA,CAAAA,SAAAA,CAAAA,CAAAA,CAAAA,CAAAA,CAAAA,SAAAA,CAAAA,CAAAA,CAAAA,CAAAA,OAAAA,MAAAA,CAAAA,CAAAA,CAAAA,CAAAA,CAuOAC,QAASA,GAAW,CAACC,CAAD,CAAM,CACxB,GAAW,IAAX,EAAIA,CAAJ,EAAmBC,EAAA,CAASD,CAAT,CAAnB,CACE,MAAO,CAAA,CAGT;IAAIE,EAASF,CAAAE,OAEb,OAAqB,EAArB,GAAIF,CAAAG,SAAJ,EAA0BD,CAA1B,CACS,CAAA,CADT,CAIOE,CAAA,CAASJ,CAAT,CAJP,EAIwBK,CAAA,CAAQL,CAAR,CAJxB,EAImD,CAJnD,GAIwCE,CAJxC,EAKyB,QALzB,GAKO,MAAOA,EALd,EAK8C,CAL9C,CAKqCA,CALrC,EAKoDA,CALpD,CAK6D,CAL7D,GAKmEF,EAZ3C,CA4C1BM,QAASA,EAAO,CAACN,CAAD,CAAMO,CAAN,CAAgBC,CAAhB,CAAyB,CACvC,IAAIC,CACJ,IAAIT,CAAJ,CACE,GAAIU,CAAA,CAAWV,CAAX,CAAJ,CACE,IAAKS,CAAL,GAAYT,EAAZ,CAGa,WAAX,EAAIS,CAAJ,GAAiC,QAAjC,EAA0BA,CAA1B,EAAoD,MAApD,EAA6CA,CAA7C,EAAgET,CAAAW,eAAhE,EAAsF,CAAAX,CAAAW,eAAA,CAAmBF,CAAnB,CAAtF,GACEF,CAAAK,KAAA,CAAcJ,CAAd,CAAuBR,CAAA,CAAIS,CAAJ,CAAvB,CAAiCA,CAAjC,CALN,KAQO,IAAIT,CAAAM,QAAJ,EAAmBN,CAAAM,QAAnB,GAAmCA,CAAnC,CACLN,CAAAM,QAAA,CAAYC,CAAZ,CAAsBC,CAAtB,CADK,KAEA,IAAIT,EAAA,CAAYC,CAAZ,CAAJ,CACL,IAAKS,CAAL,CAAW,CAAX,CAAcA,CAAd,CAAoBT,CAAAE,OAApB,CAAgCO,CAAA,EAAhC,CACEF,CAAAK,KAAA,CAAcJ,CAAd,CAAuBR,CAAA,CAAIS,CAAJ,CAAvB,CAAiCA,CAAjC,CAFG,KAIL,KAAKA,CAAL,GAAYT,EAAZ,CACMA,CAAAW,eAAA,CAAmBF,CAAnB,CAAJ,EACEF,CAAAK,KAAA,CAAcJ,CAAd,CAA
uBR,CAAA,CAAIS,CAAJ,CAAvB,CAAiCA,CAAjC,CAKR,OAAOT,EAxBgC,CA2BzCa,QAASA,GAAU,CAACb,CAAD,CAAM,CACvB,IAAIc,EAAO,EAAX,CACSL,CAAT,KAASA,CAAT,GAAgBT,EAAhB,CACMA,CAAAW,eAAA,CAAmBF,CAAnB,CAAJ,EACEK,CAAAC,KAAA,CAAUN,CAAV,CAGJ,OAAOK,EAAAE,KAAA,EAPgB,CAUzBC,QAASA,GAAa,CAACjB,CAAD;AAAMO,CAAN,CAAgBC,CAAhB,CAAyB,CAE7C,IADA,IAAIM,EAAOD,EAAA,CAAWb,CAAX,CAAX,CACUkB,EAAI,CAAd,CAAiBA,CAAjB,CAAqBJ,CAAAZ,OAArB,CAAkCgB,CAAA,EAAlC,CACEX,CAAAK,KAAA,CAAcJ,CAAd,CAAuBR,CAAA,CAAIc,CAAA,CAAKI,CAAL,CAAJ,CAAvB,CAAqCJ,CAAA,CAAKI,CAAL,CAArC,CAEF,OAAOJ,EALsC,CAc/CK,QAASA,GAAa,CAACC,CAAD,CAAa,CACjC,MAAO,SAAQ,CAACC,CAAD,CAAQZ,CAAR,CAAa,CAAEW,CAAA,CAAWX,CAAX,CAAgBY,CAAhB,CAAF,CADK,CAYnCC,QAASA,GAAO,EAAG,CAIjB,IAHA,IAAIC,EAAQC,EAAAtB,OAAZ,CACIuB,CAEJ,CAAMF,CAAN,CAAA,CAAa,CACXA,CAAA,EACAE,EAAA,CAAQD,EAAA,CAAID,CAAJ,CAAAG,WAAA,CAAsB,CAAtB,CACR,IAAa,EAAb,EAAID,CAAJ,CAEE,MADAD,GAAA,CAAID,CAAJ,CACO,CADM,GACN,CAAAC,EAAAG,KAAA,CAAS,EAAT,CAET,IAAa,EAAb,EAAIF,CAAJ,CACED,EAAA,CAAID,CAAJ,CAAA,CAAa,GADf,KAIE,OADAC,GAAA,CAAID,CAAJ,CACO,CADMK,MAAAC,aAAA,CAAoBJ,CAApB,CAA4B,CAA5B,CACN,CAAAD,EAAAG,KAAA,CAAS,EAAT,CAXE,CAcbH,EAAAM,QAAA,CAAY,GAAZ,CACA,OAAON,GAAAG,KAAA,CAAS,EAAT,CAnBU,CA4BnBI,QAASA,GAAU,CAAC/B,CAAD,CAAMgC,CAAN,CAAS,CACtBA,CAAJ,CACEhC,CAAAiC,UADF,CACkBD,CADlB,CAIE,OAAOhC,CAAAiC,UALiB,CAuB5BC,QAASA,EAAM,CAACC,CAAD,CAAM,CACnB,IAAIH,EAAIG,CAAAF,UACR3B,EAAA,CAAQ8B,SAAR,CAAmB,QAAQ,CAACpC,CAAD,CAAK,CAC1BA,CAAJ,GAAYmC,CAAZ,EACE7B,CAAA,CAAQN,CAAR,CAAa,QAAQ,CAACqB,CAAD,CAAQZ,CAAR,CAAY,CAC/B0B,CAAA,CAAI1B,CAAJ,CAAA,CAAWY,CADoB,CAAjC,CAF4B,CAAhC,CAQAU,GAAA,CAAWI,CAAX,CAAeH,CAAf,CACA,OAAOG,EAXY,CAcrBE,QAASA,EAAG,CAACC,CAAD,CAAM,CAChB,MAAOC,SAAA,CAASD,CAAT;AAAc,EAAd,CADS,CAKlBE,QAASA,GAAO,CAACC,CAAD,CAASC,CAAT,CAAgB,CAC9B,MAAOR,EAAA,CAAO,KAAKA,CAAA,CAAO,QAAQ,EAAG,EAAlB,CAAsB,WAAWO,CAAX,CAAtB,CAAL,CAAP,CAA0DC,CAA1D,CADuB,CAoBhCC,QAASA,EAAI,EAAG,EAoBhBC,QAASA,GAAQ,CAACC,CAAD,CAAI,CAAC,MAAOA,EAAR,CAIrBC,QAASA,GAAO,CAACzB,CAAD,CAAQ,CAAC,MAAO,SAAQ,EAAG,CAAC,MAAOA,EAAR,CAAnB,CAcxB0B,QAASA,EAAW,CAAC1B,CAAD,CAAO,CAAC,MAAwB,WAAxB,GAAO,MAAOA,EAAf,CAe3B2B,QAASA,EAAS,CAAC3B,CAAD,CAAO,CAAC,MAAwB,WAAxB,GAAO,MAAOA,EAAf,CAgBzB4B,QAASA,EAAQ,CAAC5B,CAAD,CAAO,CAAC,MAAgB,KAAhB,EAAOA,CAAP,EAAyC,QAAzC,GAAwB,MAAOA,EAAhC,CAexBjB,QAASA,EAAQ,CAACiB,CAAD,CAAO,CAAC,MAAwB,QAAxB,GAAO,MAAOA,EAAf,CAexB6B,QAASA,GAAQ,CAAC7B,CAAD,CAAO,CAAC,MAAwB,QAAxB,GAAO,MAAOA,EAAf,CAexB8B,QAASA,GAAM,CAAC9B,CAAD,CAAO,CACpB,MAAgC,eAAhC,GAAO+B,EAAAxC,KAAA,CAAcS,CAAd,CADa,CAiBtBhB,QAASA,EAAO,CAACgB,CAAD,CAAQ,CACtB,MAAgC,gBAAhC,GAAO+B,EAAAxC,KAAA,CAAcS,CAAd,CADe,CAiBxBX,QAASA,EAAU,CAACW,CAAD,CAAO,CAAC,MAAwB,UAAxB,GAAO,MAAOA,EAAf,CA9lBa;AAwmBvCgC,QAASA,GAAQ,CAAChC,CAAD,CAAQ,CACvB,MAAgC,iBAAhC,GAAO+B,EAAAxC,KAAA,CAAcS,CAAd,CADgB,CAYzBpB,QAASA,GAAQ,CAACD,CAAD,CAAM,CACrB,MAAOA,EAAP,EAAcA,CAAAJ,SAAd,EAA8BI,CAAAsD,SAA9B,EAA8CtD,CAAAuD,MAA9C,EAA2DvD,CAAAwD,YADtC,CAoDvBC,QAASA,GAAS,CAACC,CAAD,CAAO,CACvB,MAAO,EAAGA,CAAAA,CAAH,EACJ,EAAAA,CAAAC,SAAA,EACGD,CAAAE,KADH,EACgBF,CAAAG,KADhB,EAC6BH,CAAAI,KAD7B,CADI,CADgB,CA+BzBC,QAASA,GAAG,CAAC/D,CAAD,CAAMO,CAAN,CAAgBC,CAAhB,CAAyB,CACnC,IAAIwD,EAAU,EACd1D,EAAA,CAAQN,CAAR,CAAa,QAAQ,CAACqB,CAAD,CAAQE,CAAR,CAAe0C,CAAf,CAAqB,CACxCD,CAAAjD,KAAA,CAAaR,CAAAK,KAAA,CAAcJ,CAAd,CAAuBa,CAAvB,CAA8BE,CAA9B,CAAqC0C,CAArC,CAAb,CADwC,CAA1C,CAGA,OAAOD,EAL4B,CAwCrCE,QAASA,GAAO,CAACC,CAAD,CAAQnE,CAAR,CAAa,CAC3B,GAAImE,CAAAD,QAAJ,CAAmB,MAAOC,EAAAD,QAAA,CAAclE,CAAd,CAE1B,KAAK,IAAIkB,EAAI,CAAb,CAAgBA,CAAhB,CAAoBiD,CAAAjE,OAApB,CAAkCgB,CAAA,EAAlC,CACE,GAAIlB,CAAJ,GAAYmE,CAAA,CAAMjD,CAAN,CAAZ,CAAsB,MAAOA,EAE/B,OAAQ,EANmB,CAS7BkD,QAASA,GAAW,CAACD,CAAD,CAAQ9C,CAAR,CAAe,CACjC,IAAIE,EAAQ2C,EAAA
,CAAQC,CAAR,CAAe9C,CAAf,CACA,EAAZ,EAAIE,CAAJ,EACE4C,CAAAE,OAAA,CAAa9C,CAAb,CAAoB,CAApB,CACF,OAAOF,EAJ0B,CA4EnCiD,QAASA,GAAI,CAACC,CAAD,CAASC,CAAT,CAAqB,CAChC,GAAIvE,EAAA,CAASsE,CAAT,CAAJ,EAAgCA,CAAhC,EAAgCA,CA3MlBE,WA2Md,EAAgCF,CA3MAG,OA2MhC,CACE,KAAMC,GAAA,CAAS,MAAT,CAAN;AAIF,GAAKH,CAAL,CAaO,CACL,GAAID,CAAJ,GAAeC,CAAf,CAA4B,KAAMG,GAAA,CAAS,KAAT,CAAN,CAE5B,GAAItE,CAAA,CAAQkE,CAAR,CAAJ,CAEE,IAAM,IAAIrD,EADVsD,CAAAtE,OACUgB,CADW,CACrB,CAAiBA,CAAjB,CAAqBqD,CAAArE,OAArB,CAAoCgB,CAAA,EAApC,CACEsD,CAAAzD,KAAA,CAAiBuD,EAAA,CAAKC,CAAA,CAAOrD,CAAP,CAAL,CAAjB,CAHJ,KAKO,CACDc,CAAAA,CAAIwC,CAAAvC,UACR3B,EAAA,CAAQkE,CAAR,CAAqB,QAAQ,CAACnD,CAAD,CAAQZ,CAAR,CAAY,CACvC,OAAO+D,CAAA,CAAY/D,CAAZ,CADgC,CAAzC,CAGA,KAAMA,IAAIA,CAAV,GAAiB8D,EAAjB,CACEC,CAAA,CAAY/D,CAAZ,CAAA,CAAmB6D,EAAA,CAAKC,CAAA,CAAO9D,CAAP,CAAL,CAErBsB,GAAA,CAAWyC,CAAX,CAAuBxC,CAAvB,CARK,CARF,CAbP,IAEE,CADAwC,CACA,CADcD,CACd,IACMlE,CAAA,CAAQkE,CAAR,CAAJ,CACEC,CADF,CACgBF,EAAA,CAAKC,CAAL,CAAa,EAAb,CADhB,CAEWpB,EAAA,CAAOoB,CAAP,CAAJ,CACLC,CADK,CACS,IAAII,IAAJ,CAASL,CAAAM,QAAA,EAAT,CADT,CAEIxB,EAAA,CAASkB,CAAT,CAAJ,CACLC,CADK,CACaM,MAAJ,CAAWP,CAAAA,OAAX,CADT,CAEItB,CAAA,CAASsB,CAAT,CAFJ,GAGLC,CAHK,CAGSF,EAAA,CAAKC,CAAL,CAAa,EAAb,CAHT,CALT,CA8BF,OAAOC,EAtCyB,CA4ClCO,QAASA,GAAW,CAACC,CAAD,CAAM7C,CAAN,CAAW,CAC7BA,CAAA,CAAMA,CAAN,EAAa,EAEb,KAAI1B,IAAIA,CAAR,GAAeuE,EAAf,CAGM,CAAAA,CAAArE,eAAA,CAAmBF,CAAnB,CAAJ,EAAmD,GAAnD,GAAiCA,CAAAwE,OAAA,CAAW,CAAX,CAAjC,EAA4E,GAA5E,GAA0DxE,CAAAwE,OAAA,CAAW,CAAX,CAA1D,GACE9C,CAAA,CAAI1B,CAAJ,CADF,CACauE,CAAA,CAAIvE,CAAJ,CADb,CAKF,OAAO0B,EAXsB,CA4C/B+C,QAASA,GAAM,CAACC,CAAD,CAAKC,CAAL,CAAS,CACtB,GAAID,CAAJ,GAAWC,CAAX,CAAe,MAAO,CAAA,CACtB,IAAW,IAAX,GAAID,CAAJ,EAA0B,IAA1B,GAAmBC,CAAnB,CAAgC,MAAO,CAAA,CACvC,IAAID,CAAJ,GAAWA,CAAX,EAAiBC,CAAjB,GAAwBA,CAAxB,CAA4B,MAAO,CAAA,CAHb;IAIlBC,EAAK,MAAOF,EAJM,CAIsB1E,CAC5C,IAAI4E,CAAJ,EADyBC,MAAOF,EAChC,EACY,QADZ,EACMC,CADN,CAEI,GAAIhF,CAAA,CAAQ8E,CAAR,CAAJ,CAAiB,CACf,GAAI,CAAC9E,CAAA,CAAQ+E,CAAR,CAAL,CAAkB,MAAO,CAAA,CACzB,KAAKlF,CAAL,CAAciF,CAAAjF,OAAd,GAA4BkF,CAAAlF,OAA5B,CAAuC,CACrC,IAAIO,CAAJ,CAAQ,CAAR,CAAWA,CAAX,CAAeP,CAAf,CAAuBO,CAAA,EAAvB,CACE,GAAI,CAACyE,EAAA,CAAOC,CAAA,CAAG1E,CAAH,CAAP,CAAgB2E,CAAA,CAAG3E,CAAH,CAAhB,CAAL,CAA+B,MAAO,CAAA,CAExC,OAAO,CAAA,CAJ8B,CAFxB,CAAjB,IAQO,CAAA,GAAI0C,EAAA,CAAOgC,CAAP,CAAJ,CACL,MAAOhC,GAAA,CAAOiC,CAAP,CAAP,EAAqBD,CAAAN,QAAA,EAArB,EAAqCO,CAAAP,QAAA,EAChC,IAAIxB,EAAA,CAAS8B,CAAT,CAAJ,EAAoB9B,EAAA,CAAS+B,CAAT,CAApB,CACL,MAAOD,EAAA/B,SAAA,EAAP,EAAwBgC,CAAAhC,SAAA,EAExB,IAAY+B,CAAZ,EAAYA,CAtTJV,WAsTR,EAAYU,CAtTcT,OAsT1B,EAA2BU,CAA3B,EAA2BA,CAtTnBX,WAsTR,EAA2BW,CAtTDV,OAsT1B,EAAkCzE,EAAA,CAASkF,CAAT,CAAlC,EAAkDlF,EAAA,CAASmF,CAAT,CAAlD,EAAkE/E,CAAA,CAAQ+E,CAAR,CAAlE,CAA+E,MAAO,CAAA,CACtFG,EAAA,CAAS,EACT,KAAI9E,CAAJ,GAAW0E,EAAX,CACE,GAAsB,GAAtB,GAAI1E,CAAAwE,OAAA,CAAW,CAAX,CAAJ,EAA6B,CAAAvE,CAAA,CAAWyE,CAAA,CAAG1E,CAAH,CAAX,CAA7B,CAAA,CACA,GAAI,CAACyE,EAAA,CAAOC,CAAA,CAAG1E,CAAH,CAAP,CAAgB2E,CAAA,CAAG3E,CAAH,CAAhB,CAAL,CAA+B,MAAO,CAAA,CACtC8E,EAAA,CAAO9E,CAAP,CAAA,CAAc,CAAA,CAFd,CAIF,IAAIA,CAAJ,GAAW2E,EAAX,CACE,GAAI,CAACG,CAAA5E,eAAA,CAAsBF,CAAtB,CAAL,EACsB,GADtB,GACIA,CAAAwE,OAAA,CAAW,CAAX,CADJ,EAEIG,CAAA,CAAG3E,CAAH,CAFJ,GAEgBZ,CAFhB,EAGI,CAACa,CAAA,CAAW0E,CAAA,CAAG3E,CAAH,CAAX,CAHL,CAG0B,MAAO,CAAA,CAEnC;MAAO,CAAA,CAlBF,CAsBX,MAAO,CAAA,CArCe,CAyCxB+E,QAASA,GAAG,EAAG,CACb,MAAQ5F,EAAA6F,eAAR,EAAmC7F,CAAA6F,eAAAC,SAAnC,EACK9F,CAAA+F,cADL,EAEI,EAAG,CAAA/F,CAAA+F,cAAA,CAAuB,UAAvB,CAAH,EAAyC,CAAA/F,CAAA+F,cAAA,CAAuB,eAAvB,CAAzC,CAHS,CAmCfC,QAASA,GAAI,CAACC,CAAD,CAAOC,CAAP,CAAW,CACtB,IAAIC,EAA+B,CAAnB,CAAA3D,SAAAlC,OAAA,CAxBT8F,EAAApF,KAAA,CAwB0CwB,S
AxB1C,CAwBqD6D,CAxBrD,CAwBS,CAAiD,EACjE,OAAI,CAAAvF,CAAA,CAAWoF,CAAX,CAAJ,EAAwBA,CAAxB,WAAsChB,OAAtC,CAcSgB,CAdT,CACSC,CAAA7F,OACA,CAAH,QAAQ,EAAG,CACT,MAAOkC,UAAAlC,OACA,CAAH4F,CAAAI,MAAA,CAASL,CAAT,CAAeE,CAAAI,OAAA,CAAiBH,EAAApF,KAAA,CAAWwB,SAAX,CAAsB,CAAtB,CAAjB,CAAf,CAAG,CACH0D,CAAAI,MAAA,CAASL,CAAT,CAAeE,CAAf,CAHK,CAAR,CAKH,QAAQ,EAAG,CACT,MAAO3D,UAAAlC,OACA,CAAH4F,CAAAI,MAAA,CAASL,CAAT,CAAezD,SAAf,CAAG,CACH0D,CAAAlF,KAAA,CAAQiF,CAAR,CAHK,CATK,CAqBxBO,QAASA,GAAc,CAAC3F,CAAD,CAAMY,CAAN,CAAa,CAClC,IAAIgF,EAAMhF,CAES,SAAnB,GAAI,MAAOZ,EAAX,EAAiD,GAAjD,GAA+BA,CAAAwE,OAAA,CAAW,CAAX,CAA/B,CACEoB,CADF;AACQxG,CADR,CAEWI,EAAA,CAASoB,CAAT,CAAJ,CACLgF,CADK,CACC,SADD,CAEIhF,CAAJ,EAAczB,CAAd,GAA2ByB,CAA3B,CACLgF,CADK,CACC,WADD,CAEYhF,CAFZ,GAEYA,CA5YLoD,WA0YP,EAEYpD,CA5YaqD,OA0YzB,IAGL2B,CAHK,CAGC,QAHD,CAMP,OAAOA,EAb2B,CA+BpCC,QAASA,GAAM,CAACtG,CAAD,CAAMuG,CAAN,CAAc,CAC3B,MAAmB,WAAnB,GAAI,MAAOvG,EAAX,CAAuCH,CAAvC,CACO2G,IAAAC,UAAA,CAAezG,CAAf,CAAoBoG,EAApB,CAAoCG,CAAA,CAAS,IAAT,CAAgB,IAApD,CAFoB,CAkB7BG,QAASA,GAAQ,CAACC,CAAD,CAAO,CACtB,MAAOvG,EAAA,CAASuG,CAAT,CACA,CAADH,IAAAI,MAAA,CAAWD,CAAX,CAAC,CACDA,CAHgB,CAOxBE,QAASA,GAAS,CAACxF,CAAD,CAAQ,CACH,UAArB,GAAI,MAAOA,EAAX,CACEA,CADF,CACU,CAAA,CADV,CAEWA,CAAJ,EAA8B,CAA9B,GAAaA,CAAAnB,OAAb,EACD4G,CACJ,CADQC,CAAA,CAAU,EAAV,CAAe1F,CAAf,CACR,CAAAA,CAAA,CAAQ,EAAO,GAAP,EAAEyF,CAAF,EAAmB,GAAnB,EAAcA,CAAd,EAA+B,OAA/B,EAA0BA,CAA1B,EAA+C,IAA/C,EAA0CA,CAA1C,EAA4D,GAA5D,EAAuDA,CAAvD,EAAwE,IAAxE,EAAmEA,CAAnE,CAFH,EAILzF,CAJK,CAIG,CAAA,CAEV,OAAOA,EATiB,CAe1B2F,QAASA,GAAW,CAACC,CAAD,CAAU,CAC5BA,CAAA,CAAUC,CAAA,CAAOD,CAAP,CAAAE,MAAA,EACV,IAAI,CAGFF,CAAAG,MAAA,EAHE,CAIF,MAAMC,CAAN,CAAS,EAGX,IAAIC,EAAWJ,CAAA,CAAO,OAAP,CAAAK,OAAA,CAAuBN,CAAvB,CAAAO,KAAA,EACf,IAAI,CACF,MAHcC,EAGP,GAAAR,CAAA,CAAQ,CAAR,CAAA9G,SAAA,CAAoC4G,CAAA,CAAUO,CAAV,CAApC,CACHA,CAAAI,MAAA,CACQ,YADR,CACA,CAAsB,CAAtB,CAAAC,QAAA,CACU,aADV;AACyB,QAAQ,CAACD,CAAD,CAAQ/D,CAAR,CAAkB,CAAE,MAAO,GAAP,CAAaoD,CAAA,CAAUpD,CAAV,CAAf,CADnD,CAHF,CAKF,MAAM0D,CAAN,CAAS,CACT,MAAON,EAAA,CAAUO,CAAV,CADE,CAfiB,CAgC9BM,QAASA,GAAqB,CAACvG,CAAD,CAAQ,CACpC,GAAI,CACF,MAAOwG,mBAAA,CAAmBxG,CAAnB,CADL,CAEF,MAAMgG,CAAN,CAAS,EAHyB,CAatCS,QAASA,GAAa,CAAYC,CAAZ,CAAsB,CAAA,IACtC/H,EAAM,EADgC,CAC5BgI,CAD4B,CACjBvH,CACzBH,EAAA,CAAS2H,CAAAF,CAAAE,EAAY,EAAZA,OAAA,CAAsB,GAAtB,CAAT,CAAqC,QAAQ,CAACF,CAAD,CAAU,CAChDA,CAAL,GACEC,CAEA,CAFYD,CAAAE,MAAA,CAAe,GAAf,CAEZ,CADAxH,CACA,CADMmH,EAAA,CAAsBI,CAAA,CAAU,CAAV,CAAtB,CACN,CAAKhF,CAAA,CAAUvC,CAAV,CAAL,GACM4F,CACJ,CADUrD,CAAA,CAAUgF,CAAA,CAAU,CAAV,CAAV,CAAA,CAA0BJ,EAAA,CAAsBI,CAAA,CAAU,CAAV,CAAtB,CAA1B,CAAgE,CAAA,CAC1E,CAAKhI,CAAA,CAAIS,CAAJ,CAAL,CAEUJ,CAAA,CAAQL,CAAA,CAAIS,CAAJ,CAAR,CAAH,CACLT,CAAA,CAAIS,CAAJ,CAAAM,KAAA,CAAcsF,CAAd,CADK,CAGLrG,CAAA,CAAIS,CAAJ,CAHK,CAGM,CAACT,CAAA,CAAIS,CAAJ,CAAD,CAAU4F,CAAV,CALb,CACErG,CAAA,CAAIS,CAAJ,CADF,CACa4F,CAHf,CAHF,CADqD,CAAvD,CAgBA,OAAOrG,EAlBmC,CAqB5CkI,QAASA,GAAU,CAAClI,CAAD,CAAM,CACvB,IAAImI,EAAQ,EACZ7H,EAAA,CAAQN,CAAR,CAAa,QAAQ,CAACqB,CAAD,CAAQZ,CAAR,CAAa,CAC5BJ,CAAA,CAAQgB,CAAR,CAAJ,CACEf,CAAA,CAAQe,CAAR,CAAe,QAAQ,CAAC+G,CAAD,CAAa,CAClCD,CAAApH,KAAA,CAAWsH,EAAA,CAAe5H,CAAf,CAAoB,CAAA,CAApB,CAAX,EAC2B,CAAA,CAAf,GAAA2H,CAAA,CAAsB,EAAtB,CAA2B,GAA3B,CAAiCC,EAAA,CAAeD,CAAf,CAA2B,CAAA,CAA3B,CAD7C,EADkC,CAApC,CADF,CAMAD,CAAApH,KAAA,CAAWsH,EAAA,CAAe5H,CAAf,CAAoB,CAAA,CAApB,CAAX,EACsB,CAAA,CAAV,GAAAY,CAAA,CAAiB,EAAjB,CAAsB,GAAtB,CAA4BgH,EAAA,CAAehH,CAAf,CAAsB,CAAA,CAAtB,CADxC,EAPgC,CAAlC,CAWA,OAAO8G,EAAAjI,OAAA,CAAeiI,CAAAxG,KAAA,CAAW,GAAX,CAAf,CAAiC,EAbjB,CA4BzB2G,QAASA,GAAgB,CAACjC,CAAD,CAAM,CAC7B,MAAOgC,GAAA,CAAehC,CAAf;AAAoB,CAAA,CAApB,CAAAsB,QAAA,CACY,OADZ,CA
CqB,GADrB,CAAAA,QAAA,CAEY,OAFZ,CAEqB,GAFrB,CAAAA,QAAA,CAGY,OAHZ,CAGqB,GAHrB,CADsB,CAmB/BU,QAASA,GAAc,CAAChC,CAAD,CAAMkC,CAAN,CAAuB,CAC5C,MAAOC,mBAAA,CAAmBnC,CAAnB,CAAAsB,QAAA,CACY,OADZ,CACqB,GADrB,CAAAA,QAAA,CAEY,OAFZ,CAEqB,GAFrB,CAAAA,QAAA,CAGY,MAHZ,CAGoB,GAHpB,CAAAA,QAAA,CAIY,OAJZ,CAIqB,GAJrB,CAAAA,QAAA,CAKY,MALZ,CAKqBY,CAAA,CAAkB,KAAlB,CAA0B,GAL/C,CADqC,CAwD9CE,QAASA,GAAW,CAACxB,CAAD,CAAUyB,CAAV,CAAqB,CAOvCnB,QAASA,EAAM,CAACN,CAAD,CAAU,CACvBA,CAAA,EAAW0B,CAAA5H,KAAA,CAAckG,CAAd,CADY,CAPc,IACnC0B,EAAW,CAAC1B,CAAD,CADwB,CAEnC2B,CAFmC,CAGnCC,CAHmC,CAInCC,EAAQ,CAAC,QAAD,CAAW,QAAX,CAAqB,UAArB,CAAiC,aAAjC,CAJ2B,CAKnCC,EAAsB,mCAM1BzI,EAAA,CAAQwI,CAAR,CAAe,QAAQ,CAACE,CAAD,CAAO,CAC5BF,CAAA,CAAME,CAAN,CAAA,CAAc,CAAA,CACdzB,EAAA,CAAO3H,CAAAqJ,eAAA,CAAwBD,CAAxB,CAAP,CACAA,EAAA,CAAOA,CAAArB,QAAA,CAAa,GAAb,CAAkB,KAAlB,CACHV,EAAAiC,iBAAJ,GACE5I,CAAA,CAAQ2G,CAAAiC,iBAAA,CAAyB,GAAzB,CAA+BF,CAA/B,CAAR,CAA8CzB,CAA9C,CAEA,CADAjH,CAAA,CAAQ2G,CAAAiC,iBAAA,CAAyB,GAAzB;AAA+BF,CAA/B,CAAsC,KAAtC,CAAR,CAAsDzB,CAAtD,CACA,CAAAjH,CAAA,CAAQ2G,CAAAiC,iBAAA,CAAyB,GAAzB,CAA+BF,CAA/B,CAAsC,GAAtC,CAAR,CAAoDzB,CAApD,CAHF,CAJ4B,CAA9B,CAWAjH,EAAA,CAAQqI,CAAR,CAAkB,QAAQ,CAAC1B,CAAD,CAAU,CAClC,GAAI,CAAC2B,CAAL,CAAiB,CAEf,IAAIlB,EAAQqB,CAAAI,KAAA,CADI,GACJ,CADUlC,CAAAmC,UACV,CAD8B,GAC9B,CACR1B,EAAJ,EACEkB,CACA,CADa3B,CACb,CAAA4B,CAAA,CAAUlB,CAAAD,CAAA,CAAM,CAAN,CAAAC,EAAY,EAAZA,SAAA,CAAwB,MAAxB,CAAgC,GAAhC,CAFZ,EAIErH,CAAA,CAAQ2G,CAAAoC,WAAR,CAA4B,QAAQ,CAACxF,CAAD,CAAO,CACpC+E,CAAAA,CAAL,EAAmBE,CAAA,CAAMjF,CAAAmF,KAAN,CAAnB,GACEJ,CACA,CADa3B,CACb,CAAA4B,CAAA,CAAShF,CAAAxC,MAFX,CADyC,CAA3C,CAPa,CADiB,CAApC,CAiBIuH,EAAJ,EACEF,CAAA,CAAUE,CAAV,CAAsBC,CAAA,CAAS,CAACA,CAAD,CAAT,CAAoB,EAA1C,CAxCqC,CAkGzCH,QAASA,GAAS,CAACzB,CAAD,CAAUqC,CAAV,CAAmB,CACnC,IAAIC,EAAcA,QAAQ,EAAG,CAC3BtC,CAAA,CAAUC,CAAA,CAAOD,CAAP,CAEV,IAAIA,CAAAuC,SAAA,EAAJ,CAAwB,CACtB,IAAIC,EAAOxC,CAAA,CAAQ,CAAR,CAAD,GAAgBrH,CAAhB,CAA4B,UAA5B,CAAyCoH,EAAA,CAAYC,CAAZ,CACnD,MAAMtC,GAAA,CAAS,SAAT,CAAwE8E,CAAxE,CAAN,CAFsB,CAKxBH,CAAA,CAAUA,CAAV,EAAqB,EACrBA,EAAAxH,QAAA,CAAgB,CAAC,UAAD,CAAa,QAAQ,CAAC4H,CAAD,CAAW,CAC9CA,CAAArI,MAAA,CAAe,cAAf,CAA+B4F,CAA/B,CAD8C,CAAhC,CAAhB,CAGAqC,EAAAxH,QAAA,CAAgB,IAAhB,CACI0H,EAAAA,CAAWG,EAAA,CAAeL,CAAf,CACfE,EAAAI,OAAA,CAAgB,CAAC,YAAD,CAAe,cAAf,CAA+B,UAA/B,CAA2C,WAA3C,CAAwD,UAAxD;AACb,QAAQ,CAACC,CAAD,CAAQ5C,CAAR,CAAiB6C,CAAjB,CAA0BN,CAA1B,CAAoCO,CAApC,CAA6C,CACpDF,CAAAG,OAAA,CAAa,QAAQ,EAAG,CACtB/C,CAAAgD,KAAA,CAAa,WAAb,CAA0BT,CAA1B,CACAM,EAAA,CAAQ7C,CAAR,CAAA,CAAiB4C,CAAjB,CAFsB,CAAxB,CADoD,CADxC,CAAhB,CAQA,OAAOL,EAtBoB,CAA7B,CAyBIU,EAAqB,sBAEzB,IAAIvK,CAAJ,EAAc,CAACuK,CAAAC,KAAA,CAAwBxK,CAAAqJ,KAAxB,CAAf,CACE,MAAOO,EAAA,EAGT5J,EAAAqJ,KAAA,CAAcrJ,CAAAqJ,KAAArB,QAAA,CAAoBuC,CAApB,CAAwC,EAAxC,CACdE,GAAAC,gBAAA,CAA0BC,QAAQ,CAACC,CAAD,CAAe,CAC/CjK,CAAA,CAAQiK,CAAR,CAAsB,QAAQ,CAAC1B,CAAD,CAAS,CACrCS,CAAAvI,KAAA,CAAa8H,CAAb,CADqC,CAAvC,CAGAU,EAAA,EAJ+C,CAjCd,CA0CrCiB,QAASA,GAAU,CAACxB,CAAD,CAAOyB,CAAP,CAAiB,CAClCA,CAAA,CAAYA,CAAZ,EAAyB,GACzB,OAAOzB,EAAArB,QAAA,CAAa+C,EAAb,CAAgC,QAAQ,CAACC,CAAD,CAASC,CAAT,CAAc,CAC3D,OAAQA,CAAA,CAAMH,CAAN,CAAkB,EAA1B,EAAgCE,CAAAE,YAAA,EAD2B,CAAtD,CAF2B,CAkCpCC,QAASA,GAAS,CAACC,CAAD,CAAM/B,CAAN,CAAYgC,CAAZ,CAAoB,CACpC,GAAI,CAACD,CAAL,CACE,KAAMpG,GAAA,CAAS,MAAT,CAA2CqE,CAA3C,EAAmD,GAAnD,CAA0DgC,CAA1D,EAAoE,UAApE,CAAN,CAEF,MAAOD,EAJ6B,CAOtCE,QAASA,GAAW,CAACF,CAAD,CAAM/B,CAAN,CAAYkC,CAAZ,CAAmC,CACjDA,CAAJ,EAA6B7K,CAAA,CAAQ0K,CAAR,CAA7B,GACIA,CADJ,CACUA,CAAA,CAAIA,CAAA7K,OAAJ,CAAiB,CAAjB,CADV,CAIA4K,GAAA,CAAUpK,CAAA,CAAWqK,CAAX,CAAV,CAA2B/B,CAA3B,CAAiC,sBAAjC,EACK+B,CAAA,EAAqB,QAArB,EAAO,MAAOA,EAAd;AAAgCA,CAAAI,YAAAnC,KAAhC
,EAAwD,QAAxD,CAAmE,MAAO+B,EAD/E,EAEA,OAAOA,EAP8C,CAevDK,QAASA,GAAuB,CAACpC,CAAD,CAAOxI,CAAP,CAAgB,CAC9C,GAAa,gBAAb,GAAIwI,CAAJ,CACE,KAAMrE,GAAA,CAAS,SAAT,CAA8DnE,CAA9D,CAAN,CAF4C,CAchD6K,QAASA,GAAM,CAACrL,CAAD,CAAMsL,CAAN,CAAYC,CAAZ,CAA2B,CACxC,GAAI,CAACD,CAAL,CAAW,MAAOtL,EACdc,EAAAA,CAAOwK,CAAArD,MAAA,CAAW,GAAX,CAKX,KAJA,IAAIxH,CAAJ,CACI+K,EAAexL,CADnB,CAEIyL,EAAM3K,CAAAZ,OAFV,CAISgB,EAAI,CAAb,CAAgBA,CAAhB,CAAoBuK,CAApB,CAAyBvK,CAAA,EAAzB,CACET,CACA,CADMK,CAAA,CAAKI,CAAL,CACN,CAAIlB,CAAJ,GACEA,CADF,CACQ,CAACwL,CAAD,CAAgBxL,CAAhB,EAAqBS,CAArB,CADR,CAIF,OAAI,CAAC8K,CAAL,EAAsB7K,CAAA,CAAWV,CAAX,CAAtB,CACS4F,EAAA,CAAK4F,CAAL,CAAmBxL,CAAnB,CADT,CAGOA,CAhBiC,CAwB1C0L,QAASA,GAAgB,CAACC,CAAD,CAAQ,CAAA,IAC3BC,EAAYD,CAAA,CAAM,CAAN,CACZE,EAAAA,CAAUF,CAAA,CAAMA,CAAAzL,OAAN,CAAqB,CAArB,CACd,IAAI0L,CAAJ,GAAkBC,CAAlB,CACE,MAAO3E,EAAA,CAAO0E,CAAP,CAIT,KAAIjD,EAAW,CAAC1B,CAAD,CAEf,GAAG,CACDA,CAAA,CAAUA,CAAA6E,YACV,IAAI,CAAC7E,CAAL,CAAc,KACd0B,EAAA5H,KAAA,CAAckG,CAAd,CAHC,CAAH,MAISA,CAJT,GAIqB4E,CAJrB,CAMA,OAAO3E,EAAA,CAAOyB,CAAP,CAhBwB,CA4BjCoD,QAASA,GAAiB,CAACpM,CAAD,CAAS,CAEjC,IAAIqM,EAAkBlM,CAAA,CAAO,WAAP,CAAtB,CACI6E,EAAW7E,CAAA,CAAO,IAAP,CAMXsK,EAAAA,CAAiBzK,CAHZ,QAGLyK,GAAiBzK,CAHE,QAGnByK,CAH+B,EAG/BA,CAGJA,EAAA6B,SAAA,CAAmB7B,CAAA6B,SAAnB,EAAuCnM,CAEvC,OAAcsK,EARL,OAQT;CAAcA,CARS,OAQvB,CAAiC8B,QAAQ,EAAG,CAE1C,IAAI5C,EAAU,EAqDd,OAAOT,SAAe,CAACG,CAAD,CAAOmD,CAAP,CAAiBC,CAAjB,CAA2B,CAE7C,GAAa,gBAAb,GAKsBpD,CALtB,CACE,KAAMrE,EAAA,CAAS,SAAT,CAIoBnE,QAJpB,CAAN,CAKA2L,CAAJ,EAAgB7C,CAAA3I,eAAA,CAAuBqI,CAAvB,CAAhB,GACEM,CAAA,CAAQN,CAAR,CADF,CACkB,IADlB,CAGA,OAAcM,EA1ET,CA0EkBN,CA1ElB,CA0EL,GAAcM,CA1EK,CA0EIN,CA1EJ,CA0EnB,CAA6BkD,QAAQ,EAAG,CAgNtCG,QAASA,EAAW,CAACC,CAAD,CAAWC,CAAX,CAAmBC,CAAnB,CAAiC,CACnD,MAAO,SAAQ,EAAG,CAChBC,CAAA,CAAYD,CAAZ,EAA4B,MAA5B,CAAA,CAAoC,CAACF,CAAD,CAAWC,CAAX,CAAmBnK,SAAnB,CAApC,CACA,OAAOsK,EAFS,CADiC,CA/MrD,GAAI,CAACP,CAAL,CACE,KAAMH,EAAA,CAAgB,OAAhB,CAEiDhD,CAFjD,CAAN,CAMF,IAAIyD,EAAc,EAAlB,CAGIE,EAAY,EAHhB,CAKIC,EAASP,CAAA,CAAY,WAAZ,CAAyB,QAAzB,CALb,CAQIK,EAAiB,cAELD,CAFK,YAGPE,CAHO,UAcTR,CAdS,MAuBbnD,CAvBa,UAoCTqD,CAAA,CAAY,UAAZ,CAAwB,UAAxB,CApCS,SA+CVA,CAAA,CAAY,UAAZ,CAAwB,SAAxB,CA/CU,SA0DVA,CAAA,CAAY,UAAZ,CAAwB,SAAxB,CA1DU,OAqEZA,CAAA,CAAY,UAAZ,CAAwB,OAAxB,CArEY,UAiFTA,CAAA,CAAY,UAAZ;AAAwB,UAAxB,CAAoC,SAApC,CAjFS,WAmHRA,CAAA,CAAY,kBAAZ,CAAgC,UAAhC,CAnHQ,QA8HXA,CAAA,CAAY,iBAAZ,CAA+B,UAA/B,CA9HW,YA0IPA,CAAA,CAAY,qBAAZ,CAAmC,UAAnC,CA1IO,WAuJRA,CAAA,CAAY,kBAAZ,CAAgC,WAAhC,CAvJQ,QAkKXO,CAlKW,KA8KdC,QAAQ,CAACC,CAAD,CAAQ,CACnBH,CAAA5L,KAAA,CAAe+L,CAAf,CACA,OAAO,KAFY,CA9KF,CAoLjBV,EAAJ,EACEQ,CAAA,CAAOR,CAAP,CAGF,OAAQM,EAxM8B,CA1ET,EA0E/B,CAX+C,CAvDP,CART,EAQnC,CAdiC,CAiZnCK,QAASA,GAAkB,CAAC3C,CAAD,CAAS,CAClClI,CAAA,CAAOkI,CAAP,CAAgB,WACD1B,EADC,MAENpE,EAFM,QAGJpC,CAHI,QAIJgD,EAJI,SAKHgC,CALG,SAMH5G,CANG,UAOFqJ,EAPE,MAQPhH,CARO,MASPiD,EATO,QAUJU,EAVI,UAWFI,EAXE,UAYH9D,EAZG,aAaCG,CAbD,WAcDC,CAdC,UAeF5C,CAfE,YAgBAM,CAhBA,UAiBFuC,CAjBE,UAkBFC,EAlBE,WAmBDO,EAnBC,SAoBHpD,CApBG;QAqBH2M,EArBG,QAsBJ7J,EAtBI,WAuBD4D,CAvBC,WAwBDkG,EAxBC,WAyBD,SAAU,CAAV,CAzBC,UA0BFnN,CA1BE,OA2BL0F,EA3BK,CAAhB,CA8BA0H,GAAA,CAAgBnB,EAAA,CAAkBpM,CAAlB,CAChB,IAAI,CACFuN,EAAA,CAAc,UAAd,CADE,CAEF,MAAO7F,CAAP,CAAU,CACV6F,EAAA,CAAc,UAAd,CAA0B,EAA1B,CAAAZ,SAAA,CAAuC,SAAvC,CAAkDa,EAAlD,CADU,CAIZD,EAAA,CAAc,IAAd,CAAoB,CAAC,UAAD,CAApB,CAAkC,CAAC,UAAD,CAChCE,QAAiB,CAAC1D,CAAD,CAAW,CAE1BA,CAAA4C,SAAA,CAAkB,eACDe,EADC,CAAlB,CAGA3D,EAAA4C,SAAA,CAAkB,UAAlB,CAA8BgB,EAA9B,CAAAC,UAAA,CACY,GACHC,EADG,OAECC,EAFD,UAGIA,EAHJ,MAIAC,EAJA,QAKEC,EALF,QAMEC,EANF,OAOCC,EAPD,QAQEC,EARF,QASEC,EATF,YAUMC,EAVN,gBAWUC,EAXV,SAYGC,EAZH,aAaOC,EAbP,Y
AcMC,EAdN,SAeGC,EAfH,cAgBQC,EAhBR,QAiBEC,EAjBF,QAkBEC,EAlBF,MAmBAC,EAnBA,WAoBKC,EApBL;OAqBEC,EArBF,eAsBSC,EAtBT,aAuBOC,EAvBP,UAwBIC,EAxBJ,QAyBEC,EAzBF,SA0BGC,EA1BH,UA2BIC,EA3BJ,cA4BQC,EA5BR,iBA6BWC,EA7BX,WA8BKC,EA9BL,cA+BQC,EA/BR,SAgCGC,EAhCH,QAiCEC,EAjCF,UAkCIC,EAlCJ,UAmCIC,EAnCJ,YAoCMA,EApCN,SAqCGC,EArCH,CADZ,CAAAnC,UAAA,CAwCY,WACGoC,EADH,CAxCZ,CAAApC,UAAA,CA2CYqC,EA3CZ,CAAArC,UAAA,CA4CYsC,EA5CZ,CA6CAnG,EAAA4C,SAAA,CAAkB,eACDwD,EADC,UAENC,EAFM,UAGNC,EAHM,eAIDC,EAJC,aAKHC,EALG,WAMLC,EANK,mBAOGC,EAPH,SAQPC,EARO,cASFC,EATE,WAULC,EAVK,OAWTC,EAXS,cAYFC,EAZE,WAaLC,EAbK,MAcVC,EAdU,QAeRC,EAfQ,YAgBJC,EAhBI;GAiBZC,EAjBY,MAkBVC,EAlBU,cAmBFC,EAnBE,UAoBNC,EApBM,gBAqBAC,EArBA,UAsBNC,EAtBM,SAuBPC,EAvBO,OAwBTC,EAxBS,iBAyBEC,EAzBF,CAAlB,CAlD0B,CADI,CAAlC,CAtCkC,CAwPpCC,QAASA,GAAS,CAACvI,CAAD,CAAO,CACvB,MAAOA,EAAArB,QAAA,CACG6J,EADH,CACyB,QAAQ,CAACC,CAAD,CAAIhH,CAAJ,CAAeE,CAAf,CAAuB+G,CAAvB,CAA+B,CACnE,MAAOA,EAAA,CAAS/G,CAAAgH,YAAA,EAAT,CAAgChH,CAD4B,CADhE,CAAAhD,QAAA,CAIGiK,EAJH,CAIoB,OAJpB,CADgB,CAgBzBC,QAASA,GAAuB,CAAC7I,CAAD,CAAO8I,CAAP,CAAqBC,CAArB,CAAkCC,CAAlC,CAAuD,CAMrFC,QAASA,EAAW,CAACC,CAAD,CAAQ,CAAA,IAEtBjO,EAAO8N,CAAA,EAAeG,CAAf,CAAuB,CAAC,IAAAC,OAAA,CAAYD,CAAZ,CAAD,CAAvB,CAA8C,CAAC,IAAD,CAF/B,CAGtBE,EAAYN,CAHU,CAItBO,CAJsB,CAIjBC,CAJiB,CAIPC,CAJO,CAKtBtL,CALsB,CAKbuL,CALa,CAKYC,CAEtC,IAAI,CAACT,CAAL,EAAqC,IAArC,EAA4BE,CAA5B,CACE,IAAA,CAAMjO,CAAA/D,OAAN,CAAA,CAEE,IADAmS,CACkB,CADZpO,CAAAyO,MAAA,EACY,CAAdJ,CAAc,CAAH,CAAG,CAAAC,CAAA,CAAYF,CAAAnS,OAA9B,CAA0CoS,CAA1C,CAAqDC,CAArD,CAAgED,CAAA,EAAhE,CAOE,IANArL,CAMoB,CANVC,CAAA,CAAOmL,CAAA,CAAIC,CAAJ,CAAP,CAMU,CALhBF,CAAJ,CACEnL,CAAA0L,eAAA,CAAuB,UAAvB,CADF,CAGEP,CAHF,CAGc,CAACA,CAEK,CAAhBI,CAAgB,CAAH,CAAG,CAAAI,CAAA,CAAe1S,CAAAuS,CAAAvS,CAAW+G,CAAAwL,SAAA,EAAXvS,QAAnC,CACIsS,CADJ,CACiBI,CADjB,CAEIJ,CAAA,EAFJ,CAGEvO,CAAAlD,KAAA,CAAU8R,EAAA,CAAOJ,CAAA,CAASD,CAAT,CAAP,CAAV,CAKR,OAAOM,EAAA5M,MAAA,CAAmB,IAAnB,CAAyB9D,SAAzB,CAzBmB,CANyD;AACrF,IAAI0Q,EAAeD,EAAA/M,GAAA,CAAUkD,CAAV,CAAnB,CACA8J,EAAeA,CAAAC,UAAfD,EAAyCA,CACzCb,EAAAc,UAAA,CAAwBD,CACxBD,GAAA/M,GAAA,CAAUkD,CAAV,CAAA,CAAkBiJ,CAJmE,CAyGvFe,QAASA,EAAM,CAAC/L,CAAD,CAAU,CACvB,GAAIA,CAAJ,WAAuB+L,EAAvB,CACE,MAAO/L,EAEL7G,EAAA,CAAS6G,CAAT,CAAJ,GACEA,CADF,CACYgM,EAAA,CAAKhM,CAAL,CADZ,CAGA,IAAI,EAAE,IAAF,WAAkB+L,EAAlB,CAAJ,CAA+B,CAC7B,GAAI5S,CAAA,CAAS6G,CAAT,CAAJ,EAA8C,GAA9C,EAAyBA,CAAAhC,OAAA,CAAe,CAAf,CAAzB,CACE,KAAMiO,GAAA,CAAa,OAAb,CAAN,CAEF,MAAO,KAAIF,CAAJ,CAAW/L,CAAX,CAJsB,CAO/B,GAAI7G,CAAA,CAAS6G,CAAT,CAAJ,CAAuB,CACgBA,IAAAA,EAAAA,CA1BvCzG,EAAA,CAAqBZ,CACrB,KAAIuT,CAEJ,IAAKA,CAAL,CAAcC,EAAAjK,KAAA,CAAuB3B,CAAvB,CAAd,CACS,CAAA,CAAA,CAAA,CAAA,cAAA,CAAA,CAAA,CAAA,CAAA,CAAA,CAAA,CADT,KAAA,CAIO,IAAA,EAAA,CAAA,CA1CQiC,CACX4J,EAAAA,CAAW7S,CAAA8S,uBAAA,EACX3H,EAAAA,CAAQ,EAEZ,IARQ4H,EAAApJ,KAAA,CA8CD3C,CA9CC,CAQR,CAGO,CACLgM,CAAA,CAAMH,CAAAI,YAAA,CAAqBjT,CAAAkT,cAAA,CAAsB,KAAtB,CAArB,CAENjK,EAAA,CAAM,CAACkK,EAAAxK,KAAA,CAgCF3B,CAhCE,CAAD,EAA+B,CAAC,EAAD,CAAK,EAAL,CAA/B,EAAyC,CAAzC,CAAAqD,YAAA,EACN+I,EAAA,CAAOC,EAAA,CAAQpK,CAAR,CAAP,EAAuBoK,EAAAC,SACvBN,EAAAO,UAAA,CAAgB,mBAAhB,CACEH,CAAA,CAAK,CAAL,CADF,CA8BKpM,CA7BOG,QAAA,CAAaqM,EAAb,CAA+B,WAA/B,CADZ,CAC0DJ,CAAA,CAAK,CAAL,CAC1DJ;CAAAS,YAAA,CAAgBT,CAAAU,WAAhB,CAIA,KADAhT,CACA,CADI0S,CAAA,CAAK,CAAL,CACJ,CAAO1S,CAAA,EAAP,CAAA,CACEsS,CAAA,CAAMA,CAAAW,UAGHC,EAAA,CAAE,CAAP,KAAUC,CAAV,CAAab,CAAAc,WAAApU,OAAb,CAAoCkU,CAApC,CAAsCC,CAAtC,CAA0C,EAAED,CAA5C,CAA+CzI,CAAA5K,KAAA,CAAWyS,CAAAc,WAAA,CAAeF,CAAf,CAAX,CAE/CZ,EAAA,CAAMH,CAAAa,WACNV,EAAAe,YAAA,CAAkB,EAlBb,CAHP,IAEE5I,EAAA5K,KAAA,CAAWP,CAAAgU,eAAA,CAoCNhN,CApCM,CAAX,CAuBF6L,EAAAkB,YAAA,CAAuB,EACvBlB,EAAAU,UAAA,CAA
qB,EACrB,EAAA,CAAOpI,CAOP,CAuBE8I,EAAA,CAAe,IAAf,CAvBF,CAuBE,CACevN,EAAAmM,CAAOzT,CAAA0T,uBAAA,EAAPD,CACf9L,OAAA,CAAgB,IAAhB,CAHqB,CAAvB,IAKEkN,GAAA,CAAe,IAAf,CAAqBxN,CAArB,CAnBqB,CAuBzByN,QAASA,GAAW,CAACzN,CAAD,CAAU,CAC5B,MAAOA,EAAA0N,UAAA,CAAkB,CAAA,CAAlB,CADqB,CAI9BC,QAASA,GAAY,CAAC3N,CAAD,CAAS,CAC5B4N,EAAA,CAAiB5N,CAAjB,CAD4B,KAElB/F,EAAI,CAAd,KAAiBuR,CAAjB,CAA4BxL,CAAAqN,WAA5B,EAAkD,EAAlD,CAAsDpT,CAAtD,CAA0DuR,CAAAvS,OAA1D,CAA2EgB,CAAA,EAA3E,CACE0T,EAAA,CAAanC,CAAA,CAASvR,CAAT,CAAb,CAH0B,CAO9B4T,QAASA,GAAS,CAAC7N,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoBkP,CAApB,CAAiC,CACjD,GAAIhS,CAAA,CAAUgS,CAAV,CAAJ,CAA4B,KAAM9B,GAAA,CAAa,SAAb,CAAN,CADqB,IAG7C+B,EAASC,EAAA,CAAmBjO,CAAnB,CAA4B,QAA5B,CACAiO,GAAAC,CAAmBlO,CAAnBkO,CAA4B,QAA5BA,CAEb,GAEIpS,CAAA,CAAYgS,CAAZ,CAAJ,CACEzU,CAAA,CAAQ2U,CAAR;AAAgB,QAAQ,CAACG,CAAD,CAAeL,CAAf,CAAqB,CAC3CM,EAAA,CAAsBpO,CAAtB,CAA+B8N,CAA/B,CAAqCK,CAArC,CACA,QAAOH,CAAA,CAAOF,CAAP,CAFoC,CAA7C,CADF,CAMEzU,CAAA,CAAQyU,CAAA9M,MAAA,CAAW,GAAX,CAAR,CAAyB,QAAQ,CAAC8M,CAAD,CAAO,CAClChS,CAAA,CAAY+C,CAAZ,CAAJ,EACEuP,EAAA,CAAsBpO,CAAtB,CAA+B8N,CAA/B,CAAqCE,CAAA,CAAOF,CAAP,CAArC,CACA,CAAA,OAAOE,CAAA,CAAOF,CAAP,CAFT,EAIE3Q,EAAA,CAAY6Q,CAAA,CAAOF,CAAP,CAAZ,EAA4B,EAA5B,CAAgCjP,CAAhC,CALoC,CAAxC,CARF,CANiD,CAyBnD+O,QAASA,GAAgB,CAAC5N,CAAD,CAAU+B,CAAV,CAAgB,CAAA,IACnCsM,EAAYrO,CAAA,CAAQsO,EAAR,CADuB,CAEnCC,EAAeC,EAAA,CAAQH,CAAR,CAEfE,EAAJ,GACMxM,CAAJ,CACE,OAAOyM,EAAA,CAAQH,CAAR,CAAArL,KAAA,CAAwBjB,CAAxB,CADT,EAKIwM,CAAAL,OAKJ,GAJEK,CAAAP,OAAAS,SACA,EADgCF,CAAAL,OAAA,CAAoB,EAApB,CAAwB,UAAxB,CAChC,CAAAL,EAAA,CAAU7N,CAAV,CAGF,EADA,OAAOwO,EAAA,CAAQH,CAAR,CACP,CAAArO,CAAA,CAAQsO,EAAR,CAAA,CAAkB1V,CAVlB,CADF,CAJuC,CAmBzCqV,QAASA,GAAkB,CAACjO,CAAD,CAAUxG,CAAV,CAAeY,CAAf,CAAsB,CAAA,IAC3CiU,EAAYrO,CAAA,CAAQsO,EAAR,CAD+B,CAE3CC,EAAeC,EAAA,CAAQH,CAAR,EAAsB,EAAtB,CAEnB,IAAItS,CAAA,CAAU3B,CAAV,CAAJ,CACOmU,CAIL,GAHEvO,CAAA,CAAQsO,EAAR,CACA,CADkBD,CAClB,CA1NuB,EAAEK,EA0NzB,CAAAH,CAAA,CAAeC,EAAA,CAAQH,CAAR,CAAf,CAAoC,EAEtC,EAAAE,CAAA,CAAa/U,CAAb,CAAA,CAAoBY,CALtB,KAOE,OAAOmU,EAAP,EAAuBA,CAAA,CAAa/U,CAAb,CAXsB,CAejDmV,QAASA,GAAU,CAAC3O,CAAD,CAAUxG,CAAV,CAAeY,CAAf,CAAsB,CAAA,IACnC4I,EAAOiL,EAAA,CAAmBjO,CAAnB,CAA4B,MAA5B,CAD4B,CAEnC4O,EAAW7S,CAAA,CAAU3B,CAAV,CAFwB,CAGnCyU,EAAa,CAACD,CAAdC,EAA0B9S,CAAA,CAAUvC,CAAV,CAHS,CAInCsV,EAAiBD,CAAjBC,EAA+B,CAAC9S,CAAA,CAASxC,CAAT,CAE/BwJ,EAAL,EAAc8L,CAAd,EACEb,EAAA,CAAmBjO,CAAnB,CAA4B,MAA5B,CAAoCgD,CAApC,CAA2C,EAA3C,CAGF,IAAI4L,CAAJ,CACE5L,CAAA,CAAKxJ,CAAL,CAAA,CAAYY,CADd,KAGE,IAAIyU,CAAJ,CAAgB,CACd,GAAIC,CAAJ,CAEE,MAAO9L,EAAP,EAAeA,CAAA,CAAKxJ,CAAL,CAEfyB;CAAA,CAAO+H,CAAP,CAAaxJ,CAAb,CALY,CAAhB,IAQE,OAAOwJ,EArB4B,CA0BzC+L,QAASA,GAAc,CAAC/O,CAAD,CAAUgP,CAAV,CAAoB,CACzC,MAAKhP,EAAAiP,aAAL,CAEuC,EAFvC,CACSvO,CAAA,GAAAA,EAAOV,CAAAiP,aAAA,CAAqB,OAArB,CAAPvO,EAAwC,EAAxCA,EAA8C,GAA9CA,SAAA,CAA2D,SAA3D,CAAsE,GAAtE,CAAAzD,QAAA,CACI,GADJ,CACU+R,CADV,CACqB,GADrB,CADT,CAAkC,CAAA,CADO,CAM3CE,QAASA,GAAiB,CAAClP,CAAD,CAAUmP,CAAV,CAAsB,CAC1CA,CAAJ,EAAkBnP,CAAAoP,aAAlB,EACE/V,CAAA,CAAQ8V,CAAAnO,MAAA,CAAiB,GAAjB,CAAR,CAA+B,QAAQ,CAACqO,CAAD,CAAW,CAChDrP,CAAAoP,aAAA,CAAqB,OAArB,CAA8BpD,EAAA,CACzBtL,CAAA,GAAAA,EAAOV,CAAAiP,aAAA,CAAqB,OAArB,CAAPvO,EAAwC,EAAxCA,EAA8C,GAA9CA,SAAA,CACQ,SADR,CACmB,GADnB,CAAAA,QAAA,CAEQ,GAFR,CAEcsL,EAAA,CAAKqD,CAAL,CAFd,CAE+B,GAF/B,CAEoC,GAFpC,CADyB,CAA9B,CADgD,CAAlD,CAF4C,CAYhDC,QAASA,GAAc,CAACtP,CAAD,CAAUmP,CAAV,CAAsB,CAC3C,GAAIA,CAAJ,EAAkBnP,CAAAoP,aAAlB,CAAwC,CACtC,IAAIG,EAAmB7O,CAAA,GAAAA,EAAOV,CAAAiP,aAAA,CAAqB,OAArB,CAAPvO,EAAwC,EAAxCA,EAA8C,GAA9CA,SAAA,CACU,SADV,CACqB,GADrB,CAGvBrH,EAAA,CAAQ8V,CAAAnO,MAAA,CAAiB,GAAjB,CAAR,CAA+B,QAAQ,CAACqO,CAAD,CAAW,CA
ChDA,CAAA,CAAWrD,EAAA,CAAKqD,CAAL,CAC4C,GAAvD,GAAIE,CAAAtS,QAAA,CAAwB,GAAxB,CAA8BoS,CAA9B,CAAyC,GAAzC,CAAJ;CACEE,CADF,EACqBF,CADrB,CACgC,GADhC,CAFgD,CAAlD,CAOArP,EAAAoP,aAAA,CAAqB,OAArB,CAA8BpD,EAAA,CAAKuD,CAAL,CAA9B,CAXsC,CADG,CAgB7C/B,QAASA,GAAc,CAACgC,CAAD,CAAO9N,CAAP,CAAiB,CACtC,GAAIA,CAAJ,CAAc,CACZA,CAAA,CAAaA,CAAAhF,SACF,EADuB,CAAAX,CAAA,CAAU2F,CAAAzI,OAAV,CACvB,EADsDD,EAAA,CAAS0I,CAAT,CACtD,CACP,CAAEA,CAAF,CADO,CAAPA,CAEJ,KAAI,IAAIzH,EAAE,CAAV,CAAaA,CAAb,CAAiByH,CAAAzI,OAAjB,CAAkCgB,CAAA,EAAlC,CACEuV,CAAA1V,KAAA,CAAU4H,CAAA,CAASzH,CAAT,CAAV,CALU,CADwB,CAWxCwV,QAASA,GAAgB,CAACzP,CAAD,CAAU+B,CAAV,CAAgB,CACvC,MAAO2N,GAAA,CAAoB1P,CAApB,CAA6B,GAA7B,EAAoC+B,CAApC,EAA4C,cAA5C,EAA+D,YAA/D,CADgC,CAIzC2N,QAASA,GAAmB,CAAC1P,CAAD,CAAU+B,CAAV,CAAgB3H,CAAhB,CAAuB,CACjD4F,CAAA,CAAUC,CAAA,CAAOD,CAAP,CAIgB,EAA1B,EAAGA,CAAA,CAAQ,CAAR,CAAA9G,SAAH,GACE8G,CADF,CACYA,CAAAnD,KAAA,CAAa,MAAb,CADZ,CAKA,KAFIgF,CAEJ,CAFYzI,CAAA,CAAQ2I,CAAR,CAAA,CAAgBA,CAAhB,CAAuB,CAACA,CAAD,CAEnC,CAAO/B,CAAA/G,OAAP,CAAA,CAAuB,CAErB,IADA,IAAIwD,EAAOuD,CAAA,CAAQ,CAAR,CAAX,CACS/F,EAAI,CADb,CACgB0V,EAAK9N,CAAA5I,OAArB,CAAmCgB,CAAnC,CAAuC0V,CAAvC,CAA2C1V,CAAA,EAA3C,CACE,IAAKG,CAAL,CAAa4F,CAAAgD,KAAA,CAAanB,CAAA,CAAM5H,CAAN,CAAb,CAAb,IAAyCrB,CAAzC,CAAoD,MAAOwB,EAM7D4F,EAAA,CAAUC,CAAA,CAAOxD,CAAAmT,WAAP,EAA6C,EAA7C,GAA2BnT,CAAAvD,SAA3B,EAAmDuD,CAAAoT,KAAnD,CATW,CAV0B,CAuBnDC,QAASA,GAAW,CAAC9P,CAAD,CAAU,CAC5B,IAD4B,IACnB/F,EAAI,CADe,CACZoT,EAAarN,CAAAqN,WAA7B,CAAiDpT,CAAjD,CAAqDoT,CAAApU,OAArD,CAAwEgB,CAAA,EAAxE,CACE0T,EAAA,CAAaN,CAAA,CAAWpT,CAAX,CAAb,CAEF,KAAA,CAAO+F,CAAAiN,WAAP,CAAA,CACEjN,CAAAgN,YAAA,CAAoBhN,CAAAiN,WAApB,CAL0B,CAp7ES;AAm/EvC8C,QAASA,GAAkB,CAAC/P,CAAD,CAAU+B,CAAV,CAAgB,CAEzC,IAAIiO,EAAcC,EAAA,CAAalO,CAAA6B,YAAA,EAAb,CAGlB,OAAOoM,EAAP,EAAsBE,EAAA,CAAiBlQ,CAAAtD,SAAjB,CAAtB,EAA4DsT,CALnB,CAgM3CG,QAASA,GAAkB,CAACnQ,CAAD,CAAUgO,CAAV,CAAkB,CAC3C,IAAIG,EAAeA,QAAS,CAACiC,CAAD,CAAQtC,CAAR,CAAc,CACnCsC,CAAAC,eAAL,GACED,CAAAC,eADF,CACyBC,QAAQ,EAAG,CAChCF,CAAAG,YAAA,CAAoB,CAAA,CADY,CADpC,CAMKH,EAAAI,gBAAL,GACEJ,CAAAI,gBADF,CAC0BC,QAAQ,EAAG,CACjCL,CAAAM,aAAA,CAAqB,CAAA,CADY,CADrC,CAMKN,EAAAO,OAAL,GACEP,CAAAO,OADF,CACiBP,CAAAQ,WADjB,EACqCjY,CADrC,CAIA,IAAImD,CAAA,CAAYsU,CAAAS,iBAAZ,CAAJ,CAAyC,CACvC,IAAIC,EAAUV,CAAAC,eACdD,EAAAC,eAAA,CAAuBC,QAAQ,EAAG,CAChCF,CAAAS,iBAAA,CAAyB,CAAA,CACzBC,EAAAnX,KAAA,CAAayW,CAAb,CAFgC,CAIlCA,EAAAS,iBAAA,CAAyB,CAAA,CANc,CASzCT,CAAAW,mBAAA,CAA2BC,QAAQ,EAAG,CACpC,MAAOZ,EAAAS,iBAAP,EAAuD,CAAA,CAAvD,GAAiCT,CAAAG,YADG,CAKtC,KAAIU,EAAoBnT,EAAA,CAAYkQ,CAAA,CAAOF,CAAP;AAAesC,CAAAtC,KAAf,CAAZ,EAA0C,EAA1C,CAExBzU,EAAA,CAAQ4X,CAAR,CAA2B,QAAQ,CAACpS,CAAD,CAAK,CACtCA,CAAAlF,KAAA,CAAQqG,CAAR,CAAiBoQ,CAAjB,CADsC,CAAxC,CAMY,EAAZ,EAAIc,CAAJ,EAEEd,CAAAC,eAEA,CAFuB,IAEvB,CADAD,CAAAI,gBACA,CADwB,IACxB,CAAAJ,CAAAW,mBAAA,CAA2B,IAJ7B,GAOE,OAAOX,CAAAC,eAEP,CADA,OAAOD,CAAAI,gBACP,CAAA,OAAOJ,CAAAW,mBATT,CAvCwC,CAmD1C5C,EAAAgD,KAAA,CAAoBnR,CACpB,OAAOmO,EArDoC,CA+S7CiD,QAASA,GAAO,CAACrY,CAAD,CAAM,CAAA,IAChBsY,EAAU,MAAOtY,EADD,CAEhBS,CAEW,SAAf,EAAI6X,CAAJ,EAAmC,IAAnC,GAA2BtY,CAA3B,CACsC,UAApC,EAAI,OAAQS,CAAR,CAAcT,CAAAiC,UAAd,CAAJ,CAEExB,CAFF,CAEQT,CAAAiC,UAAA,EAFR,CAGWxB,CAHX,GAGmBZ,CAHnB,GAIEY,CAJF,CAIQT,CAAAiC,UAJR,CAIwBX,EAAA,EAJxB,CADF,CAQEb,CARF,CAQQT,CAGR,OAAOsY,EAAP,CAAiB,GAAjB,CAAuB7X,CAfH,CAqBtB8X,QAASA,GAAO,CAACpU,CAAD,CAAO,CACrB7D,CAAA,CAAQ6D,CAAR,CAAe,IAAAqU,IAAf,CAAyB,IAAzB,CADqB,CAkGvBC,QAASA,GAAQ,CAAC3S,CAAD,CAAK,CAAA,IAChB4S,CADgB,CAEhBC,CAIa,WAAjB,EAAI,MAAO7S,EAAX,EACQ4S,CADR,CACkB5S,CAAA4S,QADlB,IAEIA,CAUA,CAVU,EAUV,CATI5S,CAAA5F,OASJ,GAREyY,CAEA,CAFS7S,CAAA1C,SAAA,EAAAuE,QAAA,CAAsBiR,EAAtB;AAAsC,EAAtC,CAET,CA
DAC,CACA,CADUF,CAAAjR,MAAA,CAAaoR,EAAb,CACV,CAAAxY,CAAA,CAAQuY,CAAA,CAAQ,CAAR,CAAA5Q,MAAA,CAAiB8Q,EAAjB,CAAR,CAAwC,QAAQ,CAAChO,CAAD,CAAK,CACnDA,CAAApD,QAAA,CAAYqR,EAAZ,CAAoB,QAAQ,CAACC,CAAD,CAAMC,CAAN,CAAkBlQ,CAAlB,CAAuB,CACjD0P,CAAA3X,KAAA,CAAaiI,CAAb,CADiD,CAAnD,CADmD,CAArD,CAMF,EAAAlD,CAAA4S,QAAA,CAAaA,CAZjB,EAcWrY,CAAA,CAAQyF,CAAR,CAAJ,EACLqT,CAEA,CAFOrT,CAAA5F,OAEP,CAFmB,CAEnB,CADA+K,EAAA,CAAYnF,CAAA,CAAGqT,CAAH,CAAZ,CAAsB,IAAtB,CACA,CAAAT,CAAA,CAAU5S,CAAAE,MAAA,CAAS,CAAT,CAAYmT,CAAZ,CAHL,EAKLlO,EAAA,CAAYnF,CAAZ,CAAgB,IAAhB,CAAsB,CAAA,CAAtB,CAEF,OAAO4S,EA3Ba,CAygBtB/O,QAASA,GAAc,CAACyP,CAAD,CAAgB,CAmCrCC,QAASA,EAAa,CAACC,CAAD,CAAW,CAC/B,MAAO,SAAQ,CAAC7Y,CAAD,CAAMY,CAAN,CAAa,CAC1B,GAAI4B,CAAA,CAASxC,CAAT,CAAJ,CACEH,CAAA,CAAQG,CAAR,CAAaU,EAAA,CAAcmY,CAAd,CAAb,CADF,KAGE,OAAOA,EAAA,CAAS7Y,CAAT,CAAcY,CAAd,CAJiB,CADG,CAUjCiL,QAASA,EAAQ,CAACtD,CAAD,CAAOuQ,CAAP,CAAkB,CACjCnO,EAAA,CAAwBpC,CAAxB,CAA8B,SAA9B,CACA,IAAItI,CAAA,CAAW6Y,CAAX,CAAJ,EAA6BlZ,CAAA,CAAQkZ,CAAR,CAA7B,CACEA,CAAA,CAAYC,CAAAC,YAAA,CAA6BF,CAA7B,CAEd,IAAI,CAACA,CAAAG,KAAL,CACE,KAAM1N,GAAA,CAAgB,MAAhB,CAA2EhD,CAA3E,CAAN,CAEF,MAAO2Q,EAAA,CAAc3Q,CAAd,CAAqB4Q,CAArB,CAAP,CAA8CL,CARb,CAWnCrN,QAASA,EAAO,CAAClD,CAAD,CAAO6Q,CAAP,CAAkB,CAAE,MAAOvN,EAAA,CAAStD,CAAT,CAAe,MAAQ6Q,CAAR,CAAf,CAAT,CA6BlCC,QAASA,EAAW,CAACV,CAAD,CAAe,CAAA,IAC7BzM,EAAY,EADiB,CACboN,CADa,CACHtN,CADG,CACUvL,CADV,CACa0V,CAC9CtW,EAAA,CAAQ8Y,CAAR,CAAuB,QAAQ,CAACvQ,CAAD,CAAS,CACtC,GAAI,CAAAmR,CAAAC,IAAA,CAAkBpR,CAAlB,CAAJ,CAAA,CACAmR,CAAAxB,IAAA,CAAkB3P,CAAlB,CAA0B,CAAA,CAA1B,CAEA,IAAI,CACF,GAAIzI,CAAA,CAASyI,CAAT,CAAJ,CAIE,IAHAkR,CAGgD;AAHrC7M,EAAA,CAAcrE,CAAd,CAGqC,CAFhD8D,CAEgD,CAFpCA,CAAAxG,OAAA,CAAiB2T,CAAA,CAAYC,CAAA5N,SAAZ,CAAjB,CAAAhG,OAAA,CAAwD4T,CAAAG,WAAxD,CAEoC,CAA5CzN,CAA4C,CAA9BsN,CAAAI,aAA8B,CAAPjZ,CAAO,CAAH,CAAG,CAAA0V,CAAA,CAAKnK,CAAAvM,OAArD,CAAyEgB,CAAzE,CAA6E0V,CAA7E,CAAiF1V,CAAA,EAAjF,CAAsF,CAAA,IAChFkZ,EAAa3N,CAAA,CAAYvL,CAAZ,CADmE,CAEhFoL,EAAWkN,CAAAS,IAAA,CAAqBG,CAAA,CAAW,CAAX,CAArB,CAEf9N,EAAA,CAAS8N,CAAA,CAAW,CAAX,CAAT,CAAAlU,MAAA,CAA8BoG,CAA9B,CAAwC8N,CAAA,CAAW,CAAX,CAAxC,CAJoF,CAJxF,IAUW1Z,EAAA,CAAWmI,CAAX,CAAJ,CACH8D,CAAA5L,KAAA,CAAeyY,CAAA5P,OAAA,CAAwBf,CAAxB,CAAf,CADG,CAEIxI,CAAA,CAAQwI,CAAR,CAAJ,CACH8D,CAAA5L,KAAA,CAAeyY,CAAA5P,OAAA,CAAwBf,CAAxB,CAAf,CADG,CAGLoC,EAAA,CAAYpC,CAAZ,CAAoB,QAApB,CAhBA,CAkBF,MAAOxB,CAAP,CAAU,CAYV,KAXIhH,EAAA,CAAQwI,CAAR,CAWE,GAVJA,CAUI,CAVKA,CAAA,CAAOA,CAAA3I,OAAP,CAAuB,CAAvB,CAUL,EARFmH,CAAAgT,QAQE,GARWhT,CAAAiT,MAQX,EARqD,EAQrD,EARsBjT,CAAAiT,MAAApW,QAAA,CAAgBmD,CAAAgT,QAAhB,CAQtB,IAFJhT,CAEI,CAFAA,CAAAgT,QAEA,CAFY,IAEZ,CAFmBhT,CAAAiT,MAEnB,EAAAtO,EAAA,CAAgB,UAAhB,CACInD,CADJ,CACYxB,CAAAiT,MADZ,EACuBjT,CAAAgT,QADvB,EACoChT,CADpC,CAAN,CAZU,CArBZ,CADsC,CAAxC,CAsCA,OAAOsF,EAxC0B,CA+CnC4N,QAASA,EAAsB,CAACC,CAAD,CAAQtO,CAAR,CAAiB,CAE9CuO,QAASA,EAAU,CAACC,CAAD,CAAc,CAC/B,GAAIF,CAAA7Z,eAAA,CAAqB+Z,CAArB,CAAJ,CAAuC,CACrC,GAAIF,CAAA,CAAME,CAAN,CAAJ,GAA2BC,CAA3B,CACE,KAAM3O,GAAA,CAAgB,MAAhB,CAA0DV,CAAA3J,KAAA,CAAU,MAAV,CAA1D,CAAN,CAEF,MAAO6Y,EAAA,CAAME,CAAN,CAJ8B,CAMrC,GAAI,CAGF,MAFApP,EAAAxJ,QAAA,CAAa4Y,CAAb,CAEO;AADPF,CAAA,CAAME,CAAN,CACO,CADcC,CACd,CAAAH,CAAA,CAAME,CAAN,CAAA,CAAqBxO,CAAA,CAAQwO,CAAR,CAH1B,CAIF,MAAOE,CAAP,CAAY,CAIZ,KAHIJ,EAAA,CAAME,CAAN,CAGEE,GAHqBD,CAGrBC,EAFJ,OAAOJ,CAAA,CAAME,CAAN,CAEHE,CAAAA,CAAN,CAJY,CAJd,OASU,CACRtP,CAAAoH,MAAA,EADQ,CAhBmB,CAsBjC9I,QAASA,EAAM,CAAC9D,CAAD,CAAKD,CAAL,CAAWgV,CAAX,CAAkB,CAAA,IAC3BC,EAAO,EADoB,CAE3BpC,EAAUD,EAAA,CAAS3S,CAAT,CAFiB,CAG3B5F,CAH2B,CAGnBgB,CAHmB,CAI3BT,CAEAS,EAAA,CAAI,CAAR,KAAWhB,CAAX,CAAoBwY,CAAAxY,OAApB,CAAoCgB,CAApC,CAAwChB,CAAxC,CAAgDgB,CAAA,EAAhD,CAAqD
,CACnDT,CAAA,CAAMiY,CAAA,CAAQxX,CAAR,CACN,IAAmB,QAAnB,GAAI,MAAOT,EAAX,CACE,KAAMuL,GAAA,CAAgB,MAAhB,CACyEvL,CADzE,CAAN,CAGFqa,CAAA/Z,KAAA,CACE8Z,CACA,EADUA,CAAAla,eAAA,CAAsBF,CAAtB,CACV,CAAEoa,CAAA,CAAOpa,CAAP,CAAF,CACEga,CAAA,CAAWha,CAAX,CAHJ,CANmD,CAYhDqF,CAAA4S,QAAL,GAEE5S,CAFF,CAEOA,CAAA,CAAG5F,CAAH,CAFP,CAOA,OAAO4F,EAAAI,MAAA,CAASL,CAAT,CAAeiV,CAAf,CAzBwB,CAyCjC,MAAO,QACGlR,CADH,aAbP6P,QAAoB,CAACsB,CAAD,CAAOF,CAAP,CAAe,CAAA,IAC7BG,EAAcA,QAAQ,EAAG,EADI,CAEnBC,CAIdD,EAAAE,UAAA,CAAyBA,CAAA7a,CAAA,CAAQ0a,CAAR,CAAA,CAAgBA,CAAA,CAAKA,CAAA7a,OAAL,CAAmB,CAAnB,CAAhB,CAAwC6a,CAAxCG,WACzBC,EAAA,CAAW,IAAIH,CACfC,EAAA,CAAgBrR,CAAA,CAAOmR,CAAP,CAAaI,CAAb,CAAuBN,CAAvB,CAEhB,OAAO5X,EAAA,CAASgY,CAAT,CAAA,EAA2Bva,CAAA,CAAWua,CAAX,CAA3B,CAAuDA,CAAvD,CAAuEE,CAV7C,CAa5B,KAGAV,CAHA,UAIKhC,EAJL,KAKA2C,QAAQ,CAACpS,CAAD,CAAO,CAClB,MAAO2Q,EAAAhZ,eAAA,CAA6BqI,CAA7B,CAAoC4Q,CAApC,CAAP,EAA8DY,CAAA7Z,eAAA,CAAqBqI,CAArB,CAD5C,CALf,CAjEuC,CApIX;AAAA,IACjC2R,EAAgB,EADiB,CAEjCf,EAAiB,UAFgB,CAGjCtO,EAAO,EAH0B,CAIjC0O,EAAgB,IAAIzB,EAJa,CAKjCoB,EAAgB,UACJ,UACIN,CAAA,CAAc/M,CAAd,CADJ,SAEG+M,CAAA,CAAcnN,CAAd,CAFH,SAGGmN,CAAA,CAiDnBgC,QAAgB,CAACrS,CAAD,CAAOmC,CAAP,CAAoB,CAClC,MAAOe,EAAA,CAAQlD,CAAR,CAAc,CAAC,WAAD,CAAc,QAAQ,CAACsS,CAAD,CAAY,CACrD,MAAOA,EAAA7B,YAAA,CAAsBtO,CAAtB,CAD8C,CAAlC,CAAd,CAD2B,CAjDjB,CAHH,OAICkO,CAAA,CAsDjBhY,QAAc,CAAC2H,CAAD,CAAO3C,CAAP,CAAY,CAAE,MAAO6F,EAAA,CAAQlD,CAAR,CAAclG,EAAA,CAAQuD,CAAR,CAAd,CAAT,CAtDT,CAJD,UAKIgT,CAAA,CAuDpBkC,QAAiB,CAACvS,CAAD,CAAO3H,CAAP,CAAc,CAC7B+J,EAAA,CAAwBpC,CAAxB,CAA8B,UAA9B,CACA2Q,EAAA,CAAc3Q,CAAd,CAAA,CAAsB3H,CACtBma,EAAA,CAAcxS,CAAd,CAAA,CAAsB3H,CAHO,CAvDX,CALJ,WAkEhBoa,QAAkB,CAACf,CAAD,CAAcgB,CAAd,CAAuB,CAAA,IACnCC,EAAenC,CAAAS,IAAA,CAAqBS,CAArB,CAAmCd,CAAnC,CADoB,CAEnCgC,EAAWD,CAAAjC,KAEfiC,EAAAjC,KAAA,CAAoBmC,QAAQ,EAAG,CAC7B,IAAIC,EAAeC,CAAAnS,OAAA,CAAwBgS,CAAxB,CAAkCD,CAAlC,CACnB,OAAOI,EAAAnS,OAAA,CAAwB8R,CAAxB,CAAiC,IAAjC,CAAuC,WAAYI,CAAZ,CAAvC,CAFsB,CAJQ,CAlEzB,CADI,CALiB,CAejCtC,EAAoBG,CAAA2B,UAApB9B,CACIe,CAAA,CAAuBZ,CAAvB,CAAsC,QAAQ,EAAG,CAC/C,KAAM3N,GAAA,CAAgB,MAAhB,CAAiDV,CAAA3J,KAAA,CAAU,MAAV,CAAjD,CAAN,CAD+C,CAAjD,CAhB6B,CAmBjC6Z,EAAgB,EAnBiB,CAoBjCO,EAAoBP,CAAAF,UAApBS,CACIxB,CAAA,CAAuBiB,CAAvB,CAAsC,QAAQ,CAACQ,CAAD,CAAc,CACtD1P,CAAAA,CAAWkN,CAAAS,IAAA,CAAqB+B,CAArB;AAAmCpC,CAAnC,CACf,OAAOmC,EAAAnS,OAAA,CAAwB0C,CAAAoN,KAAxB,CAAuCpN,CAAvC,CAFmD,CAA5D,CAMRhM,EAAA,CAAQwZ,CAAA,CAAYV,CAAZ,CAAR,CAAoC,QAAQ,CAACtT,CAAD,CAAK,CAAEiW,CAAAnS,OAAA,CAAwB9D,CAAxB,EAA8BnD,CAA9B,CAAF,CAAjD,CAEA,OAAOoZ,EA7B8B,CAkQvCjM,QAASA,GAAqB,EAAG,CAE/B,IAAImM,EAAuB,CAAA,CAE3B,KAAAC,qBAAA,CAA4BC,QAAQ,EAAG,CACrCF,CAAA,CAAuB,CAAA,CADc,CAIvC,KAAAvC,KAAA,CAAY,CAAC,SAAD,CAAY,WAAZ,CAAyB,YAAzB,CAAuC,QAAQ,CAAC0C,CAAD,CAAUC,CAAV,CAAqBC,CAArB,CAAiC,CAO1FC,QAASA,EAAc,CAACtY,CAAD,CAAO,CAC5B,IAAIuY,EAAS,IACblc,EAAA,CAAQ2D,CAAR,CAAc,QAAQ,CAACgD,CAAD,CAAU,CACzBuV,CAAL,EAA+C,GAA/C,GAAezV,CAAA,CAAUE,CAAAtD,SAAV,CAAf,GAAoD6Y,CAApD,CAA6DvV,CAA7D,CAD8B,CAAhC,CAGA,OAAOuV,EALqB,CAQ9BC,QAASA,EAAM,EAAG,CAAA,IACZC,EAAOL,CAAAK,KAAA,EADK,CACaC,CAGxBD,EAAL,CAGK,CAAKC,CAAL,CAAW/c,CAAAqJ,eAAA,CAAwByT,CAAxB,CAAX,EAA2CC,CAAAC,eAAA,EAA3C,CAGA,CAAKD,CAAL,CAAWJ,CAAA,CAAe3c,CAAAid,kBAAA,CAA2BH,CAA3B,CAAf,CAAX,EAA8DC,CAAAC,eAAA,EAA9D,CAGa,KAHb,GAGIF,CAHJ,EAGoBN,CAAAU,SAAA,CAAiB,CAAjB,CAAoB,CAApB,CATzB,CAAWV,CAAAU,SAAA,CAAiB,CAAjB,CAAoB,CAApB,CAJK,CAdlB,IAAIld,EAAWwc,CAAAxc,SAgCXqc,EAAJ,EACEK,CAAA5X,OAAA,CAAkBqY,QAAwB,EAAG,CAAC,MAAOV,EAAAK,KAAA,EAAR,CAA7C;AACEM,QAA8B,EAAG,CAC/BV,CAAA7X,WAAA,CAAsBgY,CAAtB,CAD+B,CADnC,CAMF,OAAOA,EAxCmF,CAAhF,CARmB,CA0SjCnL,QAASA,GAAuB,EAAE,CAChC,IAAAoI,KAAA,CAAY,CAAC,OAAD,CAAU,UAAV,CAAsB,Q
AAQ,CAACuD,CAAD,CAAQC,CAAR,CAAkB,CAC1D,MAAOD,EAAAE,UACA,CAAH,QAAQ,CAACrX,CAAD,CAAK,CAAE,MAAOmX,EAAA,CAAMnX,CAAN,CAAT,CAAV,CACH,QAAQ,CAACA,CAAD,CAAK,CACb,MAAOoX,EAAA,CAASpX,CAAT,CAAa,CAAb,CAAgB,CAAA,CAAhB,CADM,CAHyC,CAAhD,CADoB,CAgClCsX,QAASA,GAAO,CAACzd,CAAD,CAASC,CAAT,CAAmByd,CAAnB,CAAyBC,CAAzB,CAAmC,CAsBjDC,QAASA,EAA0B,CAACzX,CAAD,CAAK,CACtC,GAAI,CACFA,CAAAI,MAAA,CAAS,IAAT,CArvGGF,EAAApF,KAAA,CAqvGsBwB,SArvGtB,CAqvGiC6D,CArvGjC,CAqvGH,CADE,CAAJ,OAEU,CAER,GADAuX,CAAA,EACI,CAA4B,CAA5B,GAAAA,CAAJ,CACE,IAAA,CAAMC,CAAAvd,OAAN,CAAA,CACE,GAAI,CACFud,CAAAC,IAAA,EAAA,EADE,CAEF,MAAOrW,CAAP,CAAU,CACVgW,CAAAM,MAAA,CAAWtW,CAAX,CADU,CANR,CAH4B,CAmExCuW,QAASA,EAAW,CAACC,CAAD,CAAWC,CAAX,CAAuB,CACxCC,SAASA,EAAK,EAAG,CAChBzd,CAAA,CAAQ0d,CAAR,CAAiB,QAAQ,CAACC,CAAD,CAAQ,CAAEA,CAAA,EAAF,CAAjC,CACAC,EAAA,CAAcJ,CAAA,CAAWC,CAAX,CAAkBF,CAAlB,CAFE,CAAjBE,CAAA,EADwC,CAuE3CI,QAASA,EAAa,EAAG,CACvBC,CAAA,CAAc,IACVC,EAAJ,EAAsBxY,CAAAyY,IAAA,EAAtB,GAEAD,CACA,CADiBxY,CAAAyY,IAAA,EACjB,CAAAhe,CAAA,CAAQie,EAAR,CAA4B,QAAQ,CAACC,CAAD,CAAW,CAC7CA,CAAA,CAAS3Y,CAAAyY,IAAA,EAAT,CAD6C,CAA/C,CAHA,CAFuB,CAhKwB,IAC7CzY,EAAO,IADsC,CAE7C4Y,EAAc7e,CAAA,CAAS,CAAT,CAF+B,CAG7C0D,EAAW3D,CAAA2D,SAHkC,CAI7Cob,EAAU/e,CAAA+e,QAJmC;AAK7CZ,EAAane,CAAAme,WALgC,CAM7Ca,EAAehf,CAAAgf,aAN8B,CAO7CC,EAAkB,EAEtB/Y,EAAAgZ,OAAA,CAAc,CAAA,CAEd,KAAIrB,EAA0B,CAA9B,CACIC,EAA8B,EAGlC5X,EAAAiZ,6BAAA,CAAoCvB,CACpC1X,EAAAkZ,6BAAA,CAAoCC,QAAQ,EAAG,CAAExB,CAAA,EAAF,CA6B/C3X,EAAAoZ,gCAAA,CAAuCC,QAAQ,CAACC,CAAD,CAAW,CAIxD7e,CAAA,CAAQ0d,CAAR,CAAiB,QAAQ,CAACC,CAAD,CAAQ,CAAEA,CAAA,EAAF,CAAjC,CAEgC,EAAhC,GAAIT,CAAJ,CACE2B,CAAA,EADF,CAGE1B,CAAA1c,KAAA,CAAiCoe,CAAjC,CATsD,CA7CT,KA6D7CnB,EAAU,EA7DmC,CA8D7CE,CAaJrY,EAAAuZ,UAAA,CAAiBC,QAAQ,CAACvZ,CAAD,CAAK,CACxB/C,CAAA,CAAYmb,CAAZ,CAAJ,EAA8BN,CAAA,CAAY,GAAZ,CAAiBE,CAAjB,CAC9BE,EAAAjd,KAAA,CAAa+E,CAAb,CACA,OAAOA,EAHqB,CA3EmB,KAoG7CuY,EAAiB/a,CAAAgc,KApG4B,CAqG7CC,EAAc3f,CAAAkE,KAAA,CAAc,MAAd,CArG+B,CAsG7Csa,EAAc,IAqBlBvY,EAAAyY,IAAA,CAAWkB,QAAQ,CAAClB,CAAD,CAAM3W,CAAN,CAAe,CAE5BrE,CAAJ,GAAiB3D,CAAA2D,SAAjB,GAAkCA,CAAlC,CAA6C3D,CAAA2D,SAA7C,CACIob,EAAJ,GAAgB/e,CAAA+e,QAAhB,GAAgCA,CAAhC,CAA0C/e,CAAA+e,QAA1C,CAGA,IAAIJ,CAAJ,CACE,IAAID,CAAJ,EAAsBC,CAAtB,CAiBA,MAhBAD,EAgBOxY,CAhBUyY,CAgBVzY,CAfHyX,CAAAoB,QAAJ,CACM/W,CAAJ,CAAa+W,CAAAe,aAAA,CAAqB,IAArB,CAA2B,EAA3B,CAA+BnB,CAA/B,CAAb,EAEEI,CAAAgB,UAAA,CAAkB,IAAlB,CAAwB,EAAxB;AAA4BpB,CAA5B,CAEA,CAAAiB,CAAA1b,KAAA,CAAiB,MAAjB,CAAyB0b,CAAA1b,KAAA,CAAiB,MAAjB,CAAzB,CAJF,CADF,EAQEua,CACA,CADcE,CACd,CAAI3W,CAAJ,CACErE,CAAAqE,QAAA,CAAiB2W,CAAjB,CADF,CAGEhb,CAAAgc,KAHF,CAGkBhB,CAZpB,CAeOzY,CAAAA,CAjBP,CADF,IAwBE,OAAOuY,EAAP,EAAsB9a,CAAAgc,KAAA3X,QAAA,CAAsB,MAAtB,CAA6B,GAA7B,CA9BQ,CA3He,KA6J7C4W,GAAqB,EA7JwB,CA8J7CoB,EAAgB,CAAA,CAiCpB9Z,EAAA+Z,YAAA,CAAmBC,QAAQ,CAACV,CAAD,CAAW,CAEpC,GAAI,CAACQ,CAAL,CAAoB,CAMlB,GAAIrC,CAAAoB,QAAJ,CAAsBxX,CAAA,CAAOvH,CAAP,CAAAmgB,GAAA,CAAkB,UAAlB,CAA8B3B,CAA9B,CAEtB,IAAIb,CAAAyC,WAAJ,CAAyB7Y,CAAA,CAAOvH,CAAP,CAAAmgB,GAAA,CAAkB,YAAlB,CAAgC3B,CAAhC,CAAzB,KAEKtY,EAAAuZ,UAAA,CAAejB,CAAf,CAELwB,EAAA,CAAgB,CAAA,CAZE,CAepBpB,EAAAxd,KAAA,CAAwBoe,CAAxB,CACA,OAAOA,EAlB6B,CAkCtCtZ,EAAAma,SAAA,CAAgBC,QAAQ,EAAG,CACzB,IAAIX,EAAOC,CAAA1b,KAAA,CAAiB,MAAjB,CACX,OAAOyb,EAAA,CAAOA,CAAA3X,QAAA,CAAa,wBAAb,CAAuC,EAAvC,CAAP,CAAoD,EAFlC,CAQ3B,KAAIuY,EAAc,EAAlB,CACIC,GAAmB,EADvB,CAEIC,EAAava,CAAAma,SAAA,EAsBjBna,EAAAwa,QAAA,CAAeC,QAAQ,CAACtX,CAAD,CAAO3H,CAAP,CAAc,CAAA,IAE/Bkf,CAF+B,CAEJC,CAFI,CAEItf,CAFJ,CAEOK,CAE1C,IAAIyH,CAAJ,CACM3H,CAAJ,GAAcxB,CAAd,CACE4e,CAAA+B,OADF,CACuBC,MAAA,CAAOzX,CAAP,CADvB,CACsC,SADtC,CACkDoX,CADlD,CAE0B,wCAF1B;AAIMhgB,CAAA,CAASiB,CAAT,CAJN,GAKIkf,CAOA,CAPgBrgB
,CAAAue,CAAA+B,OAAAtgB,CAAqBugB,MAAA,CAAOzX,CAAP,CAArB9I,CAAoC,GAApCA,CAA0CugB,MAAA,CAAOpf,CAAP,CAA1CnB,CACM,QADNA,CACiBkgB,CADjBlgB,QAOhB,CANsD,CAMtD,CAAmB,IAAnB,CAAIqgB,CAAJ,EACElD,CAAAqD,KAAA,CAAU,UAAV,CAAsB1X,CAAtB,CACE,6DADF,CAEEuX,CAFF,CAEiB,iBAFjB,CAbN,CADF,KAoBO,CACL,GAAI9B,CAAA+B,OAAJ,GAA2BL,EAA3B,CAKE,IAJAA,EAIK,CAJc1B,CAAA+B,OAId,CAHLG,CAGK,CAHSR,EAAAlY,MAAA,CAAuB,IAAvB,CAGT,CAFLiY,CAEK,CAFS,EAET,CAAAhf,CAAA,CAAI,CAAT,CAAYA,CAAZ,CAAgByf,CAAAzgB,OAAhB,CAAoCgB,CAAA,EAApC,CACEsf,CAEA,CAFSG,CAAA,CAAYzf,CAAZ,CAET,CADAK,CACA,CADQif,CAAAtc,QAAA,CAAe,GAAf,CACR,CAAY,CAAZ,CAAI3C,CAAJ,GACEyH,CAIA,CAJO4X,QAAA,CAASJ,CAAAK,UAAA,CAAiB,CAAjB,CAAoBtf,CAApB,CAAT,CAIP,CAAI2e,CAAA,CAAYlX,CAAZ,CAAJ,GAA0BnJ,CAA1B,GACEqgB,CAAA,CAAYlX,CAAZ,CADF,CACsB4X,QAAA,CAASJ,CAAAK,UAAA,CAAiBtf,CAAjB,CAAyB,CAAzB,CAAT,CADtB,CALF,CAWJ,OAAO2e,EApBF,CAxB4B,CA+DrCra,EAAAib,MAAA,CAAaC,QAAQ,CAACjb,CAAD,CAAKkb,CAAL,CAAY,CAC/B,IAAIC,CACJzD,EAAA,EACAyD,EAAA,CAAYnD,CAAA,CAAW,QAAQ,EAAG,CAChC,OAAOc,CAAA,CAAgBqC,CAAhB,CACP1D,EAAA,CAA2BzX,CAA3B,CAFgC,CAAtB,CAGTkb,CAHS,EAGA,CAHA,CAIZpC,EAAA,CAAgBqC,CAAhB,CAAA,CAA6B,CAAA,CAC7B,OAAOA,EARwB,CAsBjCpb,EAAAib,MAAAI,OAAA,CAAoBC,QAAQ,CAACC,CAAD,CAAU,CACpC,MAAIxC,EAAA,CAAgBwC,CAAhB,CAAJ,EACE,OAAOxC,CAAA,CAAgBwC,CAAhB,CAGA;AAFPzC,CAAA,CAAayC,CAAb,CAEO,CADP7D,CAAA,CAA2B5a,CAA3B,CACO,CAAA,CAAA,CAJT,EAMO,CAAA,CAP6B,CAtVW,CAkWnDqN,QAASA,GAAgB,EAAE,CACzB,IAAA0J,KAAA,CAAY,CAAC,SAAD,CAAY,MAAZ,CAAoB,UAApB,CAAgC,WAAhC,CACR,QAAQ,CAAE0C,CAAF,CAAaiB,CAAb,CAAqBC,CAArB,CAAiC+D,CAAjC,CAA2C,CACjD,MAAO,KAAIjE,EAAJ,CAAYhB,CAAZ,CAAqBiF,CAArB,CAAgChE,CAAhC,CAAsCC,CAAtC,CAD0C,CAD3C,CADa,CAsF3BrN,QAASA,GAAqB,EAAG,CAE/B,IAAAyJ,KAAA,CAAY4H,QAAQ,EAAG,CAGrBC,QAASA,EAAY,CAACC,CAAD,CAAUC,CAAV,CAAmB,CAwMtCC,QAASA,EAAO,CAACC,CAAD,CAAQ,CAClBA,CAAJ,EAAaC,CAAb,GACOC,CAAL,CAEWA,CAFX,EAEuBF,CAFvB,GAGEE,CAHF,CAGaF,CAAAG,EAHb,EACED,CADF,CACaF,CAQb,CAHAI,CAAA,CAAKJ,CAAAG,EAAL,CAAcH,CAAAK,EAAd,CAGA,CAFAD,CAAA,CAAKJ,CAAL,CAAYC,CAAZ,CAEA,CADAA,CACA,CADWD,CACX,CAAAC,CAAAE,EAAA,CAAa,IAVf,CADsB,CAmBxBC,QAASA,EAAI,CAACE,CAAD,CAAYC,CAAZ,CAAuB,CAC9BD,CAAJ,EAAiBC,CAAjB,GACMD,CACJ,GADeA,CAAAD,EACf,CAD6BE,CAC7B,EAAIA,CAAJ,GAAeA,CAAAJ,EAAf,CAA6BG,CAA7B,CAFF,CADkC,CA1NpC,GAAIT,CAAJ,GAAeW,EAAf,CACE,KAAMriB,EAAA,CAAO,eAAP,CAAA,CAAwB,KAAxB,CAAkE0hB,CAAlE,CAAN,CAFoC,IAKlCY,EAAO,CAL2B,CAMlCC,EAAQngB,CAAA,CAAO,EAAP,CAAWuf,CAAX,CAAoB,IAAKD,CAAL,CAApB,CAN0B,CAOlCvX,EAAO,EAP2B,CAQlCqY,EAAYb,CAAZa,EAAuBb,CAAAa,SAAvBA,EAA4CC,MAAAC,UARV,CASlCC,EAAU,EATwB,CAUlCb,EAAW,IAVuB,CAWlCC,EAAW,IAyCf,OAAOM,EAAA,CAAOX,CAAP,CAAP,CAAyB,KAoBlBhJ,QAAQ,CAAC/X,CAAD,CAAMY,CAAN,CAAa,CACxB,GAAIihB,CAAJ,CAAeC,MAAAC,UAAf,CAAiC,CAC/B,IAAIE,EAAWD,CAAA,CAAQhiB,CAAR,CAAXiiB,GAA4BD,CAAA,CAAQhiB,CAAR,CAA5BiiB,CAA2C,KAAMjiB,CAAN,CAA3CiiB,CAEJhB;CAAA,CAAQgB,CAAR,CAH+B,CAMjC,GAAI,CAAA3f,CAAA,CAAY1B,CAAZ,CAAJ,CAQA,MAPMZ,EAOCY,GAPM4I,EAON5I,EAPa+gB,CAAA,EAOb/gB,CANP4I,CAAA,CAAKxJ,CAAL,CAMOY,CANKA,CAMLA,CAJH+gB,CAIG/gB,CAJIihB,CAIJjhB,EAHL,IAAAshB,OAAA,CAAYd,CAAAphB,IAAZ,CAGKY,CAAAA,CAfiB,CApBH,KAiDlB4Y,QAAQ,CAACxZ,CAAD,CAAM,CACjB,GAAI6hB,CAAJ,CAAeC,MAAAC,UAAf,CAAiC,CAC/B,IAAIE,EAAWD,CAAA,CAAQhiB,CAAR,CAEf,IAAI,CAACiiB,CAAL,CAAe,MAEfhB,EAAA,CAAQgB,CAAR,CAL+B,CAQjC,MAAOzY,EAAA,CAAKxJ,CAAL,CATU,CAjDI,QAwEfkiB,QAAQ,CAACliB,CAAD,CAAM,CACpB,GAAI6hB,CAAJ,CAAeC,MAAAC,UAAf,CAAiC,CAC/B,IAAIE,EAAWD,CAAA,CAAQhiB,CAAR,CAEf,IAAI,CAACiiB,CAAL,CAAe,MAEXA,EAAJ,EAAgBd,CAAhB,GAA0BA,CAA1B,CAAqCc,CAAAV,EAArC,CACIU,EAAJ,EAAgBb,CAAhB,GAA0BA,CAA1B,CAAqCa,CAAAZ,EAArC,CACAC,EAAA,CAAKW,CAAAZ,EAAL,CAAgBY,CAAAV,EAAhB,CAEA,QAAOS,CAAA,CAAQhiB,CAAR,CATwB,CAYjC,OAAOwJ,CAAA,CAAKxJ,CAAL,CACP2hB,EAAA,EAdoB,CAxEC,WAkGZQ,QAA
Q,EAAG,CACpB3Y,CAAA,CAAO,EACPmY,EAAA,CAAO,CACPK,EAAA,CAAU,EACVb,EAAA,CAAWC,CAAX,CAAsB,IAJF,CAlGC,SAmHdgB,QAAQ,EAAG,CAGlBJ,CAAA,CADAJ,CACA,CAFApY,CAEA,CAFO,IAGP,QAAOkY,CAAA,CAAOX,CAAP,CAJW,CAnHG,MA2IjBsB,QAAQ,EAAG,CACf,MAAO5gB,EAAA,CAAO,EAAP,CAAWmgB,CAAX,CAAkB,MAAOD,CAAP,CAAlB,CADQ,CA3IM,CApDa,CAFxC,IAAID,EAAS,EA+ObZ,EAAAuB,KAAA,CAAoBC,QAAQ,EAAG,CAC7B,IAAID,EAAO,EACXxiB,EAAA,CAAQ6hB,CAAR,CAAgB,QAAQ,CAAC3H,CAAD,CAAQgH,CAAR,CAAiB,CACvCsB,CAAA,CAAKtB,CAAL,CAAA,CAAgBhH,CAAAsI,KAAA,EADuB,CAAzC,CAGA,OAAOA,EALsB,CAmB/BvB,EAAAtH,IAAA,CAAmB+I,QAAQ,CAACxB,CAAD,CAAU,CACnC,MAAOW,EAAA,CAAOX,CAAP,CAD4B,CAKrC;MAAOD,EAxQc,CAFQ,CAwTjCrQ,QAASA,GAAsB,EAAG,CAChC,IAAAwI,KAAA,CAAY,CAAC,eAAD,CAAkB,QAAQ,CAACuJ,CAAD,CAAgB,CACpD,MAAOA,EAAA,CAAc,WAAd,CAD6C,CAA1C,CADoB,CAmgBlC3V,QAASA,GAAgB,CAAC5D,CAAD,CAAWwZ,CAAX,CAAkC,CAAA,IACrDC,EAAgB,EADqC,CAErDC,EAAS,WAF4C,CAGrDC,EAA2B,wCAH0B,CAIrDC,EAAyB,gCAJ4B,CASrDC,EAA4B,yBAiB/B,KAAAhW,UAAA,CAAiBiW,QAASC,EAAiB,CAACza,CAAD,CAAO0a,CAAP,CAAyB,CACnEtY,EAAA,CAAwBpC,CAAxB,CAA8B,WAA9B,CACI5I,EAAA,CAAS4I,CAAT,CAAJ,EACE8B,EAAA,CAAU4Y,CAAV,CAA4B,kBAA5B,CA2BA,CA1BKP,CAAAxiB,eAAA,CAA6BqI,CAA7B,CA0BL,GAzBEma,CAAA,CAAcna,CAAd,CACA,CADsB,EACtB,CAAAU,CAAAwC,QAAA,CAAiBlD,CAAjB,CAAwBoa,CAAxB,CAAgC,CAAC,WAAD,CAAc,mBAAd,CAC9B,QAAQ,CAAC9H,CAAD,CAAYqI,CAAZ,CAA+B,CACrC,IAAIC,EAAa,EACjBtjB,EAAA,CAAQ6iB,CAAA,CAAcna,CAAd,CAAR,CAA6B,QAAQ,CAAC0a,CAAD,CAAmBniB,CAAnB,CAA0B,CAC7D,GAAI,CACF,IAAIgM,EAAY+N,CAAA1R,OAAA,CAAiB8Z,CAAjB,CACZhjB,EAAA,CAAW6M,CAAX,CAAJ,CACEA,CADF,CACc,SAAWzK,EAAA,CAAQyK,CAAR,CAAX,CADd,CAEYzD,CAAAyD,CAAAzD,QAFZ,EAEiCyD,CAAAwU,KAFjC,GAGExU,CAAAzD,QAHF;AAGsBhH,EAAA,CAAQyK,CAAAwU,KAAR,CAHtB,CAKAxU,EAAAsW,SAAA,CAAqBtW,CAAAsW,SAArB,EAA2C,CAC3CtW,EAAAhM,MAAA,CAAkBA,CAClBgM,EAAAvE,KAAA,CAAiBuE,CAAAvE,KAAjB,EAAmCA,CACnCuE,EAAAuW,QAAA,CAAoBvW,CAAAuW,QAApB,EAA0CvW,CAAAwW,WAA1C,EAAkExW,CAAAvE,KAClEuE,EAAAyW,SAAA,CAAqBzW,CAAAyW,SAArB,EAA2C,GAC3CJ,EAAA7iB,KAAA,CAAgBwM,CAAhB,CAZE,CAaF,MAAOlG,CAAP,CAAU,CACVsc,CAAA,CAAkBtc,CAAlB,CADU,CAdiD,CAA/D,CAkBA,OAAOuc,EApB8B,CADT,CAAhC,CAwBF,EAAAT,CAAA,CAAcna,CAAd,CAAAjI,KAAA,CAAyB2iB,CAAzB,CA5BF,EA8BEpjB,CAAA,CAAQ0I,CAAR,CAAc7H,EAAA,CAAcsiB,CAAd,CAAd,CAEF,OAAO,KAlC4D,CA0DrE,KAAAQ,2BAAA,CAAkCC,QAAQ,CAACC,CAAD,CAAS,CACjD,MAAInhB,EAAA,CAAUmhB,CAAV,CAAJ,EACEjB,CAAAe,2BAAA,CAAiDE,CAAjD,CACO,CAAA,IAFT,EAISjB,CAAAe,2BAAA,EALwC,CA8BnD,KAAAG,4BAAA,CAAmCC,QAAQ,CAACF,CAAD,CAAS,CAClD,MAAInhB,EAAA,CAAUmhB,CAAV,CAAJ,EACEjB,CAAAkB,4BAAA,CAAkDD,CAAlD,CACO,CAAA,IAFT,EAISjB,CAAAkB,4BAAA,EALyC,CASpD,KAAA1K,KAAA,CAAY,CACF,WADE,CACW,cADX;AAC2B,mBAD3B,CACgD,OADhD,CACyD,gBADzD,CAC2E,QAD3E,CAEF,aAFE,CAEa,YAFb,CAE2B,WAF3B,CAEwC,MAFxC,CAEgD,UAFhD,CAE4D,eAF5D,CAGV,QAAQ,CAAC4B,CAAD,CAAcgJ,CAAd,CAA8BX,CAA9B,CAAmDY,CAAnD,CAA4DC,CAA5D,CAA8EC,CAA9E,CACCC,CADD,CACgBpI,CADhB,CAC8B+E,CAD9B,CAC2CsD,CAD3C,CACmDC,CADnD,CAC+DC,CAD/D,CAC8E,CAqLtF/a,QAASA,EAAO,CAACgb,CAAD,CAAgBC,CAAhB,CAA8BC,CAA9B,CAA2CC,CAA3C,CACIC,CADJ,CAC4B,CACpCJ,CAAN,WAA+B5d,EAA/B,GAGE4d,CAHF,CAGkB5d,CAAA,CAAO4d,CAAP,CAHlB,CAOAxkB,EAAA,CAAQwkB,CAAR,CAAuB,QAAQ,CAACphB,CAAD,CAAOnC,CAAP,CAAa,CACrB,CAArB,EAAImC,CAAAvD,SAAJ,EAA0CuD,CAAAyhB,UAAAzd,MAAA,CAAqB,KAArB,CAA1C,GACEod,CAAA,CAAcvjB,CAAd,CADF,CACgC2F,CAAA,CAAOxD,CAAP,CAAAkQ,KAAA,CAAkB,eAAlB,CAAAnR,OAAA,EAAA,CAA4C,CAA5C,CADhC,CAD0C,CAA5C,CAKA,KAAI2iB,EACIC,CAAA,CAAaP,CAAb,CAA4BC,CAA5B,CAA0CD,CAA1C,CACaE,CADb,CAC0BC,CAD1B,CAC2CC,CAD3C,CAERI,GAAA,CAAaR,CAAb,CAA4B,UAA5B,CACA,OAAOS,SAAqB,CAAC1b,CAAD,CAAQ2b,CAAR,CAAwBC,CAAxB,CAA8C,CACxE3a,EAAA,CAAUjB,CAAV,CAAiB,OAAjB,CAGA,KAAI6b,EAAYF,CACA,CAAZG,EAAAxe,MAAAvG,KAAA,CAA2BkkB,CAA3B,CAAY,CACZA,CAEJxkB,EAAA,CAAQmlB,CAAR,CAA+B,QAAQ,CAACtK,
CAAD,CAAWnS,CAAX,CAAiB,CACtD0c,CAAAzb,KAAA,CAAe,GAAf,CAAqBjB,CAArB,CAA4B,YAA5B,CAA0CmS,CAA1C,CADsD,CAAxD,CAKQja,EAAAA,CAAI,CAAZ,KAAI,IAAW0V,EAAK8O,CAAAxlB,OAApB,CAAsCgB,CAAtC,CAAwC0V,CAAxC,CAA4C1V,CAAA,EAA5C,CAAiD,CAC/C,IACIf;AADOulB,CAAAhiB,CAAUxC,CAAVwC,CACIvD,SACE,EAAjB,GAAIA,CAAJ,EAAiD,CAAjD,GAAoCA,CAApC,EACEulB,CAAAE,GAAA,CAAa1kB,CAAb,CAAA+I,KAAA,CAAqB,QAArB,CAA+BJ,CAA/B,CAJ6C,CAQ7C2b,CAAJ,EAAoBA,CAAA,CAAeE,CAAf,CAA0B7b,CAA1B,CAChBub,EAAJ,EAAqBA,CAAA,CAAgBvb,CAAhB,CAAuB6b,CAAvB,CAAkCA,CAAlC,CACrB,OAAOA,EAvBiE,CAjBhC,CA4C5CJ,QAASA,GAAY,CAACO,CAAD,CAAWzc,CAAX,CAAsB,CACzC,GAAI,CACFyc,CAAAC,SAAA,CAAkB1c,CAAlB,CADE,CAEF,MAAM/B,CAAN,CAAS,EAH8B,CAwB3Cge,QAASA,EAAY,CAACU,CAAD,CAAWhB,CAAX,CAAyBiB,CAAzB,CAAuChB,CAAvC,CAAoDC,CAApD,CACGC,CADH,CAC2B,CAoC9CE,QAASA,EAAe,CAACvb,CAAD,CAAQkc,CAAR,CAAkBC,CAAlB,CAAgCC,CAAhC,CAAmD,CAAA,IACzDC,CADyD,CAC5CxiB,CAD4C,CACtCyiB,CADsC,CAC/BC,CAD+B,CACAllB,CADA,CACG0V,CADH,CACOkL,CAG5EuE,EAAAA,CAAiBN,CAAA7lB,OAArB,KACIomB,EAAqBC,KAAJ,CAAUF,CAAV,CACrB,KAAKnlB,CAAL,CAAS,CAAT,CAAYA,CAAZ,CAAgBmlB,CAAhB,CAAgCnlB,CAAA,EAAhC,CACEolB,CAAA,CAAeplB,CAAf,CAAA,CAAoB6kB,CAAA,CAAS7kB,CAAT,CAGX4gB,EAAP,CAAA5gB,CAAA,CAAI,CAAR,KAAkB0V,CAAlB,CAAuB4P,CAAAtmB,OAAvB,CAAuCgB,CAAvC,CAA2C0V,CAA3C,CAA+CkL,CAAA,EAA/C,CACEpe,CAKA,CALO4iB,CAAA,CAAexE,CAAf,CAKP,CAJA2E,CAIA,CAJaD,CAAA,CAAQtlB,CAAA,EAAR,CAIb,CAHAglB,CAGA,CAHcM,CAAA,CAAQtlB,CAAA,EAAR,CAGd,CAFAilB,CAEA,CAFQjf,CAAA,CAAOxD,CAAP,CAER,CAAI+iB,CAAJ,EACMA,CAAA5c,MAAJ,EACEuc,CACA,CADavc,CAAA6c,KAAA,EACb,CAAAP,CAAAlc,KAAA,CAAW,QAAX,CAAqBmc,CAArB,CAFF,EAIEA,CAJF,CAIevc,CAGf,CAAA,CADA8c,CACA,CADoBF,CAAAG,WACpB,GAA2BX,CAAAA,CAA3B,EAAgDlB,CAAhD,CACE0B,CAAA,CAAWP,CAAX,CAAwBE,CAAxB,CAAoC1iB,CAApC,CAA0CsiB,CAA1C,CACEa,CAAA,CAAwBhd,CAAxB,CAA+B8c,CAA/B,EAAoD5B,CAApD,CADF,CADF,CAKE0B,CAAA,CAAWP,CAAX,CAAwBE,CAAxB,CAAoC1iB,CAApC,CAA0CsiB,CAA1C,CAAwDC,CAAxD,CAbJ,EAeWC,CAfX,EAgBEA,CAAA,CAAYrc,CAAZ,CAAmBnG,CAAA4Q,WAAnB,CAAoCzU,CAApC,CAA+ComB,CAA/C,CAhCqE,CAhC3E,IAJ8C,IAC1CO,EAAU,EADgC,CAE1CM,CAF0C,CAEnClD,CAFmC,CAEXtP,CAFW,CAEcyS,CAFd,CAIrC7lB,EAAI,CAAb,CAAgBA,CAAhB,CAAoB6kB,CAAA7lB,OAApB,CAAqCgB,CAAA,EAArC,CACE4lB,CAyBA,CAzBQ,IAAIE,EAyBZ,CAtBApD,CAsBA,CAtBaqD,EAAA,CAAkBlB,CAAA,CAAS7kB,CAAT,CAAlB,CAA+B,EAA/B,CAAmC4lB,CAAnC;AAAgD,CAAN,GAAA5lB,CAAA,CAAU8jB,CAAV,CAAwBnlB,CAAlE,CACmBolB,CADnB,CAsBb,EAnBAwB,CAmBA,CAnBc7C,CAAA1jB,OACD,CAAPgnB,EAAA,CAAsBtD,CAAtB,CAAkCmC,CAAA,CAAS7kB,CAAT,CAAlC,CAA+C4lB,CAA/C,CAAsD/B,CAAtD,CAAoEiB,CAApE,CACwB,IADxB,CAC8B,EAD9B,CACkC,EADlC,CACsCd,CADtC,CAAO,CAEP,IAgBN,GAdkBuB,CAAA5c,MAclB,EAbEyb,EAAA,CAAape,CAAA,CAAO6e,CAAA,CAAS7kB,CAAT,CAAP,CAAb,CAAkC,UAAlC,CAaF,CAVAglB,CAUA,CAVeO,CAGD,EAHeA,CAAAU,SAGf,EAFA,EAAE7S,CAAF,CAAeyR,CAAA,CAAS7kB,CAAT,CAAAoT,WAAf,CAEA,EADA,CAACA,CAAApU,OACD,CAAR,IAAQ,CACRmlB,CAAA,CAAa/Q,CAAb,CACGmS,CAAA,CAAaA,CAAAG,WAAb,CAAqC7B,CADxC,CAMN,CAHAyB,CAAAzlB,KAAA,CAAa0lB,CAAb,CAAyBP,CAAzB,CAGA,CAFAa,CAEA,CAFcA,CAEd,EAF6BN,CAE7B,EAF2CP,CAE3C,CAAAhB,CAAA,CAAyB,IAI3B,OAAO6B,EAAA,CAAc3B,CAAd,CAAgC,IAlCO,CA0EhDyB,QAASA,EAAuB,CAAChd,CAAD,CAAQkb,CAAR,CAAsB,CACpD,MAAOkB,SAA0B,CAACmB,CAAD,CAAmBC,CAAnB,CAA4BC,CAA5B,CAAyC,CACxE,IAAIC,EAAe,CAAA,CAEdH,EAAL,GACEA,CAEA,CAFmBvd,CAAA6c,KAAA,EAEnB,CAAAa,CAAA,CADAH,CAAAI,cACA,CADiC,CAAA,CAFnC,CAMIrgB,EAAAA,CAAQ4d,CAAA,CAAaqC,CAAb,CAA+BC,CAA/B,CAAwCC,CAAxC,CACZ,IAAIC,CAAJ,CACEpgB,CAAA2Y,GAAA,CAAS,UAAT,CAAqBla,EAAA,CAAKwhB,CAAL,CAAuBA,CAAA1R,SAAvB,CAArB,CAEF,OAAOvO,EAbiE,CADtB,CA4BtD8f,QAASA,GAAiB,CAACvjB,CAAD,CAAOkgB,CAAP,CAAmBkD,CAAnB,CAA0B9B,CAA1B,CAAuCC,CAAvC,CAAwD,CAAA,IAE5EwC,EAAWX,CAAAY,MAFiE,CAG5EhgB,CAGJ,QALehE,CAAAvD,SAKf,EACE,KAAK,CAAL,CAEEwnB,CAAA,CAAa/D,CAAb,CACIg
[Remainder of a deleted minified JavaScript file (appears to be AngularJS library source bundled with the release-0.19.0 docs); the unreadable minified content is omitted here.]
AAC,YAAD,CAAe,mBAAf,CAAoC,QAAQ,CAAC4C,CAAD,CAAaqH,CAAb,CAAgC,CACtF,MAAOigB,GAAA,CAAS,QAAQ,CAACzkB,CAAD,CAAW,CACjC7C,CAAA7X,WAAA,CAAsB0a,CAAtB,CADiC,CAA5B,CAEJwE,CAFI,CAD+E,CAA5E,CAFQ,CAkBtBigB,QAASA,GAAQ,CAACC,CAAD,CAAWC,CAAX,CAA6B,CAyR5CC,QAASA,EAAe,CAAC1iC,CAAD,CAAQ,CAC9B,MAAOA,EADuB,CAKhC2iC,QAASA,EAAc,CAACh5B,CAAD,CAAS,CAC9B,MAAOupB,EAAA,CAAOvpB,CAAP,CADuB,CAlRhC,IAAI8V,EAAQA,QAAQ,EAAG,CAAA,IACjBmjB,EAAU,EADO,CAEjB5iC,CAFiB,CAEVy1B,CA+HX,OA7HAA,EA6HA,CA7HW,SAEAC,QAAQ,CAAC1wB,CAAD,CAAM,CACrB,GAAI49B,CAAJ,CAAa,CACX,IAAIhM,EAAYgM,CAChBA,EAAA,CAAUpkC,CACVwB,EAAA,CAAQ6iC,CAAA,CAAI79B,CAAJ,CAEJ4xB,EAAA/3B,OAAJ,EACE2jC,CAAA,CAAS,QAAQ,EAAG,CAElB,IADA,IAAI1kB,CAAJ;AACSje,EAAI,CADb,CACgB0V,EAAKqhB,CAAA/3B,OAArB,CAAuCgB,CAAvC,CAA2C0V,CAA3C,CAA+C1V,CAAA,EAA/C,CACEie,CACA,CADW8Y,CAAA,CAAU/2B,CAAV,CACX,CAAAG,CAAAw0B,KAAA,CAAW1W,CAAA,CAAS,CAAT,CAAX,CAAwBA,CAAA,CAAS,CAAT,CAAxB,CAAqCA,CAAA,CAAS,CAAT,CAArC,CAJgB,CAApB,CANS,CADQ,CAFd,QAqBDoV,QAAQ,CAACvpB,CAAD,CAAS,CACvB8rB,CAAAC,QAAA,CAAiBoN,CAAA,CAA8Bn5B,CAA9B,CAAjB,CADuB,CArBhB,QA0BDqwB,QAAQ,CAAC+I,CAAD,CAAW,CACzB,GAAIH,CAAJ,CAAa,CACX,IAAIhM,EAAYgM,CAEZA,EAAA/jC,OAAJ,EACE2jC,CAAA,CAAS,QAAQ,EAAG,CAElB,IADA,IAAI1kB,CAAJ,CACSje,EAAI,CADb,CACgB0V,EAAKqhB,CAAA/3B,OAArB,CAAuCgB,CAAvC,CAA2C0V,CAA3C,CAA+C1V,CAAA,EAA/C,CACEie,CACA,CADW8Y,CAAA,CAAU/2B,CAAV,CACX,CAAAie,CAAA,CAAS,CAAT,CAAA,CAAYilB,CAAZ,CAJgB,CAApB,CAJS,CADY,CA1BlB,SA2CA,MACDvO,QAAQ,CAAC1W,CAAD,CAAWklB,CAAX,CAAoBC,CAApB,CAAkC,CAC9C,IAAI9nB,EAASsE,CAAA,EAAb,CAEIyjB,EAAkBA,QAAQ,CAACljC,CAAD,CAAQ,CACpC,GAAI,CACFmb,CAAAua,QAAA,CAAgB,CAAAr2B,CAAA,CAAWye,CAAX,CAAA,CAAuBA,CAAvB,CAAkC4kB,CAAlC,EAAmD1iC,CAAnD,CAAhB,CADE,CAEF,MAAMgG,CAAN,CAAS,CACTmV,CAAA+X,OAAA,CAAcltB,CAAd,CACA,CAAAy8B,CAAA,CAAiBz8B,CAAjB,CAFS,CAHyB,CAFtC,CAWIm9B,EAAiBA,QAAQ,CAACx5B,CAAD,CAAS,CACpC,GAAI,CACFwR,CAAAua,QAAA,CAAgB,CAAAr2B,CAAA,CAAW2jC,CAAX,CAAA,CAAsBA,CAAtB,CAAgCL,CAAhC,EAAgDh5B,CAAhD,CAAhB,CADE,CAEF,MAAM3D,CAAN,CAAS,CACTmV,CAAA+X,OAAA,CAAcltB,CAAd,CACA,CAAAy8B,CAAA,CAAiBz8B,CAAjB,CAFS,CAHyB,CAXtC,CAoBIo9B,EAAsBA,QAAQ,CAACL,CAAD,CAAW,CAC3C,GAAI,CACF5nB,CAAA6e,OAAA,CAAe,CAAA36B,CAAA,CAAW4jC,CAAX,CAAA,CAA2BA,CAA3B,CAA0CP,CAA1C,EAA2DK,CAA3D,CAAf,CADE,CAEF,MAAM/8B,CAAN,CAAS,CACTy8B,CAAA,CAAiBz8B,CAAjB,CADS,CAHgC,CAQzC48B,EAAJ,CACEA,CAAAljC,KAAA,CAAa,CAACwjC,CAAD,CAAkBC,CAAlB,CAAkCC,CAAlC,CAAb,CADF,CAGEpjC,CAAAw0B,KAAA,CAAW0O,CAAX,CAA4BC,CAA5B,CAA4CC,CAA5C,CAGF,OAAOjoB,EAAAsZ,QAnCuC,CADzC,CAuCP,OAvCO,CAuCE4O,QAAQ,CAACvlB,CAAD,CAAW,CAC1B,MAAO,KAAA0W,KAAA,CAAU,IAAV;AAAgB1W,CAAhB,CADmB,CAvCrB,CA2CP,SA3CO,CA2CIwlB,QAAQ,CAACxlB,CAAD,CAAW,CAE5BylB,QAASA,EAAW,CAACvjC,CAAD,CAAQwjC,CAAR,CAAkB,CACpC,IAAIroB,EAASsE,CAAA,EACT+jB,EAAJ,CACEroB,CAAAua,QAAA,CAAe11B,CAAf,CADF,CAGEmb,CAAA+X,OAAA,CAAclzB,CAAd,CAEF,OAAOmb,EAAAsZ,QAP6B,CAUtCgP,QAASA,EAAc,CAACzjC,CAAD,CAAQ0jC,CAAR,CAAoB,CACzC,IAAIC,EAAiB,IACrB,IAAI,CACFA,CAAA,CAAkB,CAAA7lB,CAAA,EAAW4kB,CAAX,GADhB,CAEF,MAAM18B,CAAN,CAAS,CACT,MAAOu9B,EAAA,CAAYv9B,CAAZ,CAAe,CAAA,CAAf,CADE,CAGX,MAAI29B,EAAJ,EAAsBtkC,CAAA,CAAWskC,CAAAnP,KAAX,CAAtB,CACSmP,CAAAnP,KAAA,CAAoB,QAAQ,EAAG,CACpC,MAAO+O,EAAA,CAAYvjC,CAAZ,CAAmB0jC,CAAnB,CAD6B,CAA/B,CAEJ,QAAQ,CAACpnB,CAAD,CAAQ,CACjB,MAAOinB,EAAA,CAAYjnB,CAAZ,CAAmB,CAAA,CAAnB,CADU,CAFZ,CADT,CAOSinB,CAAA,CAAYvjC,CAAZ,CAAmB0jC,CAAnB,CAdgC,CAkB3C,MAAO,KAAAlP,KAAA,CAAU,QAAQ,CAACx0B,CAAD,CAAQ,CAC/B,MAAOyjC,EAAA,CAAezjC,CAAf,CAAsB,CAAA,CAAtB,CADwB,CAA1B,CAEJ,QAAQ,CAACsc,CAAD,CAAQ,CACjB,MAAOmnB,EAAA,CAAennB,CAAf,CAAsB,CAAA,CAAtB,CADU,CAFZ,CA9BqB,CA3CvB,CA3CA,CAJU,CAAvB,CAqIIumB,EAAMA,QAAQ,CAAC7iC,CAAD,CAAQ,CACxB,MAAIA,EAAJ,EAAaX,CAAA,CAAWW,CAAAw0B,KAAX,CAAb,CAA4Cx0B,CAA5C,CACO,
MACCw0B,QAAQ,CAAC1W,CAAD,CAAW,CACvB,IAAI3C,EAASsE,CAAA,EACb+iB,EAAA,CAAS,QAAQ,EAAG,CAClBrnB,CAAAua,QAAA,CAAe5X,CAAA,CAAS9d,CAAT,CAAf,CADkB,CAApB,CAGA,OAAOmb,EAAAsZ,QALgB,CADpB,CAFiB,CArI1B,CAuLIvB,EAASA,QAAQ,CAACvpB,CAAD,CAAS,CAC5B,IAAIwR,EAASsE,CAAA,EACbtE,EAAA+X,OAAA,CAAcvpB,CAAd,CACA,OAAOwR,EAAAsZ,QAHqB,CAvL9B,CA6LIqO,EAAgCA,QAAQ,CAACn5B,CAAD,CAAS,CACnD,MAAO,MACC6qB,QAAQ,CAAC1W,CAAD;AAAWklB,CAAX,CAAoB,CAChC,IAAI7nB,EAASsE,CAAA,EACb+iB,EAAA,CAAS,QAAQ,EAAG,CAClB,GAAI,CACFrnB,CAAAua,QAAA,CAAgB,CAAAr2B,CAAA,CAAW2jC,CAAX,CAAA,CAAsBA,CAAtB,CAAgCL,CAAhC,EAAgDh5B,CAAhD,CAAhB,CADE,CAEF,MAAM3D,CAAN,CAAS,CACTmV,CAAA+X,OAAA,CAAcltB,CAAd,CACA,CAAAy8B,CAAA,CAAiBz8B,CAAjB,CAFS,CAHO,CAApB,CAQA,OAAOmV,EAAAsZ,QAVyB,CAD7B,CAD4C,CAiIrD,OAAO,OACEhV,CADF,QAEGyT,CAFH,MAlGIwB,QAAQ,CAAC10B,CAAD,CAAQ8d,CAAR,CAAkBklB,CAAlB,CAA2BC,CAA3B,CAAyC,CAAA,IACtD9nB,EAASsE,CAAA,EAD6C,CAEtD2V,CAFsD,CAItD8N,EAAkBA,QAAQ,CAACljC,CAAD,CAAQ,CACpC,GAAI,CACF,MAAQ,CAAAX,CAAA,CAAWye,CAAX,CAAA,CAAuBA,CAAvB,CAAkC4kB,CAAlC,EAAmD1iC,CAAnD,CADN,CAEF,MAAOgG,CAAP,CAAU,CAEV,MADAy8B,EAAA,CAAiBz8B,CAAjB,CACO,CAAAktB,CAAA,CAAOltB,CAAP,CAFG,CAHwB,CAJoB,CAatDm9B,EAAiBA,QAAQ,CAACx5B,CAAD,CAAS,CACpC,GAAI,CACF,MAAQ,CAAAtK,CAAA,CAAW2jC,CAAX,CAAA,CAAsBA,CAAtB,CAAgCL,CAAhC,EAAgDh5B,CAAhD,CADN,CAEF,MAAO3D,CAAP,CAAU,CAEV,MADAy8B,EAAA,CAAiBz8B,CAAjB,CACO,CAAAktB,CAAA,CAAOltB,CAAP,CAFG,CAHwB,CAboB,CAsBtDo9B,EAAsBA,QAAQ,CAACL,CAAD,CAAW,CAC3C,GAAI,CACF,MAAQ,CAAA1jC,CAAA,CAAW4jC,CAAX,CAAA,CAA2BA,CAA3B,CAA0CP,CAA1C,EAA2DK,CAA3D,CADN,CAEF,MAAO/8B,CAAP,CAAU,CACVy8B,CAAA,CAAiBz8B,CAAjB,CADU,CAH+B,CAQ7Cw8B,EAAA,CAAS,QAAQ,EAAG,CAClBK,CAAA,CAAI7iC,CAAJ,CAAAw0B,KAAA,CAAgB,QAAQ,CAACx0B,CAAD,CAAQ,CAC1Bo1B,CAAJ,GACAA,CACA,CADO,CAAA,CACP,CAAAja,CAAAua,QAAA,CAAemN,CAAA,CAAI7iC,CAAJ,CAAAw0B,KAAA,CAAgB0O,CAAhB,CAAiCC,CAAjC,CAAiDC,CAAjD,CAAf,CAFA,CAD8B,CAAhC,CAIG,QAAQ,CAACz5B,CAAD,CAAS,CACdyrB,CAAJ,GACAA,CACA,CADO,CAAA,CACP,CAAAja,CAAAua,QAAA,CAAeyN,CAAA,CAAex5B,CAAf,CAAf,CAFA,CADkB,CAJpB,CAQG,QAAQ,CAACo5B,CAAD,CAAW,CAChB3N,CAAJ,EACAja,CAAA6e,OAAA,CAAcoJ,CAAA,CAAoBL,CAApB,CAAd,CAFoB,CARtB,CADkB,CAApB,CAeA,OAAO5nB,EAAAsZ,QA7CmD,CAkGrD;IAxBP7c,QAAY,CAACgsB,CAAD,CAAW,CAAA,IACjBnO,EAAWhW,CAAA,EADM,CAEjBwY,EAAU,CAFO,CAGjBt1B,EAAU3D,CAAA,CAAQ4kC,CAAR,CAAA,CAAoB,EAApB,CAAyB,EAEvC3kC,EAAA,CAAQ2kC,CAAR,CAAkB,QAAQ,CAACnP,CAAD,CAAUr1B,CAAV,CAAe,CACvC64B,CAAA,EACA4K,EAAA,CAAIpO,CAAJ,CAAAD,KAAA,CAAkB,QAAQ,CAACx0B,CAAD,CAAQ,CAC5B2C,CAAArD,eAAA,CAAuBF,CAAvB,CAAJ,GACAuD,CAAA,CAAQvD,CAAR,CACA,CADeY,CACf,CAAM,EAAEi4B,CAAR,EAAkBxC,CAAAC,QAAA,CAAiB/yB,CAAjB,CAFlB,CADgC,CAAlC,CAIG,QAAQ,CAACgH,CAAD,CAAS,CACdhH,CAAArD,eAAA,CAAuBF,CAAvB,CAAJ,EACAq2B,CAAAvC,OAAA,CAAgBvpB,CAAhB,CAFkB,CAJpB,CAFuC,CAAzC,CAYgB,EAAhB,GAAIsuB,CAAJ,EACExC,CAAAC,QAAA,CAAiB/yB,CAAjB,CAGF,OAAO8yB,EAAAhB,QArBc,CAwBhB,CA1UqC,CAkV9CzkB,QAASA,GAAa,EAAE,CACtB,IAAAqI,KAAA,CAAY,CAAC,SAAD,CAAY,UAAZ,CAAwB,QAAQ,CAAC0C,CAAD,CAAUc,CAAV,CAAoB,CAC9D,IAAIgoB,EAAwB9oB,CAAA8oB,sBAAxBA,EACwB9oB,CAAA+oB,4BADxBD,EAEwB9oB,CAAAgpB,yBAF5B,CAIIC,EAAuBjpB,CAAAipB,qBAAvBA,EACuBjpB,CAAAkpB,2BADvBD,EAEuBjpB,CAAAmpB,wBAFvBF,EAGuBjpB,CAAAopB,kCAP3B,CASIC,EAAe,CAAC,CAACP,CATrB,CAUIQ,EAAMD,CACA;AAAN,QAAQ,CAAC3/B,CAAD,CAAK,CACX,IAAI6/B,EAAKT,CAAA,CAAsBp/B,CAAtB,CACT,OAAO,SAAQ,EAAG,CAChBu/B,CAAA,CAAqBM,CAArB,CADgB,CAFP,CAAP,CAMN,QAAQ,CAAC7/B,CAAD,CAAK,CACX,IAAI8/B,EAAQ1oB,CAAA,CAASpX,CAAT,CAAa,KAAb,CAAoB,CAAA,CAApB,CACZ,OAAO,SAAQ,EAAG,CAChBoX,CAAAgE,OAAA,CAAgB0kB,CAAhB,CADgB,CAFP,CAOjBF,EAAAvoB,UAAA,CAAgBsoB,CAEhB,OAAOC,EA3BuD,CAApD,CADU,CAmGxB70B,QAASA,GAAkB,EAAE,CAC3B,IAAIg1B,EAAM,EAAV,CACIC,EAAmBhmC,CAAA,CAAO,YAAP,CADvB,CAEIimC,EAAiB,IAErB,KAAAC,UAAA,CAAiBC
,QAAQ,CAAC5kC,CAAD,CAAQ,CAC3Be,SAAAlC,OAAJ,GACE2lC,CADF,CACQxkC,CADR,CAGA,OAAOwkC,EAJwB,CAOjC,KAAAnsB,KAAA,CAAY,CAAC,WAAD,CAAc,mBAAd,CAAmC,QAAnC,CAA6C,UAA7C,CACR,QAAQ,CAAE4B,CAAF,CAAeqI,CAAf,CAAoCc,CAApC,CAA8CwP,CAA9C,CAAwD,CA0ClEiS,QAASA,EAAK,EAAG,CACf,IAAAC,IAAA,CAAW7kC,EAAA,EACX,KAAAu1B,QAAA,CAAe,IAAAuP,QAAf,CAA8B,IAAAC,WAA9B,CACe,IAAAC,cADf,CACoC,IAAAC,cADpC,CAEe,IAAAC,YAFf,CAEkC,IAAAC,YAFlC,CAEqD,IACrD,KAAA,CAAK,MAAL,CAAA,CAAe,IAAAC,MAAf,CAA6B,IAC7B;IAAAC,YAAA,CAAmB,CAAA,CACnB,KAAAC,aAAA,CAAoB,EACpB,KAAAC,kBAAA,CAAyB,EACzB,KAAAC,YAAA,CAAmB,EACnB,KAAAC,gBAAA,CAAuB,EACvB,KAAA3b,kBAAA,CAAyB,EAXV,CA48BjB4b,QAASA,EAAU,CAACC,CAAD,CAAQ,CACzB,GAAI3qB,CAAAua,QAAJ,CACE,KAAMiP,EAAA,CAAiB,QAAjB,CAAsDxpB,CAAAua,QAAtD,CAAN,CAGFva,CAAAua,QAAA,CAAqBoQ,CALI,CAY3BC,QAASA,EAAW,CAAC7M,CAAD,CAAMrxB,CAAN,CAAY,CAC9B,IAAIlD,EAAK2e,CAAA,CAAO4V,CAAP,CACTpvB,GAAA,CAAYnF,CAAZ,CAAgBkD,CAAhB,CACA,OAAOlD,EAHuB,CAMhCqhC,QAASA,EAAsB,CAACC,CAAD,CAAUtM,CAAV,CAAiB9xB,CAAjB,CAAuB,CACpD,EACEo+B,EAAAL,gBAAA,CAAwB/9B,CAAxB,CAEA,EAFiC8xB,CAEjC,CAAsC,CAAtC,GAAIsM,CAAAL,gBAAA,CAAwB/9B,CAAxB,CAAJ,EACE,OAAOo+B,CAAAL,gBAAA,CAAwB/9B,CAAxB,CAJX,OAMUo+B,CANV,CAMoBA,CAAAhB,QANpB,CADoD,CActDiB,QAASA,EAAY,EAAG,EAt9BxBnB,CAAAhrB,UAAA,CAAkB,aACHgrB,CADG,MA0BVxf,QAAQ,CAAC4gB,CAAD,CAAU,CAIlBA,CAAJ,EACEC,CAIA,CAJQ,IAAIrB,CAIZ,CAHAqB,CAAAb,MAGA,CAHc,IAAAA,MAGd,CADAa,CAAAX,aACA,CADqB,IAAAA,aACrB,CAAAW,CAAAV,kBAAA;AAA0B,IAAAA,kBAL5B,GAOEW,CAKA,CALaA,QAAQ,EAAG,EAKxB,CAFAA,CAAAtsB,UAEA,CAFuB,IAEvB,CADAqsB,CACA,CADQ,IAAIC,CACZ,CAAAD,CAAApB,IAAA,CAAY7kC,EAAA,EAZd,CAcAimC,EAAA,CAAM,MAAN,CAAA,CAAgBA,CAChBA,EAAAT,YAAA,CAAoB,EACpBS,EAAAR,gBAAA,CAAwB,EACxBQ,EAAAnB,QAAA,CAAgB,IAChBmB,EAAAlB,WAAA,CAAmBkB,CAAAjB,cAAnB,CAAyCiB,CAAAf,YAAzC,CAA6De,CAAAd,YAA7D,CAAiF,IACjFc,EAAAhB,cAAA,CAAsB,IAAAE,YAClB,KAAAD,YAAJ,CAEE,IAAAC,YAFF,CACE,IAAAA,YAAAH,cADF,CACmCiB,CADnC,CAIE,IAAAf,YAJF,CAIqB,IAAAC,YAJrB,CAIwCc,CAExC,OAAOA,EA9Be,CA1BR,QAyKR7iC,QAAQ,CAAC+iC,CAAD,CAAWjpB,CAAX,CAAqBkpB,CAArB,CAAqC,CAAA,IAE/CztB,EAAMitB,CAAA,CAAYO,CAAZ,CAAsB,OAAtB,CAFyC,CAG/CtjC,EAFQ0F,IAEAw8B,WAHuC,CAI/CsB,EAAU,IACJnpB,CADI,MAEF6oB,CAFE,KAGHptB,CAHG,KAIHwtB,CAJG,IAKJ,CAAC,CAACC,CALE,CAQd3B,EAAA,CAAiB,IAGjB,IAAI,CAACrlC,CAAA,CAAW8d,CAAX,CAAL,CAA2B,CACzB,IAAIopB,EAAWV,CAAA,CAAY1oB,CAAZ,EAAwB7b,CAAxB,CAA8B,UAA9B,CACfglC,EAAA7hC,GAAA,CAAa+hC,QAAQ,CAACC,CAAD;AAASC,CAAT,CAAiBl+B,CAAjB,CAAwB,CAAC+9B,CAAA,CAAS/9B,CAAT,CAAD,CAFpB,CAK3B,GAAuB,QAAvB,EAAI,MAAO49B,EAAX,EAAmCxtB,CAAAsB,SAAnC,CAAiD,CAC/C,IAAIysB,EAAaL,CAAA7hC,GACjB6hC,EAAA7hC,GAAA,CAAa+hC,QAAQ,CAACC,CAAD,CAASC,CAAT,CAAiBl+B,CAAjB,CAAwB,CAC3Cm+B,CAAApnC,KAAA,CAAgB,IAAhB,CAAsBknC,CAAtB,CAA8BC,CAA9B,CAAsCl+B,CAAtC,CACAzF,GAAA,CAAYD,CAAZ,CAAmBwjC,CAAnB,CAF2C,CAFE,CAQ5CxjC,CAAL,GACEA,CADF,CA3BY0F,IA4BFw8B,WADV,CAC6B,EAD7B,CAKAliC,EAAArC,QAAA,CAAc6lC,CAAd,CAEA,OAAO,SAAQ,EAAG,CAChBvjC,EAAA,CAAYD,CAAZ,CAAmBwjC,CAAnB,CACA5B,EAAA,CAAiB,IAFD,CAnCiC,CAzKrC,kBA0QEkC,QAAQ,CAACjoC,CAAD,CAAMwe,CAAN,CAAgB,CACxC,IAAI3Y,EAAO,IAAX,CAEIyqB,CAFJ,CAKIC,CALJ,CAOI2X,CAPJ,CASIC,EAAuC,CAAvCA,CAAqB3pB,CAAAte,OATzB,CAUIkoC,EAAiB,CAVrB,CAWIC,EAAY5jB,CAAA,CAAOzkB,CAAP,CAXhB,CAYIsoC,EAAgB,EAZpB,CAaIC,EAAiB,EAbrB,CAcIC,EAAU,CAAA,CAdd,CAeIC,EAAY,CAsGhB,OAAO,KAAA/jC,OAAA,CApGPgkC,QAA8B,EAAG,CAC/BpY,CAAA,CAAW+X,CAAA,CAAUxiC,CAAV,CADoB,KAE3B8iC,CAF2B,CAEhBloC,CAEf,IAAKwC,CAAA,CAASqtB,CAAT,CAAL,CAKO,GAAIvwB,EAAA,CAAYuwB,CAAZ,CAAJ,CAgBL,IAfIC,CAeKrvB,GAfQonC,CAeRpnC,GAbPqvB,CAEA,CAFW+X,CAEX,CADAG,CACA,CADYlY,CAAArwB,OACZ,CAD8B,CAC9B,CAAAkoC,CAAA,EAWOlnC,EARTynC,CAQSznC,CARGovB,CAAApwB,OAQHgB,CANLunC,CAMKvnC,GANSynC,CAMTznC,GAJPknC,CAAA,EACA,CAAA7X,CAAArwB,OAAA,CAAkBuoC,CAA
lB,CAA8BE,CAGvBznC,EAAAA,CAAAA,CAAI,CAAb,CAAgBA,CAAhB,CAAoBynC,CAApB,CAA+BznC,CAAA,EAA/B,CACiBqvB,CAAA,CAASrvB,CAAT,CAEf,GAF+BqvB,CAAA,CAASrvB,CAAT,CAE/B,EADKovB,CAAA,CAASpvB,CAAT,CACL,GADqBovB,CAAA,CAASpvB,CAAT,CACrB,EAAiBqvB,CAAA,CAASrvB,CAAT,CAAjB,GAAiCovB,CAAA,CAASpvB,CAAT,CAAjC,GACEknC,CAAA,EACA,CAAA7X,CAAA,CAASrvB,CAAT,CAAA,CAAcovB,CAAA,CAASpvB,CAAT,CAFhB,CAnBG,KAwBA,CACDqvB,CAAJ,GAAiBgY,CAAjB,GAEEhY,CAEA,CAFWgY,CAEX,CAF4B,EAE5B,CADAE,CACA,CADY,CACZ,CAAAL,CAAA,EAJF,CAOAO,EAAA;AAAY,CACZ,KAAKloC,CAAL,GAAY6vB,EAAZ,CACMA,CAAA3vB,eAAA,CAAwBF,CAAxB,CAAJ,GACEkoC,CAAA,EACA,CAAIpY,CAAA5vB,eAAA,CAAwBF,CAAxB,CAAJ,CACM8vB,CAAA,CAAS9vB,CAAT,CADN,GACwB6vB,CAAA,CAAS7vB,CAAT,CADxB,GAEI2nC,CAAA,EACA,CAAA7X,CAAA,CAAS9vB,CAAT,CAAA,CAAgB6vB,CAAA,CAAS7vB,CAAT,CAHpB,GAMEgoC,CAAA,EAEA,CADAlY,CAAA,CAAS9vB,CAAT,CACA,CADgB6vB,CAAA,CAAS7vB,CAAT,CAChB,CAAA2nC,CAAA,EARF,CAFF,CAcF,IAAIK,CAAJ,CAAgBE,CAAhB,CAGE,IAAIloC,CAAJ,GADA2nC,EAAA,EACW7X,CAAAA,CAAX,CACMA,CAAA5vB,eAAA,CAAwBF,CAAxB,CAAJ,EAAqC,CAAA6vB,CAAA3vB,eAAA,CAAwBF,CAAxB,CAArC,GACEgoC,CAAA,EACA,CAAA,OAAOlY,CAAA,CAAS9vB,CAAT,CAFT,CA5BC,CA7BP,IACM8vB,EAAJ,GAAiBD,CAAjB,GACEC,CACA,CADWD,CACX,CAAA8X,CAAA,EAFF,CA+DF,OAAOA,EApEwB,CAoG1B,CA7BPQ,QAA+B,EAAG,CAC5BJ,CAAJ,EACEA,CACA,CADU,CAAA,CACV,CAAAhqB,CAAA,CAAS8R,CAAT,CAAmBA,CAAnB,CAA6BzqB,CAA7B,CAFF,EAIE2Y,CAAA,CAAS8R,CAAT,CAAmB4X,CAAnB,CAAiCriC,CAAjC,CAIF,IAAIsiC,CAAJ,CACE,GAAKllC,CAAA,CAASqtB,CAAT,CAAL,CAGO,GAAIvwB,EAAA,CAAYuwB,CAAZ,CAAJ,CAA2B,CAChC4X,CAAA,CAAmB3hB,KAAJ,CAAU+J,CAAApwB,OAAV,CACf,KAAK,IAAIgB,EAAI,CAAb,CAAgBA,CAAhB,CAAoBovB,CAAApwB,OAApB,CAAqCgB,CAAA,EAArC,CACEgnC,CAAA,CAAahnC,CAAb,CAAA,CAAkBovB,CAAA,CAASpvB,CAAT,CAHY,CAA3B,IAOL,KAAST,CAAT,GADAynC,EACgB5X,CADD,EACCA,CAAAA,CAAhB,CACM3vB,EAAAC,KAAA,CAAoB0vB,CAApB,CAA8B7vB,CAA9B,CAAJ,GACEynC,CAAA,CAAaznC,CAAb,CADF,CACsB6vB,CAAA,CAAS7vB,CAAT,CADtB,CAXJ,KAEEynC,EAAA,CAAe5X,CAZa,CA6B3B,CAtHiC,CA1Q1B,SAkbP4P,QAAQ,EAAG,CAAA,IACd2I,CADc,CACPxnC,CADO,CACA8X,CADA,CAEd2vB,CAFc,CAGdC,EAAa,IAAAnC,aAHC,CAIdoC,EAAkB,IAAAnC,kBAJJ,CAKd3mC,CALc,CAMd+oC,CANc,CAMPC,EAAMrD,CANC,CAORuB,CAPQ,CAQd+B,EAAW,EARG,CASdC,CATc,CASNC,CATM,CASEC,CAEpBtC,EAAA,CAAW,SAAX,CAEAjB;CAAA,CAAiB,IAEjB,GAAG,CACDkD,CAAA,CAAQ,CAAA,CAGR,KAFA7B,CAEA,CAZ0BxvB,IAY1B,CAAMmxB,CAAA7oC,OAAN,CAAA,CAAyB,CACvB,GAAI,CACFopC,CACA,CADYP,CAAAr2B,MAAA,EACZ,CAAA42B,CAAAz/B,MAAA0/B,MAAA,CAAsBD,CAAA1W,WAAtB,CAFE,CAGF,MAAOvrB,CAAP,CAAU,CAsflBiV,CAAAua,QApfQ,CAofa,IApfb,CAAAlT,CAAA,CAAkBtc,CAAlB,CAFU,CAIZ0+B,CAAA,CAAiB,IARM,CAWzB,CAAA,CACA,EAAG,CACD,GAAK+C,CAAL,CAAgB1B,CAAAf,WAAhB,CAGE,IADAnmC,CACA,CADS4oC,CAAA5oC,OACT,CAAOA,CAAA,EAAP,CAAA,CACE,GAAI,CAIF,GAHA2oC,CAGA,CAHQC,CAAA,CAAS5oC,CAAT,CAGR,CACE,IAAKmB,CAAL,CAAawnC,CAAA5uB,IAAA,CAAUmtB,CAAV,CAAb,KAAsCjuB,CAAtC,CAA6C0vB,CAAA1vB,KAA7C,GACI,EAAE0vB,CAAAjjB,GACA,CAAI1gB,EAAA,CAAO7D,CAAP,CAAc8X,CAAd,CAAJ,CACqB,QADrB,EACK,MAAO9X,EADZ,EACgD,QADhD,EACiC,MAAO8X,EADxC,EAEQqwB,KAAA,CAAMnoC,CAAN,CAFR,EAEwBmoC,KAAA,CAAMrwB,CAAN,CAH1B,CADJ,CAKE8vB,CAIA,CAJQ,CAAA,CAIR,CAHAlD,CAGA,CAHiB8C,CAGjB,CAFAA,CAAA1vB,KAEA,CAFa0vB,CAAAjjB,GAAA,CAAWthB,EAAA,CAAKjD,CAAL,CAAX,CAAyBA,CAEtC,CADAwnC,CAAA/iC,GAAA,CAASzE,CAAT,CAAkB8X,CAAD,GAAUkuB,CAAV,CAA0BhmC,CAA1B,CAAkC8X,CAAnD,CAA0DiuB,CAA1D,CACA,CAAU,CAAV,CAAI8B,CAAJ,GACEE,CAMA,CANS,CAMT,CANaF,CAMb,CALKC,CAAA,CAASC,CAAT,CAKL,GALuBD,CAAA,CAASC,CAAT,CAKvB,CAL0C,EAK1C,EAJAC,CAIA,CAJU3oC,CAAA,CAAWmoC,CAAAxO,IAAX,CACD,CAAH,MAAG,EAAOwO,CAAAxO,IAAArxB,KAAP,EAAyB6/B,CAAAxO,IAAAj3B,SAAA,EAAzB,EACHylC,CAAAxO,IAEN,CADAgP,CACA,EADU,YACV,CADyB/iC,EAAA,CAAOjF,CAAP,CACzB,CADyC,YACzC,CADwDiF,EAAA,CAAO6S,CAAP,CACxD,CAAAgwB,CAAA,CAASC,CAAT,CAAAroC,KAAA,CAAsB
soC,CAAtB,CAPF,CATF,KAkBO,IAAIR,CAAJ,GAAc9C,CAAd,CAA8B,CAGnCkD,CAAA,CAAQ,CAAA,CACR,OAAM,CAJ6B,CAvBrC,CA8BF,MAAO5hC,CAAP,CAAU,CA2ctBiV,CAAAua,QAzcY;AAycS,IAzcT,CAAAlT,CAAA,CAAkBtc,CAAlB,CAFU,CAUhB,GAAI,EAAEoiC,CAAF,CAAUrC,CAAAZ,YAAV,EACCY,CADD,GArEoBxvB,IAqEpB,EACuBwvB,CAAAd,cADvB,CAAJ,CAEE,IAAA,CAAMc,CAAN,GAvEsBxvB,IAuEtB,EAA4B,EAAE6xB,CAAF,CAASrC,CAAAd,cAAT,CAA5B,CAAA,CACEc,CAAA,CAAUA,CAAAhB,QAhDb,CAAH,MAmDUgB,CAnDV,CAmDoBqC,CAnDpB,CAuDA,KAAIR,CAAJ,EAAaF,CAAA7oC,OAAb,GAAmC,CAAEgpC,CAAA,EAArC,CAEE,KAqbN5sB,EAAAua,QArbY,CAqbS,IArbT,CAAAiP,CAAA,CAAiB,QAAjB,CAGFD,CAHE,CAGGv/B,EAAA,CAAO6iC,CAAP,CAHH,CAAN,CAzED,CAAH,MA+ESF,CA/ET,EA+EkBF,CAAA7oC,OA/ElB,CAmFA,KA2aFoc,CAAAua,QA3aE,CA2amB,IA3anB,CAAMmS,CAAA9oC,OAAN,CAAA,CACE,GAAI,CACF8oC,CAAAt2B,MAAA,EAAA,EADE,CAEF,MAAOrL,CAAP,CAAU,CACVsc,CAAA,CAAkBtc,CAAlB,CADU,CArGI,CAlbJ,UAgkBNqO,QAAQ,EAAG,CAEnB,GAAIixB,CAAA,IAAAA,YAAJ,CAAA,CACA,IAAIlkC,EAAS,IAAA2jC,QAEb,KAAA7G,WAAA,CAAgB,UAAhB,CACA,KAAAoH,YAAA,CAAmB,CAAA,CACf,KAAJ,GAAarqB,CAAb,GAEAhc,CAAA,CAAQ,IAAAymC,gBAAR,CAA8BnhC,EAAA,CAAK,IAAL,CAAWuhC,CAAX,CAAmC,IAAnC,CAA9B,CA2BA,CAvBI1kC,CAAA+jC,YAuBJ,EAvB0B,IAuB1B,GAvBgC/jC,CAAA+jC,YAuBhC,CAvBqD,IAAAF,cAuBrD,EAtBI7jC,CAAAgkC,YAsBJ,EAtB0B,IAsB1B;CAtBgChkC,CAAAgkC,YAsBhC,CAtBqD,IAAAF,cAsBrD,EArBI,IAAAA,cAqBJ,GArBwB,IAAAA,cAAAD,cAqBxB,CArB2D,IAAAA,cAqB3D,EApBI,IAAAA,cAoBJ,GApBwB,IAAAA,cAAAC,cAoBxB,CApB2D,IAAAA,cAoB3D,EATA,IAAAH,QASA,CATe,IAAAE,cASf,CAToC,IAAAC,cASpC,CATyD,IAAAC,YASzD,CARI,IAAAC,YAQJ,CARuB,IAAAC,MAQvB,CARoC,IAQpC,CALA,IAAAI,YAKA,CALmB,EAKnB,CAJA,IAAAT,WAIA,CAJkB,IAAAO,aAIlB,CAJsC,IAAAC,kBAItC,CAJ+D,EAI/D,CADA,IAAAnxB,SACA,CADgB,IAAAwqB,QAChB,CAD+B,IAAAl2B,OAC/B,CAD6CrH,CAC7C,CAAA,IAAA+mC,IAAA,CAAW,IAAAhlC,OAAX,CAAyBilC,QAAQ,EAAG,CAAE,MAAOhnC,EAAT,CA7BpC,CALA,CAFmB,CAhkBL,OAmoBT4mC,QAAQ,CAACK,CAAD,CAAO/uB,CAAP,CAAe,CAC5B,MAAO4J,EAAA,CAAOmlB,CAAP,CAAA,CAAa,IAAb,CAAmB/uB,CAAnB,CADqB,CAnoBd,YAoqBJpW,QAAQ,CAACmlC,CAAD,CAAO,CAGpBttB,CAAAua,QAAL;AAA4Bva,CAAAsqB,aAAA1mC,OAA5B,EACE+zB,CAAAnT,MAAA,CAAe,QAAQ,EAAG,CACpBxE,CAAAsqB,aAAA1mC,OAAJ,EACEoc,CAAA4jB,QAAA,EAFsB,CAA1B,CAOF,KAAA0G,aAAA7lC,KAAA,CAAuB,OAAQ,IAAR,YAA0B6oC,CAA1B,CAAvB,CAXyB,CApqBX,cAkrBDC,QAAQ,CAAC/jC,CAAD,CAAK,CAC1B,IAAA+gC,kBAAA9lC,KAAA,CAA4B+E,CAA5B,CAD0B,CAlrBZ,QAmuBRkE,QAAQ,CAAC4/B,CAAD,CAAO,CACrB,GAAI,CAEF,MADA5C,EAAA,CAAW,QAAX,CACO,CAAA,IAAAuC,MAAA,CAAWK,CAAX,CAFL,CAGF,MAAOviC,CAAP,CAAU,CACVsc,CAAA,CAAkBtc,CAAlB,CADU,CAHZ,OAKU,CAsNZiV,CAAAua,QAAA,CAAqB,IApNjB,IAAI,CACFva,CAAA4jB,QAAA,EADE,CAEF,MAAO74B,CAAP,CAAU,CAEV,KADAsc,EAAA,CAAkBtc,CAAlB,CACMA,CAAAA,CAAN,CAFU,CAJJ,CANW,CAnuBP,KA8wBXqiC,QAAQ,CAAC1gC,CAAD,CAAOwV,CAAP,CAAiB,CAC5B,IAAIsrB,EAAiB,IAAAhD,YAAA,CAAiB99B,CAAjB,CAChB8gC,EAAL,GACE,IAAAhD,YAAA,CAAiB99B,CAAjB,CADF,CAC2B8gC,CAD3B,CAC4C,EAD5C,CAGAA,EAAA/oC,KAAA,CAAoByd,CAApB,CAEA,KAAI4oB,EAAU,IACd,GACOA,EAAAL,gBAAA,CAAwB/9B,CAAxB,CAGL,GAFEo+B,CAAAL,gBAAA,CAAwB/9B,CAAxB,CAEF,CAFkC,CAElC,EAAAo+B,CAAAL,gBAAA,CAAwB/9B,CAAxB,CAAA,EAJF,OAKUo+B,CALV,CAKoBA,CAAAhB,QALpB,CAOA;IAAIvgC,EAAO,IACX,OAAO,SAAQ,EAAG,CAChBikC,CAAA,CAAe5lC,EAAA,CAAQ4lC,CAAR,CAAwBtrB,CAAxB,CAAf,CAAA,CAAoD,IACpD2oB,EAAA,CAAuBthC,CAAvB,CAA6B,CAA7B,CAAgCmD,CAAhC,CAFgB,CAhBU,CA9wBd,OA2zBT+gC,QAAQ,CAAC/gC,CAAD,CAAO8R,CAAP,CAAa,CAAA,IACtB1T,EAAQ,EADc,CAEtB0iC,CAFsB,CAGtBjgC,EAAQ,IAHc,CAItB4N,EAAkB,CAAA,CAJI,CAKtBJ,EAAQ,MACArO,CADA,aAEOa,CAFP,iBAGW4N,QAAQ,EAAG,CAACA,CAAA,CAAkB,CAAA,CAAnB,CAHtB,gBAIUH,QAAQ,EAAG,CACzBD,CAAAS,iBAAA,CAAyB,CAAA,CADA,CAJrB,kBAOY,CAAA,CAPZ,CALc,CActBkyB,EAAsBC,CAAC5yB,CAAD4yB,CA92WzB9jC,OAAA,CAAcH,EAAApF,KAAA,CA82WoBwB,SA92WpB,CA82W+Bb,CA92W/B,CAAd,CAg2WyB,CAetBL,CAfsB,CAenBhB,CAEP,GAAG,CACD4p
C,CAAA,CAAiBjgC,CAAAi9B,YAAA,CAAkB99B,CAAlB,CAAjB,EAA4C5B,CAC5CiQ,EAAA6yB,aAAA,CAAqBrgC,CAChB3I,EAAA,CAAE,CAAP,KAAUhB,CAAV,CAAiB4pC,CAAA5pC,OAAjB,CAAwCgB,CAAxC,CAA0ChB,CAA1C,CAAkDgB,CAAA,EAAlD,CAGE,GAAK4oC,CAAA,CAAe5oC,CAAf,CAAL,CAMA,GAAI,CAEF4oC,CAAA,CAAe5oC,CAAf,CAAAgF,MAAA,CAAwB,IAAxB,CAA8B8jC,CAA9B,CAFE,CAGF,MAAO3iC,CAAP,CAAU,CACVsc,CAAA,CAAkBtc,CAAlB,CADU,CATZ,IACEyiC,EAAAzlC,OAAA,CAAsBnD,CAAtB,CAAyB,CAAzB,CAEA,CADAA,CAAA,EACA,CAAAhB,CAAA,EAWJ,IAAIuX,CAAJ,CAAqB,KAErB5N,EAAA,CAAQA,CAAAu8B,QAtBP,CAAH,MAuBSv8B,CAvBT,CAyBA,OAAOwN,EA1CmB,CA3zBZ,YA83BJkoB,QAAQ,CAACv2B,CAAD,CAAO8R,CAAP,CAAa,CAgB/B,IAhB+B,IAE3BssB,EADSxvB,IADkB,CAG3B6xB,EAFS7xB,IADkB,CAI3BP,EAAQ,MACArO,CADA;YAHC4O,IAGD,gBAGUN,QAAQ,EAAG,CACzBD,CAAAS,iBAAA,CAAyB,CAAA,CADA,CAHrB,kBAMY,CAAA,CANZ,CAJmB,CAY3BkyB,EAAsBC,CAAC5yB,CAAD4yB,CA/6WzB9jC,OAAA,CAAcH,EAAApF,KAAA,CA+6WoBwB,SA/6WpB,CA+6W+Bb,CA/6W/B,CAAd,CAm6W8B,CAahBL,CAbgB,CAabhB,CAGlB,CAAQknC,CAAR,CAAkBqC,CAAlB,CAAA,CAAyB,CACvBpyB,CAAA6yB,aAAA,CAAqB9C,CACrBrV,EAAA,CAAYqV,CAAAN,YAAA,CAAoB99B,CAApB,CAAZ,EAAyC,EACpC9H,EAAA,CAAE,CAAP,KAAUhB,CAAV,CAAmB6xB,CAAA7xB,OAAnB,CAAqCgB,CAArC,CAAuChB,CAAvC,CAA+CgB,CAAA,EAA/C,CAEE,GAAK6wB,CAAA,CAAU7wB,CAAV,CAAL,CAOA,GAAI,CACF6wB,CAAA,CAAU7wB,CAAV,CAAAgF,MAAA,CAAmB,IAAnB,CAAyB8jC,CAAzB,CADE,CAEF,MAAM3iC,CAAN,CAAS,CACTsc,CAAA,CAAkBtc,CAAlB,CADS,CATX,IACE0qB,EAAA1tB,OAAA,CAAiBnD,CAAjB,CAAoB,CAApB,CAEA,CADAA,CAAA,EACA,CAAAhB,CAAA,EAeJ,IAAI,EAAEupC,CAAF,CAAWrC,CAAAL,gBAAA,CAAwB/9B,CAAxB,CAAX,EAA4Co+B,CAAAZ,YAA5C,EACCY,CADD,GAtCOxvB,IAsCP,EACuBwvB,CAAAd,cADvB,CAAJ,CAEE,IAAA,CAAMc,CAAN,GAxCSxvB,IAwCT,EAA4B,EAAE6xB,CAAF,CAASrC,CAAAd,cAAT,CAA5B,CAAA,CACEc,CAAA,CAAUA,CAAAhB,QA1BS,CA+BzB,MAAO/uB,EA/CwB,CA93BjB,CAi7BlB,KAAIiF,EAAa,IAAI4pB,CAErB,OAAO5pB,EAn/B2D,CADxD,CAZe,CA2iC7BjP,QAASA,GAAqB,EAAG,CAAA,IAC3B4W,EAA6B,mCADF,CAE7BG,EAA8B,qCAkBhC;IAAAH,2BAAA,CAAkCC,QAAQ,CAACC,CAAD,CAAS,CACjD,MAAInhB,EAAA,CAAUmhB,CAAV,CAAJ,EACEF,CACO,CADsBE,CACtB,CAAA,IAFT,EAIOF,CAL0C,CAyBnD,KAAAG,4BAAA,CAAmCC,QAAQ,CAACF,CAAD,CAAS,CAClD,MAAInhB,EAAA,CAAUmhB,CAAV,CAAJ,EACEC,CACO,CADuBD,CACvB,CAAA,IAFT,EAIOC,CAL2C,CAQpD,KAAA1K,KAAA,CAAY4H,QAAQ,EAAG,CACrB,MAAO6oB,SAAoB,CAACC,CAAD,CAAMC,CAAN,CAAe,CACxC,IAAIC,EAAQD,CAAA,CAAUjmB,CAAV,CAAwCH,CAApD,CACIsmB,CAEJ,IAAI,CAACpyB,CAAL,EAAqB,CAArB,EAAaA,CAAb,CAEE,GADAoyB,CACI,CADYpR,EAAA,CAAWiR,CAAX,CAAA9qB,KACZ,CAAkB,EAAlB,GAAAirB,CAAA,EAAwB,CAACA,CAAA7iC,MAAA,CAAoB4iC,CAApB,CAA7B,CACE,MAAO,SAAP,CAAiBC,CAGrB,OAAOH,EAViC,CADrB,CArDQ,CA4FjCI,QAASA,GAAa,CAACC,CAAD,CAAU,CAC9B,GAAgB,MAAhB,GAAIA,CAAJ,CACE,MAAOA,EACF,IAAIrqC,CAAA,CAASqqC,CAAT,CAAJ,CAAuB,CAK5B,GAA8B,EAA9B,CAAIA,CAAAvmC,QAAA,CAAgB,KAAhB,CAAJ,CACE,KAAMwmC,GAAA,CAAW,QAAX,CACsDD,CADtD,CAAN,CAGFA,CAAA,CAA0BA,CAjBrB9iC,QAAA,CAAU,+BAAV,CAA2C,MAA3C,CAAAA,QAAA,CACU,OADV,CACmB,OADnB,CAiBKA,QAAA,CACY,QADZ,CACsB,IADtB,CAAAA,QAAA,CAEY,KAFZ,CAEmB,YAFnB,CAGV,OAAW7C,OAAJ,CAAW,GAAX;AAAiB2lC,CAAjB,CAA2B,GAA3B,CAZqB,CAavB,GAAIpnC,EAAA,CAASonC,CAAT,CAAJ,CAIL,MAAW3lC,OAAJ,CAAW,GAAX,CAAiB2lC,CAAAlmC,OAAjB,CAAkC,GAAlC,CAEP,MAAMmmC,GAAA,CAAW,UAAX,CAAN,CAtB4B,CA4BhCC,QAASA,GAAc,CAACC,CAAD,CAAW,CAChC,IAAIC,EAAmB,EACnB7nC,EAAA,CAAU4nC,CAAV,CAAJ,EACEtqC,CAAA,CAAQsqC,CAAR,CAAkB,QAAQ,CAACH,CAAD,CAAU,CAClCI,CAAA9pC,KAAA,CAAsBypC,EAAA,CAAcC,CAAd,CAAtB,CADkC,CAApC,CAIF,OAAOI,EAPyB,CA4ElC75B,QAASA,GAAoB,EAAG,CAC9B,IAAA85B,aAAA,CAAoBA,EADU,KAI1BC,EAAuB,CAAC,MAAD,CAJG,CAK1BC,EAAuB,EAwB3B,KAAAD,qBAAA,CAA4BE,QAAS,CAAC5pC,CAAD,CAAQ,CACvCe,SAAAlC,OAAJ,GACE6qC,CADF,CACyBJ,EAAA,CAAetpC,CAAf,CADzB,CAGA,OAAO0pC,EAJoC,CAkC7C,KAAAC,qBAAA,CAA4BE,QAAS,CAAC7pC,CAAD,CAAQ,CACvCe,SAAAlC,OAAJ,GACE8qC,CADF,CACyBL,EAAA,CAAetpC,CAAf,CA
DzB,CAGA,OAAO2pC,EAJoC,CAO7C,KAAAtxB,KAAA,CAAY,CAAC,WAAD,CAAc,QAAQ,CAAC4B,CAAD,CAAY,CA0C5C6vB,QAASA,EAAkB,CAACC,CAAD,CAAO,CAChC,IAAIC,EAAaA,QAA+B,CAACC,CAAD,CAAe,CAC7D,IAAAC,qBAAA,CAA4BC,QAAQ,EAAG,CACrC,MAAOF,EAD8B,CADsB,CAK3DF,EAAJ,GACEC,CAAAnwB,UADF,CACyB,IAAIkwB,CAD7B,CAGAC,EAAAnwB,UAAAwf,QAAA;AAA+B+Q,QAAmB,EAAG,CACnD,MAAO,KAAAF,qBAAA,EAD4C,CAGrDF,EAAAnwB,UAAA9X,SAAA,CAAgCsoC,QAAoB,EAAG,CACrD,MAAO,KAAAH,qBAAA,EAAAnoC,SAAA,EAD8C,CAGvD,OAAOioC,EAfyB,CAxClC,IAAIM,EAAgBA,QAAsB,CAACnkC,CAAD,CAAO,CAC/C,KAAMkjC,GAAA,CAAW,QAAX,CAAN,CAD+C,CAI7CpvB,EAAAF,IAAA,CAAc,WAAd,CAAJ,GACEuwB,CADF,CACkBrwB,CAAArB,IAAA,CAAc,WAAd,CADlB,CAN4C,KA4DxC2xB,EAAyBT,CAAA,EA5De,CA6DxCU,EAAS,EAEbA,EAAA,CAAOf,EAAA5a,KAAP,CAAA,CAA4Bib,CAAA,CAAmBS,CAAnB,CAC5BC,EAAA,CAAOf,EAAAgB,IAAP,CAAA,CAA2BX,CAAA,CAAmBS,CAAnB,CAC3BC,EAAA,CAAOf,EAAAiB,IAAP,CAAA,CAA2BZ,CAAA,CAAmBS,CAAnB,CAC3BC,EAAA,CAAOf,EAAAkB,GAAP,CAAA,CAA0Bb,CAAA,CAAmBS,CAAnB,CAC1BC,EAAA,CAAOf,EAAA3a,aAAP,CAAA,CAAoCgb,CAAA,CAAmBU,CAAA,CAAOf,EAAAiB,IAAP,CAAnB,CAyGpC,OAAO,SAtFPE,QAAgB,CAACl3B,CAAD,CAAOu2B,CAAP,CAAqB,CACnC,IAAItwB,EAAe6wB,CAAAlrC,eAAA,CAAsBoU,CAAtB,CAAA,CAA8B82B,CAAA,CAAO92B,CAAP,CAA9B,CAA6C,IAChE,IAAI,CAACiG,CAAL,CACE,KAAM0vB,GAAA,CAAW,UAAX,CAEF31B,CAFE,CAEIu2B,CAFJ,CAAN,CAIF,GAAqB,IAArB,GAAIA,CAAJ,EAA6BA,CAA7B,GAA8CzrC,CAA9C,EAA4E,EAA5E,GAA2DyrC,CAA3D,CACE,MAAOA,EAIT,IAA4B,QAA5B,GAAI,MAAOA,EAAX,CACE,KAAMZ,GAAA,CAAW,OAAX,CAEF31B,CAFE,CAAN,CAIF,MAAO,KAAIiG,CAAJ,CAAgBswB,CAAhB,CAjB4B,CAsF9B;WAzBP7Q,QAAmB,CAAC1lB,CAAD,CAAOm3B,CAAP,CAAqB,CACtC,GAAqB,IAArB,GAAIA,CAAJ,EAA6BA,CAA7B,GAA8CrsC,CAA9C,EAA4E,EAA5E,GAA2DqsC,CAA3D,CACE,MAAOA,EAET,KAAI/gC,EAAe0gC,CAAAlrC,eAAA,CAAsBoU,CAAtB,CAAA,CAA8B82B,CAAA,CAAO92B,CAAP,CAA9B,CAA6C,IAChE,IAAI5J,CAAJ,EAAmB+gC,CAAnB,WAA2C/gC,EAA3C,CACE,MAAO+gC,EAAAX,qBAAA,EAKT,IAAIx2B,CAAJ,GAAa+1B,EAAA3a,aAAb,CAAwC,CAzIpC8L,IAAAA,EAAY9C,EAAA,CA0ImB+S,CA1IR9oC,SAAA,EAAX,CAAZ64B,CACA/6B,CADA+6B,CACGna,CADHma,CACMkQ,EAAU,CAAA,CAEfjrC,EAAA,CAAI,CAAT,KAAY4gB,CAAZ,CAAgBipB,CAAA7qC,OAAhB,CAA6CgB,CAA7C,CAAiD4gB,CAAjD,CAAoD5gB,CAAA,EAApD,CACE,GAbc,MAAhB,GAae6pC,CAAAN,CAAqBvpC,CAArBupC,CAbf,CACSpV,EAAA,CAY+B4G,CAZ/B,CADT,CAae8O,CAAAN,CAAqBvpC,CAArBupC,CATJthC,KAAA,CAS6B8yB,CAThB3c,KAAb,CAST,CAAkD,CAChD6sB,CAAA,CAAU,CAAA,CACV,MAFgD,CAKpD,GAAIA,CAAJ,CAEE,IAAKjrC,CAAO,CAAH,CAAG,CAAA4gB,CAAA,CAAIkpB,CAAA9qC,OAAhB,CAA6CgB,CAA7C,CAAiD4gB,CAAjD,CAAoD5gB,CAAA,EAApD,CACE,GArBY,MAAhB,GAqBiB8pC,CAAAP,CAAqBvpC,CAArBupC,CArBjB,CACSpV,EAAA,CAoBiC4G,CApBjC,CADT,CAqBiB+O,CAAAP,CAAqBvpC,CAArBupC,CAjBNthC,KAAA,CAiB+B8yB,CAjBlB3c,KAAb,CAiBP,CAAkD,CAChD6sB,CAAA,CAAU,CAAA,CACV,MAFgD,CA8HpD,GAxHKA,CAwHL,CACE,MAAOD,EAEP,MAAMxB,GAAA,CAAW,UAAX,CAEFwB,CAAA9oC,SAAA,EAFE,CAAN,CAJoC,CAQjC,GAAI2R,CAAJ,GAAa+1B,EAAA5a,KAAb,CACL,MAAOyb,EAAA,CAAcO,CAAd,CAET,MAAMxB,GAAA,CAAW,QAAX,CAAN,CAtBsC,CAyBjC,SAhDPhQ,QAAgB,CAACwR,CAAD,CAAe,CAC7B,MAAIA,EAAJ;AAA4BN,CAA5B,CACSM,CAAAX,qBAAA,EADT,CAGSW,CAJoB,CAgDxB,CA5KqC,CAAlC,CAtEkB,CAmhBhCn7B,QAASA,GAAY,EAAG,CACtB,IAAIq7B,EAAU,CAAA,CAad,KAAAA,QAAA,CAAeC,QAAS,CAAChrC,CAAD,CAAQ,CAC1Be,SAAAlC,OAAJ,GACEksC,CADF,CACY,CAAC,CAAC/qC,CADd,CAGA,OAAO+qC,EAJuB,CAsDhC,KAAA1yB,KAAA,CAAY,CAAC,QAAD,CAAW,UAAX,CAAuB,cAAvB,CAAuC,QAAQ,CAC7C+K,CAD6C,CACnCnH,CADmC,CACvBgvB,CADuB,CACT,CAGhD,GAAIF,CAAJ,EAAe9uB,CAAAnF,KAAf,EAA4D,CAA5D,CAAgCmF,CAAAivB,iBAAhC,CACE,KAAM7B,GAAA,CAAW,UAAX,CAAN,CAMF,IAAI8B,EAAMloC,EAAA,CAAKwmC,EAAL,CAaV0B,EAAAC,UAAA,CAAgBC,QAAS,EAAG,CAC1B,MAAON,EADmB,CAG5BI,EAAAP,QAAA,CAAcK,CAAAL,QACdO,EAAA/R,WAAA,CAAiB6R,CAAA7R,WACjB+R,EAAA9R,QAAA,CAAc4R,CAAA5R,QAET0R,EAAL,GACEI,CAAAP,QACA,CADcO,CAAA/R,WACd,CAD+BkS,QAAQ,CAAC53B,CAAD,CAAO1T,CAAP,CAAc,CAAE,MA
AOA,EAAT,CACrD,CAAAmrC,CAAA9R,QAAA,CAAc93B,EAFhB,CAwBA4pC,EAAAI,QAAA,CAAcC,QAAmB,CAAC93B,CAAD,CAAO60B,CAAP,CAAa,CAC5C,IAAIz2B,EAASsR,CAAA,CAAOmlB,CAAP,CACb,OAAIz2B,EAAAqY,QAAJ,EAAsBrY,CAAAoI,SAAtB,CACSpI,CADT,CAGS25B,QAA0B,CAACjnC,CAAD,CAAOgV,CAAP,CAAe,CAC9C,MAAO2xB,EAAA/R,WAAA,CAAe1lB,CAAf;AAAqB5B,CAAA,CAAOtN,CAAP,CAAagV,CAAb,CAArB,CADuC,CALN,CAtDE,KAoT5CjU,EAAQ4lC,CAAAI,QApToC,CAqT5CnS,EAAa+R,CAAA/R,WArT+B,CAsT5CwR,EAAUO,CAAAP,QAEd3rC,EAAA,CAAQwqC,EAAR,CAAsB,QAAS,CAACiC,CAAD,CAAY/jC,CAAZ,CAAkB,CAC/C,IAAIgkC,EAAQjmC,CAAA,CAAUiC,CAAV,CACZwjC,EAAA,CAAIj7B,EAAA,CAAU,WAAV,CAAwBy7B,CAAxB,CAAJ,CAAA,CAAsC,QAAS,CAACpD,CAAD,CAAO,CACpD,MAAOhjC,EAAA,CAAMmmC,CAAN,CAAiBnD,CAAjB,CAD6C,CAGtD4C,EAAA,CAAIj7B,EAAA,CAAU,cAAV,CAA2By7B,CAA3B,CAAJ,CAAA,CAAyC,QAAS,CAAC3rC,CAAD,CAAQ,CACxD,MAAOo5B,EAAA,CAAWsS,CAAX,CAAsB1rC,CAAtB,CADiD,CAG1DmrC,EAAA,CAAIj7B,EAAA,CAAU,WAAV,CAAwBy7B,CAAxB,CAAJ,CAAA,CAAsC,QAAS,CAAC3rC,CAAD,CAAQ,CACrD,MAAO4qC,EAAA,CAAQc,CAAR,CAAmB1rC,CAAnB,CAD8C,CARR,CAAjD,CAaA,OAAOmrC,EArUyC,CADtC,CApEU,CA6ZxBv7B,QAASA,GAAgB,EAAG,CAC1B,IAAAyI,KAAA,CAAY,CAAC,SAAD,CAAY,WAAZ,CAAyB,QAAQ,CAAC0C,CAAD,CAAUiF,CAAV,CAAqB,CAAA,IAC5D4rB,EAAe,EAD6C,CAE5DC,EACE7qC,CAAA,CAAI,CAAC,eAAA8G,KAAA,CAAqBpC,CAAA,CAAWomC,CAAA/wB,CAAAgxB,UAAAD,EAAqB,EAArBA,WAAX,CAArB,CAAD,EAAyE,EAAzE,EAA6E,CAA7E,CAAJ,CAH0D,CAI5DE,EAAQ,QAAAljC,KAAA,CAAegjC,CAAA/wB,CAAAgxB,UAAAD,EAAqB,EAArBA,WAAf,CAJoD,CAK5DvtC,EAAWyhB,CAAA,CAAU,CAAV,CAAXzhB,EAA2B,EALiC,CAM5D0tC,EAAe1tC,CAAA0tC,aAN6C,CAO5DC,CAP4D,CAQ5DC,EAAc,6BAR8C,CAS5DC,EAAY7tC,CAAA64B,KAAZgV,EAA6B7tC,CAAA64B,KAAAiV,MAT+B;AAU5DC,EAAc,CAAA,CAV8C,CAW5DC,EAAa,CAAA,CAGjB,IAAIH,CAAJ,CAAe,CACb,IAAI7pC,IAAIA,CAAR,GAAgB6pC,EAAhB,CACE,GAAG/lC,CAAH,CAAW8lC,CAAArkC,KAAA,CAAiBvF,CAAjB,CAAX,CAAmC,CACjC2pC,CAAA,CAAe7lC,CAAA,CAAM,CAAN,CACf6lC,EAAA,CAAeA,CAAAllB,OAAA,CAAoB,CAApB,CAAuB,CAAvB,CAAA1W,YAAA,EAAf,CAAyD47B,CAAAllB,OAAA,CAAoB,CAApB,CACzD,MAHiC,CAOjCklB,CAAJ,GACEA,CADF,CACkB,eADlB,EACqCE,EADrC,EACmD,QADnD,CAIAE,EAAA,CAAc,CAAC,EAAG,YAAH,EAAmBF,EAAnB,EAAkCF,CAAlC,CAAiD,YAAjD,EAAiEE,EAAjE,CACfG,EAAA,CAAc,CAAC,EAAG,WAAH,EAAkBH,EAAlB,EAAiCF,CAAjC,CAAgD,WAAhD,EAA+DE,EAA/D,CAEXP,EAAAA,CAAJ,EAAiBS,CAAjB,EAA+BC,CAA/B,GACED,CACA,CADcvtC,CAAA,CAASR,CAAA64B,KAAAiV,MAAAG,iBAAT,CACd,CAAAD,CAAA,CAAaxtC,CAAA,CAASR,CAAA64B,KAAAiV,MAAAI,gBAAT,CAFf,CAhBa,CAuBf,MAAO,SAUI,EAAGpvB,CAAAtC,CAAAsC,QAAH,EAAsBgB,CAAAtD,CAAAsC,QAAAgB,UAAtB,EAA+D,CAA/D,CAAqDwtB,CAArD,EAAsEG,CAAtE,CAVJ,YAYO,cAZP,EAYyBjxB,EAZzB,GAcQ,CAACkxB,CAdT,EAcwC,CAdxC,CAcyBA,CAdzB,WAeKS,QAAQ,CAAC12B,CAAD,CAAQ,CAIxB,GAAa,OAAb,EAAIA,CAAJ,EAAgC,CAAhC,EAAwBc,CAAxB,CAAmC,MAAO,CAAA,CAE1C,IAAIpV,CAAA,CAAYkqC,CAAA,CAAa51B,CAAb,CAAZ,CAAJ,CAAsC,CACpC,IAAI22B,EAASpuC,CAAA8T,cAAA,CAAuB,KAAvB,CACbu5B,EAAA,CAAa51B,CAAb,CAAA,CAAsB,IAAtB;AAA6BA,CAA7B,GAAsC22B,EAFF,CAKtC,MAAOf,EAAA,CAAa51B,CAAb,CAXiB,CAfrB,KA4BA7R,EAAA,EA5BA,cA6BS+nC,CA7BT,aA8BSI,CA9BT,YA+BQC,CA/BR,SAgCIV,CAhCJ,MAiCE/0B,CAjCF,kBAkCam1B,CAlCb,CArCyD,CAAtD,CADc,CA6E5Bn8B,QAASA,GAAgB,EAAG,CAC1B,IAAAuI,KAAA,CAAY,CAAC,YAAD,CAAe,UAAf,CAA2B,IAA3B,CAAiC,mBAAjC,CACP,QAAQ,CAAC4C,CAAD,CAAe2X,CAAf,CAA2BC,CAA3B,CAAiCvQ,CAAjC,CAAoD,CA6B/D4T,QAASA,EAAO,CAACzxB,CAAD,CAAKkb,CAAL,CAAY+Z,CAAZ,CAAyB,CAAA,IACnCjE,EAAW5C,CAAApT,MAAA,EADwB,CAEnCgV,EAAUgB,CAAAhB,QAFyB,CAGnCoF,EAAal4B,CAAA,CAAU+3B,CAAV,CAAbG,EAAuC,CAACH,CAG5C9Z,EAAA,CAAYgT,CAAAnT,MAAA,CAAe,QAAQ,EAAG,CACpC,GAAI,CACFgW,CAAAC,QAAA,CAAiBjxB,CAAA,EAAjB,CADE,CAEF,MAAMuB,CAAN,CAAS,CACTyvB,CAAAvC,OAAA,CAAgBltB,CAAhB,CACA,CAAAsc,CAAA,CAAkBtc,CAAlB,CAFS,CAFX,OAMQ,CACN,OAAO4mC,CAAA,CAAUnY,CAAAoY,YAAV,CADD,CAIHhT,CAAL,EAAgB5e,CAAAtS,OAAA,EAXoB,CAA1B,CAYTgX,CAZS,CAcZ8U,EAAAoY,Y
AAA,CAAsBjtB,CACtBgtB,EAAA,CAAUhtB,CAAV,CAAA,CAAuB6V,CAEvB,OAAOhB,EAvBgC,CA5BzC,IAAImY,EAAY,EAmEhB1W,EAAArW,OAAA,CAAiBitB,QAAQ,CAACrY,CAAD,CAAU,CACjC,MAAIA,EAAJ,EAAeA,CAAAoY,YAAf,GAAsCD,EAAtC,EACEA,CAAA,CAAUnY,CAAAoY,YAAV,CAAA3Z,OAAA,CAAsC,UAAtC,CAEO;AADP,OAAO0Z,CAAA,CAAUnY,CAAAoY,YAAV,CACA,CAAAja,CAAAnT,MAAAI,OAAA,CAAsB4U,CAAAoY,YAAtB,CAHT,EAKO,CAAA,CAN0B,CASnC,OAAO3W,EA7EwD,CADrD,CADc,CAkJ5B4B,QAASA,GAAU,CAAC7a,CAAD,CAAM8vB,CAAN,CAAY,CAC7B,IAAI9uB,EAAOhB,CAEPnG,EAAJ,GAGEk2B,CAAAh4B,aAAA,CAA4B,MAA5B,CAAoCiJ,CAApC,CACA,CAAAA,CAAA,CAAO+uB,CAAA/uB,KAJT,CAOA+uB,EAAAh4B,aAAA,CAA4B,MAA5B,CAAoCiJ,CAApC,CAGA,OAAO,MACC+uB,CAAA/uB,KADD,UAEK+uB,CAAAjV,SAAA,CAA0BiV,CAAAjV,SAAAzxB,QAAA,CAAgC,IAAhC,CAAsC,EAAtC,CAA1B,CAAsE,EAF3E,MAGC0mC,CAAAv3B,KAHD,QAIGu3B,CAAAvR,OAAA,CAAwBuR,CAAAvR,OAAAn1B,QAAA,CAA8B,KAA9B,CAAqC,EAArC,CAAxB,CAAmE,EAJtE,MAKC0mC,CAAA3xB,KAAA,CAAsB2xB,CAAA3xB,KAAA/U,QAAA,CAA4B,IAA5B,CAAkC,EAAlC,CAAtB,CAA8D,EAL/D,UAMK0mC,CAAAjS,SANL,MAOCiS,CAAA/R,KAPD,UAQ4C,GACvC,GADC+R,CAAAzR,SAAA33B,OAAA,CAA+B,CAA/B,CACD,CAANopC,CAAAzR,SAAM,CACN,GADM,CACAyR,CAAAzR,SAVL,CAbsB,CAkC/BvH,QAASA,GAAe,CAACiZ,CAAD,CAAa,CAC/Bn7B,CAAAA,CAAU/S,CAAA,CAASkuC,CAAT,CAAD,CAAyBnV,EAAA,CAAWmV,CAAX,CAAzB,CAAkDA,CAC/D,OAAQn7B,EAAAimB,SAAR,GAA4BmV,EAAAnV,SAA5B,EACQjmB,CAAA2D,KADR,GACwBy3B,EAAAz3B,KAHW,CAr0bE;AAm3bvC1F,QAASA,GAAe,EAAE,CACxB,IAAAsI,KAAA,CAAY5W,EAAA,CAAQnD,CAAR,CADY,CA+E1B0Q,QAASA,GAAe,CAAC3G,CAAD,CAAW,CAWjCgpB,QAASA,EAAQ,CAAC1pB,CAAD,CAAOkD,CAAP,CAAgB,CAC/B,GAAGjJ,CAAA,CAAS+F,CAAT,CAAH,CAAmB,CACjB,IAAIwlC,EAAU,EACdluC,EAAA,CAAQ0I,CAAR,CAAc,QAAQ,CAACmJ,CAAD,CAAS1R,CAAT,CAAc,CAClC+tC,CAAA,CAAQ/tC,CAAR,CAAA,CAAeiyB,CAAA,CAASjyB,CAAT,CAAc0R,CAAd,CADmB,CAApC,CAGA,OAAOq8B,EALU,CAOjB,MAAO9kC,EAAAwC,QAAA,CAAiBlD,CAAjB,CAAwBylC,CAAxB,CAAgCviC,CAAhC,CARsB,CAVjC,IAAIuiC,EAAS,QAqBb,KAAA/b,SAAA,CAAgBA,CAEhB,KAAAhZ,KAAA,CAAY,CAAC,WAAD,CAAc,QAAQ,CAAC4B,CAAD,CAAY,CAC5C,MAAO,SAAQ,CAACtS,CAAD,CAAO,CACpB,MAAOsS,EAAArB,IAAA,CAAcjR,CAAd,CAAqBylC,CAArB,CADa,CADsB,CAAlC,CAoBZ/b,EAAA,CAAS,UAAT,CAAqBgc,EAArB,CACAhc,EAAA,CAAS,MAAT,CAAiBic,EAAjB,CACAjc,EAAA,CAAS,QAAT,CAAmBkc,EAAnB,CACAlc,EAAA,CAAS,MAAT,CAAiBmc,EAAjB,CACAnc,EAAA,CAAS,SAAT,CAAoBoc,EAApB,CACApc,EAAA,CAAS,WAAT,CAAsBqc,EAAtB,CACArc,EAAA,CAAS,QAAT,CAAmBsc,EAAnB,CACAtc,EAAA,CAAS,SAAT,CAAoBuc,EAApB,CACAvc,EAAA,CAAS,WAAT,CAAsBwc,EAAtB,CApDiC,CAwKnCN,QAASA,GAAY,EAAG,CACtB,MAAO,SAAQ,CAACzqC,CAAD,CAAQyuB,CAAR,CAAoBuc,CAApB,CAAgC,CAC7C,GAAI,CAAC9uC,CAAA,CAAQ8D,CAAR,CAAL,CAAqB,MAAOA,EADiB,KAGzCirC,EAAiB,MAAOD,EAHiB,CAIzCE,EAAa,EAEjBA,EAAAtxB,MAAA,CAAmBuxB,QAAQ,CAACjuC,CAAD,CAAQ,CACjC,IAAK,IAAI+S,EAAI,CAAb,CAAgBA,CAAhB,CAAoBi7B,CAAAnvC,OAApB,CAAuCkU,CAAA,EAAvC,CACE,GAAG,CAACi7B,CAAA,CAAWj7B,CAAX,CAAA,CAAc/S,CAAd,CAAJ,CACE,MAAO,CAAA,CAGX;MAAO,CAAA,CAN0B,CASZ,WAAvB,GAAI+tC,CAAJ,GAEID,CAFJ,CACyB,SAAvB,GAAIC,CAAJ,EAAoCD,CAApC,CACeA,QAAQ,CAACnvC,CAAD,CAAM2vB,CAAN,CAAY,CAC/B,MAAOvlB,GAAAlF,OAAA,CAAelF,CAAf,CAAoB2vB,CAApB,CADwB,CADnC,CAKewf,QAAQ,CAACnvC,CAAD,CAAM2vB,CAAN,CAAY,CAC/B,GAAI3vB,CAAJ,EAAW2vB,CAAX,EAAkC,QAAlC,GAAmB,MAAO3vB,EAA1B,EAA8D,QAA9D,GAA8C,MAAO2vB,EAArD,CAAwE,CACtE,IAAK4f,IAAIA,CAAT,GAAmBvvC,EAAnB,CACE,GAAyB,GAAzB,GAAIuvC,CAAAtqC,OAAA,CAAc,CAAd,CAAJ,EAAgCtE,EAAAC,KAAA,CAAoBZ,CAApB,CAAyBuvC,CAAzB,CAAhC,EACIJ,CAAA,CAAWnvC,CAAA,CAAIuvC,CAAJ,CAAX,CAAwB5f,CAAA,CAAK4f,CAAL,CAAxB,CADJ,CAEE,MAAO,CAAA,CAGX,OAAO,CAAA,CAP+D,CASxE5f,CAAA,CAAQ9kB,CAAA,EAAAA,CAAG8kB,CAAH9kB,aAAA,EACR,OAA+C,EAA/C,CAAQA,CAAA,EAAAA,CAAG7K,CAAH6K,aAAA,EAAA3G,QAAA,CAA8ByrB,CAA9B,CAXuB,CANrC,CAsBA,KAAImN,EAASA,QAAQ,CAAC98B,CAAD,CAAM2vB,CAAN,CAAW,CAC9B,GAAmB,QAAnB,EAAI,MAAOA,EAAX,EAAkD,GAAlD,GAA+BA,CAAA1qB,OAAA,
CAAY,CAAZ,CAA/B,CACE,MAAO,CAAC63B,CAAA,CAAO98B,CAAP,CAAY2vB,CAAAtH,OAAA,CAAY,CAAZ,CAAZ,CAEV,QAAQ,MAAOroB,EAAf,EACE,KAAK,SAAL,CACA,KAAK,QAAL,CACA,KAAK,QAAL,CACE,MAAOmvC,EAAA,CAAWnvC,CAAX,CAAgB2vB,CAAhB,CACT,MAAK,QAAL,CACE,OAAQ,MAAOA,EAAf,EACE,KAAK,QAAL,CACE,MAAOwf,EAAA,CAAWnvC,CAAX;AAAgB2vB,CAAhB,CACT,SACE,IAAM4f,IAAIA,CAAV,GAAoBvvC,EAApB,CACE,GAAyB,GAAzB,GAAIuvC,CAAAtqC,OAAA,CAAc,CAAd,CAAJ,EAAgC63B,CAAA,CAAO98B,CAAA,CAAIuvC,CAAJ,CAAP,CAAoB5f,CAApB,CAAhC,CACE,MAAO,CAAA,CANf,CAWA,MAAO,CAAA,CACT,MAAK,OAAL,CACE,IAAUzuB,CAAV,CAAc,CAAd,CAAiBA,CAAjB,CAAqBlB,CAAAE,OAArB,CAAiCgB,CAAA,EAAjC,CACE,GAAI47B,CAAA,CAAO98B,CAAA,CAAIkB,CAAJ,CAAP,CAAeyuB,CAAf,CAAJ,CACE,MAAO,CAAA,CAGX,OAAO,CAAA,CACT,SACE,MAAO,CAAA,CA1BX,CAJ8B,CAiChC,QAAQ,MAAOiD,EAAf,EACE,KAAK,SAAL,CACA,KAAK,QAAL,CACA,KAAK,QAAL,CAEEA,CAAA,CAAa,GAAGA,CAAH,CAEf,MAAK,QAAL,CAEE,IAAKnyB,IAAIA,CAAT,GAAgBmyB,EAAhB,CACG,SAAQ,CAACtnB,CAAD,CAAO,CACiB,WAA/B,EAAI,MAAOsnB,EAAA,CAAWtnB,CAAX,CAAX,EACA+jC,CAAAtuC,KAAA,CAAgB,QAAQ,CAACM,CAAD,CAAQ,CAC9B,MAAOy7B,EAAA,CAAe,GAAR,EAAAxxB,CAAA,CAAcjK,CAAd,CAAuBA,CAAvB,EAAgCA,CAAA,CAAMiK,CAAN,CAAvC,CAAqDsnB,CAAA,CAAWtnB,CAAX,CAArD,CADuB,CAAhC,CAFc,CAAf,CAAA,CAKE7K,CALF,CAOH,MACF,MAAK,UAAL,CACE4uC,CAAAtuC,KAAA,CAAgB6xB,CAAhB,CACA,MACF,SACE,MAAOzuB,EAtBX,CAwBIqrC,CAAAA,CAAW,EACf,KAAUp7B,CAAV,CAAc,CAAd,CAAiBA,CAAjB,CAAqBjQ,CAAAjE,OAArB,CAAmCkU,CAAA,EAAnC,CAAwC,CACtC,IAAI/S,EAAQ8C,CAAA,CAAMiQ,CAAN,CACRi7B,EAAAtxB,MAAA,CAAiB1c,CAAjB,CAAJ,EACEmuC,CAAAzuC,KAAA,CAAcM,CAAd,CAHoC,CAMxC,MAAOmuC,EArGsC,CADzB,CA0JxBd,QAASA,GAAc,CAACe,CAAD,CAAU,CAC/B,IAAIC;AAAUD,CAAAE,eACd,OAAO,SAAQ,CAACC,CAAD,CAASC,CAAT,CAAwB,CACjC9sC,CAAA,CAAY8sC,CAAZ,CAAJ,GAAiCA,CAAjC,CAAkDH,CAAAI,aAAlD,CACA,OAAOC,GAAA,CAAaH,CAAb,CAAqBF,CAAAM,SAAA,CAAiB,CAAjB,CAArB,CAA0CN,CAAAO,UAA1C,CAA6DP,CAAAQ,YAA7D,CAAkF,CAAlF,CAAAvoC,QAAA,CACa,SADb,CACwBkoC,CADxB,CAF8B,CAFR,CA4DjCb,QAASA,GAAY,CAACS,CAAD,CAAU,CAC7B,IAAIC,EAAUD,CAAAE,eACd,OAAO,SAAQ,CAACQ,CAAD,CAASC,CAAT,CAAuB,CACpC,MAAOL,GAAA,CAAaI,CAAb,CAAqBT,CAAAM,SAAA,CAAiB,CAAjB,CAArB,CAA0CN,CAAAO,UAA1C,CAA6DP,CAAAQ,YAA7D,CACLE,CADK,CAD6B,CAFT,CAS/BL,QAASA,GAAY,CAACI,CAAD,CAASE,CAAT,CAAkBC,CAAlB,CAA4BC,CAA5B,CAAwCH,CAAxC,CAAsD,CACzE,GAAc,IAAd,EAAID,CAAJ,EAAsB,CAACK,QAAA,CAASL,CAAT,CAAvB,EAA2CltC,CAAA,CAASktC,CAAT,CAA3C,CAA6D,MAAO,EAEpE,KAAIM,EAAsB,CAAtBA,CAAaN,CACjBA,EAAA,CAASxiB,IAAA+iB,IAAA,CAASP,CAAT,CAJgE,KAKrEQ,EAASR,CAATQ,CAAkB,EALmD,CAMrEC,EAAe,EANsD,CAOrEzoC,EAAQ,EAP6D,CASrE0oC,EAAc,CAAA,CAClB,IAA6B,EAA7B,GAAIF,CAAAzsC,QAAA,CAAe,GAAf,CAAJ,CAAgC,CAC9B,IAAIwD,EAAQipC,CAAAjpC,MAAA,CAAa,qBAAb,CACRA,EAAJ,EAAyB,GAAzB,EAAaA,CAAA,CAAM,CAAN,CAAb,EAAgCA,CAAA,CAAM,CAAN,CAAhC,CAA2C0oC,CAA3C,CAA0D,CAA1D,CACEO,CADF,CACW,GADX,EAGEC,CACA,CADeD,CACf,CAAAE,CAAA,CAAc,CAAA,CAJhB,CAF8B,CAUhC,GAAKA,CAAL,CA2CqB,CAAnB,CAAIT,CAAJ,GAAkC,EAAlC,CAAwBD,CAAxB,EAAgD,CAAhD,CAAuCA,CAAvC,IACES,CADF,CACiBT,CAAAW,QAAA,CAAeV,CAAf,CADjB,CA3CF;IAAkB,CACZW,CAAAA,CAAe7wC,CAAAywC,CAAA1oC,MAAA,CAAaioC,EAAb,CAAA,CAA0B,CAA1B,CAAAhwC,EAAgC,EAAhCA,QAGf6C,EAAA,CAAYqtC,CAAZ,CAAJ,GACEA,CADF,CACiBziB,IAAAqjB,IAAA,CAASrjB,IAAAC,IAAA,CAASyiB,CAAAY,QAAT,CAA0BF,CAA1B,CAAT,CAAiDV,CAAAa,QAAjD,CADjB,CAIIC,EAAAA,CAAMxjB,IAAAwjB,IAAA,CAAS,EAAT,CAAaf,CAAb,CACVD,EAAA,CAASxiB,IAAAyjB,MAAA,CAAWjB,CAAX,CAAoBgB,CAApB,CAAT,CAAoCA,CAChCE,EAAAA,CAAYppC,CAAA,EAAAA,CAAKkoC,CAALloC,OAAA,CAAmBioC,EAAnB,CACZhT,EAAAA,CAAQmU,CAAA,CAAS,CAAT,CACZA,EAAA,CAAWA,CAAA,CAAS,CAAT,CAAX,EAA0B,EAEnBzmC,KAAAA,EAAM,CAANA,CACH0mC,EAASjB,CAAAkB,OADN3mC,CAEH4mC,EAAQnB,CAAAoB,MAEZ,IAAIvU,CAAAh9B,OAAJ,EAAqBoxC,CAArB,CAA8BE,CAA9B,CAEE,IADA5mC,CACK,CADCsyB,CAAAh9B,OACD,CADgBoxC,CAChB,CAAApwC,CAAA,CAAI,CAAT,CAA
YA,CAAZ,CAAgB0J,CAAhB,CAAqB1J,CAAA,EAArB,CAC0B,CAGxB,IAHK0J,CAGL,CAHW1J,CAGX,EAHcswC,CAGd,EAHmC,CAGnC,GAH6BtwC,CAG7B,GAFE0vC,CAEF,EAFkBN,CAElB,EAAAM,CAAA,EAAgB1T,CAAAj4B,OAAA,CAAa/D,CAAb,CAIpB,KAAKA,CAAL,CAAS0J,CAAT,CAAc1J,CAAd,CAAkBg8B,CAAAh9B,OAAlB,CAAgCgB,CAAA,EAAhC,CACoC,CAGlC,IAHKg8B,CAAAh9B,OAGL,CAHoBgB,CAGpB,EAHuBowC,CAGvB,EAH6C,CAG7C,GAHuCpwC,CAGvC,GAFE0vC,CAEF,EAFkBN,CAElB,EAAAM,CAAA,EAAgB1T,CAAAj4B,OAAA,CAAa/D,CAAb,CAIlB,KAAA,CAAMmwC,CAAAnxC,OAAN,CAAwBkwC,CAAxB,CAAA,CACEiB,CAAA,EAAY,GAGVjB,EAAJ,EAAqC,GAArC,GAAoBA,CAApB,GAA0CQ,CAA1C,EAA0DL,CAA1D,CAAuEc,CAAAhpB,OAAA,CAAgB,CAAhB,CAAmB+nB,CAAnB,CAAvE,CAxCgB,CAgDlBjoC,CAAApH,KAAA,CAAW0vC,CAAA,CAAaJ,CAAAqB,OAAb,CAA8BrB,CAAAsB,OAAzC,CACAxpC,EAAApH,KAAA,CAAW6vC,CAAX,CACAzoC,EAAApH,KAAA,CAAW0vC,CAAA,CAAaJ,CAAAuB,OAAb,CAA8BvB,CAAAwB,OAAzC,CACA,OAAO1pC,EAAAxG,KAAA,CAAW,EAAX,CAvEkE,CA0E3EmwC,QAASA,GAAS,CAACpW,CAAD;AAAMqW,CAAN,CAAc9+B,CAAd,CAAoB,CACpC,IAAI++B,EAAM,EACA,EAAV,CAAItW,CAAJ,GACEsW,CACA,CADO,GACP,CAAAtW,CAAA,CAAM,CAACA,CAFT,CAKA,KADAA,CACA,CADM,EACN,CADWA,CACX,CAAMA,CAAAx7B,OAAN,CAAmB6xC,CAAnB,CAAA,CAA2BrW,CAAA,CAAM,GAAN,CAAYA,CACnCzoB,EAAJ,GACEyoB,CADF,CACQA,CAAArT,OAAA,CAAWqT,CAAAx7B,OAAX,CAAwB6xC,CAAxB,CADR,CAEA,OAAOC,EAAP,CAAatW,CAVuB,CActCuW,QAASA,EAAU,CAACjpC,CAAD,CAAOoZ,CAAP,CAAa1Q,CAAb,CAAqBuB,CAArB,CAA2B,CAC5CvB,CAAA,CAASA,CAAT,EAAmB,CACnB,OAAO,SAAQ,CAACwgC,CAAD,CAAO,CAChB7wC,CAAAA,CAAQ6wC,CAAA,CAAK,KAAL,CAAalpC,CAAb,CAAA,EACZ,IAAa,CAAb,CAAI0I,CAAJ,EAAkBrQ,CAAlB,CAA0B,CAACqQ,CAA3B,CACErQ,CAAA,EAASqQ,CACG,EAAd,GAAIrQ,CAAJ,EAA8B,GAA9B,EAAmBqQ,CAAnB,GAAmCrQ,CAAnC,CAA2C,EAA3C,CACA,OAAOywC,GAAA,CAAUzwC,CAAV,CAAiB+gB,CAAjB,CAAuBnP,CAAvB,CALa,CAFsB,CAW9Ck/B,QAASA,GAAa,CAACnpC,CAAD,CAAOopC,CAAP,CAAkB,CACtC,MAAO,SAAQ,CAACF,CAAD,CAAOxC,CAAP,CAAgB,CAC7B,IAAIruC,EAAQ6wC,CAAA,CAAK,KAAL,CAAalpC,CAAb,CAAA,EAAZ,CACIiR,EAAMhN,EAAA,CAAUmlC,CAAA,CAAa,OAAb,CAAuBppC,CAAvB,CAA+BA,CAAzC,CAEV,OAAO0mC,EAAA,CAAQz1B,CAAR,CAAA,CAAa5Y,CAAb,CAJsB,CADO,CAuIxCstC,QAASA,GAAU,CAACc,CAAD,CAAU,CAK3B4C,QAASA,EAAgB,CAACC,CAAD,CAAS,CAChC,IAAI5qC,CACJ,IAAIA,CAAJ,CAAY4qC,CAAA5qC,MAAA,CAAa6qC,CAAb,CAAZ,CAAyC,CACnCL,CAAAA,CAAO,IAAIttC,IAAJ,CAAS,CAAT,CAD4B,KAEnC4tC,EAAS,CAF0B,CAGnCC,EAAS,CAH0B,CAInCC,EAAahrC,CAAA,CAAM,CAAN,CAAA,CAAWwqC,CAAAS,eAAX,CAAiCT,CAAAU,YAJX,CAKnCC,EAAanrC,CAAA,CAAM,CAAN,CAAA,CAAWwqC,CAAAY,YAAX,CAA8BZ,CAAAa,SAE3CrrC,EAAA,CAAM,CAAN,CAAJ,GACE8qC,CACA,CADSnwC,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,CAAeA,CAAA,CAAM,EAAN,CAAf,CACT,CAAA+qC,CAAA,CAAQpwC,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,CAAeA,CAAA,CAAM,EAAN,CAAf,CAFV,CAIAgrC;CAAA9xC,KAAA,CAAgBsxC,CAAhB,CAAsB7vC,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,CAAtB,CAAqCrF,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,CAArC,CAAqD,CAArD,CAAwDrF,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,CAAxD,CACI1F,EAAAA,CAAIK,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,EAAc,CAAd,CAAJ1F,CAAuBwwC,CACvBQ,EAAAA,CAAI3wC,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,EAAc,CAAd,CAAJsrC,CAAuBP,CACvBQ,EAAAA,CAAI5wC,CAAA,CAAIqF,CAAA,CAAM,CAAN,CAAJ,EAAc,CAAd,CACJwrC,EAAAA,CAAKvlB,IAAAyjB,MAAA,CAA8C,GAA9C,CAAW+B,UAAA,CAAW,IAAX,EAAmBzrC,CAAA,CAAM,CAAN,CAAnB,EAA6B,CAA7B,EAAX,CACTmrC,EAAAjyC,KAAA,CAAgBsxC,CAAhB,CAAsBlwC,CAAtB,CAAyBgxC,CAAzB,CAA4BC,CAA5B,CAA+BC,CAA/B,CAhBuC,CAmBzC,MAAOZ,EArByB,CAFlC,IAAIC,EAAgB,sGA2BpB,OAAO,SAAQ,CAACL,CAAD,CAAOkB,CAAP,CAAe,CAAA,IACxBzjB,EAAO,EADiB,CAExBxnB,EAAQ,EAFgB,CAGxBrC,CAHwB,CAGpB4B,CAER0rC,EAAA,CAASA,CAAT,EAAmB,YACnBA,EAAA,CAAS3D,CAAA4D,iBAAA,CAAyBD,CAAzB,CAAT,EAA6CA,CACzChzC,EAAA,CAAS8xC,CAAT,CAAJ,GAEIA,CAFJ,CACMoB,EAAAnpC,KAAA,CAAmB+nC,CAAnB,CAAJ,CACS7vC,CAAA,CAAI6vC,CAAJ,CADT,CAGSG,CAAA,CAAiBH,CAAjB,CAJX,CAQIhvC,GAAA,CAASgvC,CAAT,CAAJ,GACEA,CADF,CACS,IAAIttC,IAAJ,CAA
SstC,CAAT,CADT,CAIA,IAAI,CAAC/uC,EAAA,CAAO+uC,CAAP,CAAL,CACE,MAAOA,EAGT,KAAA,CAAMkB,CAAN,CAAA,CAEE,CADA1rC,CACA,CADQ6rC,EAAApqC,KAAA,CAAwBiqC,CAAxB,CACR,GACEjrC,CACA,CADeA,CA9pbdhC,OAAA,CAAcH,EAAApF,KAAA,CA8pbO8G,CA9pbP,CA8pbcnG,CA9pbd,CAAd,CA+pbD,CAAA6xC,CAAA,CAASjrC,CAAAuV,IAAA,EAFX,GAIEvV,CAAApH,KAAA,CAAWqyC,CAAX,CACA,CAAAA,CAAA,CAAS,IALX,CASF9yC,EAAA,CAAQ6H,CAAR,CAAe,QAAQ,CAAC9G,CAAD,CAAO,CAC5ByE,CAAA;AAAK0tC,EAAA,CAAanyC,CAAb,CACLsuB,EAAA,EAAQ7pB,CAAA,CAAKA,CAAA,CAAGosC,CAAH,CAASzC,CAAA4D,iBAAT,CAAL,CACKhyC,CAAAsG,QAAA,CAAc,UAAd,CAA0B,EAA1B,CAAAA,QAAA,CAAsC,KAAtC,CAA6C,GAA7C,CAHe,CAA9B,CAMA,OAAOgoB,EAxCqB,CA9BH,CAuG7Bkf,QAASA,GAAU,EAAG,CACpB,MAAO,SAAQ,CAAC4E,CAAD,CAAS,CACtB,MAAOntC,GAAA,CAAOmtC,CAAP,CAAe,CAAA,CAAf,CADe,CADJ,CAiGtB3E,QAASA,GAAa,EAAE,CACtB,MAAO,SAAQ,CAAC4E,CAAD,CAAQC,CAAR,CAAe,CAC5B,GAAI,CAACtzC,CAAA,CAAQqzC,CAAR,CAAL,EAAuB,CAACtzC,CAAA,CAASszC,CAAT,CAAxB,CAAyC,MAAOA,EAEhDC,EAAA,CAAQtxC,CAAA,CAAIsxC,CAAJ,CAER,IAAIvzC,CAAA,CAASszC,CAAT,CAAJ,CAEE,MAAIC,EAAJ,CACkB,CAAT,EAAAA,CAAA,CAAaD,CAAA1tC,MAAA,CAAY,CAAZ,CAAe2tC,CAAf,CAAb,CAAqCD,CAAA1tC,MAAA,CAAY2tC,CAAZ,CAAmBD,CAAAxzC,OAAnB,CAD9C,CAGS,EAViB,KAcxB0zC,EAAM,EAdkB,CAe1B1yC,CAf0B,CAevB4gB,CAGD6xB,EAAJ,CAAYD,CAAAxzC,OAAZ,CACEyzC,CADF,CACUD,CAAAxzC,OADV,CAESyzC,CAFT,CAEiB,CAACD,CAAAxzC,OAFlB,GAGEyzC,CAHF,CAGU,CAACD,CAAAxzC,OAHX,CAKY,EAAZ,CAAIyzC,CAAJ,EACEzyC,CACA,CADI,CACJ,CAAA4gB,CAAA,CAAI6xB,CAFN,GAIEzyC,CACA,CADIwyC,CAAAxzC,OACJ,CADmByzC,CACnB,CAAA7xB,CAAA,CAAI4xB,CAAAxzC,OALN,CAQA,KAAA,CAAOgB,CAAP,CAAS4gB,CAAT,CAAY5gB,CAAA,EAAZ,CACE0yC,CAAA7yC,KAAA,CAAS2yC,CAAA,CAAMxyC,CAAN,CAAT,CAGF,OAAO0yC,EAnCqB,CADR,CAqGxB3E,QAASA,GAAa,CAACxqB,CAAD,CAAQ,CAC5B,MAAO,SAAQ,CAACtgB,CAAD,CAAQ0vC,CAAR,CAAuBC,CAAvB,CAAqC,CAkClDC,QAASA,EAAiB,CAACC,CAAD,CAAOC,CAAP,CAAmB,CAC3C,MAAOptC,GAAA,CAAUotC,CAAV,CACA,CAAD,QAAQ,CAACxoB,CAAD,CAAGC,CAAH,CAAK,CAAC,MAAOsoB,EAAA,CAAKtoB,CAAL,CAAOD,CAAP,CAAR,CAAZ,CACDuoB,CAHqC,CAlCK;AAuClD7oB,QAASA,EAAO,CAAC+oB,CAAD,CAAKC,CAAL,CAAQ,CACtB,IAAI9uC,EAAK,MAAO6uC,EAAhB,CACI5uC,EAAK,MAAO6uC,EAChB,OAAI9uC,EAAJ,EAAUC,CAAV,EACY,QAIV,EAJID,CAIJ,GAHG6uC,CACA,CADKA,CAAArpC,YAAA,EACL,CAAAspC,CAAA,CAAKA,CAAAtpC,YAAA,EAER,EAAIqpC,CAAJ,GAAWC,CAAX,CAAsB,CAAtB,CACOD,CAAA,CAAKC,CAAL,CAAW,EAAX,CAAe,CANxB,EAQS9uC,CAAA,CAAKC,CAAL,CAAW,EAAX,CAAe,CAXF,CArCxB,GADI,CAACjF,CAAA,CAAQ8D,CAAR,CACL,EAAI,CAAC0vC,CAAL,CAAoB,MAAO1vC,EAC3B0vC,EAAA,CAAgBxzC,CAAA,CAAQwzC,CAAR,CAAA,CAAyBA,CAAzB,CAAwC,CAACA,CAAD,CACxDA,EAAA,CAAgB9vC,EAAA,CAAI8vC,CAAJ,CAAmB,QAAQ,CAACO,CAAD,CAAW,CAAA,IAChDH,EAAa,CAAA,CADmC,CAC5Bh6B,EAAMm6B,CAANn6B,EAAmBrX,EAC3C,IAAIxC,CAAA,CAASg0C,CAAT,CAAJ,CAAyB,CACvB,GAA4B,GAA5B,EAAKA,CAAAnvC,OAAA,CAAiB,CAAjB,CAAL,EAA0D,GAA1D,EAAmCmvC,CAAAnvC,OAAA,CAAiB,CAAjB,CAAnC,CACEgvC,CACA,CADoC,GACpC,EADaG,CAAAnvC,OAAA,CAAiB,CAAjB,CACb,CAAAmvC,CAAA,CAAYA,CAAAvzB,UAAA,CAAoB,CAApB,CAEd5G,EAAA,CAAMwK,CAAA,CAAO2vB,CAAP,CACN,IAAIn6B,CAAAsB,SAAJ,CAAkB,CAChB,IAAI9a,EAAMwZ,CAAA,EACV,OAAO85B,EAAA,CAAkB,QAAQ,CAACtoB,CAAD,CAAGC,CAAH,CAAM,CACrC,MAAOP,EAAA,CAAQM,CAAA,CAAEhrB,CAAF,CAAR,CAAgBirB,CAAA,CAAEjrB,CAAF,CAAhB,CAD8B,CAAhC,CAEJwzC,CAFI,CAFS,CANK,CAazB,MAAOF,EAAA,CAAkB,QAAQ,CAACtoB,CAAD,CAAGC,CAAH,CAAK,CACpC,MAAOP,EAAA,CAAQlR,CAAA,CAAIwR,CAAJ,CAAR,CAAexR,CAAA,CAAIyR,CAAJ,CAAf,CAD6B,CAA/B,CAEJuoB,CAFI,CAf6C,CAAtC,CAoBhB,KADA,IAAII,EAAY,EAAhB,CACUnzC,EAAI,CAAd,CAAiBA,CAAjB,CAAqBiD,CAAAjE,OAArB,CAAmCgB,CAAA,EAAnC,CAA0CmzC,CAAAtzC,KAAA,CAAeoD,CAAA,CAAMjD,CAAN,CAAf,CAC1C,OAAOmzC,EAAArzC,KAAA,CAAe+yC,CAAA,CAEtB5E,QAAmB,CAAChqC,CAAD,CAAKC,CAAL,CAAQ,CACzB,IAAM,IAAIlE;AAAI,CAAd,CAAiBA,CAAjB,CAAqB2yC,CAAA3zC,OAArB,CAA2CgB,CAAA,EAA3C,CAAgD,CAC9C,IAAI8yC,EAAOH,CAA
A,CAAc3yC,CAAd,CAAA,CAAiBiE,CAAjB,CAAqBC,CAArB,CACX,IAAa,CAAb,GAAI4uC,CAAJ,CAAgB,MAAOA,EAFuB,CAIhD,MAAO,EALkB,CAFL,CAA8BF,CAA9B,CAAf,CAzB2C,CADxB,CAyD9BQ,QAASA,GAAW,CAAC/mC,CAAD,CAAY,CAC1B7M,CAAA,CAAW6M,CAAX,CAAJ,GACEA,CADF,CACc,MACJA,CADI,CADd,CAKAA,EAAAyW,SAAA,CAAqBzW,CAAAyW,SAArB,EAA2C,IAC3C,OAAOlhB,GAAA,CAAQyK,CAAR,CAPuB,CAqfhCgnC,QAASA,GAAc,CAACttC,CAAD,CAAU6f,CAAV,CAAiBmF,CAAjB,CAAyBrH,CAAzB,CAAmC,CAqBxD4vB,QAASA,EAAc,CAACC,CAAD,CAAUC,CAAV,CAA8B,CACnDA,CAAA,CAAqBA,CAAA,CAAqB,GAArB,CAA2BlqC,EAAA,CAAWkqC,CAAX,CAA+B,GAA/B,CAA3B,CAAiE,EACtF9vB,EAAA0M,YAAA,CAAqBrqB,CAArB,EAA+BwtC,CAAA,CAAUE,EAAV,CAA0BC,EAAzD,EAAwEF,CAAxE,CACA9vB,EAAAkB,SAAA,CAAkB7e,CAAlB,EAA4BwtC,CAAA,CAAUG,EAAV,CAAwBD,EAApD,EAAqED,CAArE,CAHmD,CArBG,IACpDG,EAAO,IAD6C,CAEpDC,EAAa7tC,CAAAxE,OAAA,EAAAshB,WAAA,CAA4B,MAA5B,CAAb+wB,EAAoDC,EAFA,CAGpDC,EAAe,CAHqC,CAIpDC,EAASJ,CAAAK,OAATD,CAAuB,EAJ6B,CAKpDE,EAAW,EAGfN,EAAAO,MAAA,CAAatuB,CAAA9d,KAAb,EAA2B8d,CAAAuuB,OAC3BR,EAAAS,OAAA,CAAc,CAAA,CACdT,EAAAU,UAAA,CAAiB,CAAA,CACjBV,EAAAW,OAAA,CAAc,CAAA,CACdX,EAAAY,SAAA,CAAgB,CAAA,CAEhBX,EAAAY,YAAA,CAAuBb,CAAvB,CAGA5tC,EAAA6e,SAAA,CAAiB6vB,EAAjB,CACAnB,EAAA,CAAe,CAAA,CAAf,CAkBAK,EAAAa,YAAA,CAAmBE,QAAQ,CAACC,CAAD,CAAU,CAGnCzqC,EAAA,CAAwByqC,CAAAT,MAAxB,CAAuC,OAAvC,CACAD,EAAAp0C,KAAA,CAAc80C,CAAd,CAEIA,EAAAT,MAAJ;CACEP,CAAA,CAAKgB,CAAAT,MAAL,CADF,CACwBS,CADxB,CANmC,CAoBrChB,EAAAiB,eAAA,CAAsBC,QAAQ,CAACF,CAAD,CAAU,CAClCA,CAAAT,MAAJ,EAAqBP,CAAA,CAAKgB,CAAAT,MAAL,CAArB,GAA6CS,CAA7C,EACE,OAAOhB,CAAA,CAAKgB,CAAAT,MAAL,CAET90C,EAAA,CAAQ20C,CAAR,CAAgB,QAAQ,CAACe,CAAD,CAAQC,CAAR,CAAyB,CAC/CpB,CAAAqB,aAAA,CAAkBD,CAAlB,CAAmC,CAAA,CAAnC,CAAyCJ,CAAzC,CAD+C,CAAjD,CAIAzxC,GAAA,CAAY+wC,CAAZ,CAAsBU,CAAtB,CARsC,CAoBxChB,EAAAqB,aAAA,CAAoBC,QAAQ,CAACF,CAAD,CAAkBxB,CAAlB,CAA2BoB,CAA3B,CAAoC,CAC9D,IAAIG,EAAQf,CAAA,CAAOgB,CAAP,CAEZ,IAAIxB,CAAJ,CACMuB,CAAJ,GACE5xC,EAAA,CAAY4xC,CAAZ,CAAmBH,CAAnB,CACA,CAAKG,CAAA91C,OAAL,GACE80C,CAAA,EAQA,CAPKA,CAOL,GANER,CAAA,CAAeC,CAAf,CAEA,CADAI,CAAAW,OACA,CADc,CAAA,CACd,CAAAX,CAAAY,SAAA,CAAgB,CAAA,CAIlB,EAFAR,CAAA,CAAOgB,CAAP,CAEA,CAF0B,CAAA,CAE1B,CADAzB,CAAA,CAAe,CAAA,CAAf,CAAqByB,CAArB,CACA,CAAAnB,CAAAoB,aAAA,CAAwBD,CAAxB,CAAyC,CAAA,CAAzC,CAA+CpB,CAA/C,CATF,CAFF,CADF,KAgBO,CACAG,CAAL,EACER,CAAA,CAAeC,CAAf,CAEF,IAAIuB,CAAJ,CACE,IAtwdyB,EAswdzB,EAtwdC9xC,EAAA,CAswdY8xC,CAtwdZ,CAswdmBH,CAtwdnB,CAswdD,CAA8B,MAA9B,CADF,IAGEZ,EAAA,CAAOgB,CAAP,CAGA,CAH0BD,CAG1B,CAHkC,EAGlC,CAFAhB,CAAA,EAEA,CADAR,CAAA,CAAe,CAAA,CAAf,CAAsByB,CAAtB,CACA,CAAAnB,CAAAoB,aAAA,CAAwBD,CAAxB,CAAyC,CAAA,CAAzC,CAAgDpB,CAAhD,CAEFmB,EAAAj1C,KAAA,CAAW80C,CAAX,CAEAhB,EAAAW,OAAA,CAAc,CAAA,CACdX,EAAAY,SAAA,CAAgB,CAAA,CAfX,CAnBuD,CAgDhEZ,EAAAuB,UAAA,CAAiBC,QAAQ,EAAG,CAC1BzxB,CAAA0M,YAAA,CAAqBrqB,CAArB,CAA8B0uC,EAA9B,CACA/wB,EAAAkB,SAAA,CAAkB7e,CAAlB,CAA2BqvC,EAA3B,CACAzB,EAAAS,OAAA,CAAc,CAAA,CACdT,EAAAU,UAAA;AAAiB,CAAA,CACjBT,EAAAsB,UAAA,EAL0B,CAsB5BvB,EAAA0B,aAAA,CAAoBC,QAAS,EAAG,CAC9B5xB,CAAA0M,YAAA,CAAqBrqB,CAArB,CAA8BqvC,EAA9B,CACA1xB,EAAAkB,SAAA,CAAkB7e,CAAlB,CAA2B0uC,EAA3B,CACAd,EAAAS,OAAA,CAAc,CAAA,CACdT,EAAAU,UAAA,CAAiB,CAAA,CACjBj1C,EAAA,CAAQ60C,CAAR,CAAkB,QAAQ,CAACU,CAAD,CAAU,CAClCA,CAAAU,aAAA,EADkC,CAApC,CAL8B,CAlJwB,CAwyB1DE,QAASA,GAAQ,CAACC,CAAD,CAAOC,CAAP,CAAsBC,CAAtB,CAAgCv1C,CAAhC,CAAsC,CACrDq1C,CAAAR,aAAA,CAAkBS,CAAlB,CAAiCC,CAAjC,CACA,OAAOA,EAAA,CAAWv1C,CAAX,CAAmBxB,CAF2B,CAMvDg3C,QAASA,GAAwB,CAACH,CAAD,CAAOC,CAAP,CAAsB1vC,CAAtB,CAA+B,CAC9D,IAAI2vC,EAAW3vC,CAAArD,KAAA,CAAa,UAAb,CACXX,EAAA,CAAS2zC,CAAT,CAAJ,EAWEF,CAAAI,SAAA/1C,KAAA,CAVgBg2C,QAAQ,CAAC11C,CAAD,CAAQ,CAG9B,GAAKq1C,CAAAxB,OAAA,CAAYyB,CAAZ,CAAL,EAAoC,EAAAC,CAAAI,SAAA,EAAqBJ,CAAAK,YAArB,EAChC
L,CAAAM,aADgC,CAApC,EAC+BN,CAAAO,aAD/B,CAKA,MAAO91C,EAHLq1C,EAAAR,aAAA,CAAkBS,CAAlB,CAAiC,CAAA,CAAjC,CAL4B,CAUhC,CAb4D,CAiBhES,QAASA,GAAa,CAACvtC,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6Bp5B,CAA7B,CAAuC2W,CAAvC,CAAiD,CACrE,IAAI2iB,EAAW3vC,CAAArD,KAAA,CAAa,UAAb,CAIf,IAAI,CAAC0Z,CAAA4vB,QAAL,CAAuB,CACrB,IAAImK,EAAY,CAAA,CAEhBpwC,EAAA6Y,GAAA,CAAW,kBAAX,CAA+B,QAAQ,CAAC7V,CAAD,CAAO,CAC5CotC,CAAA,CAAY,CAAA,CADgC,CAA9C,CAIApwC;CAAA6Y,GAAA,CAAW,gBAAX,CAA6B,QAAQ,EAAG,CACtCu3B,CAAA,CAAY,CAAA,CACZ74B,EAAA,EAFsC,CAAxC,CAPqB,CAavB,IAAIA,EAAWA,QAAQ,EAAG,CACxB,GAAI64B,CAAAA,CAAJ,CAAA,CACA,IAAIh2C,EAAQ4F,CAAAZ,IAAA,EAKRQ,GAAA,CAAUhD,CAAAyzC,OAAV,EAAyB,GAAzB,CAAJ,GACEj2C,CADF,CACU4R,EAAA,CAAK5R,CAAL,CADV,CAIA,IAAIq1C,CAAAa,WAAJ,GAAwBl2C,CAAxB,EAIKu1C,CAJL,EAI2B,EAJ3B,GAIiBv1C,CAJjB,EAIiC,CAACu1C,CAAAO,aAJlC,CAKMttC,CAAAgtB,QAAJ,CACE6f,CAAAc,cAAA,CAAmBn2C,CAAnB,CADF,CAGEwI,CAAAG,OAAA,CAAa,QAAQ,EAAG,CACtB0sC,CAAAc,cAAA,CAAmBn2C,CAAnB,CADsB,CAAxB,CAlBJ,CADwB,CA4B1B,IAAIic,CAAAywB,SAAA,CAAkB,OAAlB,CAAJ,CACE9mC,CAAA6Y,GAAA,CAAW,OAAX,CAAoBtB,CAApB,CADF,KAEO,CACL,IAAI+Y,CAAJ,CAEIkgB,EAAgBA,QAAQ,EAAG,CACxBlgB,CAAL,GACEA,CADF,CACYtD,CAAAnT,MAAA,CAAe,QAAQ,EAAG,CAClCtC,CAAA,EACA+Y,EAAA,CAAU,IAFwB,CAA1B,CADZ,CAD6B,CAS/BtwB,EAAA6Y,GAAA,CAAW,SAAX,CAAsB,QAAQ,CAACzI,CAAD,CAAQ,CAChC5W,CAAAA,CAAM4W,CAAAqgC,QAIE,GAAZ,GAAIj3C,CAAJ,GAAmB,EAAnB,CAAwBA,CAAxB,EAAqC,EAArC,CAA+BA,CAA/B,EAA6C,EAA7C,EAAmDA,CAAnD,EAAiE,EAAjE,EAA0DA,CAA1D,GAEAg3C,CAAA,EAPoC,CAAtC,CAWA,IAAIn6B,CAAAywB,SAAA,CAAkB,OAAlB,CAAJ,CACE9mC,CAAA6Y,GAAA,CAAW,WAAX,CAAwB23B,CAAxB,CAxBG,CA8BPxwC,CAAA6Y,GAAA,CAAW,QAAX,CAAqBtB,CAArB,CAEAk4B,EAAAiB,QAAA,CAAeC,QAAQ,EAAG,CACxB3wC,CAAAZ,IAAA,CAAYqwC,CAAAmB,SAAA,CAAcnB,CAAAa,WAAd,CAAA;AAAiC,EAAjC,CAAsCb,CAAAa,WAAlD,CADwB,CAhF2C,KAqFjElH,EAAUxsC,CAAAi0C,UAIVzH,EAAJ,GAKE,CADA3oC,CACA,CADQ2oC,CAAA3oC,MAAA,CAAc,oBAAd,CACR,GACE2oC,CACA,CADcvrC,MAAJ,CAAW4C,CAAA,CAAM,CAAN,CAAX,CAAqBA,CAAA,CAAM,CAAN,CAArB,CACV,CAAAqwC,CAAA,CAAmBA,QAAQ,CAAC12C,CAAD,CAAQ,CACjC,MANKo1C,GAAA,CAASC,CAAT,CAAe,SAAf,CAA0BA,CAAAmB,SAAA,CAMDx2C,CANC,CAA1B,EAMgBgvC,CANkClmC,KAAA,CAMzB9I,CANyB,CAAlD,CAMyBA,CANzB,CAK4B,CAFrC,EAME02C,CANF,CAMqBA,QAAQ,CAAC12C,CAAD,CAAQ,CACjC,IAAI22C,EAAanuC,CAAA0/B,MAAA,CAAY8G,CAAZ,CAEjB,IAAI,CAAC2H,CAAL,EAAmB,CAACA,CAAA7tC,KAApB,CACE,KAAMrK,EAAA,CAAO,WAAP,CAAA,CAAoB,UAApB,CACqDuwC,CADrD,CAEJ2H,CAFI,CAEQhxC,EAAA,CAAYC,CAAZ,CAFR,CAAN,CAIF,MAjBKwvC,GAAA,CAASC,CAAT,CAAe,SAAf,CAA0BA,CAAAmB,SAAA,CAiBEx2C,CAjBF,CAA1B,EAiBgB22C,CAjBkC7tC,KAAA,CAiBtB9I,CAjBsB,CAAlD,CAiB4BA,CAjB5B,CAS4B,CAarC,CADAq1C,CAAAuB,YAAAl3C,KAAA,CAAsBg3C,CAAtB,CACA,CAAArB,CAAAI,SAAA/1C,KAAA,CAAmBg3C,CAAnB,CAxBF,CA4BA,IAAIl0C,CAAAq0C,YAAJ,CAAsB,CACpB,IAAIC,EAAY91C,CAAA,CAAIwB,CAAAq0C,YAAJ,CACZE,EAAAA,CAAqBA,QAAQ,CAAC/2C,CAAD,CAAQ,CACvC,MAAOo1C,GAAA,CAASC,CAAT,CAAe,WAAf,CAA4BA,CAAAmB,SAAA,CAAcx2C,CAAd,CAA5B,EAAoDA,CAAAnB,OAApD,EAAoEi4C,CAApE,CAA+E92C,CAA/E,CADgC,CAIzCq1C,EAAAI,SAAA/1C,KAAA,CAAmBq3C,CAAnB,CACA1B,EAAAuB,YAAAl3C,KAAA,CAAsBq3C,CAAtB,CAPoB,CAWtB,GAAIv0C,CAAAw0C,YAAJ,CAAsB,CACpB,IAAIC;AAAYj2C,CAAA,CAAIwB,CAAAw0C,YAAJ,CACZE,EAAAA,CAAqBA,QAAQ,CAACl3C,CAAD,CAAQ,CACvC,MAAOo1C,GAAA,CAASC,CAAT,CAAe,WAAf,CAA4BA,CAAAmB,SAAA,CAAcx2C,CAAd,CAA5B,EAAoDA,CAAAnB,OAApD,EAAoEo4C,CAApE,CAA+Ej3C,CAA/E,CADgC,CAIzCq1C,EAAAI,SAAA/1C,KAAA,CAAmBw3C,CAAnB,CACA7B,EAAAuB,YAAAl3C,KAAA,CAAsBw3C,CAAtB,CAPoB,CAhI+C,CAyyCvEC,QAASA,GAAc,CAACxvC,CAAD,CAAOiN,CAAP,CAAiB,CACtCjN,CAAA,CAAO,SAAP,CAAmBA,CACnB,OAAO,CAAC,UAAD,CAAa,QAAQ,CAAC4b,CAAD,CAAW,CAiFrC6zB,QAASA,EAAe,CAACnmB,CAAD,CAAUC,CAAV,CAAmB,CACzC,IAAIF,EAAS,EAAb,CAGQnxB,EAAI,CADZ,EAAA,CACA,IAAA,CAAeA,CAAf,CAAmBoxB,CAAApyB,OA
AnB,CAAmCgB,CAAA,EAAnC,CAAwC,CAEtC,IADA,IAAIsxB,EAAQF,CAAA,CAAQpxB,CAAR,CAAZ,CACQkT,EAAI,CAAZ,CAAeA,CAAf,CAAmBme,CAAAryB,OAAnB,CAAmCkU,CAAA,EAAnC,CACE,GAAGoe,CAAH,EAAYD,CAAA,CAAQne,CAAR,CAAZ,CAAwB,SAAS,CAEnCie,EAAAtxB,KAAA,CAAYyxB,CAAZ,CALsC,CAOxC,MAAOH,EAXkC,CAc3CqmB,QAASA,EAAa,CAACtnB,CAAD,CAAW,CAC/B,GAAI,CAAA/wB,CAAA,CAAQ+wB,CAAR,CAAJ,CAEO,CAAA,GAAIhxB,CAAA,CAASgxB,CAAT,CAAJ,CACL,MAAOA,EAAAnpB,MAAA,CAAe,GAAf,CACF,IAAIhF,CAAA,CAASmuB,CAAT,CAAJ,CAAwB,CAAA,IACzBunB,EAAU,EACdr4C,EAAA,CAAQ8wB,CAAR,CAAkB,QAAQ,CAACtqB,CAAD,CAAIkqB,CAAJ,CAAO,CAC3BlqB,CAAJ,EACE6xC,CAAA53C,KAAA,CAAaiwB,CAAb,CAF6B,CAAjC,CAKA,OAAO2nB,EAPsB,CAFxB,CAWP,MAAOvnB,EAdwB,CA9FjC,MAAO,UACK,IADL,MAECrP,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAiCnC+0C,QAASA,EAAkB,CAACD,CAAD,CAAU7d,CAAV,CAAiB,CAC1C,IAAI+d;AAAc5xC,CAAAgD,KAAA,CAAa,cAAb,CAAd4uC,EAA8C,EAAlD,CACIC,EAAkB,EACtBx4C,EAAA,CAAQq4C,CAAR,CAAiB,QAAS,CAACvvC,CAAD,CAAY,CACpC,GAAY,CAAZ,CAAI0xB,CAAJ,EAAiB+d,CAAA,CAAYzvC,CAAZ,CAAjB,CACEyvC,CAAA,CAAYzvC,CAAZ,CACA,EAD0ByvC,CAAA,CAAYzvC,CAAZ,CAC1B,EADoD,CACpD,EADyD0xB,CACzD,CAAI+d,CAAA,CAAYzvC,CAAZ,CAAJ,GAA+B,EAAU,CAAV,CAAE0xB,CAAF,CAA/B,EACEge,CAAA/3C,KAAA,CAAqBqI,CAArB,CAJgC,CAAtC,CAQAnC,EAAAgD,KAAA,CAAa,cAAb,CAA6B4uC,CAA7B,CACA,OAAOC,EAAAn3C,KAAA,CAAqB,GAArB,CAZmC,CA8B5Co3C,QAASA,EAAkB,CAACjR,CAAD,CAAS,CAClC,GAAiB,CAAA,CAAjB,GAAI7xB,CAAJ,EAAyBpM,CAAAmvC,OAAzB,CAAwC,CAAxC,GAA8C/iC,CAA9C,CAAwD,CACtD,IAAIsb,EAAamnB,CAAA,CAAa5Q,CAAb,EAAuB,EAAvB,CACjB,IAAI,CAACC,CAAL,CAAa,CA1Cf,IAAIxW,EAAaqnB,CAAA,CA2CFrnB,CA3CE,CAA2B,CAA3B,CACjB1tB,EAAAstB,UAAA,CAAeI,CAAf,CAyCe,CAAb,IAEO,IAAI,CAACrsB,EAAA,CAAO4iC,CAAP,CAAcC,CAAd,CAAL,CAA4B,CAEnB9Y,IAAAA,EADGypB,CAAAzpB,CAAa8Y,CAAb9Y,CACHA,CArBduC,EAAQinB,CAAA,CAqBkBlnB,CArBlB,CAA4BtC,CAA5B,CAqBMA,CApBdyC,EAAW+mB,CAAA,CAAgBxpB,CAAhB,CAoBesC,CApBf,CAoBGtC,CAnBlByC,EAAWknB,CAAA,CAAkBlnB,CAAlB,CAA6B,EAA7B,CAmBOzC,CAlBlBuC,EAAQonB,CAAA,CAAkBpnB,CAAlB,CAAyB,CAAzB,CAEa,EAArB,GAAIA,CAAAtxB,OAAJ,CACE0kB,CAAA0M,YAAA,CAAqBrqB,CAArB,CAA8ByqB,CAA9B,CADF,CAE+B,CAAxB,GAAIA,CAAAxxB,OAAJ,CACL0kB,CAAAkB,SAAA,CAAkB7e,CAAlB,CAA2BuqB,CAA3B,CADK,CAGL5M,CAAA+M,SAAA,CAAkB1qB,CAAlB,CAA2BuqB,CAA3B,CAAkCE,CAAlC,CASmC,CAJmB,CASxDqW,CAAA,CAASzjC,EAAA,CAAKwjC,CAAL,CAVyB,CA9DpC,IAAIC,CAEJl+B,EAAAnF,OAAA,CAAab,CAAA,CAAKmF,CAAL,CAAb,CAAyB+vC,CAAzB,CAA6C,CAAA,CAA7C,CAEAl1C,EAAAwnB,SAAA,CAAc,OAAd,CAAuB,QAAQ,CAAChqB,CAAD,CAAQ,CACrC03C,CAAA,CAAmBlvC,CAAA0/B,MAAA,CAAY1lC,CAAA,CAAKmF,CAAL,CAAZ,CAAnB,CADqC,CAAvC,CAKa,UAAb,GAAIA,CAAJ,EACEa,CAAAnF,OAAA,CAAa,QAAb;AAAuB,QAAQ,CAACs0C,CAAD,CAASC,CAAT,CAAoB,CAEjD,IAAIC,EAAMF,CAANE,CAAe,CACnB,IAAIA,CAAJ,GAAYD,CAAZ,CAAwB,CAAxB,CAA2B,CACzB,IAAIN,EAAUD,CAAA,CAAa7uC,CAAA0/B,MAAA,CAAY1lC,CAAA,CAAKmF,CAAL,CAAZ,CAAb,CACdkwC,EAAA,GAAQjjC,CAAR,EAQAsb,CACJ,CADiBqnB,CAAA,CAPAD,CAOA,CAA2B,CAA3B,CACjB,CAAA90C,CAAAstB,UAAA,CAAeI,CAAf,CATI,GAaAA,CACJ,CADiBqnB,CAAA,CAXGD,CAWH,CAA4B,EAA5B,CACjB,CAAA90C,CAAAwtB,aAAA,CAAkBE,CAAlB,CAdI,CAFyB,CAHsB,CAAnD,CAXiC,CAFhC,CAD8B,CAAhC,CAF+B,CAzziBxC,IAAIxqB,EAAYA,QAAQ,CAACurC,CAAD,CAAQ,CAAC,MAAOlyC,EAAA,CAASkyC,CAAT,CAAA,CAAmBA,CAAAznC,YAAA,EAAnB,CAA0CynC,CAAlD,CAAhC,CACI3xC,GAAiBw4C,MAAAj+B,UAAAva,eADrB,CAaIsM,GAAYA,QAAQ,CAACqlC,CAAD,CAAQ,CAAC,MAAOlyC,EAAA,CAASkyC,CAAT,CAAA,CAAmBA,CAAA3gC,YAAA,EAAnB,CAA0C2gC,CAAlD,CAbhC,CAwCIn6B,CAxCJ,CAyCIjR,CAzCJ,CA0CI2L,EA1CJ,CA2CI7M,GAAoB,EAAAA,MA3CxB,CA4CIjF,GAAoB,EAAAA,KA5CxB,CA6CIqC,GAAoB+1C,MAAAj+B,UAAA9X,SA7CxB,CA8CIuB,GAAoB7E,CAAA,CAAO,IAAP,CA9CxB,CAmDIsK,GAAoBzK,CAAAyK,QAApBA,GAAuCzK,CAAAyK,QAAvCA,CAAwD,EAAxDA,CAnDJ,CAoDI8C,EApDJ,CAqDI2a,EArDJ,CAsDIrmB,GAAoB,CAAC,GAAD,CAAM,GAAN,CAAW,GAAX,CAMxB2W,EAAA,CAAO9V,CAAA,CAAI,C
AAC,YAAA8G,KAAA,CAAkBpC,CAAA,CAAUqmC,SAAAD,UAAV,CAAlB,CAAD,EAAsD,EAAtD,EAA0D,CAA1D,CAAJ,CACH3D,MAAA,CAAMrxB,CAAN,CAAJ,GACEA,CADF,CACS9V,CAAA,CAAI,CAAC,uBAAA8G,KAAA,CAA6BpC,CAAA,CAAUqmC,SAAAD,UAAV,CAA7B,CAAD;AAAiE,EAAjE,EAAqE,CAArE,CAAJ,CADT,CAiNAxqC,EAAA+V,QAAA,CAAe,EAoBf9V,GAAA8V,QAAA,CAAmB,EA8KnB,KAAIzF,GAAQ,QAAQ,EAAG,CAIrB,MAAKrR,OAAAsZ,UAAAjI,KAAL,CAKO,QAAQ,CAAC5R,CAAD,CAAQ,CACrB,MAAOjB,EAAA,CAASiB,CAAT,CAAA,CAAkBA,CAAA4R,KAAA,EAAlB,CAAiC5R,CADnB,CALvB,CACS,QAAQ,CAACA,CAAD,CAAQ,CACrB,MAAOjB,EAAA,CAASiB,CAAT,CAAA,CAAkBA,CAAAsG,QAAA,CAAc,QAAd,CAAwB,EAAxB,CAAAA,QAAA,CAAoC,QAApC,CAA8C,EAA9C,CAAlB,CAAsEtG,CADxD,CALJ,CAAX,EA8CVwmB,GAAA,CADS,CAAX,CAAI1P,CAAJ,CACc0P,QAAQ,CAAC5gB,CAAD,CAAU,CAC5BA,CAAA,CAAUA,CAAAtD,SAAA,CAAmBsD,CAAnB,CAA6BA,CAAA,CAAQ,CAAR,CACvC,OAAQA,EAAA4jB,UACD,EAD2C,MAC3C,EADsB5jB,CAAA4jB,UACtB,CAAH5d,EAAA,CAAUhG,CAAA4jB,UAAV,CAA8B,GAA9B,CAAoC5jB,CAAAtD,SAApC,CAAG,CAAqDsD,CAAAtD,SAHhC,CADhC,CAOckkB,QAAQ,CAAC5gB,CAAD,CAAU,CAC5B,MAAOA,EAAAtD,SAAA,CAAmBsD,CAAAtD,SAAnB,CAAsCsD,CAAA,CAAQ,CAAR,CAAAtD,SADjB,CAurBhC,KAAI+G,GAAoB,QAAxB,CAmgBIsC,GAAU,MACN,QADM,OAEL,CAFK,OAGL,CAHK,KAIP,EAJO,UAKF,oBALE,CAngBd,CAsuBIyI,GAAUzC,CAAAwH,MAAV/E,CAAyB,EAtuB7B,CAuuBIF,GAASvC,CAAA+d,QAATxb,CAA0B,KAA1BA,CAAkC1Q,CAAA,IAAID,IAAJC,SAAA,EAvuBtC;AAwuBI8Q,GAAO,CAxuBX,CAyuBIyjC,GAAsBz5C,CAAAC,SAAAy5C,iBACA,CAAlB,QAAQ,CAACpyC,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoB,CAACmB,CAAAoyC,iBAAA,CAAyBtkC,CAAzB,CAA+BjP,CAA/B,CAAmC,CAAA,CAAnC,CAAD,CAAV,CAClB,QAAQ,CAACmB,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoB,CAACmB,CAAAqyC,YAAA,CAAoB,IAApB,CAA2BvkC,CAA3B,CAAiCjP,CAAjC,CAAD,CA3uBpC,CA4uBIuP,GAAyB1V,CAAAC,SAAA25C,oBACA,CAArB,QAAQ,CAACtyC,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoB,CAACmB,CAAAsyC,oBAAA,CAA4BxkC,CAA5B,CAAkCjP,CAAlC,CAAsC,CAAA,CAAtC,CAAD,CAAP,CACrB,QAAQ,CAACmB,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoB,CAACmB,CAAAuyC,YAAA,CAAoB,IAApB,CAA2BzkC,CAA3B,CAAiCjP,CAAjC,CAAD,CAKvBkN,EAAAymC,MAAb,CAA4BC,QAAQ,CAACh2C,CAAD,CAAO,CAEzC,MAAO,KAAA8W,MAAA,CAAW9W,CAAA,CAAK,IAAAqtB,QAAL,CAAX,CAAP,EAAyC,EAFA,CAQ3C,KAAIvf,GAAuB,iBAA3B,CACII,GAAkB,aADtB,CAEIsB,GAAepT,CAAA,CAAO,QAAP,CAFnB,CA4DIsT,GAAoB,4BA5DxB,CA6DIG,GAAc,WA7DlB,CA8DII,GAAkB,WA9DtB,CA+DIK,GAAmB,yEA/DvB,CAiEIH;AAAU,QACF,CAAC,CAAD,CAAI,8BAAJ,CAAoC,WAApC,CADE,OAGH,CAAC,CAAD,CAAI,SAAJ,CAAe,UAAf,CAHG,KAIL,CAAC,CAAD,CAAI,mBAAJ,CAAyB,qBAAzB,CAJK,IAKN,CAAC,CAAD,CAAI,gBAAJ,CAAsB,kBAAtB,CALM,IAMN,CAAC,CAAD,CAAI,oBAAJ,CAA0B,uBAA1B,CANM,UAOA,CAAC,CAAD,CAAI,EAAJ,CAAQ,EAAR,CAPA,CAUdA,GAAA8lC,SAAA,CAAmB9lC,EAAA+lC,OACnB/lC,GAAAgmC,MAAA,CAAgBhmC,EAAAimC,MAAhB,CAAgCjmC,EAAAkmC,SAAhC,CAAmDlmC,EAAAmmC,QAAnD,CAAqEnmC,EAAAomC,MACrEpmC,GAAAqmC,GAAA,CAAarmC,EAAAsmC,GAgQb,KAAIx0B,GAAkB3S,CAAAkI,UAAlByK,CAAqC,OAChCy0B,QAAQ,CAACt0C,CAAD,CAAK,CAGlBu0C,QAASA,EAAO,EAAG,CACbC,CAAJ,GACAA,CACA,CADQ,CAAA,CACR,CAAAx0C,CAAA,EAFA,CADiB,CAFnB,IAAIw0C,EAAQ,CAAA,CASgB,WAA5B,GAAI16C,CAAA+4B,WAAJ,CACE7a,UAAA,CAAWu8B,CAAX,CADF,EAGE,IAAAv6B,GAAA,CAAQ,kBAAR,CAA4Bu6B,CAA5B,CAGA,CAAArnC,CAAA,CAAOrT,CAAP,CAAAmgB,GAAA,CAAkB,MAAlB,CAA0Bu6B,CAA1B,CANF,CAVkB,CADmB,UAqB7Bj3C,QAAQ,EAAG,CACnB,IAAI/B;AAAQ,EACZf,EAAA,CAAQ,IAAR,CAAc,QAAQ,CAAC+G,CAAD,CAAG,CAAEhG,CAAAN,KAAA,CAAW,EAAX,CAAgBsG,CAAhB,CAAF,CAAzB,CACA,OAAO,GAAP,CAAahG,CAAAM,KAAA,CAAW,IAAX,CAAb,CAAgC,GAHb,CArBkB,IA2BnCikB,QAAQ,CAACrkB,CAAD,CAAQ,CAChB,MAAiB,EAAV,EAACA,CAAD,CAAe2F,CAAA,CAAO,IAAA,CAAK3F,CAAL,CAAP,CAAf,CAAqC2F,CAAA,CAAO,IAAA,CAAK,IAAAhH,OAAL,CAAmBqB,CAAnB,CAAP,CAD5B,CA3BmB,QA+B/B,CA/B+B,MAgCjCR,EAhCiC,MAiCjC,EAAAC,KAjCiC,QAkC/B,EAAAqD,OAlC+B,CAAzC,CA0CI6S,GAAe,EACnB5W,EAAA,CAAQ,2DAAA,MAAA,CAAA,GAAA,CAAR,CAAgF,QAAQ,CAACe,CAAD,CAAQ,CAC9F6V,EAAA,CAAanQ,CAAA,CAAU1F,C
AAV,CAAb,CAAA,CAAiCA,CAD6D,CAAhG,CAGA,KAAI8V,GAAmB,EACvB7W,EAAA,CAAQ,kDAAA,MAAA,CAAA,GAAA,CAAR,CAAuE,QAAQ,CAACe,CAAD,CAAQ,CACrF8V,EAAA,CAAiBlK,EAAA,CAAU5L,CAAV,CAAjB,CAAA,CAAqC,CAAA,CADgD,CAAvF,CAYAf,EAAA,CAAQ,MACAsV,EADA,eAESe,EAFT,OAIC9M,QAAQ,CAAC5C,CAAD,CAAU,CAEvB,MAAOC,EAAA,CAAOD,CAAP,CAAAgD,KAAA,CAAqB,QAArB,CAAP,EAAyC0M,EAAA,CAAoB1P,CAAA4P,WAApB,EAA0C5P,CAA1C,CAAmD,CAAC,eAAD,CAAkB,QAAlB,CAAnD,CAFlB,CAJnB;aASQsjB,QAAQ,CAACtjB,CAAD,CAAU,CAE9B,MAAOC,EAAA,CAAOD,CAAP,CAAAgD,KAAA,CAAqB,eAArB,CAAP,EAAgD/C,CAAA,CAAOD,CAAP,CAAAgD,KAAA,CAAqB,yBAArB,CAFlB,CAT1B,YAcMyM,EAdN,UAgBIlN,QAAQ,CAACvC,CAAD,CAAU,CAC1B,MAAO0P,GAAA,CAAoB1P,CAApB,CAA6B,WAA7B,CADmB,CAhBtB,YAoBM6qB,QAAQ,CAAC7qB,CAAD,CAAS+B,CAAT,CAAe,CACjC/B,CAAAszC,gBAAA,CAAwBvxC,CAAxB,CADiC,CApB7B,UAwBIgN,EAxBJ,KA0BDwkC,QAAQ,CAACvzC,CAAD,CAAU+B,CAAV,CAAgB3H,CAAhB,CAAuB,CAClC2H,CAAA,CAAOuI,EAAA,CAAUvI,CAAV,CAEP,IAAIhG,CAAA,CAAU3B,CAAV,CAAJ,CACE4F,CAAAymC,MAAA,CAAc1kC,CAAd,CAAA,CAAsB3H,CADxB,KAEO,CACL,IAAIgF,CAEQ,EAAZ,EAAI8R,CAAJ,GAEE9R,CACA,CADMY,CAAAwzC,aACN,EAD8BxzC,CAAAwzC,aAAA,CAAqBzxC,CAArB,CAC9B,CAAY,EAAZ,GAAI3C,CAAJ,GAAgBA,CAAhB,CAAsB,MAAtB,CAHF,CAMAA,EAAA,CAAMA,CAAN,EAAaY,CAAAymC,MAAA,CAAc1kC,CAAd,CAED,EAAZ,EAAImP,CAAJ,GAEE9R,CAFF,CAEiB,EAAT,GAACA,CAAD,CAAexG,CAAf,CAA2BwG,CAFnC,CAKA,OAAQA,EAhBH,CAL2B,CA1B9B,MAmDAxC,QAAQ,CAACoD,CAAD,CAAU+B,CAAV,CAAgB3H,CAAhB,CAAsB,CAClC,IAAIq5C,EAAiB3zC,CAAA,CAAUiC,CAAV,CACrB,IAAIkO,EAAA,CAAawjC,CAAb,CAAJ,CACE,GAAI13C,CAAA,CAAU3B,CAAV,CAAJ,CACQA,CAAN,EACE4F,CAAA,CAAQ+B,CAAR,CACA,CADgB,CAAA,CAChB,CAAA/B,CAAAoP,aAAA,CAAqBrN,CAArB,CAA2B0xC,CAA3B,CAFF,GAIEzzC,CAAA,CAAQ+B,CAAR,CACA,CADgB,CAAA,CAChB,CAAA/B,CAAAszC,gBAAA,CAAwBG,CAAxB,CALF,CADF;IASE,OAAQzzC,EAAA,CAAQ+B,CAAR,CAED,EADGkf,CAAAjhB,CAAAoC,WAAAsxC,aAAA,CAAgC3xC,CAAhC,CAAAkf,EAAwCvlB,CAAxCulB,WACH,CAAEwyB,CAAF,CACE76C,CAbb,KAeO,IAAImD,CAAA,CAAU3B,CAAV,CAAJ,CACL4F,CAAAoP,aAAA,CAAqBrN,CAArB,CAA2B3H,CAA3B,CADK,KAEA,IAAI4F,CAAAiP,aAAJ,CAKL,MAFI0kC,EAEG,CAFG3zC,CAAAiP,aAAA,CAAqBlN,CAArB,CAA2B,CAA3B,CAEH,CAAQ,IAAR,GAAA4xC,CAAA,CAAe/6C,CAAf,CAA2B+6C,CAxBF,CAnD9B,MA+EAh3C,QAAQ,CAACqD,CAAD,CAAU+B,CAAV,CAAgB3H,CAAhB,CAAuB,CACnC,GAAI2B,CAAA,CAAU3B,CAAV,CAAJ,CACE4F,CAAA,CAAQ+B,CAAR,CAAA,CAAgB3H,CADlB,KAGE,OAAO4F,EAAA,CAAQ+B,CAAR,CAJ0B,CA/E/B,MAuFC,QAAQ,EAAG,CAYhB6xC,QAASA,EAAO,CAAC5zC,CAAD,CAAU5F,CAAV,CAAiB,CAC/B,IAAIy5C,EAAWC,CAAA,CAAwB9zC,CAAA9G,SAAxB,CACf,IAAI4C,CAAA,CAAY1B,CAAZ,CAAJ,CACE,MAAOy5C,EAAA,CAAW7zC,CAAA,CAAQ6zC,CAAR,CAAX,CAA+B,EAExC7zC,EAAA,CAAQ6zC,CAAR,CAAA,CAAoBz5C,CALW,CAXjC,IAAI05C,EAA0B,EACnB,EAAX,CAAI5iC,CAAJ,EACE4iC,CAAA,CAAwB,CAAxB,CACA,CAD6B,WAC7B,CAAAA,CAAA,CAAwB,CAAxB,CAAA,CAA6B,WAF/B,EAIEA,CAAA,CAAwB,CAAxB,CAJF,CAKEA,CAAA,CAAwB,CAAxB,CALF,CAK+B,aAE/BF,EAAAG,IAAA,CAAc,EACd,OAAOH,EAVS,CAAX,EAvFD,KA4GDx0C,QAAQ,CAACY,CAAD,CAAU5F,CAAV,CAAiB,CAC5B,GAAI0B,CAAA,CAAY1B,CAAZ,CAAJ,CAAwB,CACtB,GAA2B,QAA3B,GAAIwmB,EAAA,CAAU5gB,CAAV,CAAJ,EAAuCA,CAAAg0C,SAAvC,CAAyD,CACvD,IAAIz+B,EAAS,EACblc,EAAA,CAAQ2G,CAAAwa,QAAR,CAAyB,QAAS,CAACm4B,CAAD,CAAS,CACrCA,CAAAsB,SAAJ;AACE1+B,CAAAzb,KAAA,CAAY64C,CAAAv4C,MAAZ,EAA4Bu4C,CAAAjqB,KAA5B,CAFuC,CAA3C,CAKA,OAAyB,EAAlB,GAAAnT,CAAAtc,OAAA,CAAsB,IAAtB,CAA6Bsc,CAPmB,CASzD,MAAOvV,EAAA5F,MAVe,CAYxB4F,CAAA5F,MAAA,CAAgBA,CAbY,CA5GxB,MA4HAmG,QAAQ,CAACP,CAAD,CAAU5F,CAAV,CAAiB,CAC7B,GAAI0B,CAAA,CAAY1B,CAAZ,CAAJ,CACE,MAAO4F,EAAA8M,UAET,KAJ6B,IAIpB7S,EAAI,CAJgB,CAIboT,EAAarN,CAAAqN,WAA7B,CAAiDpT,CAAjD,CAAqDoT,CAAApU,OAArD,CAAwEgB,CAAA,EAAxE,CACE0T,EAAA,CAAaN,CAAA,CAAWpT,CAAX,CAAb,CAEF+F,EAAA8M,UAAA,CAAoB1S,CAPS,CA5HzB,OAsIC0V,EAtID,CAAR,CAuIG,QAAQ,CAACjR,CAAD,CAAKkD,CAAL,CAAU,CAInBgK,CAAAkI,UAAA,CAAiBlS,CAAjB,CAAA,CAAyB,QAAQ,CAACm4
B,CAAD,CAAOC,CAAP,CAAa,CAAA,IACxClgC,CADwC,CACrCT,CAKP,IAAIqF,CAAJ,GAAWiR,EAAX,GACoB,CAAd,EAACjR,CAAA5F,OAAD,EAAoB4F,CAApB,GAA2BkQ,EAA3B,EAA6ClQ,CAA7C,GAAoD4Q,EAApD,CAAyEyqB,CAAzE,CAAgFC,CADtF,IACgGvhC,CADhG,CAC4G,CAC1G,GAAIoD,CAAA,CAASk+B,CAAT,CAAJ,CAAoB,CAGlB,IAAKjgC,CAAL,CAAS,CAAT,CAAYA,CAAZ,CAAgB,IAAAhB,OAAhB,CAA6BgB,CAAA,EAA7B,CACE,GAAI4E,CAAJ,GAAW8P,EAAX,CAEE9P,CAAA,CAAG,IAAA,CAAK5E,CAAL,CAAH,CAAYigC,CAAZ,CAFF,KAIE,KAAK1gC,CAAL,GAAY0gC,EAAZ,CACEr7B,CAAA,CAAG,IAAA,CAAK5E,CAAL,CAAH,CAAYT,CAAZ,CAAiB0gC,CAAA,CAAK1gC,CAAL,CAAjB,CAKN,OAAO,KAdW,CAiBdY,CAAAA,CAAQyE,CAAAk1C,IAER3mC,EAAAA,CAAMhT,CAAD,GAAWxB,CAAX,CAAwB8tB,IAAAqjB,IAAA,CAAS,IAAA9wC,OAAT,CAAsB,CAAtB,CAAxB,CAAmD,IAAAA,OAC5D,KAAK,IAAIkU,EAAI,CAAb,CAAgBA,CAAhB,CAAoBC,CAApB,CAAwBD,CAAA,EAAxB,CAA6B,CAC3B,IAAI+Q,EAAYrf,CAAA,CAAG,IAAA,CAAKsO,CAAL,CAAH,CAAY+sB,CAAZ,CAAkBC,CAAlB,CAChB//B,EAAA;AAAQA,CAAA,CAAQA,CAAR,CAAgB8jB,CAAhB,CAA4BA,CAFT,CAI7B,MAAO9jB,EAzBiG,CA6B1G,IAAKH,CAAL,CAAS,CAAT,CAAYA,CAAZ,CAAgB,IAAAhB,OAAhB,CAA6BgB,CAAA,EAA7B,CACE4E,CAAA,CAAG,IAAA,CAAK5E,CAAL,CAAH,CAAYigC,CAAZ,CAAkBC,CAAlB,CAGF,OAAO,KAxCmC,CAJ3B,CAvIrB,CAqPA9gC,EAAA,CAAQ,YACMuU,EADN,QAGED,EAHF,IAKFumC,QAASA,EAAI,CAACl0C,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoBkP,CAApB,CAAgC,CAC/C,GAAIhS,CAAA,CAAUgS,CAAV,CAAJ,CAA4B,KAAM9B,GAAA,CAAa,QAAb,CAAN,CADmB,IAG3C+B,EAASC,EAAA,CAAmBjO,CAAnB,CAA4B,QAA5B,CAHkC,CAI3CkO,EAASD,EAAA,CAAmBjO,CAAnB,CAA4B,QAA5B,CAERgO,EAAL,EAAaC,EAAA,CAAmBjO,CAAnB,CAA4B,QAA5B,CAAsCgO,CAAtC,CAA+C,EAA/C,CACRE,EAAL,EAAaD,EAAA,CAAmBjO,CAAnB,CAA4B,QAA5B,CAAsCkO,CAAtC,CAA+CiC,EAAA,CAAmBnQ,CAAnB,CAA4BgO,CAA5B,CAA/C,CAEb3U,EAAA,CAAQyU,CAAA9M,MAAA,CAAW,GAAX,CAAR,CAAyB,QAAQ,CAAC8M,CAAD,CAAM,CACrC,IAAIqmC,EAAWnmC,CAAA,CAAOF,CAAP,CAEf,IAAI,CAACqmC,CAAL,CAAe,CACb,GAAY,YAAZ,EAAIrmC,CAAJ,EAAoC,YAApC,EAA4BA,CAA5B,CAAkD,CAChD,IAAIsmC,EAAWz7C,CAAA64B,KAAA4iB,SAAA,EAA0Bz7C,CAAA64B,KAAA6iB,wBAA1B,CACf,QAAQ,CAAE7vB,CAAF,CAAKC,CAAL,CAAS,CAAA,IAEX6vB,EAAuB,CAAf,GAAA9vB,CAAAtrB,SAAA,CAAmBsrB,CAAA+vB,gBAAnB,CAAuC/vB,CAFpC,CAGfgwB,EAAM/vB,CAAN+vB,EAAW/vB,CAAA7U,WACX,OAAO4U,EAAP,GAAagwB,CAAb,EAAoB,CAAC,EAAGA,CAAH,EAA2B,CAA3B,GAAUA,CAAAt7C,SAAV,GACnBo7C,CAAAF,SAAA,CACAE,CAAAF,SAAA,CAAgBI,CAAhB,CADA;AAEAhwB,CAAA6vB,wBAFA,EAE6B7vB,CAAA6vB,wBAAA,CAA2BG,CAA3B,CAF7B,CAEgE,EAH7C,EAJN,CADF,CAWb,QAAQ,CAAEhwB,CAAF,CAAKC,CAAL,CAAS,CACf,GAAKA,CAAL,CACE,IAAA,CAASA,CAAT,CAAaA,CAAA7U,WAAb,CAAA,CACE,GAAK6U,CAAL,GAAWD,CAAX,CACE,MAAO,CAAA,CAIb,OAAO,CAAA,CARQ,CAWnBxW,EAAA,CAAOF,CAAP,CAAA,CAAe,EAOfomC,EAAA,CAAKl0C,CAAL,CAFey0C,YAAe,UAAfA,YAAwC,WAAxCA,CAED,CAAS3mC,CAAT,CAAd,CAA8B,QAAQ,CAACsC,CAAD,CAAQ,CAC5C,IAAmBskC,EAAUtkC,CAAAukC,cAGvBD,EAAN,GAAkBA,CAAlB,GAHa/jC,IAGb,EAAyCyjC,CAAA,CAH5BzjC,IAG4B,CAAiB+jC,CAAjB,CAAzC,GACExmC,CAAA,CAAOkC,CAAP,CAActC,CAAd,CAL0C,CAA9C,CA9BgD,CAAlD,IAwCEqkC,GAAA,CAAmBnyC,CAAnB,CAA4B8N,CAA5B,CAAkCI,CAAlC,CACA,CAAAF,CAAA,CAAOF,CAAP,CAAA,CAAe,EAEjBqmC,EAAA,CAAWnmC,CAAA,CAAOF,CAAP,CA5CE,CA8CfqmC,CAAAr6C,KAAA,CAAc+E,CAAd,CAjDqC,CAAvC,CAT+C,CAL3C,KAmEDgP,EAnEC,KAqED+mC,QAAQ,CAAC50C,CAAD,CAAU8N,CAAV,CAAgBjP,CAAhB,CAAoB,CAC/BmB,CAAA,CAAUC,CAAA,CAAOD,CAAP,CAKVA,EAAA6Y,GAAA,CAAW/K,CAAX,CAAiBomC,QAASA,EAAI,EAAG,CAC/Bl0C,CAAA60C,IAAA,CAAY/mC,CAAZ,CAAkBjP,CAAlB,CACAmB,EAAA60C,IAAA,CAAY/mC,CAAZ,CAAkBomC,CAAlB,CAF+B,CAAjC,CAIAl0C,EAAA6Y,GAAA,CAAW/K,CAAX,CAAiBjP,CAAjB,CAV+B,CArE3B,aAkFOmnB,QAAQ,CAAChmB,CAAD,CAAU80C,CAAV,CAAuB,CAAA,IACtCx6C,CADsC,CAC/BkB,EAASwE,CAAA4P,WACpBjC,GAAA,CAAa3N,CAAb,CACA3G,EAAA,CAAQ,IAAI0S,CAAJ,CAAW+oC,CAAX,CAAR,CAAiC,QAAQ,CAACr4C,CAAD,CAAM,CACzCnC,CAAJ,CACEkB,CAAAu5C,aAAA,CAAoBt4C,CAApB,CAA0BnC,CAAAuK,YAA1B,CADF;AAGErJ,CAAAquB,aAAA,CAAoBptB,CAApB,CAA0BuD,CAA1
B,CAEF1F,EAAA,CAAQmC,CANqC,CAA/C,CAH0C,CAlFtC,UA+FI+O,QAAQ,CAACxL,CAAD,CAAU,CAC1B,IAAIwL,EAAW,EACfnS,EAAA,CAAQ2G,CAAAqN,WAAR,CAA4B,QAAQ,CAACrN,CAAD,CAAS,CAClB,CAAzB,GAAIA,CAAA9G,SAAJ,EACEsS,CAAA1R,KAAA,CAAckG,CAAd,CAFyC,CAA7C,CAIA,OAAOwL,EANmB,CA/FtB,UAwGI0a,QAAQ,CAAClmB,CAAD,CAAU,CAC1B,MAAOA,EAAAg1C,gBAAP,EAAkCh1C,CAAAqN,WAAlC,EAAwD,EAD9B,CAxGtB,QA4GE/M,QAAQ,CAACN,CAAD,CAAUvD,CAAV,CAAgB,CAC9BpD,CAAA,CAAQ,IAAI0S,CAAJ,CAAWtP,CAAX,CAAR,CAA0B,QAAQ,CAAC6jC,CAAD,CAAO,CACd,CAAzB,GAAItgC,CAAA9G,SAAJ,EAAmD,EAAnD,GAA8B8G,CAAA9G,SAA9B,EACE8G,CAAAwM,YAAA,CAAoB8zB,CAApB,CAFqC,CAAzC,CAD8B,CA5G1B,SAoHG2U,QAAQ,CAACj1C,CAAD,CAAUvD,CAAV,CAAgB,CAC/B,GAAyB,CAAzB,GAAIuD,CAAA9G,SAAJ,CAA4B,CAC1B,IAAIoB,EAAQ0F,CAAAiN,WACZ5T,EAAA,CAAQ,IAAI0S,CAAJ,CAAWtP,CAAX,CAAR,CAA0B,QAAQ,CAAC6jC,CAAD,CAAO,CACvCtgC,CAAA+0C,aAAA,CAAqBzU,CAArB,CAA4BhmC,CAA5B,CADuC,CAAzC,CAF0B,CADG,CApH3B,MA6HAqS,QAAQ,CAAC3M,CAAD,CAAUk1C,CAAV,CAAoB,CAChCA,CAAA,CAAWj1C,CAAA,CAAOi1C,CAAP,CAAA,CAAiB,CAAjB,CACX,KAAI15C,EAASwE,CAAA4P,WACTpU,EAAJ,EACEA,CAAAquB,aAAA,CAAoBqrB,CAApB,CAA8Bl1C,CAA9B,CAEFk1C,EAAA1oC,YAAA,CAAqBxM,CAArB,CANgC,CA7H5B,QAsIE0b,QAAQ,CAAC1b,CAAD,CAAU,CACxB2N,EAAA,CAAa3N,CAAb,CACA;IAAIxE,EAASwE,CAAA4P,WACTpU,EAAJ,EAAYA,CAAAwR,YAAA,CAAmBhN,CAAnB,CAHY,CAtIpB,OA4ICm1C,QAAQ,CAACn1C,CAAD,CAAUo1C,CAAV,CAAsB,CAAA,IAC/B96C,EAAQ0F,CADuB,CACdxE,EAASwE,CAAA4P,WAC9BvW,EAAA,CAAQ,IAAI0S,CAAJ,CAAWqpC,CAAX,CAAR,CAAgC,QAAQ,CAAC34C,CAAD,CAAM,CAC5CjB,CAAAu5C,aAAA,CAAoBt4C,CAApB,CAA0BnC,CAAAuK,YAA1B,CACAvK,EAAA,CAAQmC,CAFoC,CAA9C,CAFmC,CA5I/B,UAoJI6S,EApJJ,aAqJOJ,EArJP,aAuJOmmC,QAAQ,CAACr1C,CAAD,CAAUgP,CAAV,CAAoBsmC,CAApB,CAA+B,CAC9CtmC,CAAJ,EACE3V,CAAA,CAAQ2V,CAAAhO,MAAA,CAAe,GAAf,CAAR,CAA6B,QAAQ,CAACmB,CAAD,CAAW,CAC9C,IAAIozC,EAAiBD,CACjBx5C,EAAA,CAAYy5C,CAAZ,CAAJ,GACEA,CADF,CACmB,CAACxmC,EAAA,CAAe/O,CAAf,CAAwBmC,CAAxB,CADpB,CAGC,EAAAozC,CAAA,CAAiBjmC,EAAjB,CAAkCJ,EAAlC,EAAqDlP,CAArD,CAA8DmC,CAA9D,CAL6C,CAAhD,CAFgD,CAvJ9C,QAmKE3G,QAAQ,CAACwE,CAAD,CAAU,CAExB,MAAO,CADHxE,CACG,CADMwE,CAAA4P,WACN,GAA8B,EAA9B,GAAUpU,CAAAtC,SAAV,CAAmCsC,CAAnC,CAA4C,IAF3B,CAnKpB,MAwKAgnC,QAAQ,CAACxiC,CAAD,CAAU,CACtB,GAAIA,CAAAw1C,mBAAJ,CACE,MAAOx1C,EAAAw1C,mBAKT,KADI9/B,CACJ,CADU1V,CAAA6E,YACV,CAAc,IAAd,EAAO6Q,CAAP,EAAuC,CAAvC,GAAsBA,CAAAxc,SAAtB,CAAA,CACEwc,CAAA,CAAMA,CAAA7Q,YAER,OAAO6Q,EAVe,CAxKlB,MAqLA7Y,QAAQ,CAACmD,CAAD,CAAUgP,CAAV,CAAoB,CAChC,MAAIhP,EAAAy1C,qBAAJ;AACSz1C,CAAAy1C,qBAAA,CAA6BzmC,CAA7B,CADT,CAGS,EAJuB,CArL5B,OA6LCvB,EA7LD,gBA+LU/B,QAAQ,CAAC1L,CAAD,CAAU01C,CAAV,CAAqBC,CAArB,CAAgC,CAClDxB,CAAAA,CAAW,CAAClmC,EAAA,CAAmBjO,CAAnB,CAA4B,QAA5B,CAAD,EAA0C,EAA1C,EAA8C01C,CAA9C,CAEfC,EAAA,CAAYA,CAAZ,EAAyB,EAEzB,KAAIvlC,EAAQ,CAAC,gBACK1U,CADL,iBAEMA,CAFN,CAAD,CAKZrC,EAAA,CAAQ86C,CAAR,CAAkB,QAAQ,CAACt1C,CAAD,CAAK,CAC7BA,CAAAI,MAAA,CAASe,CAAT,CAAkBoQ,CAAAlR,OAAA,CAAay2C,CAAb,CAAlB,CAD6B,CAA/B,CAVsD,CA/LlD,CAAR,CA6MG,QAAQ,CAAC92C,CAAD,CAAKkD,CAAL,CAAU,CAInBgK,CAAAkI,UAAA,CAAiBlS,CAAjB,CAAA,CAAyB,QAAQ,CAACm4B,CAAD,CAAOC,CAAP,CAAayb,CAAb,CAAmB,CAElD,IADA,IAAIx7C,CAAJ,CACQH,EAAE,CAAV,CAAaA,CAAb,CAAiB,IAAAhB,OAAjB,CAA8BgB,CAAA,EAA9B,CACM6B,CAAA,CAAY1B,CAAZ,CAAJ,EACEA,CACA,CADQyE,CAAA,CAAG,IAAA,CAAK5E,CAAL,CAAH,CAAYigC,CAAZ,CAAkBC,CAAlB,CAAwByb,CAAxB,CACR,CAAI75C,CAAA,CAAU3B,CAAV,CAAJ,GAEEA,CAFF,CAEU6F,CAAA,CAAO7F,CAAP,CAFV,CAFF,EAOEoT,EAAA,CAAepT,CAAf,CAAsByE,CAAA,CAAG,IAAA,CAAK5E,CAAL,CAAH,CAAYigC,CAAZ,CAAkBC,CAAlB,CAAwByb,CAAxB,CAAtB,CAGJ,OAAO75C,EAAA,CAAU3B,CAAV,CAAA,CAAmBA,CAAnB,CAA2B,IAbgB,CAiBpD2R,EAAAkI,UAAAtV,KAAA,CAAwBoN,CAAAkI,UAAA4E,GACxB9M,EAAAkI,UAAA4hC,OAAA,CAA0B9pC,CAAAkI,UAAA4gC,IAtBP,CA7MrB,CA0QAvjC,GAAA2C,UAAA,CAAoB,KAMb1C,QAAQ,CAAC/X,CAAD,CAAMY,CAAN,CAAa,CA
CxB,IAAA,CAAKgX,EAAA,CAAQ5X,CAAR,CAAL,CAAA,CAAqBY,CADG,CANR,KAcb4Y,QAAQ,CAACxZ,CAAD,CAAM,CACjB,MAAO,KAAA,CAAK4X,EAAA,CAAQ5X,CAAR,CAAL,CADU,CAdD;OAsBVkiB,QAAQ,CAACliB,CAAD,CAAM,CACpB,IAAIY,EAAQ,IAAA,CAAKZ,CAAL,CAAW4X,EAAA,CAAQ5X,CAAR,CAAX,CACZ,QAAO,IAAA,CAAKA,CAAL,CACP,OAAOY,EAHa,CAtBJ,CA0FpB,KAAIyX,GAAU,oCAAd,CACIC,GAAe,GADnB,CAEIC,GAAS,sBAFb,CAGIJ,GAAiB,kCAHrB,CAII5M,GAAkBlM,CAAA,CAAO,WAAP,CAJtB,CAo0BIi9C,GAAiBj9C,CAAA,CAAO,UAAP,CAp0BrB,CAm1BIiQ,GAAmB,CAAC,UAAD,CAAa,QAAQ,CAACrG,CAAD,CAAW,CAGrD,IAAAszC,YAAA,CAAmB,EAkCnB,KAAAtqB,SAAA,CAAgBC,QAAQ,CAAC3pB,CAAD,CAAOkD,CAAP,CAAgB,CACtC,IAAIzL,EAAMuI,CAANvI,CAAa,YACjB,IAAIuI,CAAJ,EAA8B,GAA9B,EAAYA,CAAA/D,OAAA,CAAY,CAAZ,CAAZ,CAAmC,KAAM83C,GAAA,CAAe,SAAf,CACoB/zC,CADpB,CAAN,CAEnC,IAAAg0C,YAAA,CAAiBh0C,CAAAqf,OAAA,CAAY,CAAZ,CAAjB,CAAA,CAAmC5nB,CACnCiJ,EAAAwC,QAAA,CAAiBzL,CAAjB,CAAsByL,CAAtB,CALsC,CAsBxC,KAAA+wC,gBAAA,CAAuBC,QAAQ,CAACtqB,CAAD,CAAa,CAClB,CAAxB,GAAGxwB,SAAAlC,OAAH,GACE,IAAAi9C,kBADF,CAC4BvqB,CAAD,WAAuB9tB,OAAvB;AAAiC8tB,CAAjC,CAA8C,IADzE,CAGA,OAAO,KAAAuqB,kBAJmC,CAO5C,KAAAzjC,KAAA,CAAY,CAAC,UAAD,CAAa,iBAAb,CAAgC,QAAQ,CAACwD,CAAD,CAAWkgC,CAAX,CAA4B,CAuB9E,MAAO,OAiBGC,QAAQ,CAACp2C,CAAD,CAAUxE,CAAV,CAAkB25C,CAAlB,CAAyB3lB,CAAzB,CAA+B,CACzC2lB,CAAJ,CACEA,CAAAA,MAAA,CAAYn1C,CAAZ,CADF,EAGOxE,CAGL,EAHgBA,CAAA,CAAO,CAAP,CAGhB,GAFEA,CAEF,CAFW25C,CAAA35C,OAAA,EAEX,EAAAA,CAAA8E,OAAA,CAAcN,CAAd,CANF,CAQMwvB,EA9CR,EAAM2mB,CAAA,CA8CE3mB,CA9CF,CAqCyC,CAjB1C,OAwCG6mB,QAAQ,CAACr2C,CAAD,CAAUwvB,CAAV,CAAgB,CAC9BxvB,CAAA0b,OAAA,EACM8T,EA9DR,EAAM2mB,CAAA,CA8DE3mB,CA9DF,CA4D0B,CAxC3B,MA+DE8mB,QAAQ,CAACt2C,CAAD,CAAUxE,CAAV,CAAkB25C,CAAlB,CAAyB3lB,CAAzB,CAA+B,CAG5C,IAAA4mB,MAAA,CAAWp2C,CAAX,CAAoBxE,CAApB,CAA4B25C,CAA5B,CAAmC3lB,CAAnC,CAH4C,CA/DzC,UAkFM3Q,QAAQ,CAAC7e,CAAD,CAAUmC,CAAV,CAAqBqtB,CAArB,CAA2B,CAC5CrtB,CAAA,CAAYhJ,CAAA,CAASgJ,CAAT,CAAA,CACEA,CADF,CAEE/I,CAAA,CAAQ+I,CAAR,CAAA,CAAqBA,CAAAzH,KAAA,CAAe,GAAf,CAArB,CAA2C,EACzDrB,EAAA,CAAQ2G,CAAR,CAAiB,QAAS,CAACA,CAAD,CAAU,CAClCsP,EAAA,CAAetP,CAAf,CAAwBmC,CAAxB,CADkC,CAApC,CAGMqtB,EA7GR,EAAM2mB,CAAA,CA6GE3mB,CA7GF,CAsGwC,CAlFzC,aAyGSnF,QAAQ,CAACrqB,CAAD,CAAUmC,CAAV,CAAqBqtB,CAArB,CAA2B,CAC/CrtB,CAAA,CAAYhJ,CAAA,CAASgJ,CAAT,CAAA,CACEA,CADF,CAEE/I,CAAA,CAAQ+I,CAAR,CAAA,CAAqBA,CAAAzH,KAAA,CAAe,GAAf,CAArB,CAA2C,EACzDrB,EAAA,CAAQ2G,CAAR,CAAiB,QAAS,CAACA,CAAD,CAAU,CAClCkP,EAAA,CAAkBlP,CAAlB,CAA2BmC,CAA3B,CADkC,CAApC,CAGMqtB,EApIR,EAAM2mB,CAAA,CAoIE3mB,CApIF,CA6H2C,CAzG5C,UAiIM9E,QAAQ,CAAC1qB,CAAD,CAAUu2C,CAAV,CAAe76B,CAAf,CAAuB8T,CAAvB,CAA6B,CAC9Cn2B,CAAA,CAAQ2G,CAAR,CAAiB,QAAS,CAACA,CAAD,CAAU,CAClCsP,EAAA,CAAetP,CAAf,CAAwBu2C,CAAxB,CACArnC,GAAA,CAAkBlP,CAAlB;AAA2B0b,CAA3B,CAFkC,CAApC,CAIM8T,EA1JR,EAAM2mB,CAAA,CA0JE3mB,CA1JF,CAqJ0C,CAjI3C,SAyIK9zB,CAzIL,CAvBuE,CAApE,CAlEyC,CAAhC,CAn1BvB,CAm0EIomB,GAAiBjpB,CAAA,CAAO,UAAP,CASrBwN,GAAAoL,QAAA,CAA2B,CAAC,UAAD,CAAa,uBAAb,CAy5C3B,KAAIwZ,GAAgB,0BAApB,CAi8CIqI,GAAqBz6B,CAAA,CAAO,cAAP,CAj8CzB,CA66DI29C,GAAa,iCA76DjB,CA86DIlhB,GAAgB,MAAS,EAAT,OAAsB,GAAtB,KAAkC,EAAlC,CA96DpB,CA+6DIsB,GAAkB/9B,CAAA,CAAO,WAAP,CA6QtB8+B,GAAA1jB,UAAA,CACEojB,EAAApjB,UADF,CAEEoiB,EAAApiB,UAFF,CAE+B,SAMpB,CAAA,CANoB,WAYlB,CAAA,CAZkB,QA0BrB2jB,EAAA,CAAe,UAAf,CA1BqB,KA2CxBvgB,QAAQ,CAACA,CAAD,CAAM3W,CAAN,CAAe,CAC1B,GAAI5E,CAAA,CAAYub,CAAZ,CAAJ,CACE,MAAO,KAAA0f,MAET,KAAIt2B,EAAQ+1C,EAAAt0C,KAAA,CAAgBmV,CAAhB,CACR5W,EAAA,CAAM,CAAN,CAAJ,EAAc,IAAA4D,KAAA,CAAUzD,kBAAA,CAAmBH,CAAA,CAAM,CAAN,CAAnB,CAAV,CACd,EAAIA,CAAA,CAAM,CAAN,CAAJ,EAAgBA,CAAA,CAAM,CAAN,CAAhB,GAA0B,IAAAo1B,OAAA,CAAYp1B,CAAA,CAAM,CAAN,CAAZ,EAAwB,EAAxB,CAC1B,KAAAgV,KAAA,CAAUhV,CAAA,CAAM,CAAN,CAAV,EAAsB,EAAtB,CAA0BC,CAA1B,CAEA,OAAO,KATmB,CA
3CC,UAkEnBk3B,EAAA,CAAe,YAAf,CAlEmB;KA+EvBA,EAAA,CAAe,QAAf,CA/EuB,MA4FvBA,EAAA,CAAe,QAAf,CA5FuB,MA+GvBE,EAAA,CAAqB,QAArB,CAA+B,QAAQ,CAACzzB,CAAD,CAAO,CAClD,MAAyB,GAAlB,EAAAA,CAAArG,OAAA,CAAY,CAAZ,CAAA,CAAwBqG,CAAxB,CAA+B,GAA/B,CAAqCA,CADM,CAA9C,CA/GuB,QAwIrBwxB,QAAQ,CAACA,CAAD,CAAS4gB,CAAT,CAAqB,CACnC,OAAQt7C,SAAAlC,OAAR,EACE,KAAK,CAAL,CACE,MAAO,KAAA28B,SACT,MAAK,CAAL,CACE,GAAIz8B,CAAA,CAAS08B,CAAT,CAAJ,CACE,IAAAD,SAAA,CAAgB/0B,EAAA,CAAcg1B,CAAd,CADlB,KAEO,IAAI75B,CAAA,CAAS65B,CAAT,CAAJ,CACL,IAAAD,SAAA,CAAgBC,CADX,KAGL,MAAMe,GAAA,CAAgB,UAAhB,CAAN,CAGF,KACF,SACM96B,CAAA,CAAY26C,CAAZ,CAAJ,EAA8C,IAA9C,GAA+BA,CAA/B,CACE,OAAO,IAAA7gB,SAAA,CAAcC,CAAd,CADT,CAGE,IAAAD,SAAA,CAAcC,CAAd,CAHF,CAG0B4gB,CAjB9B,CAqBA,IAAA5f,UAAA,EACA,OAAO,KAvB4B,CAxIR,MAgLvBiB,EAAA,CAAqB,QAArB,CAA+Bn8B,EAA/B,CAhLuB,SA0LpB+E,QAAQ,EAAG,CAClB,IAAA24B,UAAA,CAAiB,CAAA,CACjB,OAAO,KAFW,CA1LS,CAkkB/B,KAAIiB,GAAezhC,CAAA,CAAO,QAAP,CAAnB,CACIwjC,GAAsB,EAD1B,CAEIxB,EAFJ,CAgEI6b,GAAY,CAEZ,MAFY,CAELC,QAAQ,EAAE,CAAC,MAAO,KAAR,CAFL,CAGZ,MAHY,CAGLC,QAAQ,EAAE,CAAC,MAAO,CAAA,CAAR,CAHL;AAIZ,OAJY,CAIJC,QAAQ,EAAE,CAAC,MAAO,CAAA,CAAR,CAJN,WAKFn7C,CALE,CAMZ,GANY,CAMRo7C,QAAQ,CAACl4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAC7BD,CAAA,CAAEA,CAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAiB6Q,EAAA,CAAEA,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CACrB,OAAI7X,EAAA,CAAUyoB,CAAV,CAAJ,CACMzoB,CAAA,CAAU0oB,CAAV,CAAJ,CACSD,CADT,CACaC,CADb,CAGOD,CAJT,CAMOzoB,CAAA,CAAU0oB,CAAV,CAAA,CAAaA,CAAb,CAAe7rB,CARO,CANnB,CAeZ,GAfY,CAeRm+C,QAAQ,CAACn4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CACzBD,CAAA,CAAEA,CAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAiB6Q,EAAA,CAAEA,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CACrB,QAAQ7X,CAAA,CAAUyoB,CAAV,CAAA,CAAaA,CAAb,CAAe,CAAvB,GAA2BzoB,CAAA,CAAU0oB,CAAV,CAAA,CAAaA,CAAb,CAAe,CAA1C,CAFyB,CAfnB,CAmBZ,GAnBY,CAmBRuyB,QAAQ,CAACp4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAxB,CAnBnB,CAoBZ,GApBY,CAoBRqjC,QAAQ,CAACr4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAxB,CApBnB,CAqBZ,GArBY,CAqBRsjC,QAAQ,CAACt4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAxB,CArBnB,CAsBZ,GAtBY,CAsBRujC,QAAQ,CAACv4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAxB,CAtBnB,CAuBZ,GAvBY,CAuBRlY,CAvBQ,CAwBZ,KAxBY,CAwBN07C,QAAQ,CAACx4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAkBC,CAAlB,CAAoB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,GAAyB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAA1B,CAxBtB,CAyBZ,KAzBY,CAyBNyjC,QAAQ,CAACz4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAkBC,CAAlB,CAAoB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,GAAyB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAA1B,CAzBtB,CA0BZ,IA1BY,CA0BP0jC,QAAQ,CAAC14C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,EAAwB6Q,CAAA,CAAE7lB,CAAF;AAAQgV,CAAR,CAAzB,CA1BpB,CA2BZ,IA3BY,CA2BP2jC,QAAQ,CAAC34C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,EAAwB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAzB,CA3BpB,CA4BZ,GA5BY,CA4BR4jC,QAAQ,CAAC54C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAxB,CA5BnB,CA6BZ,GA7BY,CA6BR6jC,QAAQ,CAAC74C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR
,CAAxB,CA7BnB,CA8BZ,IA9BY,CA8BP8jC,QAAQ,CAAC94C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,EAAwB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAzB,CA9BpB,CA+BZ,IA/BY,CA+BP+jC,QAAQ,CAAC/4C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,EAAwB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAzB,CA/BpB,CAgCZ,IAhCY,CAgCPgkC,QAAQ,CAACh5C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,EAAwB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAzB,CAhCpB,CAiCZ,IAjCY,CAiCPikC,QAAQ,CAACj5C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,EAAwB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAzB,CAjCpB,CAkCZ,GAlCY,CAkCRkkC,QAAQ,CAACl5C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOD,EAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAP,CAAuB6Q,CAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAxB,CAlCnB,CAoCZ,GApCY,CAoCRmkC,QAAQ,CAACn5C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiBC,CAAjB,CAAmB,CAAC,MAAOA,EAAA,CAAE7lB,CAAF,CAAQgV,CAAR,CAAA,CAAgBhV,CAAhB,CAAsBgV,CAAtB,CAA8B4Q,CAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAA9B,CAAR,CApCnB,CAqCZ,GArCY,CAqCRokC,QAAQ,CAACp5C,CAAD,CAAOgV,CAAP,CAAe4Q,CAAf,CAAiB,CAAC,MAAO,CAACA,CAAA,CAAE5lB,CAAF,CAAQgV,CAAR,CAAT,CArCjB,CAhEhB,CAwGIqkC,GAAS,GAAK,IAAL,GAAe,IAAf,GAAyB,IAAzB,GAAmC,IAAnC,GAA6C,IAA7C,CAAmD,GAAnD,CAAuD,GAAvD,CAA4D,GAA5D,CAAgE,GAAhE,CAxGb;AAiHIzb,GAAQA,QAAS,CAAChiB,CAAD,CAAU,CAC7B,IAAAA,QAAA,CAAeA,CADc,CAI/BgiB,GAAAvoB,UAAA,CAAkB,aACHuoB,EADG,KAGX0b,QAAS,CAACxvB,CAAD,CAAO,CACnB,IAAAA,KAAA,CAAYA,CAEZ,KAAApuB,MAAA,CAAa,CACb,KAAA69C,GAAA,CAAUv/C,CACV,KAAAw/C,OAAA,CAAc,GAEd,KAAAC,OAAA,CAAc,EAEd,KAAI9sB,CAGJ,KAFI7rB,CAEJ,CAFW,EAEX,CAAO,IAAApF,MAAP,CAAoB,IAAAouB,KAAAzvB,OAApB,CAAA,CAAsC,CACpC,IAAAk/C,GAAA,CAAU,IAAAzvB,KAAA1qB,OAAA,CAAiB,IAAA1D,MAAjB,CACV,IAAI,IAAAg+C,GAAA,CAAQ,KAAR,CAAJ,CACE,IAAAC,WAAA,CAAgB,IAAAJ,GAAhB,CADF,KAEO,IAAI,IAAAl8C,SAAA,CAAc,IAAAk8C,GAAd,CAAJ,EAA8B,IAAAG,GAAA,CAAQ,GAAR,CAA9B,EAA8C,IAAAr8C,SAAA,CAAc,IAAAu8C,KAAA,EAAd,CAA9C,CACL,IAAAC,WAAA,EADK,KAEA,IAAI,IAAAC,QAAA,CAAa,IAAAP,GAAb,CAAJ,CACL,IAAAQ,UAAA,EAEA,CAAI,IAAAC,IAAA,CAAS,IAAT,CAAJ,GAAkC,GAAlC,GAAsBl5C,CAAA,CAAK,CAAL,CAAtB,GACK6rB,CADL,CACa,IAAA8sB,OAAA,CAAY,IAAAA,OAAAp/C,OAAZ,CAAiC,CAAjC,CADb,KAEEsyB,CAAA7rB,KAFF,CAE4C,EAF5C,GAEe6rB,CAAA7C,KAAAzrB,QAAA,CAAmB,GAAnB,CAFf,CAHK;IAOA,IAAI,IAAAq7C,GAAA,CAAQ,aAAR,CAAJ,CACL,IAAAD,OAAAv+C,KAAA,CAAiB,OACR,IAAAQ,MADQ,MAET,IAAA69C,GAFS,MAGR,IAAAS,IAAA,CAAS,KAAT,CAHQ,EAGW,IAAAN,GAAA,CAAQ,IAAR,CAHX,EAG6B,IAAAA,GAAA,CAAQ,MAAR,CAH7B,CAAjB,CAOA,CAFI,IAAAA,GAAA,CAAQ,IAAR,CAEJ,EAFmB54C,CAAA7E,QAAA,CAAa,IAAAs9C,GAAb,CAEnB,CADI,IAAAG,GAAA,CAAQ,IAAR,CACJ,EADmB54C,CAAA+L,MAAA,EACnB,CAAA,IAAAnR,MAAA,EARK,KASA,IAAI,IAAAu+C,aAAA,CAAkB,IAAAV,GAAlB,CAAJ,CAAgC,CACrC,IAAA79C,MAAA,EACA,SAFqC,CAAhC,IAGA,CACL,IAAIw+C,EAAM,IAAAX,GAANW,CAAgB,IAAAN,KAAA,EAApB,CACIO,EAAMD,CAANC,CAAY,IAAAP,KAAA,CAAU,CAAV,CADhB,CAEI35C,EAAK63C,EAAA,CAAU,IAAAyB,GAAV,CAFT,CAGIa,EAAMtC,EAAA,CAAUoC,CAAV,CAHV,CAIIG,EAAMvC,EAAA,CAAUqC,CAAV,CACNE,EAAJ,EACE,IAAAZ,OAAAv+C,KAAA,CAAiB,OAAQ,IAAAQ,MAAR,MAA0By+C,CAA1B,IAAmCE,CAAnC,CAAjB,CACA,CAAA,IAAA3+C,MAAA,EAAc,CAFhB,EAGW0+C,CAAJ,EACL,IAAAX,OAAAv+C,KAAA,CAAiB,OAAQ,IAAAQ,MAAR,MAA0Bw+C,CAA1B,IAAmCE,CAAnC,CAAjB,CACA,CAAA,IAAA1+C,MAAA,EAAc,CAFT,EAGIuE,CAAJ,EACL,IAAAw5C,OAAAv+C,KAAA,CAAiB,OACR,IAAAQ,MADQ;KAET,IAAA69C,GAFS,IAGXt5C,CAHW,MAIR,IAAA+5C,IAAA,CAAS,KAAT,CAJQ,EAIW,IAAAN,GAAA,CAAQ,IAAR,CAJX,CAAjB,CAMA,CAAA,IAAAh+C,MAAA,EAAc,CAPT,EASL,IAAA4+C,WAAA,CAAgB,4BAAhB,CAA8C,IAAA5+C,MAA9C,CAA0D,IAAAA,MAA1D,CAAuE,CAAvE,CArBG,CAwBP,IAAA89C,OAAA,CAAc,IAAAD,GAjDsB,CAmDtC,MAA
O,KAAAE,OA/DY,CAHL,IAqEZC,QAAQ,CAACa,CAAD,CAAQ,CAClB,MAAmC,EAAnC,GAAOA,CAAAl8C,QAAA,CAAc,IAAAk7C,GAAd,CADW,CArEJ,KAyEXS,QAAQ,CAACO,CAAD,CAAQ,CACnB,MAAuC,EAAvC,GAAOA,CAAAl8C,QAAA,CAAc,IAAAm7C,OAAd,CADY,CAzEL,MA6EVI,QAAQ,CAACv+C,CAAD,CAAI,CACZw6B,CAAAA,CAAMx6B,CAANw6B,EAAW,CACf,OAAQ,KAAAn6B,MAAD,CAAcm6B,CAAd,CAAoB,IAAA/L,KAAAzvB,OAApB,CAAwC,IAAAyvB,KAAA1qB,OAAA,CAAiB,IAAA1D,MAAjB,CAA8Bm6B,CAA9B,CAAxC,CAA6E,CAAA,CAFpE,CA7EF,UAkFNx4B,QAAQ,CAACk8C,CAAD,CAAK,CACrB,MAAQ,GAAR,EAAeA,CAAf,EAA2B,GAA3B,EAAqBA,CADA,CAlFP,cAsFFU,QAAQ,CAACV,CAAD,CAAK,CAEzB,MAAe,GAAf,GAAQA,CAAR,EAA6B,IAA7B,GAAsBA,CAAtB,EAA4C,IAA5C,GAAqCA,CAArC,EACe,IADf,GACQA,CADR,EAC8B,IAD9B,GACuBA,CADvB,EAC6C,QAD7C;AACsCA,CAHb,CAtFX,SA4FPO,QAAQ,CAACP,CAAD,CAAK,CACpB,MAAQ,GAAR,EAAeA,CAAf,EAA2B,GAA3B,EAAqBA,CAArB,EACQ,GADR,EACeA,CADf,EAC2B,GAD3B,EACqBA,CADrB,EAEQ,GAFR,GAEgBA,CAFhB,EAE6B,GAF7B,GAEsBA,CAHF,CA5FN,eAkGDiB,QAAQ,CAACjB,CAAD,CAAK,CAC1B,MAAe,GAAf,GAAQA,CAAR,EAA6B,GAA7B,GAAsBA,CAAtB,EAAoC,IAAAl8C,SAAA,CAAck8C,CAAd,CADV,CAlGZ,YAsGJe,QAAQ,CAACxiC,CAAD,CAAQ2iC,CAAR,CAAeC,CAAf,CAAoB,CACtCA,CAAA,CAAMA,CAAN,EAAa,IAAAh/C,MACTi/C,EAAAA,CAAUx9C,CAAA,CAAUs9C,CAAV,CACA,CAAJ,IAAI,CAAGA,CAAH,CAAY,GAAZ,CAAkB,IAAA/+C,MAAlB,CAA+B,IAA/B,CAAsC,IAAAouB,KAAA9O,UAAA,CAAoBy/B,CAApB,CAA2BC,CAA3B,CAAtC,CAAwE,GAAxE,CACJ,GADI,CACEA,CAChB,MAAMhf,GAAA,CAAa,QAAb,CACF5jB,CADE,CACK6iC,CADL,CACa,IAAA7wB,KADb,CAAN,CALsC,CAtGxB,YA+GJ+vB,QAAQ,EAAG,CAGrB,IAFA,IAAIvP,EAAS,EAAb,CACImQ,EAAQ,IAAA/+C,MACZ,CAAO,IAAAA,MAAP,CAAoB,IAAAouB,KAAAzvB,OAApB,CAAA,CAAsC,CACpC,IAAIk/C,EAAKr4C,CAAA,CAAU,IAAA4oB,KAAA1qB,OAAA,CAAiB,IAAA1D,MAAjB,CAAV,CACT,IAAU,GAAV,EAAI69C,CAAJ,EAAiB,IAAAl8C,SAAA,CAAck8C,CAAd,CAAjB,CACEjP,CAAA,EAAUiP,CADZ,KAEO,CACL,IAAIqB,EAAS,IAAAhB,KAAA,EACb,IAAU,GAAV,EAAIL,CAAJ,EAAiB,IAAAiB,cAAA,CAAmBI,CAAnB,CAAjB,CACEtQ,CAAA;AAAUiP,CADZ,KAEO,IAAI,IAAAiB,cAAA,CAAmBjB,CAAnB,CAAJ,EACHqB,CADG,EACO,IAAAv9C,SAAA,CAAcu9C,CAAd,CADP,EAEiC,GAFjC,EAEHtQ,CAAAlrC,OAAA,CAAckrC,CAAAjwC,OAAd,CAA8B,CAA9B,CAFG,CAGLiwC,CAAA,EAAUiP,CAHL,KAIA,IAAI,CAAA,IAAAiB,cAAA,CAAmBjB,CAAnB,CAAJ,EACDqB,CADC,EACU,IAAAv9C,SAAA,CAAcu9C,CAAd,CADV,EAEiC,GAFjC,EAEHtQ,CAAAlrC,OAAA,CAAckrC,CAAAjwC,OAAd,CAA8B,CAA9B,CAFG,CAKL,KALK,KAGL,KAAAigD,WAAA,CAAgB,kBAAhB,CAXG,CAgBP,IAAA5+C,MAAA,EApBoC,CAsBtC4uC,CAAA,EAAS,CACT,KAAAmP,OAAAv+C,KAAA,CAAiB,OACRu/C,CADQ,MAETnQ,CAFS,MAGT,CAAA,CAHS,IAIXrqC,QAAQ,EAAG,CAAE,MAAOqqC,EAAT,CAJA,CAAjB,CA1BqB,CA/GP,WAiJLyP,QAAQ,EAAG,CAQpB,IAPA,IAAIlc,EAAS,IAAb,CAEIgd,EAAQ,EAFZ,CAGIJ,EAAQ,IAAA/+C,MAHZ,CAKIo/C,CALJ,CAKaC,CALb,CAKwBC,CALxB,CAKoCzB,CAEpC,CAAO,IAAA79C,MAAP,CAAoB,IAAAouB,KAAAzvB,OAApB,CAAA,CAAsC,CACpCk/C,CAAA,CAAK,IAAAzvB,KAAA1qB,OAAA,CAAiB,IAAA1D,MAAjB,CACL,IAAW,GAAX,GAAI69C,CAAJ,EAAkB,IAAAO,QAAA,CAAaP,CAAb,CAAlB,EAAsC,IAAAl8C,SAAA,CAAck8C,CAAd,CAAtC,CACa,GACX,GADIA,CACJ,GADgBuB,CAChB,CAD0B,IAAAp/C,MAC1B,EAAAm/C,CAAA,EAAStB,CAFX,KAIE,MAEF;IAAA79C,MAAA,EARoC,CAYtC,GAAIo/C,CAAJ,CAEE,IADAC,CACA,CADY,IAAAr/C,MACZ,CAAOq/C,CAAP,CAAmB,IAAAjxB,KAAAzvB,OAAnB,CAAA,CAAqC,CACnCk/C,CAAA,CAAK,IAAAzvB,KAAA1qB,OAAA,CAAiB27C,CAAjB,CACL,IAAW,GAAX,GAAIxB,CAAJ,CAAgB,CACdyB,CAAA,CAAaH,CAAAr4B,OAAA,CAAas4B,CAAb,CAAuBL,CAAvB,CAA+B,CAA/B,CACbI,EAAA,CAAQA,CAAAr4B,OAAA,CAAa,CAAb,CAAgBs4B,CAAhB,CAA0BL,CAA1B,CACR,KAAA/+C,MAAA,CAAaq/C,CACb,MAJc,CAMhB,GAAI,IAAAd,aAAA,CAAkBV,CAAlB,CAAJ,CACEwB,CAAA,EADF,KAGE,MAXiC,CAiBnCpuB,CAAAA,CAAQ,OACH8tB,CADG,MAEJI,CAFI,CAMZ,IAAI/C,EAAAh9C,eAAA,CAAyB+/C,CAAzB,CAAJ,CACEluB,CAAA1sB,GACA,CADW63C,EAAA,CAAU+C,CAAV,CACX,CAAAluB,CAAA7rB,KAAA,CAAag3C,EAAA,CAAU+C,CAAV,CAFf,KAGO,CACL,IAAIr1C,EAASs3B,EAAA,CAAS+d,CAAT,CAAgB,IAAAj/B,QAAhB,CAA8B,IAAAkO,KAA9B,CACb6C,EAAA1sB,GAAA,CA
AW5D,CAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAe,CACvC,MAAQxP,EAAA,CAAOxF,CAAP,CAAagV,CAAb,CAD+B,CAA9B,CAER,QACO8Q,QAAQ,CAAC9lB,CAAD,CAAOxE,CAAP,CAAc,CAC5B,MAAOogC,GAAA,CAAO57B,CAAP,CAAa66C,CAAb,CAAoBr/C,CAApB,CAA2BqiC,CAAA/T,KAA3B,CAAwC+T,CAAAjiB,QAAxC,CADqB,CAD7B,CAFQ,CAFN,CAWP,IAAA69B,OAAAv+C,KAAA,CAAiByxB,CAAjB,CAEIquB,EAAJ,GACE,IAAAvB,OAAAv+C,KAAA,CAAiB,OACT4/C,CADS,MAET,GAFS,MAGT,CAAA,CAHS,CAAjB,CAKA,CAAA,IAAArB,OAAAv+C,KAAA,CAAiB,OACR4/C,CADQ,CACE,CADF,MAETE,CAFS,MAGT,CAAA,CAHS,CAAjB,CANF,CA7DoB,CAjJN;WA4NJrB,QAAQ,CAACsB,CAAD,CAAQ,CAC1B,IAAIR,EAAQ,IAAA/+C,MACZ,KAAAA,MAAA,EAIA,KAHA,IAAI+wC,EAAS,EAAb,CACIyO,EAAYD,CADhB,CAEIrgC,EAAS,CAAA,CACb,CAAO,IAAAlf,MAAP,CAAoB,IAAAouB,KAAAzvB,OAApB,CAAA,CAAsC,CACpC,IAAIk/C,EAAK,IAAAzvB,KAAA1qB,OAAA,CAAiB,IAAA1D,MAAjB,CAAT,CACAw/C,EAAAA,CAAAA,CAAa3B,CACb,IAAI3+B,CAAJ,CACa,GAAX,GAAI2+B,CAAJ,EACM4B,CAIJ,CAJU,IAAArxB,KAAA9O,UAAA,CAAoB,IAAAtf,MAApB,CAAiC,CAAjC,CAAoC,IAAAA,MAApC,CAAiD,CAAjD,CAIV,CAHKy/C,CAAAt5C,MAAA,CAAU,aAAV,CAGL,EAFE,IAAAy4C,WAAA,CAAgB,6BAAhB,CAAgDa,CAAhD,CAAsD,GAAtD,CAEF,CADA,IAAAz/C,MACA,EADc,CACd,CAAA+wC,CAAA,EAAU1wC,MAAAC,aAAA,CAAoBU,QAAA,CAASy+C,CAAT,CAAc,EAAd,CAApB,CALZ,EASI1O,CATJ,CAQE,CADI2O,CACJ,CADU/B,EAAA,CAAOE,CAAP,CACV,EACE9M,CADF,CACY2O,CADZ,CAGE3O,CAHF,CAGY8M,CAGd,CAAA3+B,CAAA,CAAS,CAAA,CAfX,KAgBO,IAAW,IAAX,GAAI2+B,CAAJ,CACL3+B,CAAA,CAAS,CAAA,CADJ,KAEA,CAAA,GAAI2+B,CAAJ,GAAW0B,CAAX,CAAkB,CACvB,IAAAv/C,MAAA,EACA,KAAA+9C,OAAAv+C,KAAA,CAAiB,OACRu/C,CADQ,MAETS,CAFS,QAGPzO,CAHO,MAIT,CAAA,CAJS,IAKXxsC,QAAQ,EAAG,CAAE,MAAOwsC,EAAT,CALA,CAAjB,CAOA,OATuB,CAWvBA,CAAA;AAAU8M,CAXL,CAaP,IAAA79C,MAAA,EAlCoC,CAoCtC,IAAA4+C,WAAA,CAAgB,oBAAhB,CAAsCG,CAAtC,CA1C0B,CA5NZ,CA8QlB,KAAI3c,GAASA,QAAS,CAACH,CAAD,CAAQH,CAAR,CAAiB5hB,CAAjB,CAA0B,CAC9C,IAAA+hB,MAAA,CAAaA,CACb,KAAAH,QAAA,CAAeA,CACf,KAAA5hB,QAAA,CAAeA,CAH+B,CAMhDkiB,GAAAud,KAAA,CAAch/C,CAAA,CAAO,QAAS,EAAG,CAC/B,MAAO,EADwB,CAAnB,CAEX,UACS,CAAA,CADT,CAFW,CAMdyhC,GAAAzoB,UAAA,CAAmB,aACJyoB,EADI,OAGV/8B,QAAS,CAAC+oB,CAAD,CAAOhpB,CAAP,CAAa,CAC3B,IAAAgpB,KAAA,CAAYA,CAGZ,KAAAhpB,KAAA,CAAYA,CAEZ,KAAA24C,OAAA,CAAc,IAAA9b,MAAA2b,IAAA,CAAexvB,CAAf,CAEVhpB,EAAJ,GAGE,IAAAw6C,WAEA,CAFkB,IAAAC,UAElB,CAAA,IAAAC,aAAA,CACA,IAAAC,YADA,CAEA,IAAAC,YAFA,CAGA,IAAAC,YAHA,CAGmBC,QAAQ,EAAG,CAC5B,IAAAtB,WAAA,CAAgB,mBAAhB,CAAqC,MAAOxwB,CAAP,OAAoB,CAApB,CAArC,CAD4B,CARhC,CAaA,KAAItuB,EAAQsF,CAAA,CAAO,IAAA+6C,QAAA,EAAP,CAAwB,IAAAC,WAAA,EAET,EAA3B,GAAI,IAAArC,OAAAp/C,OAAJ;AACE,IAAAigD,WAAA,CAAgB,wBAAhB,CAA0C,IAAAb,OAAA,CAAY,CAAZ,CAA1C,CAGFj+C,EAAAmqB,QAAA,CAAgB,CAAC,CAACnqB,CAAAmqB,QAClBnqB,EAAAka,SAAA,CAAiB,CAAC,CAACla,CAAAka,SAEnB,OAAOla,EA9BoB,CAHZ,SAoCRqgD,QAAS,EAAG,CACnB,IAAIA,CACJ,IAAI,IAAAE,OAAA,CAAY,GAAZ,CAAJ,CACEF,CACA,CADU,IAAAF,YAAA,EACV,CAAA,IAAAK,QAAA,CAAa,GAAb,CAFF,KAGO,IAAI,IAAAD,OAAA,CAAY,GAAZ,CAAJ,CACLF,CAAA,CAAU,IAAAI,iBAAA,EADL,KAEA,IAAI,IAAAF,OAAA,CAAY,GAAZ,CAAJ,CACLF,CAAA,CAAU,IAAAjO,OAAA,EADL,KAEA,CACL,IAAIjhB,EAAQ,IAAAovB,OAAA,EAEZ,EADAF,CACA,CADUlvB,CAAA1sB,GACV,GACE,IAAAq6C,WAAA,CAAgB,0BAAhB,CAA4C3tB,CAA5C,CAEEA,EAAA7rB,KAAJ,GACE+6C,CAAAnmC,SACA,CADmB,CAAA,CACnB,CAAAmmC,CAAAl2B,QAAA,CAAkB,CAAA,CAFpB,CANK,CAaP,IADA,IAAUhrB,CACV,CAAQipC,CAAR,CAAe,IAAAmY,OAAA,CAAY,GAAZ,CAAiB,GAAjB,CAAsB,GAAtB,CAAf,CAAA,CACoB,GAAlB,GAAInY,CAAA9Z,KAAJ,EACE+xB,CACA,CADU,IAAAL,aAAA,CAAkBK,CAAlB,CAA2BlhD,CAA3B,CACV,CAAAA,CAAA,CAAU,IAFZ,EAGyB,GAAlB,GAAIipC,CAAA9Z,KAAJ;CACLnvB,CACA,CADUkhD,CACV,CAAAA,CAAA,CAAU,IAAAH,YAAA,CAAiBG,CAAjB,CAFL,EAGkB,GAAlB,GAAIjY,CAAA9Z,KAAJ,EACLnvB,CACA,CADUkhD,CACV,CAAAA,CAAA,CAAU,IAAAJ,YAAA,CAAiBI,CAAjB,CAFL,EAIL,IAAAvB,WAAA,CAAgB,YAAhB,CAGJ,OAAOuB,EApCY,CApCJ,YA2ELvB,QAAQ,CAAC4B,CAAD,CA
AMvvB,CAAN,CAAa,CAC/B,KAAM+O,GAAA,CAAa,QAAb,CAEA/O,CAAA7C,KAFA,CAEYoyB,CAFZ,CAEkBvvB,CAAAjxB,MAFlB,CAEgC,CAFhC,CAEoC,IAAAouB,KAFpC,CAE+C,IAAAA,KAAA9O,UAAA,CAAoB2R,CAAAjxB,MAApB,CAF/C,CAAN,CAD+B,CA3EhB,WAiFNygD,QAAQ,EAAG,CACpB,GAA2B,CAA3B,GAAI,IAAA1C,OAAAp/C,OAAJ,CACE,KAAMqhC,GAAA,CAAa,MAAb,CAA0D,IAAA5R,KAA1D,CAAN,CACF,MAAO,KAAA2vB,OAAA,CAAY,CAAZ,CAHa,CAjFL,MAuFXG,QAAQ,CAACwC,CAAD,CAAKC,CAAL,CAASC,CAAT,CAAaC,CAAb,CAAiB,CAC7B,GAAyB,CAAzB,CAAI,IAAA9C,OAAAp/C,OAAJ,CAA4B,CAC1B,IAAIsyB,EAAQ,IAAA8sB,OAAA,CAAY,CAAZ,CAAZ,CACI+C,EAAI7vB,CAAA7C,KACR,IAAI0yB,CAAJ,GAAUJ,CAAV,EAAgBI,CAAhB,GAAsBH,CAAtB,EAA4BG,CAA5B,GAAkCF,CAAlC,EAAwCE,CAAxC,GAA8CD,CAA9C,EACK,EAACH,CAAD,EAAQC,CAAR,EAAeC,CAAf,EAAsBC,CAAtB,CADL,CAEE,MAAO5vB,EALiB,CAQ5B,MAAO,CAAA,CATsB,CAvFd,QAmGTovB,QAAQ,CAACK,CAAD,CAAKC,CAAL,CAASC,CAAT,CAAaC,CAAb,CAAgB,CAE9B,MAAA,CADI5vB,CACJ,CADY,IAAAitB,KAAA,CAAUwC,CAAV,CAAcC,CAAd,CAAkBC,CAAlB;AAAsBC,CAAtB,CACZ,GACM,IAAAz7C,KAIG6rB,EAJW7rB,CAAA6rB,CAAA7rB,KAIX6rB,EAHL,IAAA2tB,WAAA,CAAgB,mBAAhB,CAAqC3tB,CAArC,CAGKA,CADP,IAAA8sB,OAAA5sC,MAAA,EACO8f,CAAAA,CALT,EAOO,CAAA,CATuB,CAnGf,SA+GRqvB,QAAQ,CAACI,CAAD,CAAI,CACd,IAAAL,OAAA,CAAYK,CAAZ,CAAL,EACE,IAAA9B,WAAA,CAAgB,4BAAhB,CAA+C8B,CAA/C,CAAoD,GAApD,CAAyD,IAAAxC,KAAA,EAAzD,CAFiB,CA/GJ,SAqHR6C,QAAQ,CAACx8C,CAAD,CAAKy8C,CAAL,CAAY,CAC3B,MAAOrgD,EAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAe,CACnC,MAAO/U,EAAA,CAAGD,CAAH,CAASgV,CAAT,CAAiB0nC,CAAjB,CAD4B,CAA9B,CAEJ,UACQA,CAAAhnC,SADR,CAFI,CADoB,CArHZ,WA6HNinC,QAAQ,CAACC,CAAD,CAAOC,CAAP,CAAeH,CAAf,CAAqB,CACtC,MAAOrgD,EAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAc,CAClC,MAAO4nC,EAAA,CAAK58C,CAAL,CAAWgV,CAAX,CAAA,CAAqB6nC,CAAA,CAAO78C,CAAP,CAAagV,CAAb,CAArB,CAA4C0nC,CAAA,CAAM18C,CAAN,CAAYgV,CAAZ,CADjB,CAA7B,CAEJ,UACS4nC,CAAAlnC,SADT,EAC0BmnC,CAAAnnC,SAD1B,EAC6CgnC,CAAAhnC,SAD7C,CAFI,CAD+B,CA7HvB,UAqIPonC,QAAQ,CAACF,CAAD,CAAO38C,CAAP,CAAWy8C,CAAX,CAAkB,CAClC,MAAOrgD,EAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAe,CACnC,MAAO/U,EAAA,CAAGD,CAAH,CAASgV,CAAT,CAAiB4nC,CAAjB,CAAuBF,CAAvB,CAD4B,CAA9B,CAEJ,UACQE,CAAAlnC,SADR,EACyBgnC,CAAAhnC,SADzB,CAFI,CAD2B,CArInB;WA6ILomC,QAAQ,EAAG,CAErB,IADA,IAAIA,EAAa,EACjB,CAAA,CAAA,CAGE,GAFyB,CAErB,CAFA,IAAArC,OAAAp/C,OAEA,EAF2B,CAAA,IAAAu/C,KAAA,CAAU,GAAV,CAAe,GAAf,CAAoB,GAApB,CAAyB,GAAzB,CAE3B,EADFkC,CAAA5gD,KAAA,CAAgB,IAAAygD,YAAA,EAAhB,CACE,CAAA,CAAC,IAAAI,OAAA,CAAY,GAAZ,CAAL,CAGE,MAA8B,EACvB,GADCD,CAAAzhD,OACD,CAADyhD,CAAA,CAAW,CAAX,CAAC,CACD,QAAQ,CAAC97C,CAAD,CAAOgV,CAAP,CAAe,CAErB,IADA,IAAIxZ,CAAJ,CACSH,EAAI,CAAb,CAAgBA,CAAhB,CAAoBygD,CAAAzhD,OAApB,CAAuCgB,CAAA,EAAvC,CAA4C,CAC1C,IAAI0hD,EAAYjB,CAAA,CAAWzgD,CAAX,CACZ0hD,EAAJ,GACEvhD,CADF,CACUuhD,CAAA,CAAU/8C,CAAV,CAAgBgV,CAAhB,CADV,CAF0C,CAM5C,MAAOxZ,EARc,CAVZ,CA7IN,aAqKJmgD,QAAQ,EAAG,CAGtB,IAFA,IAAIiB,EAAO,IAAA7vB,WAAA,EAAX,CACIJ,CACJ,CAAA,CAAA,CACE,GAAKA,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAqM,OAAA,EAA9B,CADT,KAGE,OAAOswC,EAPW,CArKP,QAiLTtwC,QAAQ,EAAG,CAIjB,IAHA,IAAIqgB,EAAQ,IAAAovB,OAAA,EAAZ,CACI97C,EAAK,IAAAu9B,QAAA,CAAa7Q,CAAA7C,KAAb,CADT,CAEIkzB,EAAS,EACb,CAAA,CAAA,CACE,GAAKrwB,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,CACEiB,CAAA9hD,KAAA,CAAY,IAAA6xB,WAAA,EAAZ,CADF,KAEO,CACL,IAAIkwB;AAAWA,QAAQ,CAACj9C,CAAD,CAAOgV,CAAP,CAAe64B,CAAf,CAAsB,CACvC54B,CAAAA,CAAO,CAAC44B,CAAD,CACX,KAAK,IAAIxyC,EAAI,CAAb,CAAgBA,CAAhB,CAAoB2hD,CAAA3iD,OAApB,CAAmCgB,CAAA,EAAnC,CACE4Z,CAAA/Z,KAAA,CAAU8hD,CAAA,CAAO3hD,CAAP,CAAA,CAAU2E,CAAV,CAAgBgV,CAAhB,CAAV,CAEF,OAAO/U,EAAAI,MAAA,CAASL,CAAT,CAAeiV,CAAf,CALoC,CAO7C,OAAO,SAAQ,EAAG,CAChB,MAAOgoC,EADS,CARb,CAPQ,CAjLF,YAuMLlwB,QAAQ,EAAG,CACrB,MAAO,KAAA
uuB,WAAA,EADc,CAvMN,YA2MLA,QAAQ,EAAG,CACrB,IAAIsB,EAAO,IAAAM,QAAA,EAAX,CACIR,CADJ,CAEI/vB,CACJ,OAAA,CAAKA,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,GACOa,CAAA92B,OAKE,EAJL,IAAAw0B,WAAA,CAAgB,0BAAhB,CACI,IAAAxwB,KAAA9O,UAAA,CAAoB,CAApB,CAAuB2R,CAAAjxB,MAAvB,CADJ,CAC0C,0BAD1C,CACsEixB,CADtE,CAIK,CADP+vB,CACO,CADC,IAAAQ,QAAA,EACD,CAAA,QAAQ,CAACl5C,CAAD,CAAQgR,CAAR,CAAgB,CAC7B,MAAO4nC,EAAA92B,OAAA,CAAY9hB,CAAZ,CAAmB04C,CAAA,CAAM14C,CAAN,CAAagR,CAAb,CAAnB,CAAyCA,CAAzC,CADsB,CANjC,EAUO4nC,CAdc,CA3MN,SA4NRM,QAAQ,EAAG,CAClB,IAAIN,EAAO,IAAArB,UAAA,EAAX,CACIsB,CADJ,CAEIlwB,CACJ,IAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,CAAgC,CAC9Bc,CAAA,CAAS,IAAAK,QAAA,EACT;GAAKvwB,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,CACE,MAAO,KAAAY,UAAA,CAAeC,CAAf,CAAqBC,CAArB,CAA6B,IAAAK,QAAA,EAA7B,CAEP,KAAA5C,WAAA,CAAgB,YAAhB,CAA8B3tB,CAA9B,CAL4B,CAAhC,IAQE,OAAOiwB,EAZS,CA5NH,WA4ONrB,QAAQ,EAAG,CAGpB,IAFA,IAAIqB,EAAO,IAAAO,WAAA,EAAX,CACIxwB,CACJ,CAAA,CAAA,CACE,GAAKA,CAAL,CAAa,IAAAovB,OAAA,CAAY,IAAZ,CAAb,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAk9C,WAAA,EAA9B,CADT,KAGE,OAAOP,EAPS,CA5OL,YAwPLO,QAAQ,EAAG,CACrB,IAAIP,EAAO,IAAAQ,SAAA,EAAX,CACIzwB,CACJ,IAAKA,CAAL,CAAa,IAAAovB,OAAA,CAAY,IAAZ,CAAb,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAk9C,WAAA,EAA9B,CAET,OAAOP,EANc,CAxPN,UAiQPQ,QAAQ,EAAG,CACnB,IAAIR,EAAO,IAAAS,WAAA,EAAX,CACI1wB,CACJ,IAAKA,CAAL,CAAa,IAAAovB,OAAA,CAAY,IAAZ,CAAiB,IAAjB,CAAsB,KAAtB,CAA4B,KAA5B,CAAb,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAm9C,SAAA,EAA9B,CAET,OAAOR,EANY,CAjQJ;WA0QLS,QAAQ,EAAG,CACrB,IAAIT,EAAO,IAAAU,SAAA,EAAX,CACI3wB,CACJ,IAAKA,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAiB,GAAjB,CAAsB,IAAtB,CAA4B,IAA5B,CAAb,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAo9C,WAAA,EAA9B,CAET,OAAOT,EANc,CA1QN,UAmRPU,QAAQ,EAAG,CAGnB,IAFA,IAAIV,EAAO,IAAAW,eAAA,EAAX,CACI5wB,CACJ,CAAQA,CAAR,CAAgB,IAAAovB,OAAA,CAAY,GAAZ,CAAgB,GAAhB,CAAhB,CAAA,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAs9C,eAAA,EAA9B,CAET,OAAOX,EANY,CAnRJ,gBA4RDW,QAAQ,EAAG,CAGzB,IAFA,IAAIX,EAAO,IAAAY,MAAA,EAAX,CACI7wB,CACJ,CAAQA,CAAR,CAAgB,IAAAovB,OAAA,CAAY,GAAZ,CAAgB,GAAhB,CAAoB,GAApB,CAAhB,CAAA,CACEa,CAAA,CAAO,IAAAE,SAAA,CAAcF,CAAd,CAAoBjwB,CAAA1sB,GAApB,CAA8B,IAAAu9C,MAAA,EAA9B,CAET,OAAOZ,EANkB,CA5RV,OAqSVY,QAAQ,EAAG,CAChB,IAAI7wB,CACJ,OAAI,KAAAovB,OAAA,CAAY,GAAZ,CAAJ,CACS,IAAAF,QAAA,EADT,CAEO,CAAKlvB,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,EACE,IAAAe,SAAA,CAAchf,EAAAud,KAAd,CAA2B1uB,CAAA1sB,GAA3B;AAAqC,IAAAu9C,MAAA,EAArC,CADF,CAEA,CAAK7wB,CAAL,CAAa,IAAAovB,OAAA,CAAY,GAAZ,CAAb,EACE,IAAAU,QAAA,CAAa9vB,CAAA1sB,GAAb,CAAuB,IAAAu9C,MAAA,EAAvB,CADF,CAGE,IAAA3B,QAAA,EATO,CArSD,aAkTJJ,QAAQ,CAAC7N,CAAD,CAAS,CAC5B,IAAI/P,EAAS,IAAb,CACI4f,EAAQ,IAAA1B,OAAA,EAAAjyB,KADZ,CAEItkB,EAASs3B,EAAA,CAAS2gB,CAAT,CAAgB,IAAA7hC,QAAhB,CAA8B,IAAAkO,KAA9B,CAEb,OAAOztB,EAAA,CAAO,QAAQ,CAAC2H,CAAD,CAAQgR,CAAR,CAAgBhV,CAAhB,CAAsB,CAC1C,MAAOwF,EAAA,CAAOxF,CAAP,EAAe4tC,CAAA,CAAO5pC,CAAP,CAAcgR,CAAd,CAAf,CADmC,CAArC,CAEJ,QACO8Q,QAAQ,CAAC9hB,CAAD,CAAQxI,CAAR,CAAewZ,CAAf,CAAuB,CACrC,MAAO4mB,GAAA,CAAOgS,CAAA,CAAO5pC,CAAP,CAAcgR,CAAd,CAAP,CAA8ByoC,CAA9B,CAAqCjiD,CAArC,CAA4CqiC,CAAA/T,KAA5C,CAAyD+T,CAAAjiB,QAAzD,CAD8B,CADtC,CAFI,CALqB,CAlTb,aAgUJ8/B,QAAQ,CAACvhD,CAAD,CAAM,CACzB,IAAI0jC,EAAS,IAAb,CAEI6f,EAAU,IAAA3wB,WAAA,EACd,KAAAivB,QAAA,CAAa,GAAb,CAEA,OAAO3/C,EAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAe,CAAA,IAC/B2oC,EAAIxjD,CAAA,CAAI6F,CAAJ,CAAUgV,CAAV,CAD2B,CAE/B3Z,EAAIqiD,CAAA,CAAQ19C,CAAR,CAAcgV,CAAd,CAF2B,CAG5BmH,CAEP,IAAI,CAACwhC,CAAL,CAAQ,MAAO3jD,EAEf,EADAiH,CACA,CADI06B,EAAA,
CAAiBgiB,CAAA,CAAEtiD,CAAF,CAAjB,CAAuBwiC,CAAA/T,KAAvB,CACJ,IAAS7oB,CAAA+uB,KAAT,EAAmB6N,CAAAjiB,QAAAogB,eAAnB,IACE7f,CAKA,CALIlb,CAKJ,CAJM,KAIN,EAJeA,EAIf,GAHEkb,CAAA+f,IACA,CADQliC,CACR,CAAAmiB,CAAA6T,KAAA,CAAO,QAAQ,CAACxvB,CAAD,CAAM,CAAE2b,CAAA+f,IAAA;AAAQ17B,CAAV,CAArB,CAEF,EAAAS,CAAA,CAAIA,CAAAi7B,IANN,CAQA,OAAOj7B,EAf4B,CAA9B,CAgBJ,QACO6kB,QAAQ,CAAC9lB,CAAD,CAAOxE,CAAP,CAAcwZ,CAAd,CAAsB,CACpC,IAAIpa,EAAM8iD,CAAA,CAAQ19C,CAAR,CAAcgV,CAAd,CAGV,OADW2mB,GAAAiiB,CAAiBzjD,CAAA,CAAI6F,CAAJ,CAAUgV,CAAV,CAAjB4oC,CAAoC/f,CAAA/T,KAApC8zB,CACJ,CAAKhjD,CAAL,CAAP,CAAmBY,CAJiB,CADrC,CAhBI,CANkB,CAhUV,cAgWHggD,QAAQ,CAACv7C,CAAD,CAAK49C,CAAL,CAAoB,CACxC,IAAIb,EAAS,EACb,IAA8B,GAA9B,GAAI,IAAAb,UAAA,EAAAryB,KAAJ,EACE,EACEkzB,EAAA9hD,KAAA,CAAY,IAAA6xB,WAAA,EAAZ,CADF,OAES,IAAAgvB,OAAA,CAAY,GAAZ,CAFT,CADF,CAKA,IAAAC,QAAA,CAAa,GAAb,CAEA,KAAIne,EAAS,IAEb,OAAO,SAAQ,CAAC75B,CAAD,CAAQgR,CAAR,CAAgB,CAI7B,IAHA,IAAIC,EAAO,EAAX,CACIta,EAAUkjD,CAAA,CAAgBA,CAAA,CAAc75C,CAAd,CAAqBgR,CAArB,CAAhB,CAA+ChR,CAD7D,CAGS3I,EAAI,CAAb,CAAgBA,CAAhB,CAAoB2hD,CAAA3iD,OAApB,CAAmCgB,CAAA,EAAnC,CACE4Z,CAAA/Z,KAAA,CAAU8hD,CAAA,CAAO3hD,CAAP,CAAA,CAAU2I,CAAV,CAAiBgR,CAAjB,CAAV,CAEE8oC,EAAAA,CAAQ79C,CAAA,CAAG+D,CAAH,CAAUgR,CAAV,CAAkBra,CAAlB,CAARmjD,EAAsChhD,CAE1C6+B,GAAA,CAAiBhhC,CAAjB,CAA0BkjC,CAAA/T,KAA1B,CACA6R,GAAA,CAAiBmiB,CAAjB,CAAwBjgB,CAAA/T,KAAxB,CAGI7oB,EAAAA,CAAI68C,CAAAz9C,MACA,CAAAy9C,CAAAz9C,MAAA,CAAY1F,CAAZ,CAAqBsa,CAArB,CAAA,CACA6oC,CAAA,CAAM7oC,CAAA,CAAK,CAAL,CAAN,CAAeA,CAAA,CAAK,CAAL,CAAf,CAAwBA,CAAA,CAAK,CAAL,CAAxB,CAAiCA,CAAA,CAAK,CAAL,CAAjC,CAA0CA,CAAA,CAAK,CAAL,CAA1C,CAER,OAAO0mB,GAAA,CAAiB16B,CAAjB,CAAoB48B,CAAA/T,KAApB,CAjBsB,CAXS,CAhWzB,kBAiYCmyB,QAAS,EAAG,CAC5B,IAAI8B,EAAa,EAAjB,CACIC,EAAc,CAAA,CAClB,IAA8B,GAA9B,GAAI,IAAA7B,UAAA,EAAAryB,KAAJ,EACE,EAAG,CACD,GAAI,IAAA8vB,KAAA,CAAU,GAAV,CAAJ,CAEE,KAEF;IAAIqE,EAAY,IAAAlxB,WAAA,EAChBgxB,EAAA7iD,KAAA,CAAgB+iD,CAAhB,CACKA,EAAAvoC,SAAL,GACEsoC,CADF,CACgB,CAAA,CADhB,CAPC,CAAH,MAUS,IAAAjC,OAAA,CAAY,GAAZ,CAVT,CADF,CAaA,IAAAC,QAAA,CAAa,GAAb,CAEA,OAAO3/C,EAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAe,CAEnC,IADA,IAAI1W,EAAQ,EAAZ,CACSjD,EAAI,CAAb,CAAgBA,CAAhB,CAAoB0iD,CAAA1jD,OAApB,CAAuCgB,CAAA,EAAvC,CACEiD,CAAApD,KAAA,CAAW6iD,CAAA,CAAW1iD,CAAX,CAAA,CAAc2E,CAAd,CAAoBgV,CAApB,CAAX,CAEF,OAAO1W,EAL4B,CAA9B,CAMJ,SACQ,CAAA,CADR,UAES0/C,CAFT,CANI,CAlBqB,CAjYb,QA+ZTpQ,QAAS,EAAG,CAClB,IAAIsQ,EAAY,EAAhB,CACIF,EAAc,CAAA,CAClB,IAA8B,GAA9B,GAAI,IAAA7B,UAAA,EAAAryB,KAAJ,EACE,EAAG,CACD,GAAI,IAAA8vB,KAAA,CAAU,GAAV,CAAJ,CAEE,KAHD,KAKGjtB,EAAQ,IAAAovB,OAAA,EALX,CAMDnhD,EAAM+xB,CAAA8f,OAAN7xC,EAAsB+xB,CAAA7C,KACtB,KAAAkyB,QAAA,CAAa,GAAb,CACA,KAAIxgD,EAAQ,IAAAuxB,WAAA,EACZmxB,EAAAhjD,KAAA,CAAe,KAAMN,CAAN,OAAkBY,CAAlB,CAAf,CACKA,EAAAka,SAAL,GACEsoC,CADF,CACgB,CAAA,CADhB,CAVC,CAAH,MAaS,IAAAjC,OAAA,CAAY,GAAZ,CAbT,CADF,CAgBA,IAAAC,QAAA,CAAa,GAAb,CAEA,OAAO3/C,EAAA,CAAO,QAAQ,CAAC2D,CAAD,CAAOgV,CAAP,CAAe,CAEnC,IADA,IAAI44B,EAAS,EAAb,CACSvyC,EAAI,CAAb,CAAgBA,CAAhB;AAAoB6iD,CAAA7jD,OAApB,CAAsCgB,CAAA,EAAtC,CAA2C,CACzC,IAAI6G,EAAWg8C,CAAA,CAAU7iD,CAAV,CACfuyC,EAAA,CAAO1rC,CAAAtH,IAAP,CAAA,CAAuBsH,CAAA1G,MAAA,CAAewE,CAAf,CAAqBgV,CAArB,CAFkB,CAI3C,MAAO44B,EAN4B,CAA9B,CAOJ,SACQ,CAAA,CADR,UAESoQ,CAFT,CAPI,CArBW,CA/ZH,CAsenB,KAAIjhB,GAAgB,EAApB,CAumEI8H,GAAa5qC,CAAA,CAAO,MAAP,CAvmEjB,CAymEIgrC,GAAe,MACX,MADW,KAEZ,KAFY,KAGZ,KAHY,cAMH,aANG,IAOb,IAPa,CAzmEnB,CA6zGIuD,EAAiBzuC,CAAA8T,cAAA,CAAuB,GAAvB,CA7zGrB,CA8zGI66B,GAAYpV,EAAA,CAAWx5B,CAAA2D,SAAAgc,KAAX,CAAiC,CAAA,CAAjC,CAqNhBjP,GAAAqI,QAAA,CAA0B,CAAC,UAAD,CAkU1Bg2B,GAAAh2B,QAAA,CAAyB,CAAC,SAAD,CA4DzBs2B,GAAAt2B,QAAA,CAAuB,CAAC,SAAD,CASvB,KAAIw3B,GAAc,GAAlB,CA2HIs
D,GAAe,MACXvB,CAAA,CAAW,UAAX,CAAuB,CAAvB,CADW,IAEXA,CAAA,CAAW,UAAX,CAAuB,CAAvB,CAA0B,CAA1B,CAA6B,CAAA,CAA7B,CAFW,GAGXA,CAAA,CAAW,UAAX,CAAuB,CAAvB,CAHW,MAIXE,EAAA,CAAc,OAAd,CAJW,KAKXA,EAAA,CAAc,OAAd,CAAuB,CAAA,CAAvB,CALW,IAMXF,CAAA,CAAW,OAAX,CAAoB,CAApB,CAAuB,CAAvB,CANW,GAOXA,CAAA,CAAW,OAAX,CAAoB,CAApB,CAAuB,CAAvB,CAPW,IAQXA,CAAA,CAAW,MAAX,CAAmB,CAAnB,CARW,GASXA,CAAA,CAAW,MAAX,CAAmB,CAAnB,CATW,IAUXA,CAAA,CAAW,OAAX,CAAoB,CAApB,CAVW,GAWXA,CAAA,CAAW,OAAX;AAAoB,CAApB,CAXW,IAYXA,CAAA,CAAW,OAAX,CAAoB,CAApB,CAAwB,GAAxB,CAZW,GAaXA,CAAA,CAAW,OAAX,CAAoB,CAApB,CAAwB,GAAxB,CAbW,IAcXA,CAAA,CAAW,SAAX,CAAsB,CAAtB,CAdW,GAeXA,CAAA,CAAW,SAAX,CAAsB,CAAtB,CAfW,IAgBXA,CAAA,CAAW,SAAX,CAAsB,CAAtB,CAhBW,GAiBXA,CAAA,CAAW,SAAX,CAAsB,CAAtB,CAjBW,KAoBXA,CAAA,CAAW,cAAX,CAA2B,CAA3B,CApBW,MAqBXE,EAAA,CAAc,KAAd,CArBW,KAsBXA,EAAA,CAAc,KAAd,CAAqB,CAAA,CAArB,CAtBW,GAJnB6R,QAAmB,CAAC9R,CAAD,CAAOxC,CAAP,CAAgB,CACjC,MAAyB,GAAlB,CAAAwC,CAAA+R,SAAA,EAAA,CAAuBvU,CAAAwU,MAAA,CAAc,CAAd,CAAvB,CAA0CxU,CAAAwU,MAAA,CAAc,CAAd,CADhB,CAIhB,GAdnBC,QAAuB,CAACjS,CAAD,CAAO,CACxBkS,CAAAA,CAAQ,EAARA,CAAYlS,CAAAmS,kBAAA,EAMhB,OAHAC,EAGA,EAL0B,CAATA,EAACF,CAADE,CAAc,GAAdA,CAAoB,EAKrC,GAHcxS,EAAA,CAAUnkB,IAAA,CAAY,CAAP,CAAAy2B,CAAA,CAAW,OAAX,CAAqB,MAA1B,CAAA,CAAkCA,CAAlC,CAAyC,EAAzC,CAAV,CAAwD,CAAxD,CAGd,CAFctS,EAAA,CAAUnkB,IAAA+iB,IAAA,CAAS0T,CAAT,CAAgB,EAAhB,CAAV,CAA+B,CAA/B,CAEd,CAP4B,CAcX,CA3HnB,CAsJI7Q,GAAqB,8EAtJzB,CAuJID,GAAgB,UAmFpB3E,GAAAj2B,QAAA,CAAqB,CAAC,SAAD,CAuHrB,KAAIq2B,GAAkBjsC,EAAA,CAAQiE,CAAR,CAAtB,CAWImoC,GAAkBpsC,EAAA,CAAQmK,EAAR,CA2KtBgiC,GAAAv2B,QAAA;AAAwB,CAAC,QAAD,CAiFxB,KAAIlL,GAAsB1K,EAAA,CAAQ,UACtB,GADsB,SAEvBgH,QAAQ,CAAC7C,CAAD,CAAUpD,CAAV,CAAgB,CAEnB,CAAZ,EAAIsU,CAAJ,GAIOtU,CAAAyb,KAQL,EARmBzb,CAAAmF,KAQnB,EAPEnF,CAAAqqB,KAAA,CAAU,MAAV,CAAkB,EAAlB,CAOF,CAAAjnB,CAAAM,OAAA,CAAe3H,CAAAotB,cAAA,CAAuB,QAAvB,CAAf,CAZF,CAeA,IAAI,CAACnpB,CAAAyb,KAAL,EAAkB,CAACzb,CAAA0gD,UAAnB,EAAqC,CAAC1gD,CAAAmF,KAAtC,CACE,MAAO,SAAQ,CAACa,CAAD,CAAQ5C,CAAR,CAAiB,CAE9B,IAAIqY,EAA+C,4BAAxC,GAAAlc,EAAAxC,KAAA,CAAcqG,CAAArD,KAAA,CAAa,MAAb,CAAd,CAAA,CACA,YADA,CACe,MAC1BqD,EAAA6Y,GAAA,CAAW,OAAX,CAAoB,QAAQ,CAACzI,CAAD,CAAO,CAE5BpQ,CAAApD,KAAA,CAAayb,CAAb,CAAL,EACEjI,CAAAC,eAAA,EAH+B,CAAnC,CAJ8B,CAlBH,CAFD,CAAR,CAA1B,CAuXI1H,GAA6B,EAIjCtP,EAAA,CAAQ4W,EAAR,CAAsB,QAAQ,CAACstC,CAAD,CAAW15B,CAAX,CAAqB,CAEjD,GAAgB,UAAhB,EAAI05B,CAAJ,CAAA,CAEA,IAAIC,EAAa78B,EAAA,CAAmB,KAAnB,CAA2BkD,CAA3B,CACjBlb,GAAA,CAA2B60C,CAA3B,CAAA,CAAyC,QAAQ,EAAG,CAClD,MAAO,UACK,GADL,MAEC1iC,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CACnCgG,CAAAnF,OAAA,CAAab,CAAA,CAAK4gD,CAAL,CAAb,CAA+BC,QAAiC,CAACrjD,CAAD,CAAQ,CACtEwC,CAAAqqB,KAAA,CAAUpD,CAAV,CAAoB,CAAC,CAACzpB,CAAtB,CADsE,CAAxE,CADmC,CAFhC,CAD2C,CAHpD,CAFiD,CAAnD,CAmBAf,EAAA,CAAQ,CAAC,KAAD;AAAQ,QAAR,CAAkB,MAAlB,CAAR,CAAmC,QAAQ,CAACwqB,CAAD,CAAW,CACpD,IAAI25B,EAAa78B,EAAA,CAAmB,KAAnB,CAA2BkD,CAA3B,CACjBlb,GAAA,CAA2B60C,CAA3B,CAAA,CAAyC,QAAQ,EAAG,CAClD,MAAO,UACK,EADL,MAEC1iC,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAAA,IAC/B2gD,EAAW15B,CADoB,CAE/B9hB,EAAO8hB,CAEM,OAAjB,GAAIA,CAAJ,EAC4C,4BAD5C,GACI1nB,EAAAxC,KAAA,CAAcqG,CAAArD,KAAA,CAAa,MAAb,CAAd,CADJ,GAEEoF,CAEA,CAFO,WAEP,CADAnF,CAAA6jB,MAAA,CAAW1e,CAAX,CACA,CADmB,YACnB,CAAAw7C,CAAA,CAAW,IAJb,CAOA3gD,EAAAwnB,SAAA,CAAco5B,CAAd,CAA0B,QAAQ,CAACpjD,CAAD,CAAQ,CACnCA,CAAL,GAGAwC,CAAAqqB,KAAA,CAAUllB,CAAV,CAAgB3H,CAAhB,CAMA,CAAI8W,CAAJ,EAAYqsC,CAAZ,EAAsBv9C,CAAArD,KAAA,CAAa4gD,CAAb,CAAuB3gD,CAAA,CAAKmF,CAAL,CAAvB,CATtB,CADwC,CAA1C,CAXmC,CAFhC,CAD2C,CAFA,CAAtD,CAkCA,KAAI+rC,GAAe,aACJpyC,CADI,gBAEDA,CAFC,cAGHA,CAHG,WAINA,CAJM,cAKHA,CALG,CA6CnB4xC,GAAA77B,QAAA,CAAyB,CAAC,UAAD,CAA
a,QAAb,CAAuB,QAAvB,CAAiC,UAAjC,CA+TzB,KAAIisC,GAAuBA,QAAQ,CAACC,CAAD,CAAW,CAC5C,MAAO,CAAC,UAAD,CAAa,QAAQ,CAAC1nC,CAAD,CAAW,CAoDrC,MAnDoBxP,MACZ,MADYA;SAERk3C,CAAA,CAAW,KAAX,CAAmB,GAFXl3C,YAGN6mC,EAHM7mC,SAIT5D,QAAQ,EAAG,CAClB,MAAO,KACA2f,QAAQ,CAAC5f,CAAD,CAAQg7C,CAAR,CAAqBhhD,CAArB,CAA2BkgB,CAA3B,CAAuC,CAClD,GAAI,CAAClgB,CAAAihD,OAAL,CAAkB,CAOhB,IAAIC,EAAyBA,QAAQ,CAAC1tC,CAAD,CAAQ,CAC3CA,CAAAC,eACA,CAAID,CAAAC,eAAA,EAAJ,CACID,CAAAG,YADJ,CACwB,CAAA,CAHmB,CAM7C4hC,GAAA,CAAmByL,CAAA,CAAY,CAAZ,CAAnB,CAAmC,QAAnC,CAA6CE,CAA7C,CAIAF,EAAA/kC,GAAA,CAAe,UAAf,CAA2B,QAAQ,EAAG,CACpC5C,CAAA,CAAS,QAAQ,EAAG,CAClB7H,EAAA,CAAsBwvC,CAAA,CAAY,CAAZ,CAAtB,CAAsC,QAAtC,CAAgDE,CAAhD,CADkB,CAApB,CAEG,CAFH,CAEM,CAAA,CAFN,CADoC,CAAtC,CAjBgB,CADgC,IAyB9CC,EAAiBH,CAAApiD,OAAA,EAAAshB,WAAA,CAAgC,MAAhC,CAzB6B,CA0B9CkhC,EAAQphD,CAAAmF,KAARi8C,EAAqBphD,CAAAwxC,OAErB4P,EAAJ,EACExjB,EAAA,CAAO53B,CAAP,CAAco7C,CAAd,CAAqBlhC,CAArB,CAAiCkhC,CAAjC,CAEF,IAAID,CAAJ,CACEH,CAAA/kC,GAAA,CAAe,UAAf,CAA2B,QAAQ,EAAG,CACpCklC,CAAAlP,eAAA,CAA8B/xB,CAA9B,CACIkhC,EAAJ,EACExjB,EAAA,CAAO53B,CAAP,CAAco7C,CAAd,CAAqBplD,CAArB,CAAgColD,CAAhC,CAEF/iD,EAAA,CAAO6hB,CAAP,CAAmBgxB,EAAnB,CALoC,CAAtC,CAhCgD,CAD/C,CADW,CAJFrnC,CADiB,CAAhC,CADqC,CAA9C,CAyDIA,GAAgBi3C,EAAA,EAzDpB,CA0DIp2C,GAAkBo2C,EAAA,CAAqB,CAAA,CAArB,CA1DtB,CAoEIO,GAAa,qFApEjB;AAqEIC,GAAe,4DArEnB,CAsEIC,GAAgB,oCAtEpB,CAwEIC,GAAY,MA6ENjO,EA7EM,QAokBhBkO,QAAwB,CAACz7C,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6Bp5B,CAA7B,CAAuC2W,CAAvC,CAAiD,CACvEmjB,EAAA,CAAcvtC,CAAd,CAAqB5C,CAArB,CAA8BpD,CAA9B,CAAoC6yC,CAApC,CAA0Cp5B,CAA1C,CAAoD2W,CAApD,CAEAyiB,EAAAI,SAAA/1C,KAAA,CAAmB,QAAQ,CAACM,CAAD,CAAQ,CACjC,IAAI+F,EAAQsvC,CAAAmB,SAAA,CAAcx2C,CAAd,CACZ,IAAI+F,CAAJ,EAAag+C,EAAAj7C,KAAA,CAAmB9I,CAAnB,CAAb,CAEE,MADAq1C,EAAAR,aAAA,CAAkB,QAAlB,CAA4B,CAAA,CAA5B,CACO,CAAU,EAAV,GAAA70C,CAAA,CAAe,IAAf,CAAuB+F,CAAA,CAAQ/F,CAAR,CAAgB8xC,UAAA,CAAW9xC,CAAX,CAE9Cq1C,EAAAR,aAAA,CAAkB,QAAlB,CAA4B,CAAA,CAA5B,CACA,OAAOr2C,EAPwB,CAAnC,CAWAg3C,GAAA,CAAyBH,CAAzB,CAA+B,QAA/B,CAAyCzvC,CAAzC,CAEAyvC,EAAAuB,YAAAl3C,KAAA,CAAsB,QAAQ,CAACM,CAAD,CAAQ,CACpC,MAAOq1C,EAAAmB,SAAA,CAAcx2C,CAAd,CAAA,CAAuB,EAAvB,CAA4B,EAA5B,CAAiCA,CADJ,CAAtC,CAIIwC,EAAAmtC,IAAJ,GACMuU,CAMJ,CANmBA,QAAQ,CAAClkD,CAAD,CAAQ,CACjC,IAAI2vC,EAAMmC,UAAA,CAAWtvC,CAAAmtC,IAAX,CACV,OAAOyF,GAAA,CAASC,CAAT,CAAe,KAAf,CAAsBA,CAAAmB,SAAA,CAAcx2C,CAAd,CAAtB,EAA8CA,CAA9C,EAAuD2vC,CAAvD,CAA4D3vC,CAA5D,CAF0B,CAMnC,CADAq1C,CAAAI,SAAA/1C,KAAA,CAAmBwkD,CAAnB,CACA;AAAA7O,CAAAuB,YAAAl3C,KAAA,CAAsBwkD,CAAtB,CAPF,CAUI1hD,EAAA+pB,IAAJ,GACM43B,CAMJ,CANmBA,QAAQ,CAACnkD,CAAD,CAAQ,CACjC,IAAIusB,EAAMulB,UAAA,CAAWtvC,CAAA+pB,IAAX,CACV,OAAO6oB,GAAA,CAASC,CAAT,CAAe,KAAf,CAAsBA,CAAAmB,SAAA,CAAcx2C,CAAd,CAAtB,EAA8CA,CAA9C,EAAuDusB,CAAvD,CAA4DvsB,CAA5D,CAF0B,CAMnC,CADAq1C,CAAAI,SAAA/1C,KAAA,CAAmBykD,CAAnB,CACA,CAAA9O,CAAAuB,YAAAl3C,KAAA,CAAsBykD,CAAtB,CAPF,CAUA9O,EAAAuB,YAAAl3C,KAAA,CAAsB,QAAQ,CAACM,CAAD,CAAQ,CACpC,MAAOo1C,GAAA,CAASC,CAAT,CAAe,QAAf,CAAyBA,CAAAmB,SAAA,CAAcx2C,CAAd,CAAzB,EAAiD6B,EAAA,CAAS7B,CAAT,CAAjD,CAAkEA,CAAlE,CAD6B,CAAtC,CAxCuE,CApkBzD,KAinBhBokD,QAAqB,CAAC57C,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6Bp5B,CAA7B,CAAuC2W,CAAvC,CAAiD,CACpEmjB,EAAA,CAAcvtC,CAAd,CAAqB5C,CAArB,CAA8BpD,CAA9B,CAAoC6yC,CAApC,CAA0Cp5B,CAA1C,CAAoD2W,CAApD,CAEIyxB,EAAAA,CAAeA,QAAQ,CAACrkD,CAAD,CAAQ,CACjC,MAAOo1C,GAAA,CAASC,CAAT,CAAe,KAAf,CAAsBA,CAAAmB,SAAA,CAAcx2C,CAAd,CAAtB,EAA8C6jD,EAAA/6C,KAAA,CAAgB9I,CAAhB,CAA9C,CAAsEA,CAAtE,CAD0B,CAInCq1C,EAAAuB,YAAAl3C,KAAA,CAAsB2kD,CAAtB,CACAhP,EAAAI,SAAA/1C,KAAA,CAAmB2kD,CAAnB,CARoE,CAjnBtD,OA4nBhBC,QAAuB,CAAC97C,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAA
uB6yC,CAAvB,CAA6Bp5B,CAA7B,CAAuC2W,CAAvC,CAAiD,CACtEmjB,EAAA,CAAcvtC,CAAd,CAAqB5C,CAArB,CAA8BpD,CAA9B,CAAoC6yC,CAApC,CAA0Cp5B,CAA1C,CAAoD2W,CAApD,CAEI2xB,EAAAA,CAAiBA,QAAQ,CAACvkD,CAAD,CAAQ,CACnC,MAAOo1C,GAAA,CAASC,CAAT,CAAe,OAAf,CAAwBA,CAAAmB,SAAA,CAAcx2C,CAAd,CAAxB,EAAgD8jD,EAAAh7C,KAAA,CAAkB9I,CAAlB,CAAhD,CAA0EA,CAA1E,CAD4B,CAIrCq1C,EAAAuB,YAAAl3C,KAAA,CAAsB6kD,CAAtB,CACAlP;CAAAI,SAAA/1C,KAAA,CAAmB6kD,CAAnB,CARsE,CA5nBxD,OAuoBhBC,QAAuB,CAACh8C,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6B,CAE9C3zC,CAAA,CAAYc,CAAAmF,KAAZ,CAAJ,EACE/B,CAAApD,KAAA,CAAa,MAAb,CAAqBvC,EAAA,EAArB,CAGF2F,EAAA6Y,GAAA,CAAW,OAAX,CAAoB,QAAQ,EAAG,CACzB7Y,CAAA,CAAQ,CAAR,CAAA6+C,QAAJ,EACEj8C,CAAAG,OAAA,CAAa,QAAQ,EAAG,CACtB0sC,CAAAc,cAAA,CAAmB3zC,CAAAxC,MAAnB,CADsB,CAAxB,CAF2B,CAA/B,CAQAq1C,EAAAiB,QAAA,CAAeC,QAAQ,EAAG,CAExB3wC,CAAA,CAAQ,CAAR,CAAA6+C,QAAA,CADYjiD,CAAAxC,MACZ,EAA+Bq1C,CAAAa,WAFP,CAK1B1zC,EAAAwnB,SAAA,CAAc,OAAd,CAAuBqrB,CAAAiB,QAAvB,CAnBkD,CAvoBpC,UA6pBhBoO,QAA0B,CAACl8C,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6B,CAAA,IACjDsP,EAAYniD,CAAAoiD,YADqC,CAEjDC,EAAariD,CAAAsiD,aAEZ/lD,EAAA,CAAS4lD,CAAT,CAAL,GAA0BA,CAA1B,CAAsC,CAAA,CAAtC,CACK5lD,EAAA,CAAS8lD,CAAT,CAAL,GAA2BA,CAA3B,CAAwC,CAAA,CAAxC,CAEAj/C,EAAA6Y,GAAA,CAAW,OAAX,CAAoB,QAAQ,EAAG,CAC7BjW,CAAAG,OAAA,CAAa,QAAQ,EAAG,CACtB0sC,CAAAc,cAAA,CAAmBvwC,CAAA,CAAQ,CAAR,CAAA6+C,QAAnB,CADsB,CAAxB,CAD6B,CAA/B,CAMApP,EAAAiB,QAAA,CAAeC,QAAQ,EAAG,CACxB3wC,CAAA,CAAQ,CAAR,CAAA6+C,QAAA,CAAqBpP,CAAAa,WADG,CAK1Bb,EAAAmB,SAAA,CAAgBuO,QAAQ,CAAC/kD,CAAD,CAAQ,CAC9B,MAAOA,EAAP,GAAiB2kD,CADa,CAIhCtP;CAAAuB,YAAAl3C,KAAA,CAAsB,QAAQ,CAACM,CAAD,CAAQ,CACpC,MAAOA,EAAP,GAAiB2kD,CADmB,CAAtC,CAIAtP,EAAAI,SAAA/1C,KAAA,CAAmB,QAAQ,CAACM,CAAD,CAAQ,CACjC,MAAOA,EAAA,CAAQ2kD,CAAR,CAAoBE,CADM,CAAnC,CA1BqD,CA7pBvC,QAyZJvjD,CAzZI,QA0ZJA,CA1ZI,QA2ZJA,CA3ZI,OA4ZLA,CA5ZK,MA6ZNA,CA7ZM,CAxEhB,CA+4BI8K,GAAiB,CAAC,UAAD,CAAa,UAAb,CAAyB,QAAQ,CAACwmB,CAAD,CAAW3W,CAAX,CAAqB,CACzE,MAAO,UACK,GADL,SAEI,UAFJ,MAGCyE,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6B,CACrCA,CAAJ,EACG,CAAA2O,EAAA,CAAUt+C,CAAA,CAAUlD,CAAAkR,KAAV,CAAV,CAAA,EAAmCswC,EAAA11B,KAAnC,EAAmD9lB,CAAnD,CAA0D5C,CAA1D,CAAmEpD,CAAnE,CAAyE6yC,CAAzE,CAA+Ep5B,CAA/E,CACmD2W,CADnD,CAFsC,CAHtC,CADkE,CAAtD,CA/4BrB,CA45BI2gB,GAAc,UA55BlB,CA65BID,GAAgB,YA75BpB,CA85BIgB,GAAiB,aA95BrB,CA+5BIW,GAAc,UA/5BlB,CAuiCI+P,GAAoB,CAAC,QAAD,CAAW,mBAAX,CAAgC,QAAhC,CAA0C,UAA1C,CAAsD,QAAtD,CAAgE,UAAhE,CACpB,QAAQ,CAACp6B,CAAD,CAAStI,CAAT,CAA4B+D,CAA5B,CAAmC7B,CAAnC,CAA6CpB,CAA7C,CAAqDG,CAArD,CAA+D,CA6DzE4vB,QAASA,EAAc,CAACC,CAAD,CAAUC,CAAV,CAA8B,CACnDA,CAAA,CAAqBA,CAAA,CAAqB,GAArB,CAA2BlqC,EAAA,CAAWkqC,CAAX,CAA+B,GAA/B,CAA3B,CAAiE,EACtF9vB,EAAA0M,YAAA,CAAqBzL,CAArB,EAAgC4uB,CAAA,CAAUE,EAAV,CAA0BC,EAA1D,EAAyEF,CAAzE,CACA9vB;CAAAkB,SAAA,CAAkBD,CAAlB,EAA6B4uB,CAAA,CAAUG,EAAV,CAAwBD,EAArD,EAAsED,CAAtE,CAHmD,CA3DrD,IAAA4R,YAAA,CADA,IAAA/O,WACA,CADkBh1B,MAAAgkC,IAElB,KAAAzP,SAAA,CAAgB,EAChB,KAAAmB,YAAA,CAAmB,EACnB,KAAAuO,qBAAA,CAA4B,EAC5B,KAAAjR,UAAA,CAAiB,CAAA,CACjB,KAAAD,OAAA,CAAc,CAAA,CACd,KAAAE,OAAA,CAAc,CAAA,CACd,KAAAC,SAAA,CAAgB,CAAA,CAChB,KAAAL,MAAA,CAAa1tB,CAAA1e,KAV4D,KAYrEy9C,EAAahiC,CAAA,CAAOiD,CAAAg/B,QAAP,CAZwD,CAarEC,EAAaF,CAAA96B,OAEjB,IAAI,CAACg7B,CAAL,CACE,KAAM7mD,EAAA,CAAO,SAAP,CAAA,CAAkB,WAAlB,CACF4nB,CAAAg/B,QADE,CACa1/C,EAAA,CAAY6e,CAAZ,CADb,CAAN,CAYF,IAAA8xB,QAAA,CAAeh1C,CAmBf,KAAAk1C,SAAA,CAAgB+O,QAAQ,CAACvlD,CAAD,CAAQ,CAC9B,MAAO0B,EAAA,CAAY1B,CAAZ,CAAP,EAAuC,EAAvC,GAA6BA,CAA7B,EAAuD,IAAvD,GAA6CA,CAA7C,EAA+DA,CAA/D,GAAyEA,CAD3C,CA/CyC,KAmDrEyzC,EAAajvB,CAAAghC,cAAA,CAAuB,iBAAvB,CAAb/R,EAA0DC,EAnDW,CAoDrEC,EAAe,CApDsD,CAqDrEE,EAAS
,IAAAA,OAATA,CAAuB,EAI3BrvB,EAAAC,SAAA,CAAkB6vB,EAAlB,CACAnB,EAAA,CAAe,CAAA,CAAf,CA0BA,KAAA0B,aAAA,CAAoB4Q,QAAQ,CAACpS,CAAD,CAAqBD,CAArB,CAA8B,CAGpDS,CAAA,CAAOR,CAAP,CAAJ;AAAmC,CAACD,CAApC,GAGIA,CAAJ,EACMS,CAAA,CAAOR,CAAP,CACJ,EADgCM,CAAA,EAChC,CAAKA,CAAL,GACER,CAAA,CAAe,CAAA,CAAf,CAEA,CADA,IAAAgB,OACA,CADc,CAAA,CACd,CAAA,IAAAC,SAAA,CAAgB,CAAA,CAHlB,CAFF,GAQEjB,CAAA,CAAe,CAAA,CAAf,CAGA,CAFA,IAAAiB,SAEA,CAFgB,CAAA,CAEhB,CADA,IAAAD,OACA,CADc,CAAA,CACd,CAAAR,CAAA,EAXF,CAiBA,CAHAE,CAAA,CAAOR,CAAP,CAGA,CAH6B,CAACD,CAG9B,CAFAD,CAAA,CAAeC,CAAf,CAAwBC,CAAxB,CAEA,CAAAI,CAAAoB,aAAA,CAAwBxB,CAAxB,CAA4CD,CAA5C,CAAqD,IAArD,CApBA,CAHwD,CAoC1D,KAAA8B,aAAA,CAAoBwQ,QAAS,EAAG,CAC9B,IAAAzR,OAAA,CAAc,CAAA,CACd,KAAAC,UAAA,CAAiB,CAAA,CACjB3wB,EAAA0M,YAAA,CAAqBzL,CAArB,CAA+BywB,EAA/B,CACA1xB,EAAAkB,SAAA,CAAkBD,CAAlB,CAA4B8vB,EAA5B,CAJ8B,CA4BhC,KAAA6B,cAAA,CAAqBwP,QAAQ,CAAC3lD,CAAD,CAAQ,CACnC,IAAAk2C,WAAA,CAAkBl2C,CAGd,KAAAk0C,UAAJ,GACE,IAAAD,OAIA,CAJc,CAAA,CAId,CAHA,IAAAC,UAGA,CAHiB,CAAA,CAGjB,CAFA3wB,CAAA0M,YAAA,CAAqBzL,CAArB,CAA+B8vB,EAA/B,CAEA,CADA/wB,CAAAkB,SAAA,CAAkBD,CAAlB,CAA4BywB,EAA5B,CACA,CAAAxB,CAAAsB,UAAA,EALF,CAQA91C,EAAA,CAAQ,IAAAw2C,SAAR,CAAuB,QAAQ,CAAChxC,CAAD,CAAK,CAClCzE,CAAA,CAAQyE,CAAA,CAAGzE,CAAH,CAD0B,CAApC,CAII,KAAAilD,YAAJ,GAAyBjlD,CAAzB,GACE,IAAAilD,YAEA,CAFmBjlD,CAEnB,CADAslD,CAAA,CAAW16B,CAAX,CAAmB5qB,CAAnB,CACA,CAAAf,CAAA,CAAQ,IAAAkmD,qBAAR;AAAmC,QAAQ,CAAChoC,CAAD,CAAW,CACpD,GAAI,CACFA,CAAA,EADE,CAEF,MAAMnX,CAAN,CAAS,CACTsc,CAAA,CAAkBtc,CAAlB,CADS,CAHyC,CAAtD,CAHF,CAhBmC,CA8BrC,KAAIqvC,EAAO,IAEXzqB,EAAAvnB,OAAA,CAAcuiD,QAAqB,EAAG,CACpC,IAAI5lD,EAAQolD,CAAA,CAAWx6B,CAAX,CAGZ,IAAIyqB,CAAA4P,YAAJ,GAAyBjlD,CAAzB,CAAgC,CAAA,IAE1B6lD,EAAaxQ,CAAAuB,YAFa,CAG1BhhB,EAAMiwB,CAAAhnD,OAGV,KADAw2C,CAAA4P,YACA,CADmBjlD,CACnB,CAAM41B,CAAA,EAAN,CAAA,CACE51B,CAAA,CAAQ6lD,CAAA,CAAWjwB,CAAX,CAAA,CAAgB51B,CAAhB,CAGNq1C,EAAAa,WAAJ,GAAwBl2C,CAAxB,GACEq1C,CAAAa,WACA,CADkBl2C,CAClB,CAAAq1C,CAAAiB,QAAA,EAFF,CAV8B,CAgBhC,MAAOt2C,EApB6B,CAAtC,CApLyE,CADnD,CAviCxB,CA21CIiO,GAAmBA,QAAQ,EAAG,CAChC,MAAO,SACI,CAAC,SAAD,CAAY,QAAZ,CADJ,YAEO+2C,EAFP,MAGCtkC,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuBsjD,CAAvB,CAA8B,CAAA,IAGtCC,EAAYD,CAAA,CAAM,CAAN,CAH0B,CAItCE,EAAWF,CAAA,CAAM,CAAN,CAAXE,EAAuBtS,EAE3BsS,EAAA3R,YAAA,CAAqB0R,CAArB,CAEAv9C,EAAA6/B,IAAA,CAAU,UAAV,CAAsB,QAAQ,EAAG,CAC/B2d,CAAAvR,eAAA,CAAwBsR,CAAxB,CAD+B,CAAjC,CAR0C,CAHvC,CADyB,CA31ClC,CAy6CI53C,GAAoB1M,EAAA,CAAQ,SACrB,SADqB,MAExBif,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6B,CACzCA,CAAA8P,qBAAAzlD,KAAA,CAA+B,QAAQ,EAAG,CACxC8I,CAAA0/B,MAAA,CAAY1lC,CAAAyjD,SAAZ,CADwC,CAA1C,CADyC,CAFb,CAAR,CAz6CxB;AAm7CI73C,GAAoBA,QAAQ,EAAG,CACjC,MAAO,SACI,UADJ,MAECsS,QAAQ,CAAClY,CAAD,CAAQ8S,CAAR,CAAa9Y,CAAb,CAAmB6yC,CAAnB,CAAyB,CACrC,GAAKA,CAAL,CAAA,CACA7yC,CAAA0jD,SAAA,CAAgB,CAAA,CAEhB,KAAIxQ,EAAYA,QAAQ,CAAC11C,CAAD,CAAQ,CAC9B,GAAIwC,CAAA0jD,SAAJ,EAAqB7Q,CAAAmB,SAAA,CAAcx2C,CAAd,CAArB,CACEq1C,CAAAR,aAAA,CAAkB,UAAlB,CAA8B,CAAA,CAA9B,CADF,KAKE,OADAQ,EAAAR,aAAA,CAAkB,UAAlB,CAA8B,CAAA,CAA9B,CACO70C,CAAAA,CANqB,CAUhCq1C,EAAAuB,YAAAl3C,KAAA,CAAsBg2C,CAAtB,CACAL,EAAAI,SAAAh1C,QAAA,CAAsBi1C,CAAtB,CAEAlzC,EAAAwnB,SAAA,CAAc,UAAd,CAA0B,QAAQ,EAAG,CACnC0rB,CAAA,CAAUL,CAAAa,WAAV,CADmC,CAArC,CAhBA,CADqC,CAFlC,CAD0B,CAn7CnC,CAqgDIhoC,GAAkBA,QAAQ,EAAG,CAC/B,MAAO,SACI,SADJ,MAECwS,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6B,CACzC,IACIjsC,GADA/C,CACA+C,CADQ,UAAAtB,KAAA,CAAgBtF,CAAA2jD,OAAhB,CACR/8C,GAAyB3F,MAAJ,CAAW4C,CAAA,CAAM,CAAN,CAAX,CAArB+C,EAA6C5G,CAAA2jD,OAA7C/8C,EAA4D,GAiBhEisC,EAAAI,SAAA/1C,KAAA,CAfY6F,QAAQ,CAAC6gD,CAAD,CAAY,CAE9B,GAAI,CAAA1kD,C
AAA,CAAY0kD,CAAZ,CAAJ,CAAA,CAEA,IAAIxjD,EAAO,EAEPwjD,EAAJ,EACEnnD,CAAA,CAAQmnD,CAAAx/C,MAAA,CAAgBwC,CAAhB,CAAR,CAAoC,QAAQ,CAACpJ,CAAD,CAAQ,CAC9CA,CAAJ;AAAW4C,CAAAlD,KAAA,CAAUkS,EAAA,CAAK5R,CAAL,CAAV,CADuC,CAApD,CAKF,OAAO4C,EAVP,CAF8B,CAehC,CACAyyC,EAAAuB,YAAAl3C,KAAA,CAAsB,QAAQ,CAACM,CAAD,CAAQ,CACpC,MAAIhB,EAAA,CAAQgB,CAAR,CAAJ,CACSA,CAAAM,KAAA,CAAW,IAAX,CADT,CAIO9B,CAL6B,CAAtC,CASA62C,EAAAmB,SAAA,CAAgBuO,QAAQ,CAAC/kD,CAAD,CAAQ,CAC9B,MAAO,CAACA,CAAR,EAAiB,CAACA,CAAAnB,OADY,CA7BS,CAFtC,CADwB,CArgDjC,CA6iDIwnD,GAAwB,oBA7iD5B,CAimDIh4C,GAAmBA,QAAQ,EAAG,CAChC,MAAO,UACK,GADL,SAEI5F,QAAQ,CAAC69C,CAAD,CAAMC,CAAN,CAAe,CAC9B,MAAIF,GAAAv9C,KAAA,CAA2By9C,CAAAC,QAA3B,CAAJ,CACSC,QAA4B,CAACj+C,CAAD,CAAQ8S,CAAR,CAAa9Y,CAAb,CAAmB,CACpDA,CAAAqqB,KAAA,CAAU,OAAV,CAAmBrkB,CAAA0/B,MAAA,CAAY1lC,CAAAgkD,QAAZ,CAAnB,CADoD,CADxD,CAKSE,QAAoB,CAACl+C,CAAD,CAAQ8S,CAAR,CAAa9Y,CAAb,CAAmB,CAC5CgG,CAAAnF,OAAA,CAAab,CAAAgkD,QAAb,CAA2BG,QAAyB,CAAC3mD,CAAD,CAAQ,CAC1DwC,CAAAqqB,KAAA,CAAU,OAAV,CAAmB7sB,CAAnB,CAD0D,CAA5D,CAD4C,CANlB,CAF3B,CADyB,CAjmDlC,CAsqDI0M,GAAkBumC,EAAA,CAAY,QAAQ,CAACzqC,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAC/DoD,CAAA6e,SAAA,CAAiB,YAAjB,CAAA7b,KAAA,CAAoC,UAApC,CAAgDpG,CAAAokD,OAAhD,CACAp+C,EAAAnF,OAAA,CAAab,CAAAokD,OAAb,CAA0BC,QAA0B,CAAC7mD,CAAD,CAAQ,CAI1D4F,CAAA0oB,KAAA,CAAatuB,CAAA,EAASxB,CAAT,CAAqB,EAArB,CAA0BwB,CAAvC,CAJ0D,CAA5D,CAF+D,CAA3C,CAtqDtB,CAmuDI4M,GAA0B,CAAC,cAAD;AAAiB,QAAQ,CAACqW,CAAD,CAAe,CACpE,MAAO,SAAQ,CAACza,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAEhC+rB,CAAAA,CAAgBtL,CAAA,CAAard,CAAApD,KAAA,CAAaA,CAAA6jB,MAAAygC,eAAb,CAAb,CACpBlhD,EAAA6e,SAAA,CAAiB,YAAjB,CAAA7b,KAAA,CAAoC,UAApC,CAAgD2lB,CAAhD,CACA/rB,EAAAwnB,SAAA,CAAc,gBAAd,CAAgC,QAAQ,CAAChqB,CAAD,CAAQ,CAC9C4F,CAAA0oB,KAAA,CAAatuB,CAAb,CAD8C,CAAhD,CAJoC,CAD8B,CAAxC,CAnuD9B,CA6xDI2M,GAAsB,CAAC,MAAD,CAAS,QAAT,CAAmB,QAAQ,CAAC2W,CAAD,CAAOF,CAAP,CAAe,CAClE,MAAO,SAAQ,CAAC5a,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CACpCoD,CAAA6e,SAAA,CAAiB,YAAjB,CAAA7b,KAAA,CAAoC,UAApC,CAAgDpG,CAAAukD,WAAhD,CAEA,KAAIj1C,EAASsR,CAAA,CAAO5gB,CAAAukD,WAAP,CAGbv+C,EAAAnF,OAAA,CAFA2jD,QAAuB,EAAG,CAAE,MAAQjlD,CAAA+P,CAAA,CAAOtJ,CAAP,CAAAzG,EAAiB,EAAjBA,UAAA,EAAV,CAE1B,CAA6BklD,QAA8B,CAACjnD,CAAD,CAAQ,CACjE4F,CAAAO,KAAA,CAAamd,CAAA4jC,eAAA,CAAoBp1C,CAAA,CAAOtJ,CAAP,CAApB,CAAb,EAAmD,EAAnD,CADiE,CAAnE,CANoC,CAD4B,CAA1C,CA7xD1B,CA8iEIqE,GAAmBsqC,EAAA,CAAe,EAAf,CAAmB,CAAA,CAAnB,CA9iEvB,CA8lEIpqC,GAAsBoqC,EAAA,CAAe,KAAf,CAAsB,CAAtB,CA9lE1B,CA8oEIrqC,GAAuBqqC,EAAA,CAAe,MAAf,CAAuB,CAAvB,CA9oE3B,CAwsEInqC,GAAmBimC,EAAA,CAAY,SACxBxqC,QAAQ,CAAC7C,CAAD,CAAUpD,CAAV,CAAgB,CAC/BA,CAAAqqB,KAAA,CAAU,SAAV,CAAqBruB,CAArB,CACAoH,EAAAqqB,YAAA,CAAoB,UAApB,CAF+B,CADA,CAAZ,CAxsEvB;AA+4EIhjB,GAAwB,CAAC,QAAQ,EAAG,CACtC,MAAO,OACE,CAAA,CADF,YAEO,GAFP,UAGK,GAHL,CAD+B,CAAZ,CA/4E5B,CAq+EIuB,GAAoB,EACxBvP,EAAA,CACE,6IAAA,MAAA,CAAA,GAAA,CADF,CAEE,QAAQ,CAAC0I,CAAD,CAAO,CACb,IAAIkhB,EAAgBtC,EAAA,CAAmB,KAAnB,CAA2B5e,CAA3B,CACpB6G,GAAA,CAAkBqa,CAAlB,CAAA,CAAmC,CAAC,QAAD,CAAW,QAAQ,CAACzF,CAAD,CAAS,CAC7D,MAAO,SACI3a,QAAQ,CAAC+b,CAAD,CAAWhiB,CAAX,CAAiB,CAChC,IAAIiC,EAAK2e,CAAA,CAAO5gB,CAAA,CAAKqmB,CAAL,CAAP,CACT,OAAO,SAAQ,CAACrgB,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CACpCoD,CAAA6Y,GAAA,CAAW/Y,CAAA,CAAUiC,CAAV,CAAX,CAA4B,QAAQ,CAACqO,CAAD,CAAQ,CAC1CxN,CAAAG,OAAA,CAAa,QAAQ,EAAG,CACtBlE,CAAA,CAAG+D,CAAH,CAAU,QAAQwN,CAAR,CAAV,CADsB,CAAxB,CAD0C,CAA5C,CADoC,CAFN,CAD7B,CADsD,CAA5B,CAFtB,CAFjB,CAgeA,KAAI5I,GAAgB,CAAC,UAAD,CAAa,QAAQ,CAACmW,CAAD,CAAW,CAClD,MAAO,YACO,SADP,UAEK,GAFL,UAGK,CAAA,CAHL,UAIK,GAJL;MAKE,CAAA,CALF,MAMC7C,QAAS,CAACkK,CAAD,CAASpG,CAAT,CAAmB6B,CAAnB,CAA0BgvB,CAA1B,CAAgC8R,CAAhC,CAA6C,CAAA,IACpD17C,C
ADoD,CAC7CsZ,CAD6C,CACjCqiC,CACvBx8B,EAAAvnB,OAAA,CAAcgjB,CAAAghC,KAAd,CAA0BC,QAAwB,CAACtnD,CAAD,CAAQ,CAEpDwF,EAAA,CAAUxF,CAAV,CAAJ,CACO+kB,CADP,GAEIA,CACA,CADa6F,CAAAvF,KAAA,EACb,CAAA8hC,CAAA,CAAYpiC,CAAZ,CAAwB,QAAS,CAACjf,CAAD,CAAQ,CACvCA,CAAA,CAAMA,CAAAjH,OAAA,EAAN,CAAA,CAAwBN,CAAAotB,cAAA,CAAuB,aAAvB,CAAuCtF,CAAAghC,KAAvC,CAAoD,GAApD,CAIxB57C,EAAA,CAAQ,OACC3F,CADD,CAGRyd,EAAAy4B,MAAA,CAAel2C,CAAf,CAAsB0e,CAAApjB,OAAA,EAAtB,CAAyCojB,CAAzC,CARuC,CAAzC,CAHJ,GAeK4iC,CAQH,GAPEA,CAAA9lC,OAAA,EACA,CAAA8lC,CAAA,CAAmB,IAMrB,EAJGriC,CAIH,GAHEA,CAAA1Q,SAAA,EACA,CAAA0Q,CAAA,CAAa,IAEf,EAAGtZ,CAAH,GACE27C,CAIA,CAJmB/8C,EAAA,CAAiBoB,CAAA3F,MAAjB,CAInB,CAHAyd,CAAA04B,MAAA,CAAemL,CAAf,CAAiC,QAAQ,EAAG,CAC1CA,CAAA,CAAmB,IADuB,CAA5C,CAGA,CAAA37C,CAAA,CAAQ,IALV,CAvBF,CAFwD,CAA1D,CAFwD,CANvD,CAD2C,CAAhC,CAApB,CA8MI4B,GAAqB,CAAC,OAAD,CAAU,gBAAV,CAA4B,eAA5B,CAA6C,UAA7C,CAAyD,MAAzD,CACP,QAAQ,CAAC6V,CAAD,CAAUC,CAAV,CAA4BokC,CAA5B,CAA6ChkC,CAA7C,CAAyDD,CAAzD,CAA+D,CACvF,MAAO,UACK,KADL,UAEK,GAFL,UAGK,CAAA,CAHL,YAIO,SAJP,YAKOva,EAAAzH,KALP,SAMImH,QAAQ,CAAC7C,CAAD;AAAUpD,CAAV,CAAgB,CAAA,IAC3BglD,EAAShlD,CAAAilD,UAATD,EAA2BhlD,CAAAmB,IADA,CAE3B+jD,EAAYllD,CAAA00B,OAAZwwB,EAA2B,EAFA,CAG3BC,EAAgBnlD,CAAAolD,WAEpB,OAAO,SAAQ,CAACp/C,CAAD,CAAQgc,CAAR,CAAkB6B,CAAlB,CAAyBgvB,CAAzB,CAA+B8R,CAA/B,CAA4C,CAAA,IACrDroB,EAAgB,CADqC,CAErD+J,CAFqD,CAGrDgf,CAHqD,CAIrDC,CAJqD,CAMrDC,EAA4BA,QAAQ,EAAG,CACtCF,CAAH,GACEA,CAAAvmC,OAAA,EACA,CAAAumC,CAAA,CAAkB,IAFpB,CAIGhf,EAAH,GACEA,CAAAx0B,SAAA,EACA,CAAAw0B,CAAA,CAAe,IAFjB,CAIGif,EAAH,GACEvkC,CAAA04B,MAAA,CAAe6L,CAAf,CAA+B,QAAQ,EAAG,CACxCD,CAAA,CAAkB,IADsB,CAA1C,CAIA,CADAA,CACA,CADkBC,CAClB,CAAAA,CAAA,CAAiB,IALnB,CATyC,CAkB3Ct/C,EAAAnF,OAAA,CAAaigB,CAAA0kC,mBAAA,CAAwBR,CAAxB,CAAb,CAA8CS,QAA6B,CAACtkD,CAAD,CAAM,CAC/E,IAAIukD,EAAiBA,QAAQ,EAAG,CAC1B,CAAAvmD,CAAA,CAAUgmD,CAAV,CAAJ,EAAkCA,CAAlC,EAAmD,CAAAn/C,CAAA0/B,MAAA,CAAYyf,CAAZ,CAAnD,EACEJ,CAAA,EAF4B,CAAhC,CAKIY,EAAe,EAAErpB,CAEjBn7B,EAAJ,EACEuf,CAAAtK,IAAA,CAAUjV,CAAV,CAAe,OAAQwf,CAAR,CAAf,CAAAmK,QAAA,CAAgD,QAAQ,CAACO,CAAD,CAAW,CACjE,GAAIs6B,CAAJ,GAAqBrpB,CAArB,CAAA,CACA,IAAIspB,EAAW5/C,CAAA6c,KAAA,EACfgwB,EAAAvqB,SAAA,CAAgB+C,CAQZ/nB,EAAAA,CAAQqhD,CAAA,CAAYiB,CAAZ,CAAsB,QAAQ,CAACtiD,CAAD,CAAQ,CAChDiiD,CAAA,EACAxkC,EAAAy4B,MAAA,CAAel2C,CAAf,CAAsB,IAAtB,CAA4B0e,CAA5B,CAAsC0jC,CAAtC,CAFgD,CAAtC,CAKZrf,EAAA,CAAeuf,CACfN,EAAA,CAAiBhiD,CAEjB+iC,EAAAH,MAAA,CAAmB,uBAAnB,CACAlgC,EAAA0/B,MAAA,CAAYwf,CAAZ,CAnBA,CADiE,CAAnE,CAAAprC,MAAA,CAqBS,QAAQ,EAAG,CACd6rC,CAAJ;AAAqBrpB,CAArB,EAAoCipB,CAAA,EADlB,CArBpB,CAwBA,CAAAv/C,CAAAkgC,MAAA,CAAY,0BAAZ,CAzBF,GA2BEqf,CAAA,EACA,CAAA1S,CAAAvqB,SAAA,CAAgB,IA5BlB,CAR+E,CAAjF,CAxByD,CAL5B,CAN5B,CADgF,CADhE,CA9MzB,CAoSIxc,GAAgC,CAAC,UAAD,CAClC,QAAQ,CAAC+5C,CAAD,CAAW,CACjB,MAAO,UACK,KADL,UAEM,IAFN,SAGI,WAHJ,MAIC3nC,QAAQ,CAAClY,CAAD,CAAQgc,CAAR,CAAkB6B,CAAlB,CAAyBgvB,CAAzB,CAA+B,CAC3C7wB,CAAAre,KAAA,CAAckvC,CAAAvqB,SAAd,CACAu9B,EAAA,CAAS7jC,CAAAsH,SAAA,EAAT,CAAA,CAA8BtjB,CAA9B,CAF2C,CAJxC,CADU,CADe,CApSpC,CAwWI8E,GAAkB2lC,EAAA,CAAY,UACtB,GADsB,SAEvBxqC,QAAQ,EAAG,CAClB,MAAO,KACA2f,QAAQ,CAAC5f,CAAD,CAAQ5C,CAAR,CAAiB6f,CAAjB,CAAwB,CACnCjd,CAAA0/B,MAAA,CAAYziB,CAAA6iC,OAAZ,CADmC,CADhC,CADW,CAFY,CAAZ,CAxWtB,CAmZI/6C,GAAyB0lC,EAAA,CAAY,UAAY,CAAA,CAAZ,UAA4B,GAA5B,CAAZ,CAnZ7B,CAgkBIzlC,GAAuB,CAAC,SAAD,CAAY,cAAZ,CAA4B,QAAQ,CAAC4gC,CAAD,CAAUnrB,CAAV,CAAwB,CACrF,IAAIslC,EAAQ,KACZ,OAAO,UACK,IADL,MAEC7nC,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAAA,IAC/BgmD,EAAYhmD,CAAAi3B,MADmB,CAE/BgvB,EAAUjmD,CAAA6jB,MAAAqO,KAAV+zB,EAA6B7iD,CAAApD,KAAA,CAAaA,CAAA6jB,MAAAqO,KAAb,CAFE,CAG/BrkB,EAAS7N,CAAA6N,OAATA;AAAwB,CAHO,CAI/Bq4C,EA
AQlgD,CAAA0/B,MAAA,CAAYugB,CAAZ,CAARC,EAAgC,EAJD,CAK/BC,EAAc,EALiB,CAM/Bh4B,EAAc1N,CAAA0N,YAAA,EANiB,CAO/BC,EAAY3N,CAAA2N,UAAA,EAPmB,CAQ/Bg4B,EAAS,oBAEb3pD,EAAA,CAAQuD,CAAR,CAAc,QAAQ,CAAC+uB,CAAD,CAAas3B,CAAb,CAA4B,CAC5CD,CAAA9/C,KAAA,CAAY+/C,CAAZ,CAAJ,GACEH,CAAA,CAAMhjD,CAAA,CAAUmjD,CAAAviD,QAAA,CAAsB,MAAtB,CAA8B,EAA9B,CAAAA,QAAA,CAA0C,OAA1C,CAAmD,GAAnD,CAAV,CAAN,CADF,CAEIV,CAAApD,KAAA,CAAaA,CAAA6jB,MAAA,CAAWwiC,CAAX,CAAb,CAFJ,CADgD,CAAlD,CAMA5pD,EAAA,CAAQypD,CAAR,CAAe,QAAQ,CAACn3B,CAAD,CAAanyB,CAAb,CAAkB,CACvCupD,CAAA,CAAYvpD,CAAZ,CAAA,CACE6jB,CAAA,CAAasO,CAAAjrB,QAAA,CAAmBiiD,CAAnB,CAA0B53B,CAA1B,CAAwC63B,CAAxC,CAAoD,GAApD,CACXn4C,CADW,CACFugB,CADE,CAAb,CAFqC,CAAzC,CAMApoB,EAAAnF,OAAA,CAAaylD,QAAyB,EAAG,CACvC,IAAI9oD,EAAQ8xC,UAAA,CAAWtpC,CAAA0/B,MAAA,CAAYsgB,CAAZ,CAAX,CAEZ,IAAKrgB,KAAA,CAAMnoC,CAAN,CAAL,CAME,MAAO,EAHDA,EAAN,GAAe0oD,EAAf,GAAuB1oD,CAAvB,CAA+BouC,CAAAhU,UAAA,CAAkBp6B,CAAlB,CAA0BqQ,CAA1B,CAA/B,CACC,OAAOs4C,EAAA,CAAY3oD,CAAZ,CAAA,CAAmBwI,CAAnB,CAA0B5C,CAA1B,CAAmC,CAAA,CAAnC,CAP6B,CAAzC,CAWGmjD,QAA+B,CAACtiB,CAAD,CAAS,CACzC7gC,CAAA0oB,KAAA,CAAamY,CAAb,CADyC,CAX3C,CAtBmC,CAFhC,CAF8E,CAA5D,CAhkB3B,CAkzBIh5B,GAAoB,CAAC,QAAD,CAAW,UAAX,CAAuB,QAAQ,CAAC2V,CAAD,CAASG,CAAT,CAAmB,CAExE,IAAIylC,EAAiBvqD,CAAA,CAAO,UAAP,CACrB,OAAO,YACO,SADP,UAEK,GAFL,UAGK,CAAA,CAHL,OAIE,CAAA,CAJF;KAKCiiB,QAAQ,CAACkK,CAAD,CAASpG,CAAT,CAAmB6B,CAAnB,CAA0BgvB,CAA1B,CAAgC8R,CAAhC,CAA4C,CACtD,IAAI51B,EAAalL,CAAA4iC,SAAjB,CACI5iD,EAAQkrB,CAAAlrB,MAAA,CAAiB,qEAAjB,CADZ,CAEc6iD,CAFd,CAEgCC,CAFhC,CAEgDC,CAFhD,CAEkEC,CAFlE,CAGYC,CAHZ,CAG6BC,CAH7B,CAIEC,EAAe,KAAMxyC,EAAN,CAEjB,IAAI,CAAC3Q,CAAL,CACE,KAAM2iD,EAAA,CAAe,MAAf,CACJz3B,CADI,CAAN,CAIFk4B,CAAA,CAAMpjD,CAAA,CAAM,CAAN,CACNqjD,EAAA,CAAMrjD,CAAA,CAAM,CAAN,CAGN,EAFAsjD,CAEA,CAFatjD,CAAA,CAAM,CAAN,CAEb,GACE6iD,CACA,CADmB9lC,CAAA,CAAOumC,CAAP,CACnB,CAAAR,CAAA,CAAiBA,QAAQ,CAAC/pD,CAAD,CAAMY,CAAN,CAAaE,CAAb,CAAoB,CAEvCqpD,CAAJ,GAAmBC,CAAA,CAAaD,CAAb,CAAnB,CAAiDnqD,CAAjD,CACAoqD,EAAA,CAAaF,CAAb,CAAA,CAAgCtpD,CAChCwpD,EAAA7R,OAAA,CAAsBz3C,CACtB,OAAOgpD,EAAA,CAAiBt+B,CAAjB,CAAyB4+B,CAAzB,CALoC,CAF/C,GAUEJ,CAGA,CAHmBA,QAAQ,CAAChqD,CAAD,CAAMY,CAAN,CAAa,CACtC,MAAOgX,GAAA,CAAQhX,CAAR,CAD+B,CAGxC,CAAAqpD,CAAA,CAAiBA,QAAQ,CAACjqD,CAAD,CAAM,CAC7B,MAAOA,EADsB,CAbjC,CAkBAiH,EAAA,CAAQojD,CAAApjD,MAAA,CAAU,+CAAV,CACR,IAAI,CAACA,CAAL,CACE,KAAM2iD,EAAA,CAAe,QAAf,CACoDS,CADpD,CAAN,CAGFH,CAAA,CAAkBjjD,CAAA,CAAM,CAAN,CAAlB,EAA8BA,CAAA,CAAM,CAAN,CAC9BkjD,EAAA,CAAgBljD,CAAA,CAAM,CAAN,CAOhB,KAAIujD,EAAe,EAGnBh/B,EAAAgc,iBAAA,CAAwB8iB,CAAxB,CAA6BG,QAAuB,CAACC,CAAD,CAAY,CAAA,IAC1D5pD,CAD0D,CACnDrB,CADmD,CAE1DkrD,EAAevlC,CAAA,CAAS,CAAT,CAF2C,CAG1DwlC,CAH0D,CAM1DC,EAAe,EAN2C,CAO1DC,CAP0D,CAQ1DnlC,CAR0D,CAS1D3lB,CAT0D,CASrDY,CATqD,CAY1DmqD,CAZ0D,CAa1D1+C,CAb0D;AAc1D2+C,EAAiB,EAIrB,IAAI1rD,EAAA,CAAYorD,CAAZ,CAAJ,CACEK,CACA,CADiBL,CACjB,CAAAO,CAAA,CAAclB,CAAd,EAAgCC,CAFlC,KAGO,CACLiB,CAAA,CAAclB,CAAd,EAAgCE,CAEhCc,EAAA,CAAiB,EACjB,KAAK/qD,CAAL,GAAY0qD,EAAZ,CACMA,CAAAxqD,eAAA,CAA0BF,CAA1B,CAAJ,EAAuD,GAAvD,EAAsCA,CAAAwE,OAAA,CAAW,CAAX,CAAtC,EACEumD,CAAAzqD,KAAA,CAAoBN,CAApB,CAGJ+qD,EAAAxqD,KAAA,EATK,CAYPuqD,CAAA,CAAcC,CAAAtrD,OAGdA,EAAA,CAASurD,CAAAvrD,OAAT,CAAiCsrD,CAAAtrD,OACjC,KAAIqB,CAAJ,CAAY,CAAZ,CAAeA,CAAf,CAAuBrB,CAAvB,CAA+BqB,CAAA,EAA/B,CAKC,GAJAd,CAIG,CAJI0qD,CAAD,GAAgBK,CAAhB,CAAkCjqD,CAAlC,CAA0CiqD,CAAA,CAAejqD,CAAf,CAI7C,CAHHF,CAGG,CAHK8pD,CAAA,CAAW1qD,CAAX,CAGL,CAFHkrD,CAEG,CAFSD,CAAA,CAAYjrD,CAAZ,CAAiBY,CAAjB,CAAwBE,CAAxB,CAET,CADH6J,EAAA,CAAwBugD,CAAxB,CAAmC,eAAnC,CACG,CAAAV,CAAAtqD,eAAA,CAA4BgrD,CAA5B,CAAH,CACE7+C,CAGA,CAHQm+C,CAAA,CAAaU,CAAb,CAGR,CAFA,OAAOV,CAAA,CAAaU,CAAb,CAEP,CADA
L,CAAA,CAAaK,CAAb,CACA,CAD0B7+C,CAC1B,CAAA2+C,CAAA,CAAelqD,CAAf,CAAA,CAAwBuL,CAJ1B,KAKO,CAAA,GAAIw+C,CAAA3qD,eAAA,CAA4BgrD,CAA5B,CAAJ,CAML,KAJArrD,EAAA,CAAQmrD,CAAR,CAAwB,QAAQ,CAAC3+C,CAAD,CAAQ,CAClCA,CAAJ,EAAaA,CAAAjD,MAAb,GAA0BohD,CAAA,CAAan+C,CAAA64B,GAAb,CAA1B,CAAmD74B,CAAnD,CADsC,CAAxC,CAIM,CAAAu9C,CAAA,CAAe,OAAf,CACiIz3B,CADjI,CACmJ+4B,CADnJ,CAAN,CAIAF,CAAA,CAAelqD,CAAf,CAAA,CAAwB,IAAMoqD,CAAN,CACxBL,EAAA,CAAaK,CAAb,CAAA,CAA0B,CAAA,CAXrB,CAgBR,IAAKlrD,CAAL,GAAYwqD,EAAZ,CAEMA,CAAAtqD,eAAA,CAA4BF,CAA5B,CAAJ,GACEqM,CAIA,CAJQm+C,CAAA,CAAaxqD,CAAb,CAIR,CAHAgwB,CAGA,CAHmB/kB,EAAA,CAAiBoB,CAAA3F,MAAjB,CAGnB,CAFAyd,CAAA04B,MAAA,CAAe7sB,CAAf,CAEA,CADAnwB,CAAA,CAAQmwB,CAAR,CAA0B,QAAQ,CAACxpB,CAAD,CAAU,CAAEA,CAAA,aAAA,CAAsB,CAAA,CAAxB,CAA5C,CACA,CAAA6F,CAAAjD,MAAA6L,SAAA,EALF,CAUGnU;CAAA,CAAQ,CAAb,KAAgBrB,CAAhB,CAAyBsrD,CAAAtrD,OAAzB,CAAgDqB,CAAhD,CAAwDrB,CAAxD,CAAgEqB,CAAA,EAAhE,CAAyE,CACvEd,CAAA,CAAO0qD,CAAD,GAAgBK,CAAhB,CAAkCjqD,CAAlC,CAA0CiqD,CAAA,CAAejqD,CAAf,CAChDF,EAAA,CAAQ8pD,CAAA,CAAW1qD,CAAX,CACRqM,EAAA,CAAQ2+C,CAAA,CAAelqD,CAAf,CACJkqD,EAAA,CAAelqD,CAAf,CAAuB,CAAvB,CAAJ,GAA+B6pD,CAA/B,CAA0DK,CAAA3+C,CAAevL,CAAfuL,CAAuB,CAAvBA,CAwD3D3F,MAAA,CAxD2DskD,CAAA3+C,CAAevL,CAAfuL,CAAuB,CAAvBA,CAwD/C3F,MAAAjH,OAAZ,CAAiC,CAAjC,CAxDC,CAEA,IAAI4M,CAAAjD,MAAJ,CAAiB,CAGfuc,CAAA,CAAatZ,CAAAjD,MAEbwhD,EAAA,CAAWD,CACX,GACEC,EAAA,CAAWA,CAAAv/C,YADb,OAEQu/C,CAFR,EAEoBA,CAAA,aAFpB,CAIkBv+C,EAwCrB3F,MAAA,CAAY,CAAZ,CAxCG,EAA4BkkD,CAA5B,EAEEzmC,CAAA24B,KAAA,CAAc7xC,EAAA,CAAiBoB,CAAA3F,MAAjB,CAAd,CAA6C,IAA7C,CAAmDD,CAAA,CAAOkkD,CAAP,CAAnD,CAEFA,EAAA,CAA2Bt+C,CAwC9B3F,MAAA,CAxC8B2F,CAwClB3F,MAAAjH,OAAZ,CAAiC,CAAjC,CAtDkB,CAAjB,IAiBEkmB,EAAA,CAAa6F,CAAAvF,KAAA,EAGfN,EAAA,CAAWukC,CAAX,CAAA,CAA8BtpD,CAC1BupD,EAAJ,GAAmBxkC,CAAA,CAAWwkC,CAAX,CAAnB,CAA+CnqD,CAA/C,CACA2lB,EAAA4yB,OAAA,CAAoBz3C,CACpB6kB,EAAAwlC,OAAA,CAA+B,CAA/B,GAAqBrqD,CACrB6kB,EAAAylC,MAAA,CAAoBtqD,CAApB,GAA+BgqD,CAA/B,CAA6C,CAC7CnlC,EAAA0lC,QAAA,CAAqB,EAAE1lC,CAAAwlC,OAAF,EAAuBxlC,CAAAylC,MAAvB,CAErBzlC,EAAA2lC,KAAA,CAAkB,EAAE3lC,CAAA4lC,MAAF,CAAmC,CAAnC,IAAsBzqD,CAAtB,CAA4B,CAA5B,EAGbuL,EAAAjD,MAAL,EACE2+C,CAAA,CAAYpiC,CAAZ,CAAwB,QAAQ,CAACjf,CAAD,CAAQ,CACtCA,CAAA,CAAMA,CAAAjH,OAAA,EAAN,CAAA,CAAwBN,CAAAotB,cAAA,CAAuB,iBAAvB,CAA2C4F,CAA3C,CAAwD,GAAxD,CACxBhO,EAAAy4B,MAAA,CAAel2C,CAAf,CAAsB,IAAtB,CAA4BD,CAAA,CAAOkkD,CAAP,CAA5B,CACAA,EAAA,CAAejkD,CACf2F,EAAAjD,MAAA,CAAcuc,CAIdtZ,EAAA3F,MAAA;AAAcA,CACdmkD,EAAA,CAAax+C,CAAA64B,GAAb,CAAA,CAAyB74B,CATa,CAAxC,CArCqE,CAkDzEm+C,CAAA,CAAeK,CA7H+C,CAAhE,CAlDsD,CALrD,CAHiE,CAAlD,CAlzBxB,CA8oCIv8C,GAAkB,CAAC,UAAD,CAAa,QAAQ,CAAC6V,CAAD,CAAW,CACpD,MAAO,SAAQ,CAAC/a,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CACpCgG,CAAAnF,OAAA,CAAab,CAAAooD,OAAb,CAA0BC,QAA0B,CAAC7qD,CAAD,CAAO,CACzDujB,CAAA,CAAS/d,EAAA,CAAUxF,CAAV,CAAA,CAAmB,aAAnB,CAAmC,UAA5C,CAAA,CAAwD4F,CAAxD,CAAiE,SAAjE,CADyD,CAA3D,CADoC,CADc,CAAhC,CA9oCtB,CA8yCIuH,GAAkB,CAAC,UAAD,CAAa,QAAQ,CAACoW,CAAD,CAAW,CACpD,MAAO,SAAQ,CAAC/a,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CACpCgG,CAAAnF,OAAA,CAAab,CAAAsoD,OAAb,CAA0BC,QAA0B,CAAC/qD,CAAD,CAAO,CACzDujB,CAAA,CAAS/d,EAAA,CAAUxF,CAAV,CAAA,CAAmB,UAAnB,CAAgC,aAAzC,CAAA,CAAwD4F,CAAxD,CAAiE,SAAjE,CADyD,CAA3D,CADoC,CADc,CAAhC,CA9yCtB,CA81CI+H,GAAmBslC,EAAA,CAAY,QAAQ,CAACzqC,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAChEgG,CAAAnF,OAAA,CAAab,CAAAwoD,QAAb,CAA2BC,QAA2B,CAACC,CAAD,CAAYC,CAAZ,CAAuB,CACvEA,CAAJ,EAAkBD,CAAlB,GAAgCC,CAAhC,EACElsD,CAAA,CAAQksD,CAAR,CAAmB,QAAQ,CAACnmD,CAAD,CAAMqnC,CAAN,CAAa,CAAEzmC,CAAAuzC,IAAA,CAAY9M,CAAZ,CAAmB,EAAnB,CAAF,CAAxC,CAEE6e,EAAJ,EAAetlD,CAAAuzC,IAAA,CAAY+R,CAAZ,CAJ4D,CAA7E,CAKG,CAAA,CALH,CADgE,CAA3C,CA91C
vB,CAm+CIt9C,GAAoB,CAAC,UAAD,CAAa,QAAQ,CAAC2V,CAAD,CAAW,CACtD,MAAO,UACK,IADL,SAEI,UAFJ,YAKO,CAAC,QAAD,CAAW6nC,QAA2B,EAAG,CACpD,IAAAC,MAAA;AAAa,EADuC,CAAzC,CALP,MAQC3qC,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB4oD,CAAvB,CAA2C,CAAA,IAEnDE,CAFmD,CAGnDC,CAHmD,CAInDnE,CAJmD,CAKnDoE,EAAiB,EAErBhjD,EAAAnF,OAAA,CANgBb,CAAAipD,SAMhB,EANiCjpD,CAAAic,GAMjC,CAAwBitC,QAA4B,CAAC1rD,CAAD,CAAQ,CAAA,IACtDH,CADsD,CACnD0V,EAAKi2C,CAAA3sD,OACZ,IAAQ,CAAR,CAAG0W,CAAH,CAAW,CACT,GAAG6xC,CAAH,CAAqB,CACnB,IAAKvnD,CAAL,CAAS,CAAT,CAAYA,CAAZ,CAAgB0V,CAAhB,CAAoB1V,CAAA,EAApB,CACEunD,CAAA,CAAiBvnD,CAAjB,CAAAyhB,OAAA,EAEF8lC,EAAA,CAAmB,IAJA,CAOrBA,CAAA,CAAmB,EACnB,KAAKvnD,CAAL,CAAQ,CAAR,CAAWA,CAAX,CAAa0V,CAAb,CAAiB1V,CAAA,EAAjB,CAAsB,CACpB,IAAIg6C,EAAW0R,CAAA,CAAiB1rD,CAAjB,CACf2rD,EAAA,CAAe3rD,CAAf,CAAAwU,SAAA,EACA+yC,EAAA,CAAiBvnD,CAAjB,CAAA,CAAsBg6C,CACtBt2B,EAAA04B,MAAA,CAAepC,CAAf,CAAyB,QAAQ,EAAG,CAClCuN,CAAApkD,OAAA,CAAwBnD,CAAxB,CAA2B,CAA3B,CAC+B,EAA/B,GAAGunD,CAAAvoD,OAAH,GACEuoD,CADF,CACqB,IADrB,CAFkC,CAApC,CAJoB,CATb,CAsBXmE,CAAA,CAAmB,EACnBC,EAAA,CAAiB,EAEjB,IAAKF,CAAL,CAA2BF,CAAAC,MAAA,CAAyB,GAAzB,CAA+BrrD,CAA/B,CAA3B,EAAoEorD,CAAAC,MAAA,CAAyB,GAAzB,CAApE,CACE7iD,CAAA0/B,MAAA,CAAY1lC,CAAAmpD,OAAZ,CACA,CAAA1sD,CAAA,CAAQqsD,CAAR,CAA6B,QAAQ,CAACM,CAAD,CAAqB,CACxD,IAAIC,EAAgBrjD,CAAA6c,KAAA,EACpBmmC,EAAA9rD,KAAA,CAAoBmsD,CAApB,CACAD,EAAArmC,WAAA,CAA8BsmC,CAA9B,CAA6C,QAAQ,CAACC,CAAD,CAAc,CACjE,IAAIC,EAASH,CAAAhmD,QAEb2lD,EAAA7rD,KAAA,CAAsBosD,CAAtB,CACAvoC,EAAAy4B,MAAA,CAAe8P,CAAf,CAA4BC,CAAA3qD,OAAA,EAA5B,CAA6C2qD,CAA7C,CAJiE,CAAnE,CAHwD,CAA1D,CA7BwD,CAA5D,CAPuD,CARpD,CAD+C,CAAhC,CAn+CxB,CAgiDIl+C,GAAwBolC,EAAA,CAAY,YAC1B,SAD0B,UAE5B,GAF4B,SAG7B,WAH6B;KAIhCvyB,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiB6f,CAAjB,CAAwB4vB,CAAxB,CAA8B8R,CAA9B,CAA2C,CACvD9R,CAAAgW,MAAA,CAAW,GAAX,CAAiB5lC,CAAAumC,aAAjB,CAAA,CAAwC3W,CAAAgW,MAAA,CAAW,GAAX,CAAiB5lC,CAAAumC,aAAjB,CAAxC,EAAgF,EAChF3W,EAAAgW,MAAA,CAAW,GAAX,CAAiB5lC,CAAAumC,aAAjB,CAAAtsD,KAAA,CAA0C,YAAcynD,CAAd,SAAoCvhD,CAApC,CAA1C,CAFuD,CAJnB,CAAZ,CAhiD5B,CA0iDIkI,GAA2BmlC,EAAA,CAAY,YAC7B,SAD6B,UAE/B,GAF+B,SAGhC,WAHgC,MAInCvyB,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB6yC,CAAvB,CAA6B8R,CAA7B,CAA0C,CACtD9R,CAAAgW,MAAA,CAAW,GAAX,CAAA,CAAmBhW,CAAAgW,MAAA,CAAW,GAAX,CAAnB,EAAsC,EACtChW,EAAAgW,MAAA,CAAW,GAAX,CAAA3rD,KAAA,CAAqB,YAAcynD,CAAd,SAAoCvhD,CAApC,CAArB,CAFsD,CAJf,CAAZ,CA1iD/B,CA2mDIoI,GAAwBilC,EAAA,CAAY,MAChCvyB,QAAQ,CAACkK,CAAD,CAASpG,CAAT,CAAmBynC,CAAnB,CAA2BvpC,CAA3B,CAAuCykC,CAAvC,CAAoD,CAChE,GAAI,CAACA,CAAL,CACE,KAAM1oD,EAAA,CAAO,cAAP,CAAA,CAAuB,QAAvB,CAILkH,EAAA,CAAY6e,CAAZ,CAJK,CAAN,CAOF2iC,CAAA,CAAY,QAAQ,CAACrhD,CAAD,CAAQ,CAC1B0e,CAAAze,MAAA,EACAye,EAAAte,OAAA,CAAgBJ,CAAhB,CAF0B,CAA5B,CATgE,CAD5B,CAAZ,CA3mD5B,CA6pDIwG,GAAkB,CAAC,gBAAD,CAAmB,QAAQ,CAAC6W,CAAD,CAAiB,CAChE,MAAO,UACK,GADL,UAEK,CAAA,CAFL,SAGI1a,QAAQ,CAAC7C,CAAD;AAAUpD,CAAV,CAAgB,CACd,kBAAjB,EAAIA,CAAAkR,KAAJ,EAKEyP,CAAAhM,IAAA,CAJkB3U,CAAA8hC,GAIlB,CAFW1+B,CAAA,CAAQ,CAAR,CAAA0oB,KAEX,CAN6B,CAH5B,CADyD,CAA5C,CA7pDtB,CA6qDI49B,GAAkBztD,CAAA,CAAO,WAAP,CA7qDtB,CAmzDIsP,GAAqBtM,EAAA,CAAQ,UAAY,CAAA,CAAZ,CAAR,CAnzDzB,CAqzDI8K,GAAkB,CAAC,UAAD,CAAa,QAAb,CAAuB,QAAQ,CAAC87C,CAAD,CAAajlC,CAAb,CAAqB,CAAA,IAEpE+oC,EAAoB,wMAFgD,CAGpEC,EAAgB,eAAgB9qD,CAAhB,CAGpB,OAAO,UACK,GADL,SAEI,CAAC,QAAD,CAAW,UAAX,CAFJ,YAGO,CAAC,UAAD,CAAa,QAAb,CAAuB,QAAvB,CAAiC,QAAQ,CAACkjB,CAAD,CAAWoG,CAAX,CAAmBqhC,CAAnB,CAA2B,CAAA,IAC1EznD,EAAO,IADmE,CAE1E6nD,EAAa,EAF6D,CAG1EC,EAAcF,CAH4D,CAK1EG,CAGJ/nD,EAAAgoD,UAAA;AAAiBP,CAAA5G,QAGjB7gD,EAAAioD,KAAA,CAAYC,QAAQ,CAACC,CAAD,CAAeC,CAAf,CAA4BC,CAA5B,CAA4C,CAC9DP,CAAA,CAAcK,CAEdJ,EAAA,CAAgBM,CAH8C
,CAOhEroD,EAAAsoD,UAAA,CAAiBC,QAAQ,CAAC/sD,CAAD,CAAQ,CAC/B+J,EAAA,CAAwB/J,CAAxB,CAA+B,gBAA/B,CACAqsD,EAAA,CAAWrsD,CAAX,CAAA,CAAoB,CAAA,CAEhBssD,EAAApW,WAAJ,EAA8Bl2C,CAA9B,GACEwkB,CAAAxf,IAAA,CAAahF,CAAb,CACA,CAAIusD,CAAAnrD,OAAA,EAAJ,EAA4BmrD,CAAAjrC,OAAA,EAF9B,CAJ+B,CAWjC9c,EAAAwoD,aAAA,CAAoBC,QAAQ,CAACjtD,CAAD,CAAQ,CAC9B,IAAAktD,UAAA,CAAeltD,CAAf,CAAJ,GACE,OAAOqsD,CAAA,CAAWrsD,CAAX,CACP,CAAIssD,CAAApW,WAAJ,EAA8Bl2C,CAA9B,EACE,IAAAmtD,oBAAA,CAAyBntD,CAAzB,CAHJ,CADkC,CAUpCwE,EAAA2oD,oBAAA,CAA2BC,QAAQ,CAACpoD,CAAD,CAAM,CACnCqoD,CAAAA,CAAa,IAAbA,CAAoBr2C,EAAA,CAAQhS,CAAR,CAApBqoD,CAAmC,IACvCd,EAAAvnD,IAAA,CAAkBqoD,CAAlB,CACA7oC,EAAAq2B,QAAA,CAAiB0R,CAAjB,CACA/nC,EAAAxf,IAAA,CAAaqoD,CAAb,CACAd,EAAAhqD,KAAA,CAAmB,UAAnB,CAA+B,CAAA,CAA/B,CALuC,CASzCiC,EAAA0oD,UAAA,CAAiBI,QAAQ,CAACttD,CAAD,CAAQ,CAC/B,MAAOqsD,EAAA/sD,eAAA,CAA0BU,CAA1B,CADwB,CAIjC4qB,EAAAyd,IAAA,CAAW,UAAX,CAAuB,QAAQ,EAAG,CAEhC7jC,CAAA2oD,oBAAA,CAA2B7rD,CAFK,CAAlC,CApD8E,CAApE,CAHP,MA6DCof,QAAQ,CAAClY,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuBsjD,CAAvB,CAA8B,CA0C1CyH,QAASA,EAAa,CAAC/kD,CAAD;AAAQglD,CAAR,CAAuBlB,CAAvB,CAAoCmB,CAApC,CAAgD,CACpEnB,CAAAhW,QAAA,CAAsBoX,QAAQ,EAAG,CAC/B,IAAItH,EAAYkG,CAAApW,WAEZuX,EAAAP,UAAA,CAAqB9G,CAArB,CAAJ,EACMmG,CAAAnrD,OAAA,EAEJ,EAF4BmrD,CAAAjrC,OAAA,EAE5B,CADAksC,CAAAxoD,IAAA,CAAkBohD,CAAlB,CACA,CAAkB,EAAlB,GAAIA,CAAJ,EAAsBuH,CAAAprD,KAAA,CAAiB,UAAjB,CAA6B,CAAA,CAA7B,CAHxB,EAKMb,CAAA,CAAY0kD,CAAZ,CAAJ,EAA8BuH,CAA9B,CACEH,CAAAxoD,IAAA,CAAkB,EAAlB,CADF,CAGEyoD,CAAAN,oBAAA,CAA+B/G,CAA/B,CAX2B,CAgBjCoH,EAAA/uC,GAAA,CAAiB,QAAjB,CAA2B,QAAQ,EAAG,CACpCjW,CAAAG,OAAA,CAAa,QAAQ,EAAG,CAClB4jD,CAAAnrD,OAAA,EAAJ,EAA4BmrD,CAAAjrC,OAAA,EAC5BgrC,EAAAnW,cAAA,CAA0BqX,CAAAxoD,IAAA,EAA1B,CAFsB,CAAxB,CADoC,CAAtC,CAjBoE,CAyBtE4oD,QAASA,EAAe,CAACplD,CAAD,CAAQglD,CAAR,CAAuBnY,CAAvB,CAA6B,CACnD,IAAIwY,CACJxY,EAAAiB,QAAA,CAAeC,QAAQ,EAAG,CACxB,IAAIuX,EAAQ,IAAI52C,EAAJ,CAAYm+B,CAAAa,WAAZ,CACZj3C,EAAA,CAAQuuD,CAAA/qD,KAAA,CAAmB,QAAnB,CAAR,CAAsC,QAAQ,CAAC81C,CAAD,CAAS,CACrDA,CAAAsB,SAAA,CAAkBl4C,CAAA,CAAUmsD,CAAAl1C,IAAA,CAAU2/B,CAAAv4C,MAAV,CAAV,CADmC,CAAvD,CAFwB,CAS1BwI,EAAAnF,OAAA,CAAa0qD,QAA4B,EAAG,CACrClqD,EAAA,CAAOgqD,CAAP,CAAiBxY,CAAAa,WAAjB,CAAL,GACE2X,CACA,CADW5qD,EAAA,CAAKoyC,CAAAa,WAAL,CACX,CAAAb,CAAAiB,QAAA,EAFF,CAD0C,CAA5C,CAOAkX,EAAA/uC,GAAA,CAAiB,QAAjB,CAA2B,QAAQ,EAAG,CACpCjW,CAAAG,OAAA,CAAa,QAAQ,EAAG,CACtB,IAAI7F;AAAQ,EACZ7D,EAAA,CAAQuuD,CAAA/qD,KAAA,CAAmB,QAAnB,CAAR,CAAsC,QAAQ,CAAC81C,CAAD,CAAS,CACjDA,CAAAsB,SAAJ,EACE/2C,CAAApD,KAAA,CAAW64C,CAAAv4C,MAAX,CAFmD,CAAvD,CAKAq1C,EAAAc,cAAA,CAAmBrzC,CAAnB,CAPsB,CAAxB,CADoC,CAAtC,CAlBmD,CA+BrDkrD,QAASA,EAAc,CAACxlD,CAAD,CAAQglD,CAAR,CAAuBnY,CAAvB,CAA6B,CA6GlD4Y,QAASA,EAAM,EAAG,CAAA,IAEZC,EAAe,CAAC,EAAD,CAAI,EAAJ,CAFH,CAGZC,EAAmB,CAAC,EAAD,CAHP,CAIZC,CAJY,CAKZC,CALY,CAMZ9V,CANY,CAOZ+V,CAPY,CAOIC,CAChBC,EAAAA,CAAanZ,CAAA4P,YACbj0B,EAAAA,CAASy9B,CAAA,CAASjmD,CAAT,CAATwoB,EAA4B,EAThB,KAUZvxB,EAAOivD,CAAA,CAAUlvD,EAAA,CAAWwxB,CAAX,CAAV,CAA+BA,CAV1B,CAYCnyB,CAZD,CAaZ8vD,CAbY,CAaAzuD,CACZsZ,EAAAA,CAAS,EAETo1C,EAAAA,CAAc,CAAA,CAhBF,KAiBZC,CAjBY,CAkBZjpD,CAGJ,IAAIg0C,CAAJ,CACE,GAAIkV,CAAJ,EAAe9vD,CAAA,CAAQwvD,CAAR,CAAf,CAEE,IADAI,CACSG,CADK,IAAI73C,EAAJ,CAAY,EAAZ,CACL63C,CAAAA,CAAAA,CAAa,CAAtB,CAAyBA,CAAzB,CAAsCP,CAAA3vD,OAAtC,CAAyDkwD,CAAA,EAAzD,CACEv1C,CAAA,CAAOw1C,CAAP,CACA,CADoBR,CAAA,CAAWO,CAAX,CACpB,CAAAH,CAAAz3C,IAAA,CAAgB23C,CAAA,CAAQtmD,CAAR,CAAegR,CAAf,CAAhB,CAAwCg1C,CAAA,CAAWO,CAAX,CAAxC,CAJJ,KAOEH,EAAA,CAAc,IAAI13C,EAAJ,CAAYs3C,CAAZ,CAKlB,KAAKtuD,CAAL,CAAa,CAAb,CAAgBrB,CAAA,CAASY,CAAAZ,OAAT,CAAsBqB,CAAtB,CAA8BrB,CAA9C,CAAsDqB,CAAA,EAAtD,CAA+D,CAE7Dd,CAAA,CAAMc,CACN,IAAIwuD,C
AAJ,CAAa,CACXtvD,CAAA,CAAMK,CAAA,CAAKS,CAAL,CACN,IAAuB,GAAvB,GAAKd,CAAAwE,OAAA,CAAW,CAAX,CAAL,CAA6B,QAC7B4V,EAAA,CAAOk1C,CAAP,CAAA,CAAkBtvD,CAHP,CAMboa,CAAA,CAAOw1C,CAAP,CAAA,CAAoBh+B,CAAA,CAAO5xB,CAAP,CAEpBgvD,EAAA,CAAkBa,CAAA,CAAUzmD,CAAV,CAAiBgR,CAAjB,CAAlB,EAA8C,EAC9C,EAAM60C,CAAN,CAAoBH,CAAA,CAAaE,CAAb,CAApB,IACEC,CACA,CADcH,CAAA,CAAaE,CAAb,CACd,CAD8C,EAC9C,CAAAD,CAAAzuD,KAAA,CAAsB0uD,CAAtB,CAFF,CAIIxU,EAAJ,CACEC,CADF,CACal4C,CAAA,CACTitD,CAAAttC,OAAA,CAAmBwtC,CAAA,CAAUA,CAAA,CAAQtmD,CAAR,CAAegR,CAAf,CAAV,CAAmC/X,CAAA,CAAQ+G,CAAR,CAAegR,CAAf,CAAtD,CADS,CADb,EAKMs1C,CAAJ,EACMI,CAEJ,CAFgB,EAEhB,CADAA,CAAA,CAAUF,CAAV,CACA,CADuBR,CACvB,CAAA3U,CAAA;AAAWiV,CAAA,CAAQtmD,CAAR,CAAe0mD,CAAf,CAAX,GAAyCJ,CAAA,CAAQtmD,CAAR,CAAegR,CAAf,CAH3C,EAKEqgC,CALF,CAKa2U,CALb,GAK4B/sD,CAAA,CAAQ+G,CAAR,CAAegR,CAAf,CAE5B,CAAAo1C,CAAA,CAAcA,CAAd,EAA6B/U,CAZ/B,CAcAsV,EAAA,CAAQC,CAAA,CAAU5mD,CAAV,CAAiBgR,CAAjB,CAGR21C,EAAA,CAAQxtD,CAAA,CAAUwtD,CAAV,CAAA,CAAmBA,CAAnB,CAA2B,EACnCd,EAAA3uD,KAAA,CAAiB,IAEXovD,CAAA,CAAUA,CAAA,CAAQtmD,CAAR,CAAegR,CAAf,CAAV,CAAoCk1C,CAAA,CAAUjvD,CAAA,CAAKS,CAAL,CAAV,CAAwBA,CAFjD,OAGRivD,CAHQ,UAILtV,CAJK,CAAjB,CAlC6D,CAyC1DD,CAAL,GACMyV,CAAJ,EAAiC,IAAjC,GAAkBb,CAAlB,CAEEN,CAAA,CAAa,EAAb,CAAAztD,QAAA,CAAyB,IAAI,EAAJ,OAAc,EAAd,UAA2B,CAACmuD,CAA5B,CAAzB,CAFF,CAGYA,CAHZ,EAKEV,CAAA,CAAa,EAAb,CAAAztD,QAAA,CAAyB,IAAI,GAAJ,OAAe,EAAf,UAA4B,CAAA,CAA5B,CAAzB,CANJ,CAWKkuD,EAAA,CAAa,CAAlB,KAAqBW,CAArB,CAAmCnB,CAAAtvD,OAAnC,CACK8vD,CADL,CACkBW,CADlB,CAEKX,CAAA,EAFL,CAEmB,CAEjBP,CAAA,CAAkBD,CAAA,CAAiBQ,CAAjB,CAGlBN,EAAA,CAAcH,CAAA,CAAaE,CAAb,CAEVmB,EAAA1wD,OAAJ,EAAgC8vD,CAAhC,EAEEL,CAMA,CANiB,SACNkB,CAAA1pD,MAAA,EAAAtD,KAAA,CAA8B,OAA9B,CAAuC4rD,CAAvC,CADM,OAERC,CAAAc,MAFQ,CAMjB,CAFAZ,CAEA,CAFkB,CAACD,CAAD,CAElB,CADAiB,CAAA7vD,KAAA,CAAuB6uD,CAAvB,CACA,CAAAf,CAAAtnD,OAAA,CAAqBooD,CAAA1oD,QAArB,CARF,GAUE2oD,CAIA,CAJkBgB,CAAA,CAAkBZ,CAAlB,CAIlB,CAHAL,CAGA,CAHiBC,CAAA,CAAgB,CAAhB,CAGjB,CAAID,CAAAa,MAAJ,EAA4Bf,CAA5B,EACEE,CAAA1oD,QAAApD,KAAA,CAA4B,OAA5B,CAAqC8rD,CAAAa,MAArC,CAA4Df,CAA5D,CAfJ,CAmBAS,EAAA,CAAc,IACV3uD,EAAA,CAAQ,CAAZ,KAAerB,CAAf,CAAwBwvD,CAAAxvD,OAAxB,CAA4CqB,CAA5C,CAAoDrB,CAApD,CAA4DqB,CAAA,EAA5D,CACEq4C,CACA,CADS8V,CAAA,CAAYnuD,CAAZ,CACT,CAAA,CAAKuvD,CAAL,CAAsBlB,CAAA,CAAgBruD,CAAhB,CAAsB,CAAtB,CAAtB,GAEE2uD,CAQA,CARcY,CAAA7pD,QAQd,CAPI6pD,CAAAN,MAOJ,GAP6B5W,CAAA4W,MAO7B;AANEN,CAAAvgC,KAAA,CAAiBmhC,CAAAN,MAAjB,CAAwC5W,CAAA4W,MAAxC,CAMF,CAJIM,CAAAnrB,GAIJ,GAJ0BiU,CAAAjU,GAI1B,EAHEuqB,CAAA7pD,IAAA,CAAgByqD,CAAAnrB,GAAhB,CAAoCiU,CAAAjU,GAApC,CAGF,CAAImrB,CAAA5V,SAAJ,GAAgCtB,CAAAsB,SAAhC,EACEgV,CAAAtsD,KAAA,CAAiB,UAAjB,CAA8BktD,CAAA5V,SAA9B,CAAwDtB,CAAAsB,SAAxD,CAXJ,GAiBoB,EAAlB,GAAItB,CAAAjU,GAAJ,EAAwB+qB,CAAxB,CAEEzpD,CAFF,CAEYypD,CAFZ,CAOGrqD,CAAAY,CAAAZ,CAAU0qD,CAAA5pD,MAAA,EAAVd,KAAA,CACQuzC,CAAAjU,GADR,CAAA9hC,KAAA,CAES,UAFT,CAEqB+1C,CAAAsB,SAFrB,CAAAvrB,KAAA,CAGSiqB,CAAA4W,MAHT,CAiBH,CAXAZ,CAAA7uD,KAAA,CAAsC,SACzBkG,CADyB,OAE3B2yC,CAAA4W,MAF2B,IAG9B5W,CAAAjU,GAH8B,UAIxBiU,CAAAsB,SAJwB,CAAtC,CAWA,CALIgV,CAAJ,CACEA,CAAA9T,MAAA,CAAkBn1C,CAAlB,CADF,CAGE0oD,CAAA1oD,QAAAM,OAAA,CAA8BN,CAA9B,CAEF,CAAAipD,CAAA,CAAcjpD,CAzChB,CA8CF,KADA1F,CAAA,EACA,CAAMquD,CAAA1vD,OAAN,CAA+BqB,CAA/B,CAAA,CACEquD,CAAAlyC,IAAA,EAAAzW,QAAA0b,OAAA,EA5Ee,CAgFnB,IAAA,CAAMiuC,CAAA1wD,OAAN,CAAiC8vD,CAAjC,CAAA,CACEY,CAAAlzC,IAAA,EAAA,CAAwB,CAAxB,CAAAzW,QAAA0b,OAAA,EAzKc,CA5GlB,IAAIjb,CAEJ,IAAI,EAAEA,CAAF,CAAUspD,CAAAtpD,MAAA,CAAiB8lD,CAAjB,CAAV,CAAJ,CACE,KAAMD,GAAA,CAAgB,MAAhB,CAIJyD,CAJI,CAIQhqD,EAAA,CAAY6nD,CAAZ,CAJR,CAAN,CAJgD,IAW9C4B,EAAYhsC,CAAA,CAAO/c,CAAA,CAAM,CAAN,CAAP,EAAmBA,CAAA,CAAM,CAAN,CAAnB,CAXkC,CAY9C2oD,EAAY3o
D,CAAA,CAAM,CAAN,CAAZ2oD,EAAwB3oD,CAAA,CAAM,CAAN,CAZsB,CAa9CqoD,EAAUroD,CAAA,CAAM,CAAN,CAboC,CAc9C4oD,EAAY7rC,CAAA,CAAO/c,CAAA,CAAM,CAAN,CAAP,EAAmB,EAAnB,CAdkC,CAe9C5E;AAAU2hB,CAAA,CAAO/c,CAAA,CAAM,CAAN,CAAA,CAAWA,CAAA,CAAM,CAAN,CAAX,CAAsB2oD,CAA7B,CAfoC,CAgB9CP,EAAWrrC,CAAA,CAAO/c,CAAA,CAAM,CAAN,CAAP,CAhBmC,CAkB9CyoD,EADQzoD,CAAAupD,CAAM,CAANA,CACE,CAAQxsC,CAAA,CAAO/c,CAAA,CAAM,CAAN,CAAP,CAAR,CAA2B,IAlBS,CAuB9CkpD,EAAoB,CAAC,CAAC,SAAU/B,CAAV,OAA+B,EAA/B,CAAD,CAAD,CAEpB6B,EAAJ,GAEEhH,CAAA,CAASgH,CAAT,CAAA,CAAqB7mD,CAArB,CAQA,CAJA6mD,CAAAp/B,YAAA,CAAuB,UAAvB,CAIA,CAAAo/B,CAAA/tC,OAAA,EAVF,CAcAksC,EAAAznD,MAAA,EAEAynD,EAAA/uC,GAAA,CAAiB,QAAjB,CAA2B,QAAQ,EAAG,CACpCjW,CAAAG,OAAA,CAAa,QAAQ,EAAG,CAAA,IAClB0lD,CADkB,CAElBvE,EAAa2E,CAAA,CAASjmD,CAAT,CAAbshD,EAAgC,EAFd,CAGlBtwC,EAAS,EAHS,CAIlBpa,CAJkB,CAIbY,CAJa,CAISE,CAJT,CAIgByuD,CAJhB,CAI4B9vD,CAJ5B,CAIoCywD,CAJpC,CAIiDP,CAEvE,IAAInV,CAAJ,CAEE,IADA55C,CACqB,CADb,EACa,CAAhB2uD,CAAgB,CAAH,CAAG,CAAAW,CAAA,CAAcC,CAAA1wD,OAAnC,CACK8vD,CADL,CACkBW,CADlB,CAEKX,CAAA,EAFL,CAME,IAFAN,CAEe,CAFDkB,CAAA,CAAkBZ,CAAlB,CAEC,CAAXzuD,CAAW,CAAH,CAAG,CAAArB,CAAA,CAASwvD,CAAAxvD,OAAxB,CAA4CqB,CAA5C,CAAoDrB,CAApD,CAA4DqB,CAAA,EAA5D,CACE,IAAI,CAAC2vD,CAAD,CAAiBxB,CAAA,CAAYnuD,CAAZ,CAAA0F,QAAjB,EAA6C,CAA7C,CAAAi0C,SAAJ,CAA8D,CAC5Dz6C,CAAA,CAAMywD,CAAA7qD,IAAA,EACF0pD,EAAJ,GAAal1C,CAAA,CAAOk1C,CAAP,CAAb,CAA+BtvD,CAA/B,CACA,IAAI0vD,CAAJ,CACE,IAAKC,CAAL,CAAkB,CAAlB,CAAqBA,CAArB,CAAkCjF,CAAAjrD,OAAlC,GACE2a,CAAA,CAAOw1C,CAAP,CACI,CADgBlF,CAAA,CAAWiF,CAAX,CAChB,CAAAD,CAAA,CAAQtmD,CAAR,CAAegR,CAAf,CAAA,EAA0Bpa,CAFhC,EAAqD2vD,CAAA,EAArD,EADF,IAMEv1C,EAAA,CAAOw1C,CAAP,CAAA,CAAoBlF,CAAA,CAAW1qD,CAAX,CAEtBY,EAAAN,KAAA,CAAW+B,CAAA,CAAQ+G,CAAR,CAAegR,CAAf,CAAX,CAX4D,CAA9D,CATN,IAwBO,CACLpa,CAAA,CAAMouD,CAAAxoD,IAAA,EACN,IAAW,GAAX,EAAI5F,CAAJ,CACEY,CAAA,CAAQxB,CADV,KAEO,IAAY,EAAZ,GAAIY,CAAJ,CACLY,CAAA,CAAQ,IADH,KAGL,IAAI8uD,CAAJ,CACE,IAAKC,CAAL,CAAkB,CAAlB,CAAqBA,CAArB,CAAkCjF,CAAAjrD,OAAlC,CAAqDkwD,CAAA,EAArD,CAEE,IADAv1C,CAAA,CAAOw1C,CAAP,CACI;AADgBlF,CAAA,CAAWiF,CAAX,CAChB,CAAAD,CAAA,CAAQtmD,CAAR,CAAegR,CAAf,CAAA,EAA0Bpa,CAA9B,CAAmC,CACjCY,CAAA,CAAQyB,CAAA,CAAQ+G,CAAR,CAAegR,CAAf,CACR,MAFiC,CAAnC,CAHJ,IASEA,EAAA,CAAOw1C,CAAP,CAEA,CAFoBlF,CAAA,CAAW1qD,CAAX,CAEpB,CADIsvD,CACJ,GADal1C,CAAA,CAAOk1C,CAAP,CACb,CAD+BtvD,CAC/B,EAAAY,CAAA,CAAQyB,CAAA,CAAQ+G,CAAR,CAAegR,CAAf,CAIsB,EAAlC,CAAI+1C,CAAA,CAAkB,CAAlB,CAAA1wD,OAAJ,EACM0wD,CAAA,CAAkB,CAAlB,CAAA,CAAqB,CAArB,CAAAjrB,GADN,GACqCllC,CADrC,GAEImwD,CAAA,CAAkB,CAAlB,CAAA,CAAqB,CAArB,CAAA1V,SAFJ,CAEuC,CAAA,CAFvC,CAtBK,CA4BPxE,CAAAc,cAAA,CAAmBn2C,CAAnB,CA1DsB,CAAxB,CADoC,CAAtC,CA+DAq1C,EAAAiB,QAAA,CAAe2X,CAGfzlD,EAAAnF,OAAA,CAAa4qD,CAAb,CA3GkD,CAhGpD,GAAKnI,CAAA,CAAM,CAAN,CAAL,CAAA,CAF0C,IAItC2H,EAAa3H,CAAA,CAAM,CAAN,CACbwG,EAAAA,CAAcxG,CAAA,CAAM,CAAN,CALwB,KAMtClM,EAAWp3C,CAAAo3C,SAN2B,CAOtC+V,EAAantD,CAAAstD,UAPyB,CAQtCT,EAAa,CAAA,CARyB,CAStC1B,CATsC,CAYtC+B,EAAiB7pD,CAAA,CAAOtH,CAAA8T,cAAA,CAAuB,QAAvB,CAAP,CAZqB,CAatCm9C,EAAkB3pD,CAAA,CAAOtH,CAAA8T,cAAA,CAAuB,UAAvB,CAAP,CAboB,CActCk6C,EAAgBmD,CAAA5pD,MAAA,EAGZjG,EAAAA,CAAI,CAAZ,KAjB0C,IAiB3BuR,EAAWxL,CAAAwL,SAAA,EAjBgB,CAiBImE,EAAKnE,CAAAvS,OAAnD,CAAoEgB,CAApE,CAAwE0V,CAAxE,CAA4E1V,CAAA,EAA5E,CACE,GAA0B,EAA1B,GAAIuR,CAAA,CAASvR,CAAT,CAAAG,MAAJ,CAA8B,CAC5B2tD,CAAA,CAAc0B,CAAd,CAA2Bj+C,CAAAmT,GAAA,CAAY1kB,CAAZ,CAC3B,MAF4B,CAMhC4tD,CAAAhB,KAAA,CAAgBH,CAAhB,CAA6B+C,CAA7B,CAAyC9C,CAAzC,CAGI3S,EAAJ,GACE0S,CAAA9V,SADF,CACyBuZ,QAAQ,CAAC/vD,CAAD,CAAQ,CACrC,MAAO,CAACA,CAAR,EAAkC,CAAlC,GAAiBA,CAAAnB,OADoB,CADzC,CAMI8wD,EAAJ,CAAgB3B,CAAA,CAAexlD,CAAf,CAAsB5C,CAAtB,CAA+B0mD,CAA/B,CAAhB,CACS1S,CAAJ,CA
AcgU,CAAA,CAAgBplD,CAAhB,CAAuB5C,CAAvB,CAAgC0mD,CAAhC,CAAd,CACAiB,CAAA,CAAc/kD,CAAd,CAAqB5C,CAArB,CAA8B0mD,CAA9B,CAA2CmB,CAA3C,CAjCL,CAF0C,CA7DvC,CANiE,CAApD,CArzDtB,CAwvEIhhD,GAAkB,CAAC,cAAD;AAAiB,QAAQ,CAACwW,CAAD,CAAe,CAC5D,IAAI+sC,EAAiB,WACR1uD,CADQ,cAELA,CAFK,CAKrB,OAAO,UACK,GADL,UAEK,GAFL,SAGImH,QAAQ,CAAC7C,CAAD,CAAUpD,CAAV,CAAgB,CAC/B,GAAId,CAAA,CAAYc,CAAAxC,MAAZ,CAAJ,CAA6B,CAC3B,IAAIuuB,EAAgBtL,CAAA,CAAard,CAAA0oB,KAAA,EAAb,CAA6B,CAAA,CAA7B,CACfC,EAAL,EACE/rB,CAAAqqB,KAAA,CAAU,OAAV,CAAmBjnB,CAAA0oB,KAAA,EAAnB,CAHyB,CAO7B,MAAO,SAAS,CAAC9lB,CAAD,CAAQ5C,CAAR,CAAiBpD,CAAjB,CAAuB,CAAA,IAEjCpB,EAASwE,CAAAxE,OAAA,EAFwB,CAGjCqsD,EAAarsD,CAAAwH,KAAA,CAFIqnD,mBAEJ,CAAbxC,EACErsD,CAAAA,OAAA,EAAAwH,KAAA,CAHeqnD,mBAGf,CAEFxC,EAAJ,EAAkBA,CAAAjB,UAAlB,CAGE5mD,CAAArD,KAAA,CAAa,UAAb,CAAyB,CAAA,CAAzB,CAHF,CAKEkrD,CALF,CAKeuC,CAGXzhC,EAAJ,CACE/lB,CAAAnF,OAAA,CAAakrB,CAAb,CAA4B2hC,QAA+B,CAACzpB,CAAD,CAASC,CAAT,CAAiB,CAC1ElkC,CAAAqqB,KAAA,CAAU,OAAV,CAAmB4Z,CAAnB,CACIA,EAAJ,GAAeC,CAAf,EAAuB+mB,CAAAT,aAAA,CAAwBtmB,CAAxB,CACvB+mB,EAAAX,UAAA,CAAqBrmB,CAArB,CAH0E,CAA5E,CADF,CAOEgnB,CAAAX,UAAA,CAAqBtqD,CAAAxC,MAArB,CAGF4F,EAAA6Y,GAAA,CAAW,UAAX,CAAuB,QAAQ,EAAG,CAChCgvC,CAAAT,aAAA,CAAwBxqD,CAAAxC,MAAxB,CADgC,CAAlC,CAxBqC,CARR,CAH5B,CANqD,CAAxC,CAxvEtB,CAyyEIwM,GAAiB/K,EAAA,CAAQ,UACjB,GADiB;SAEjB,CAAA,CAFiB,CAAR,CAKfnD,EAAAyK,QAAA1B,UAAJ,CAEEq4B,OAAAE,IAAA,CAAY,gDAAZ,CAFF,EA5jnBA,CAFApuB,EAEA,CAFSlT,CAAAkT,OAET,GACE3L,CAYA,CAZS2L,EAYT,CAXA3Q,CAAA,CAAO2Q,EAAA/M,GAAP,CAAkB,OACT6f,EAAA9b,MADS,cAEF8b,EAAA4E,aAFE,YAGJ5E,EAAA5B,WAHI,UAIN4B,EAAAnc,SAJM,eAKDmc,EAAAkhC,cALC,CAAlB,CAWA,CAFAh1C,EAAA,CAAwB,QAAxB,CAAkC,CAAA,CAAlC,CAAwC,CAAA,CAAxC,CAA8C,CAAA,CAA9C,CAEA,CADAA,EAAA,CAAwB,OAAxB,CAAiC,CAAA,CAAjC,CAAwC,CAAA,CAAxC,CAA+C,CAAA,CAA/C,CACA,CAAAA,EAAA,CAAwB,MAAxB,CAAgC,CAAA,CAAhC,CAAuC,CAAA,CAAvC,CAA8C,CAAA,CAA9C,CAbF,EAeE3K,CAfF,CAeW8L,CAyjnBX,CAvjnBA5I,EAAAnD,QAujnBA,CAvjnBkBC,CAujnBlB,CAFA6F,EAAA,CAAmB3C,EAAnB,CAEA,CAAAlD,CAAA,CAAOtH,CAAP,CAAAw6C,MAAA,CAAuB,QAAQ,EAAG,CAChC3xC,EAAA,CAAY7I,CAAZ,CAAsB8I,EAAtB,CADgC,CAAlC,CAZA,CAh8pBqC,CAAtC,CAAA,CAg9pBE/I,MAh9pBF,CAg9pBUC,QAh9pBV,CAk9pBD,EAACwK,OAAAonD,MAAA,EAAD,EAAoBpnD,OAAAnD,QAAA,CAAgBrH,QAAhB,CAAAkE,KAAA,CAA+B,MAA/B,CAAAo4C,QAAA,CAA+C,uRAA/C;", -"sources":["angular.js"], 
-"names":["window","document","undefined","minErr","isArrayLike","obj","isWindow","length","nodeType","isString","isArray","forEach","iterator","context","key","isFunction","hasOwnProperty","call","sortedKeys","keys","push","sort","forEachSorted","i","reverseParams","iteratorFn","value","nextUid","index","uid","digit","charCodeAt","join","String","fromCharCode","unshift","setHashKey","h","$$hashKey","extend","dst","arguments","int","str","parseInt","inherit","parent","extra","noop","identity","$","valueFn","isUndefined","isDefined","isObject","isNumber","isDate","toString","isRegExp","location","alert","setInterval","isElement","node","nodeName","prop","attr","find","map","results","list","indexOf","array","arrayRemove","splice","copy","source","destination","$evalAsync","$watch","ngMinErr","Date","getTime","RegExp","shallowCopy","src","charAt","equals","o1","o2","t1","t2","keySet","csp","securityPolicy","isActive","querySelector","bind","self","fn","curryArgs","slice","startIndex","apply","concat","toJsonReplacer","val","toJson","pretty","JSON","stringify","fromJson","json","parse","toBoolean","v","lowercase","startingTag","element","jqLite","clone","empty","e","elemHtml","append","html","TEXT_NODE","match","replace","tryDecodeURIComponent","decodeURIComponent","parseKeyValue","keyValue","key_value","split","toKeyValue","parts","arrayValue","encodeUriQuery","encodeUriSegment","pctEncodeSpaces","encodeURIComponent","angularInit","bootstrap","elements","appElement","module","names","NG_APP_CLASS_REGEXP","name","getElementById","querySelectorAll","exec","className","attributes","modules","doBootstrap","injector","tag","$provide","createInjector","invoke","scope","compile","animate","$apply","data","NG_DEFER_BOOTSTRAP","test","angular","resumeBootstrap","angular.resumeBootstrap","extraModules","snake_case","separator","SNAKE_CASE_REGEXP","letter","pos","toLowerCase","assertArg","arg","reason","assertArgFn","acceptArrayAnnotation","constructor","assertNotHasOwnProperty","getter","path","bindFnToScope","lastInstance","len","getBlockElements","nodes","startNode","endNode","nextSibling","setupModuleLoader","$injectorMinErr","$$minErr","factory","requires","configFn","invokeLater","provider","method","insertMethod","invokeQueue","moduleInstance","runBlocks","config","run","block","publishExternalAPI","version","uppercase","angularModule","$LocaleProvider","ngModule","$$SanitizeUriProvider","$CompileProvider","directive","htmlAnchorDirective","inputDirective","formDirective","scriptDirective","selectDirective","styleDirective","optionDirective","ngBindDirective","ngBindHtmlDirective","ngBindTemplateDirective","ngClassDirective","ngClassEvenDirective","ngClassOddDirective","ngCloakDirective","ngControllerDirective","ngFormDirective","ngHideDirective","ngIfDirective","ngIncludeDirective","ngInitDirective","ngNonBindableDirective","ngPluralizeDirective","ngRepeatDirective","ngShowDirective","ngStyleDirective","ngSwitchDirective","ngSwitchWhenDirective","ngSwitchDefaultDirective","ngOptionsDirective","ngTranscludeDirective","ngModelDirective","ngListDirective","ngChangeDirective","requiredDirective","ngValueDirective","ngIncludeFillContentDirective","ngAttributeAliasDirectives","ngEventDirectives","$AnchorScrollProvider","$AnimateProvider","$BrowserProvider","$CacheFactoryProvider","$ControllerProvider","$DocumentProvider","$ExceptionHandlerProvider","$FilterProvider","$InterpolateProvider","$IntervalProvider","$HttpProvider","$HttpBackendProvider","$LocationProvider","$LogProvider","$ParseProvider","$R
ootScopeProvider","$QProvider","$SceProvider","$SceDelegateProvider","$SnifferProvider","$TemplateCacheProvider","$TimeoutProvider","$WindowProvider","$$RAFProvider","$$AsyncCallbackProvider","camelCase","SPECIAL_CHARS_REGEXP","_","offset","toUpperCase","MOZ_HACK_REGEXP","jqLitePatchJQueryRemove","dispatchThis","filterElems","getterIfNoArguments","removePatch","param","filter","fireEvent","set","setIndex","setLength","childIndex","children","shift","triggerHandler","childLength","jQuery","originalJqFn","$original","JQLite","trim","jqLiteMinErr","parsed","SINGLE_TAG_REGEXP","fragment","createDocumentFragment","HTML_REGEXP","tmp","appendChild","createElement","TAG_NAME_REGEXP","wrap","wrapMap","_default","innerHTML","XHTML_TAG_REGEXP","removeChild","firstChild","lastChild","j","jj","childNodes","textContent","createTextNode","jqLiteAddNodes","jqLiteClone","cloneNode","jqLiteDealoc","jqLiteRemoveData","jqLiteOff","type","unsupported","events","jqLiteExpandoStore","handle","eventHandler","removeEventListenerFn","expandoId","jqName","expandoStore","jqCache","$destroy","jqId","jqLiteData","isSetter","keyDefined","isSimpleGetter","jqLiteHasClass","selector","getAttribute","jqLiteRemoveClass","cssClasses","setAttribute","cssClass","jqLiteAddClass","existingClasses","root","jqLiteController","jqLiteInheritedData","ii","parentNode","host","jqLiteEmpty","getBooleanAttrName","booleanAttr","BOOLEAN_ATTR","BOOLEAN_ELEMENTS","createEventHandler","event","preventDefault","event.preventDefault","returnValue","stopPropagation","event.stopPropagation","cancelBubble","target","srcElement","defaultPrevented","prevent","isDefaultPrevented","event.isDefaultPrevented","eventHandlersCopy","msie","elem","hashKey","objType","HashMap","put","annotate","$inject","fnText","STRIP_COMMENTS","argDecl","FN_ARGS","FN_ARG_SPLIT","FN_ARG","all","underscore","last","modulesToLoad","supportObject","delegate","provider_","providerInjector","instantiate","$get","providerCache","providerSuffix","factoryFn","loadModules","moduleFn","loadedModules","get","_runBlocks","_invokeQueue","invokeArgs","message","stack","createInternalInjector","cache","getService","serviceName","INSTANTIATING","err","locals","args","Type","Constructor","returnedValue","prototype","instance","has","service","$injector","constant","instanceCache","decorator","decorFn","origProvider","orig$get","origProvider.$get","origInstance","instanceInjector","servicename","autoScrollingEnabled","disableAutoScrolling","this.disableAutoScrolling","$window","$location","$rootScope","getFirstAnchor","result","scroll","hash","elm","scrollIntoView","getElementsByName","scrollTo","autoScrollWatch","autoScrollWatchAction","$$rAF","$timeout","supported","Browser","$log","$sniffer","completeOutstandingRequest","outstandingRequestCount","outstandingRequestCallbacks","pop","error","startPoller","interval","setTimeout","check","pollFns","pollFn","pollTimeout","fireUrlChange","newLocation","lastBrowserUrl","url","urlChangeListeners","listener","rawDocument","history","clearTimeout","pendingDeferIds","isMock","$$completeOutstandingRequest","$$incOutstandingRequestCount","self.$$incOutstandingRequestCount","notifyWhenNoOutstandingRequests","self.notifyWhenNoOutstandingRequests","callback","addPollFn","self.addPollFn","href","baseElement","self.url","replaceState","pushState","urlChangeInit","onUrlChange","self.onUrlChange","on","hashchange","baseHref","self.baseHref","lastCookies","lastCookieString","cookiePath","cookies","self.cookies","cookieLength","cookie","escape","warn","cookieArr
ay","unescape","substring","defer","self.defer","delay","timeoutId","cancel","self.defer.cancel","deferId","$document","this.$get","cacheFactory","cacheId","options","refresh","entry","freshEnd","staleEnd","n","link","p","nextEntry","prevEntry","caches","size","stats","capacity","Number","MAX_VALUE","lruHash","lruEntry","remove","removeAll","destroy","info","cacheFactory.info","cacheFactory.get","$cacheFactory","$$sanitizeUriProvider","hasDirectives","Suffix","COMMENT_DIRECTIVE_REGEXP","CLASS_DIRECTIVE_REGEXP","EVENT_HANDLER_ATTR_REGEXP","this.directive","registerDirective","directiveFactory","$exceptionHandler","directives","priority","require","controller","restrict","aHrefSanitizationWhitelist","this.aHrefSanitizationWhitelist","regexp","imgSrcSanitizationWhitelist","this.imgSrcSanitizationWhitelist","$interpolate","$http","$templateCache","$parse","$controller","$sce","$animate","$$sanitizeUri","$compileNodes","transcludeFn","maxPriority","ignoreDirective","previousCompileContext","nodeValue","compositeLinkFn","compileNodes","safeAddClass","publicLinkFn","cloneConnectFn","transcludeControllers","$linkNode","JQLitePrototype","eq","$element","addClass","nodeList","$rootElement","boundTranscludeFn","childLinkFn","$node","childScope","nodeListLength","stableNodeList","Array","linkFns","nodeLinkFn","$new","childTranscludeFn","transclude","createBoundTranscludeFn","attrs","linkFnFound","Attributes","collectDirectives","applyDirectivesToNode","terminal","transcludedScope","cloneFn","controllers","scopeCreated","$$transcluded","attrsMap","$attr","addDirective","directiveNormalize","nodeName_","nName","nAttrs","attrStartName","attrEndName","specified","ngAttrName","NG_ATTR_BINDING","substr","directiveNName","addAttrInterpolateDirective","addTextInterpolateDirective","byPriority","groupScan","attrStart","attrEnd","depth","hasAttribute","$compileMinErr","groupElementsLinkFnWrapper","linkFn","compileNode","templateAttrs","jqCollection","originalReplaceDirective","preLinkFns","postLinkFns","addLinkFns","pre","post","newIsolateScopeDirective","$$isolateScope","cloneAndAnnotateFn","getControllers","elementControllers","retrievalMethod","optional","directiveName","linkNode","controllersBoundTransclude","cloneAttachFn","hasElementTranscludeDirective","isolateScope","$$element","LOCAL_REGEXP","templateDirective","$$originalDirective","definition","scopeName","attrName","mode","lastValue","parentGet","parentSet","compare","$$isolateBindings","$observe","$$observers","$$scope","literal","a","b","assign","parentValueWatch","parentValue","controllerDirectives","controllerInstance","controllerAs","$scope","scopeToChild","template","templateUrl","terminalPriority","newScopeDirective","nonTlbTranscludeDirective","hasTranscludeDirective","$compileNode","$template","$$start","$$end","directiveValue","assertNoDuplicate","$$tlb","createComment","replaceWith","replaceDirective","contents","denormalizeTemplate","newTemplateAttrs","templateDirectives","unprocessedDirectives","markDirectivesAsIsolate","mergeTemplateAttributes","compileTemplateUrl","Math","max","tDirectives","startAttrName","endAttrName","srcAttr","dstAttr","$set","tAttrs","linkQueue","afterTemplateNodeLinkFn","afterTemplateChildLinkFn","beforeTemplateCompileNode","origAsyncDirective","derivedSyncDirective","getTrustedResourceUrl","success","content","childBoundTranscludeFn","tempTemplateAttrs","beforeTemplateLinkNode","linkRootElement","oldClasses","response","code","headers","delayedNodeLinkFn","ignoreChildLinkFn","rootElement","diff","what","previous
Directive","text","interpolateFn","textInterpolateLinkFn","bindings","interpolateFnWatchAction","getTrustedContext","attrNormalizedName","HTML","RESOURCE_URL","attrInterpolatePreLinkFn","$$inter","newValue","oldValue","$updateClass","elementsToRemove","newNode","firstElementToRemove","removeCount","j2","replaceChild","expando","k","kk","annotation","$addClass","classVal","$removeClass","removeClass","newClasses","toAdd","tokenDifference","toRemove","setClass","writeAttr","booleanKey","removeAttr","listeners","startSymbol","endSymbol","PREFIX_REGEXP","str1","str2","values","tokens1","tokens2","token","CNTRL_REG","register","this.register","expression","identifier","exception","cause","parseHeaders","line","headersGetter","headersObj","transformData","fns","JSON_START","JSON_END","PROTECTION_PREFIX","CONTENT_TYPE_APPLICATION_JSON","defaults","d","interceptorFactories","interceptors","responseInterceptorFactories","responseInterceptors","$httpBackend","$browser","$q","requestConfig","transformResponse","resp","status","reject","transformRequest","mergeHeaders","execHeaders","headerContent","headerFn","header","defHeaders","reqHeaders","defHeaderName","reqHeaderName","common","lowercaseDefHeaderName","xsrfValue","urlIsSameOrigin","xsrfCookieName","xsrfHeaderName","chain","serverRequest","reqData","withCredentials","sendReq","then","promise","when","reversedInterceptors","interceptor","request","requestError","responseError","thenFn","rejectFn","promise.success","promise.error","done","headersString","statusText","resolvePromise","$$phase","deferred","resolve","removePendingReq","idx","pendingRequests","cachedResp","buildUrl","params","defaultCache","timeout","responseType","interceptorFactory","responseFn","createShortMethods","createShortMethodsWithData","createXhr","XMLHttpRequest","ActiveXObject","createHttpBackend","callbacks","$browserDefer","jsonpReq","script","doneWrapper","onreadystatechange","onload","onerror","body","script.onreadystatechange","readyState","script.onerror","ABORTED","timeoutRequest","jsonpDone","xhr","abort","completeRequest","urlResolve","protocol","callbackId","counter","open","setRequestHeader","xhr.onreadystatechange","responseHeaders","getAllResponseHeaders","responseText","send","this.startSymbol","this.endSymbol","mustHaveExpression","trustedContext","endIndex","hasInterpolation","startSymbolLength","exp","endSymbolLength","$interpolateMinErr","part","getTrusted","valueOf","newErr","$interpolate.startSymbol","$interpolate.endSymbol","count","invokeApply","clearInterval","iteration","skipApply","$$intervalId","tick","notify","intervals","interval.cancel","short","pluralCat","num","encodePath","segments","parseAbsoluteUrl","absoluteUrl","locationObj","appBase","parsedUrl","$$protocol","$$host","hostname","$$port","port","DEFAULT_PORTS","parseAppUrl","relativeUrl","prefixed","$$path","pathname","$$search","search","$$hash","beginsWith","begin","whole","stripHash","stripFile","lastIndexOf","LocationHtml5Url","basePrefix","$$html5","appBaseNoFile","$$parse","this.$$parse","pathUrl","$locationMinErr","$$compose","this.$$compose","$$url","$$absUrl","$$rewrite","this.$$rewrite","appUrl","prevAppUrl","LocationHashbangUrl","hashPrefix","withoutBaseUrl","withoutHashUrl","windowsFilePathExp","firstPathSegmentMatch","LocationHashbangInHtml5Url","locationGetter","property","locationGetterSetter","preprocess","html5Mode","this.hashPrefix","prefix","this.html5Mode","afterLocationChange","oldUrl","$broadcast","absUrl","initialUrl","LocationMode","ctrlKey","metaKey","which","abs
Href","animVal","rewrittenUrl","newUrl","$digest","changeCounter","$locationWatch","currentReplace","$$replace","debug","debugEnabled","this.debugEnabled","flag","formatError","Error","sourceURL","consoleLog","console","logFn","log","hasApply","arg1","arg2","ensureSafeMemberName","fullExpression","$parseMinErr","ensureSafeObject","setter","setValue","fullExp","propertyObj","unwrapPromises","promiseWarning","$$v","cspSafeGetterFn","key0","key1","key2","key3","key4","cspSafePromiseEnabledGetter","pathVal","cspSafeGetter","simpleGetterFn1","simpleGetterFn2","getterFn","getterFnCache","pathKeys","pathKeysLength","evaledFnGetter","Function","$parseOptions","this.unwrapPromises","logPromiseWarnings","this.logPromiseWarnings","$filter","promiseWarningCache","parsedExpression","lexer","Lexer","parser","Parser","qFactory","nextTick","exceptionHandler","defaultCallback","defaultErrback","pending","ref","createInternalRejectedPromise","progress","errback","progressback","wrappedCallback","wrappedErrback","wrappedProgressback","catch","finally","makePromise","resolved","handleCallback","isResolved","callbackOutput","promises","requestAnimationFrame","webkitRequestAnimationFrame","mozRequestAnimationFrame","cancelAnimationFrame","webkitCancelAnimationFrame","mozCancelAnimationFrame","webkitCancelRequestAnimationFrame","rafSupported","raf","id","timer","TTL","$rootScopeMinErr","lastDirtyWatch","digestTtl","this.digestTtl","Scope","$id","$parent","$$watchers","$$nextSibling","$$prevSibling","$$childHead","$$childTail","$root","$$destroyed","$$asyncQueue","$$postDigestQueue","$$listeners","$$listenerCount","beginPhase","phase","compileToFn","decrementListenerCount","current","initWatchVal","isolate","child","ChildScope","watchExp","objectEquality","watcher","listenFn","watcher.fn","newVal","oldVal","originalFn","$watchCollection","veryOldValue","trackVeryOldValue","changeDetected","objGetter","internalArray","internalObject","initRun","oldLength","$watchCollectionWatch","newLength","$watchCollectionAction","watch","watchers","asyncQueue","postDigestQueue","dirty","ttl","watchLog","logIdx","logMsg","asyncTask","$eval","isNaN","next","$on","this.$watch","expr","$$postDigest","namedListeners","$emit","listenerArgs","array1","currentScope","sanitizeUri","uri","isImage","regex","normalizedVal","adjustMatcher","matcher","$sceMinErr","adjustMatchers","matchers","adjustedMatchers","SCE_CONTEXTS","resourceUrlWhitelist","resourceUrlBlacklist","this.resourceUrlWhitelist","this.resourceUrlBlacklist","generateHolderType","Base","holderType","trustedValue","$$unwrapTrustedValue","this.$$unwrapTrustedValue","holderType.prototype.valueOf","holderType.prototype.toString","htmlSanitizer","trustedValueHolderBase","byType","CSS","URL","JS","trustAs","maybeTrusted","allowed","enabled","this.enabled","$sceDelegate","msieDocumentMode","sce","isEnabled","sce.isEnabled","sce.getTrusted","parseAs","sce.parseAs","sceParseAsTrusted","enumValue","lName","eventSupport","android","userAgent","navigator","boxee","documentMode","vendorPrefix","vendorRegex","bodyStyle","style","transitions","animations","webkitTransition","webkitAnimation","hasEvent","divElm","deferreds","$$timeoutId","timeout.cancel","base","urlParsingNode","requestUrl","originUrl","filters","suffix","currencyFilter","dateFilter","filterFilter","jsonFilter","limitToFilter","lowercaseFilter","numberFilter","orderByFilter","uppercaseFilter","comparator","comparatorType","predicates","predicates.check","objKey","filtered","$locale","formats","NUMBER_FORMATS","amount","curren
cySymbol","CURRENCY_SYM","formatNumber","PATTERNS","GROUP_SEP","DECIMAL_SEP","number","fractionSize","pattern","groupSep","decimalSep","isFinite","isNegative","abs","numStr","formatedText","hasExponent","toFixed","fractionLen","min","minFrac","maxFrac","pow","round","fraction","lgroup","lgSize","group","gSize","negPre","posPre","negSuf","posSuf","padNumber","digits","neg","dateGetter","date","dateStrGetter","shortForm","jsonStringToDate","string","R_ISO8601_STR","tzHour","tzMin","dateSetter","setUTCFullYear","setFullYear","timeSetter","setUTCHours","setHours","m","s","ms","parseFloat","format","DATETIME_FORMATS","NUMBER_STRING","DATE_FORMATS_SPLIT","DATE_FORMATS","object","input","limit","out","sortPredicate","reverseOrder","reverseComparator","comp","descending","v1","v2","predicate","arrayCopy","ngDirective","FormController","toggleValidCss","isValid","validationErrorKey","INVALID_CLASS","VALID_CLASS","form","parentForm","nullFormCtrl","invalidCount","errors","$error","controls","$name","ngForm","$dirty","$pristine","$valid","$invalid","$addControl","PRISTINE_CLASS","form.$addControl","control","$removeControl","form.$removeControl","queue","validationToken","$setValidity","form.$setValidity","$setDirty","form.$setDirty","DIRTY_CLASS","$setPristine","form.$setPristine","validate","ctrl","validatorName","validity","addNativeHtml5Validators","$parsers","validator","badInput","customError","typeMismatch","valueMissing","textInputType","composing","ngTrim","$viewValue","$setViewValue","deferListener","keyCode","$render","ctrl.$render","$isEmpty","ngPattern","patternValidator","patternObj","$formatters","ngMinlength","minlength","minLengthValidator","ngMaxlength","maxlength","maxLengthValidator","classDirective","arrayDifference","arrayClasses","classes","digestClassCounts","classCounts","classesToUpdate","ngClassWatchAction","$index","old$index","mod","Object","addEventListenerFn","addEventListener","attachEvent","removeEventListener","detachEvent","_data","JQLite._data","optgroup","option","tbody","tfoot","colgroup","caption","thead","th","td","ready","trigger","fired","removeAttribute","css","currentStyle","lowercasedName","getNamedItem","ret","getText","textProp","NODE_TYPE_TEXT_PROPERTY","$dv","multiple","selected","onFn","eventFns","contains","compareDocumentPosition","adown","documentElement","bup","eventmap","related","relatedTarget","one","off","replaceNode","insertBefore","contentDocument","prepend","wrapNode","after","newElement","toggleClass","condition","classCondition","nextElementSibling","getElementsByTagName","eventName","eventData","arg3","unbind","$animateMinErr","$$selectors","classNameFilter","this.classNameFilter","$$classNameFilter","$$asyncCallback","enter","leave","move","add","PATH_MATCH","paramValue","OPERATORS","null","true","false","+","-","*","/","%","^","===","!==","==","!=","<",">","<=",">=","&&","||","&","|","!","ESCAPE","lex","ch","lastCh","tokens","is","readString","peek","readNumber","isIdent","readIdent","was","isWhitespace","ch2","ch3","fn2","fn3","throwError","chars","isExpOperator","start","end","colStr","peekCh","ident","lastDot","peekIndex","methodName","quote","rawString","hex","rep","ZERO","assignment","logicalOR","functionCall","fieldAccess","objectIndex","filterChain","this.filterChain","primary","statements","expect","consume","arrayDeclaration","msg","peekToken","e1","e2","e3","e4","t","unaryFn","right","ternaryFn","left","middle","binaryFn","statement","argsFn","fnInvoke","ternary","logicalAND","equality","relational","additive","multiplicative"
,"unary","field","indexFn","o","safe","contextGetter","fnPtr","elementFns","allConstant","elementFn","keyValues","ampmGetter","getHours","AMPMS","timeZoneGetter","zone","getTimezoneOffset","paddedZone","xlinkHref","propName","normalized","ngBooleanAttrWatchAction","formDirectiveFactory","isNgForm","formElement","action","preventDefaultListener","parentFormCtrl","alias","URL_REGEXP","EMAIL_REGEXP","NUMBER_REGEXP","inputType","numberInputType","minValidator","maxValidator","urlInputType","urlValidator","emailInputType","emailValidator","radioInputType","checked","checkboxInputType","trueValue","ngTrueValue","falseValue","ngFalseValue","ctrl.$isEmpty","NgModelController","$modelValue","NaN","$viewChangeListeners","ngModelGet","ngModel","ngModelSet","this.$isEmpty","inheritedData","this.$setValidity","this.$setPristine","this.$setViewValue","ngModelWatch","formatters","ctrls","modelCtrl","formCtrl","ngChange","required","ngList","viewValue","CONSTANT_VALUE_REGEXP","tpl","tplAttr","ngValue","ngValueConstantLink","ngValueLink","valueWatchAction","ngBind","ngBindWatchAction","ngBindTemplate","ngBindHtml","getStringValue","ngBindHtmlWatchAction","getTrustedHtml","$transclude","previousElements","ngIf","ngIfWatchAction","$anchorScroll","srcExp","ngInclude","onloadExp","autoScrollExp","autoscroll","previousElement","currentElement","cleanupLastIncludeContent","parseAsResourceUrl","ngIncludeWatchAction","afterAnimation","thisChangeId","newScope","$compile","ngInit","BRACE","numberExp","whenExp","whens","whensExpFns","isWhen","attributeName","ngPluralizeWatch","ngPluralizeWatchAction","ngRepeatMinErr","ngRepeat","trackByExpGetter","trackByIdExpFn","trackByIdArrayFn","trackByIdObjFn","valueIdentifier","keyIdentifier","hashFnLocals","lhs","rhs","trackByExp","lastBlockMap","ngRepeatAction","collection","previousNode","nextNode","nextBlockMap","arrayLength","collectionKeys","nextBlockOrder","trackByIdFn","trackById","$first","$last","$middle","$odd","$even","ngShow","ngShowWatchAction","ngHide","ngHideWatchAction","ngStyle","ngStyleWatchAction","newStyles","oldStyles","ngSwitchController","cases","selectedTranscludes","selectedElements","selectedScopes","ngSwitch","ngSwitchWatchAction","change","selectedTransclude","selectedScope","caseElement","anchor","ngSwitchWhen","$attrs","ngOptionsMinErr","NG_OPTIONS_REGEXP","nullModelCtrl","optionsMap","ngModelCtrl","unknownOption","databound","init","self.init","ngModelCtrl_","nullOption_","unknownOption_","addOption","self.addOption","removeOption","self.removeOption","hasOption","renderUnknownOption","self.renderUnknownOption","unknownVal","self.hasOption","setupAsSingle","selectElement","selectCtrl","ngModelCtrl.$render","emptyOption","setupAsMultiple","lastView","items","selectMultipleWatch","setupAsOptions","render","optionGroups","optionGroupNames","optionGroupName","optionGroup","existingParent","existingOptions","modelValue","valuesFn","keyName","groupIndex","selectedSet","lastElement","trackFn","trackIndex","valueName","groupByFn","modelCast","label","displayFn","nullOption","groupLength","optionGroupsCache","optGroupTemplate","existingOption","optionTemplate","optionsExp","track","optionElement","ngOptions","ngModelCtrl.$isEmpty","nullSelectCtrl","selectCtrlName","interpolateWatchAction","$$csp"] -} diff --git a/release-0.19.0/examples/update-demo/local/index.html b/release-0.19.0/examples/update-demo/local/index.html deleted file mode 100644 index 22a4859126a..00000000000 --- a/release-0.19.0/examples/update-demo/local/index.html +++ /dev/null @@ -1,36 
+0,0 @@
-ID: {{server.podName}}
-Host: {{server.host}}
-Status: {{server.status}}
-Image: {{server.dockerImage}}
-Labels:
-  {{key}}={{value}}
- - diff --git a/release-0.19.0/examples/update-demo/local/script.js b/release-0.19.0/examples/update-demo/local/script.js deleted file mode 100644 index cf0fb3dd6b6..00000000000 --- a/release-0.19.0/examples/update-demo/local/script.js +++ /dev/null @@ -1,100 +0,0 @@ -/* -Copyright 2014 Google Inc. All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -var base = "http://localhost:8001/api/v1beta3/"; - -var updateImage = function($http, server) { - $http.get(base + "proxy/namespaces/default/pods/" + server.podName + "/data.json") - .success(function(data) { - console.log(data); - server.image = data.image; - }) - .error(function(data) { - console.log(data); - server.image = ""; - }); -}; - -var updateServer = function($http, server) { - $http.get(base + "namespaces/default/pods/" + server.podName) - .success(function(data) { - console.log(data); - server.labels = data.metadata.labels; - server.host = data.spec.host.split('.')[0]; - server.status = data.status.phase; - server.dockerImage = data.status.containerStatuses[0].image; - updateImage($http, server); - }) - .error(function(data) { - console.log(data); - }); -}; - -var updateData = function($scope, $http) { - var servers = $scope.servers; - for (var i = 0; i < servers.length; ++i) { - var server = servers[i]; - updateServer($http, server); - } -}; - -var ButtonsCtrl = function ($scope, $http, $interval) { - $scope.servers = []; - update($scope, $http); - $interval(angular.bind({}, update, $scope, $http), 2000); -}; - -var getServer = function($scope, name) { - var servers = $scope.servers; - for (var i = 0; i < servers.length; ++i) { - if (servers[i].podName == name) { - return servers[i]; - } - } - return null; -}; - -var isUpdateDemoPod = function(pod) { - return pod.metadata && pod.metadata.labels && pod.metadata.labels.name == "update-demo"; -}; - -var update = function($scope, $http) { - if (!$http) { - console.log("No HTTP!"); - return; - } - $http.get(base + "namespaces/default/pods") - .success(function(data) { - console.log(data); - var newServers = []; - for (var i = 0; i < data.items.length; ++i) { - var pod = data.items[i]; - if (!isUpdateDemoPod(pod)) { - continue; - } - var server = getServer($scope, pod.metadata.name); - if (server == null) { - server = { "podName": pod.metadata.name }; - } - newServers.push(server); - } - $scope.servers = newServers; - updateData($scope, $http); - }) - .error(function(data) { - console.log("ERROR: " + data); - }) -}; diff --git a/release-0.19.0/examples/update-demo/local/style.css b/release-0.19.0/examples/update-demo/local/style.css deleted file mode 100644 index ea8941c0ac3..00000000000 --- a/release-0.19.0/examples/update-demo/local/style.css +++ /dev/null @@ -1,40 +0,0 @@ -/* -Copyright 2014 Google Inc. All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -img { - height: 100px; - width: 100px; - float: right; - background-size: 100px 100px; - background-color: black; - margin-left: 10px; - border: none; -} - -ul { - margin-top: 0; - margin-bottom: 0; -} - -.pod { - font-family: Roboto, Open Sans, arial; - border: 1px solid black; - border-radius: 5px; - padding: 10px; - margin: 10px; - display: inline-block; - background-color: #D1D1D1; -} diff --git a/release-0.19.0/examples/update-demo/nautilus-rc.yaml b/release-0.19.0/examples/update-demo/nautilus-rc.yaml deleted file mode 100644 index 5e3b4566fce..00000000000 --- a/release-0.19.0/examples/update-demo/nautilus-rc.yaml +++ /dev/null @@ -1,21 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: update-demo-nautilus -spec: - replicas: 2 - selector: - name: update-demo - version: nautilus - template: - metadata: - labels: - name: update-demo - version: nautilus - spec: - containers: - - image: gcr.io/google_containers/update-demo:nautilus - name: update-demo - ports: - - containerPort: 80 - protocol: TCP diff --git a/release-0.19.0/examples/walkthrough/README.md b/release-0.19.0/examples/walkthrough/README.md deleted file mode 100644 index 7e1982f71a8..00000000000 --- a/release-0.19.0/examples/walkthrough/README.md +++ /dev/null @@ -1,118 +0,0 @@ -# Kubernetes 101 - Walkthrough - -## Pods -The first atom of Kubernetes is a _pod_. A pod is a collection of containers that are symbiotically grouped. - -See [pods](../../docs/pods.md) for more details. - -### Intro - -Trivially, a single container might be a pod. For example, you can express a simple web server as a pod: - -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - name: www -spec: - containers: - - name: nginx - image: nginx -``` - -A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until you delete the Pod. - -See the [design document](../../DESIGN.md) for more details. - -### Volumes - -Now that's great for a static web server, but what about persistent storage? We know that the container file system only lives as long as the container does, so we need more persistent storage. To do this, you also declare a ```volume``` as part of your pod, and mount it into a container: -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - name: storage -spec: - containers: - - name: redis - image: redis - volumeMounts: - # name must match the volume name below - - name: redis-persistent-storage - # mount path within the container - mountPath: /data/redis - volumes: - - name: redis-persistent-storage - emptyDir: {} -``` - -Ok, so what did we do? 
We added a volume to our pod: -``` - volumes: - - name: redis-persistent-storage - emptyDir: {} -``` - -And we added a reference to that volume to our container: -``` - volumeMounts: - # name must match the volume name below - - name: redis-persistent-storage - # mount path within the container - mountPath: /data/redis -``` - -In Kubernetes, ```emptyDir``` Volumes live for the lifespan of the Pod, which is longer than the lifespan of any one container, so if the container fails and is restarted, our persistent storage will live on. - -If you want to mount a directory that already exists in the file system (e.g. ```/var/logs```) you can use the ```hostDir``` directive. - -See [volumes](../../docs/volumes.md) for more details. - -### Multiple Containers - -_Note: -The examples below are syntactically correct, but some of the images (e.g. kubernetes/git-monitor) don't exist yet. We're working on turning these into working examples._ - - -However, often you want to have two different containers that work together. An example of this would be a web server, and a helper job that polls a git repository for new updates: - -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - name: www -spec: - containers: - - name: nginx - image: nginx - volumeMounts: - - mountPath: /srv/www - name: www-data - readOnly: true - - name: git-monitor - image: kubernetes/git-monitor - env: - - name: GIT_REPO - value: http://github.com/some/repo.git - volumeMounts: - - mountPath: /data - name: www-data - volumes: - - name: www-data - emptyDir: {} -``` - -Note that we have also added a volume here. In this case, the volume is mounted into both containers. It is marked ```readOnly``` in the web server's case, since it doesn't need to write to the directory. - -Finally, we have also introduced an environment variable to the ```git-monitor``` container, which allows us to parameterize that container with the particular git repository that we want to track. - - -### What's next? -Continue on to [Kubernetes 201](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough/k8s201.md) or -for a complete application see the [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook/README.md) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/walkthrough/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/walkthrough/README.md?pixel)]() diff --git a/release-0.19.0/examples/walkthrough/k8s201.md b/release-0.19.0/examples/walkthrough/k8s201.md deleted file mode 100644 index f08f868e097..00000000000 --- a/release-0.19.0/examples/walkthrough/k8s201.md +++ /dev/null @@ -1,157 +0,0 @@ -# Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking - -### Overview -When we had just left off in the [previous episode](README.md) we had learned about pods, multiple containers and volumes. -We'll now cover some slightly more advanced topics in Kubernetes, related to application productionization, deployment and -scaling. - -### Labels -Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. 
Label selectors can be passed along with a RESTful ```list``` request to the apiserver to retrieve a list of objects which match that label selector. For example: - -```sh -kubectl get pods -l name=nginx -``` - -Lists all pods who name label matches 'nginx'. Labels are discussed in detail [elsewhere](http://docs.k8s.io/labels.md), but they are a core concept for two additional building blocks for Kubernetes, Replication Controllers and Services - -### Replication Controllers - -OK, now you have an awesome, multi-container, labelled pod and you want to use it to build an application, you might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogenous? - -Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods. The design of replication controllers is discussed in detail [elsewhere](http://docs.k8s.io/replication-controller.md). - -An example replication controller that instantiates two pods running nginx looks like: -```yaml -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: nginx-controller -spec: - replicas: 2 - # selector identifies the set of Pods that this - # replication controller is responsible for managing - selector: - name: nginx - # podTemplate defines the 'cookie cutter' used for creating - # new pods when necessary - template: - metadata: - labels: - # Important: these labels need to match the selector above - # The api server enforces this constraint. - name: nginx - spec: - containers: - - name: nginx - image: nginx - ports: - - containerPort: 80 -``` - -### Services -Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the Service object achieves these goals. A Service basically combines an IP address and a label selector together to form a simple, static rallying point for connecting to a micro-service in your application. - -For example, here is a service that balances across the pods created in the previous nginx replication controller example: -```yaml -apiVersion: v1beta3 -kind: Service -metadata: - name: nginx-example -spec: - ports: - - port: 8000 # the port that this service should serve on - # the container on each pod to connect to, can be a name - # (e.g. 'www') or a number (e.g. 80) - targetPort: 80 - protocol: TCP - # just like the selector in the replication controller, - # but this time it identifies the set of pods to load balance - # traffic to. - selector: - name: nginx -``` - -When created, each service is assigned a unique IP address. 
This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service. Services are described in detail [elsewhere](http://docs.k8s.io/services.md). - -### Health Checking -When I write code it never crashes, right? Sadly the [kubernetes issues list](https://github.com/GoogleCloudPlatform/kubernetes/issues) indicates otherwise... - -Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking -and repair of your application. That way, a system, outside of your application itself, is responsible for monitoring the -application and taking action to fix it. It's important that the system be outside of the application, since of course, if -your application fails, and the health checking agent is part of your application, it may fail as well, and you'll never know. -In Kubernetes, the health check monitor is the Kubelet agent. - -#### Low level process health-checking - -The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon -if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples -you have run so far, this health checking was actually already enabled. It's on for every single container that runs in -Kubernetes. - -#### Application health-checking - -However, in many cases, this low-level health checking is insufficient. Consider for example, the following code: - -```go -lockOne := sync.Mutex{} -lockTwo := sync.Mutex{} - -go func() { - lockOne.Lock(); - lockTwo.Lock(); - ... -}() - -lockTwo.Lock(); -lockOne.Lock(); -``` - -This is a classic example of a problem in computer science known as "Deadlock". From Docker's perspective your application is -still operating, the process is still running, but from your application's perspective, your code is locked up, and will never respond correctly. - -To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the -Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide. - -Currently, there are three types of application health checks that you can choose from: - - * HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise. - * Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success. - * TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure. - -In all cases, if the Kubelet discovers a failure, the container is restarted. - -The container health checks are configured in the "LivenessProbe" section of your container config. There you can also specify an "initialDelaySeconds" that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization. 
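-
-Once a pod with a liveness probe is running, a quick way to confirm that the Kubelet is acting on probe failures is to look at the pod's events and restart behavior. This is only a sketch; the pod name matches the HTTP health check example shown below:
-```sh
-# Probe failures surface as container restarts and as events recorded for the pod.
-kubectl describe pods pod-with-healthcheck
-```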
- -Here is an example config for a pod with an HTTP health check: -```yaml -apiVersion: v1beta3 -kind: Pod -metadata: - name: pod-with-healthcheck -spec: - containers: - - name: nginx - image: nginx - # defines the health checking - livenessProbe: - # an http probe - httpGet: - path: /_status/healthz - port: 80 - # length of time to wait for a pod to initialize - # after pod startup, before applying health checking - initialDelaySeconds: 30 - timeoutSeconds: 1 - ports: - - containerPort: 80 -``` - -### What's next? -For a complete application see the [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/walkthrough/k8s201.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.19.0/examples/walkthrough/k8s201.md?pixel)]() diff --git a/release-0.19.0/examples/walkthrough/pod-with-http-healthcheck.yaml b/release-0.19.0/examples/walkthrough/pod-with-http-healthcheck.yaml deleted file mode 100644 index af1ca32a1ca..00000000000 --- a/release-0.19.0/examples/walkthrough/pod-with-http-healthcheck.yaml +++ /dev/null @@ -1,20 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: pod-with-healthcheck -spec: - containers: - - name: nginx - image: nginx - # defines the health checking - livenessProbe: - # an http probe - httpGet: - path: /_status/healthz - port: 80 - # length of time to wait for a pod to initialize - # after pod startup, before applying health checking - initialDelaySeconds: 30 - timeoutSeconds: 1 - ports: - - containerPort: 80 diff --git a/release-0.19.0/examples/walkthrough/pod1.yaml b/release-0.19.0/examples/walkthrough/pod1.yaml deleted file mode 100644 index 7eefc9ca8f5..00000000000 --- a/release-0.19.0/examples/walkthrough/pod1.yaml +++ /dev/null @@ -1,8 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: www -spec: - containers: - - name: nginx - image: nginx diff --git a/release-0.19.0/examples/walkthrough/pod2.yaml b/release-0.19.0/examples/walkthrough/pod2.yaml deleted file mode 100644 index ed0cd1fe916..00000000000 --- a/release-0.19.0/examples/walkthrough/pod2.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1beta3 -kind: Pod -metadata: - name: storage -spec: - containers: - - name: redis - image: redis - volumeMounts: - # name must match the volume name below - - name: redis-persistent-storage - # mount path within the container - mountPath: /data/redis - volumes: - - name: redis-persistent-storage - emptyDir: {} diff --git a/release-0.19.0/examples/walkthrough/podtemplate.json b/release-0.19.0/examples/walkthrough/podtemplate.json deleted file mode 100644 index 5732a113584..00000000000 --- a/release-0.19.0/examples/walkthrough/podtemplate.json +++ /dev/null @@ -1,22 +0,0 @@ - { - "apiVersion": "v1beta3", - "kind": "PodTemplate", - "metadata": { - "name": "nginx" - }, - "template": { - "metadata": { - "labels": { - "name": "nginx" - }, - "generateName": "nginx-" - }, - "spec": { - "containers": [{ - "name": "nginx", - "image": "dockerfile/nginx", - "ports": [{"containerPort": 80}] - }] - } - } - } diff --git a/release-0.19.0/examples/walkthrough/replication-controller.yaml b/release-0.19.0/examples/walkthrough/replication-controller.yaml deleted file mode 100644 index 826b945ca05..00000000000 --- a/release-0.19.0/examples/walkthrough/replication-controller.yaml +++ /dev/null @@ -1,24 +0,0 @@ -apiVersion: v1beta3 -kind: ReplicationController -metadata: - name: nginx-controller -spec: 
- replicas: 2 - # selector identifies the set of Pods that this - # replicaController is responsible for managing - selector: - name: nginx - # podTemplate defines the 'cookie cutter' used for creating - # new pods when necessary - template: - metadata: - labels: - # Important: these labels need to match the selector above - # The api server enforces this constraint. - name: nginx - spec: - containers: - - name: nginx - image: nginx - ports: - - containerPort: 80 diff --git a/release-0.19.0/examples/walkthrough/service.yaml b/release-0.19.0/examples/walkthrough/service.yaml deleted file mode 100644 index 58a459e5116..00000000000 --- a/release-0.19.0/examples/walkthrough/service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1beta3 -kind: Service -metadata: - name: nginx-example -spec: - ports: - - port: 8000 # the port that this service should serve on - # the container on each pod to connect to, can be a name - # (e.g. 'www') or a number (e.g. 80) - targetPort: 80 - protocol: TCP - # just like the selector in the replication controller, - # but this time it identifies the set of pods to load balance - # traffic to. - selector: - name: nginx diff --git a/release-0.20.0/docs/.files_generated b/release-0.20.0/docs/.files_generated deleted file mode 100644 index ea5ef406c64..00000000000 --- a/release-0.20.0/docs/.files_generated +++ /dev/null @@ -1,28 +0,0 @@ -kubectl.md -kubectl_api-versions.md -kubectl_cluster-info.md -kubectl_config.md -kubectl_config_set-cluster.md -kubectl_config_set-context.md -kubectl_config_set-credentials.md -kubectl_config_set.md -kubectl_config_unset.md -kubectl_config_use-context.md -kubectl_config_view.md -kubectl_create.md -kubectl_delete.md -kubectl_describe.md -kubectl_exec.md -kubectl_expose.md -kubectl_get.md -kubectl_label.md -kubectl_logs.md -kubectl_namespace.md -kubectl_port-forward.md -kubectl_proxy.md -kubectl_rolling-update.md -kubectl_run.md -kubectl_scale.md -kubectl_stop.md -kubectl_update.md -kubectl_version.md diff --git a/release-0.20.0/docs/README.md b/release-0.20.0/docs/README.md deleted file mode 100644 index 37d69f00789..00000000000 --- a/release-0.20.0/docs/README.md +++ /dev/null @@ -1,30 +0,0 @@ -# Kubernetes Documentation - -**Note** -This documentation is current for 0.20.0. - -Documentation for previous releases is available in their respective branches: - * [v0.19.0](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/release-0.19.0/docs) - * [v0.18.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.18/docs) - * [v0.17.1](https://github.com/GoogleCloudPlatform/kubernetes/tree/release-0.17/docs) - -* The [User's guide](user-guide.md) is for anyone who wants to run programs and services on an existing Kubernetes cluster. - -* The [Cluster Admin's guide](cluster-admin-guide.md) is for anyone setting up a Kubernetes cluster or administering it. - -* The [Developer guide](developer-guide.md) is for anyone wanting to write programs that access the kubernetes API, - write plugins or extensions, or modify the core code of kubernetes. - -* The [Kubectl Command Line Interface](kubectl.md) is a detailed reference on the `kubectl` CLI. - -* The [API object documentation](http://kubernetes.io/third_party/swagger-ui/) is a detailed description of all fields found in core API objects. - -* An overview of the [Design of Kubernetes](design) - -* There are example files and walkthroughs in the [examples](../examples) folder. 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/README.md?pixel)]() diff --git a/release-0.20.0/docs/accessing-the-cluster.md b/release-0.20.0/docs/accessing-the-cluster.md deleted file mode 100644 index 503ba6b5a09..00000000000 --- a/release-0.20.0/docs/accessing-the-cluster.md +++ /dev/null @@ -1,251 +0,0 @@ -# User Guide to Accessing the Cluster - * [Accessing the cluster API](#api) - * [Accessing services running on the cluster](#otherservices) - * [So many proxies](#somanyproxies) - -## Accessing the cluster API -### Accessing for the first time with kubectl -When accessing the Kubernetes API for the first time, we suggest using the -kubernetes CLI, `kubectl`. - -To access a cluster, you need to know the location of the cluster and have credentials -to access it. Typically, this is automatically set-up when you work through -though a [Getting started guide](../docs/getting-started-guide/README.md), -or someone else setup the cluster and provided you with credentials and a location. - -Check the location and credentials that kubectl knows about with this command: -``` -kubectl config view -``` - -Many of the [examples](../examples/) provide an introduction to using -kubectl and complete documentation is found in the [kubectl manual](../docs/kubectl.md). - -### Directly accessing the REST API -Kubectl handles locating and authenticating to the apiserver. -If you want to directly access the REST API with an http client like -curl or wget, or a browser, there are several ways to locate and authenticate: - - Run kubectl in proxy mode. - - Recommended approach. - - Uses stored apiserver location. - - Verifies identity of apiserver using self-signed cert. No MITM possible. - - Authenticates to apiserver. - - In future, may do intelligent client-side load-balancing and failover. - - Provide the location and credentials directly to the http client. - - Alternate approach. - - Works with some types of client code that are confused by using a proxy. - - Need to import a root cert into your browser to protect against MITM. - -#### Using kubectl proxy - -The following command runs kubectl in a mode where it acts as a reverse proxy. It handles -locating the apiserver and authenticating. -Run it like this: -``` -kubectl proxy --port=8080 & -``` -See [kubectl proxy](../docs/kubectl_proxy.md) for more details. - -Then you can explore the API with curl, wget, or a browser, like so: -``` -$ curl http://localhost:8080/api/ -{ - "versions": [ - "v1" - ] -} -``` -#### Without kubectl proxy -It is also possible to avoid using kubectl proxy by passing an authentication token -directly to the apiserver, like this: -``` -$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ") -$ TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ") -$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -{ - "versions": [ - "v1" - ] -} -``` - -The above example uses the `--insecure` flag. This leaves it subject to MITM -attacks. When kubectl accesses the cluster it uses a stored root certificate -and client certificates to access the server. (These are installed in the -`~/.kube` directory). Since cluster certificates are typically self-signed, it -make take special configuration to get your http client to use root -certificate. 
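-
-As a hedged alternative to `--insecure`, you can point your http client at the cluster's root certificate instead. The CA file path below is only an assumption -- where the certificate lands varies by provider and setup:
-```sh
-APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
-TOKEN=$(kubectl config view | grep token | cut -f 2 -d ":" | tr -d " ")
-# Replace the --cacert path with wherever your cluster's root certificate was
-# installed (often somewhere under ~/.kube); the filename here is illustrative.
-curl --cacert ~/.kube/ca.crt $APISERVER/api --header "Authorization: Bearer $TOKEN"
-```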
- -On some clusters, the apiserver does not require authentication; it may serve -on localhost, or be protected by a firewall. There is not a standard -for this. [Configuring Access to the API](../docs/accessing_the_api.md) -describes how a cluster admin can configure this. Such approaches may conflict -with future high-availability support. - -### Programmatic access to the API - -There are [client libraries](../docs/client-libraries.md) for accessing the API -from several languages. The Kubernetes project-supported -[Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client) -client library can use the same [kubeconfig file](../docs/kubeconfig-file.md) -as the kubectl CLI does to locate and authenticate to the apiserver. - -See documentation for other libraries for how they authenticate. - -### Accessing the API from a Pod - -When accessing the API from a pod, locating and authenticating -to the api server are somewhat different. - -The recommended way to locate the apiserver within the pod is with -the `kubernetes` DNS name, which resolves to a Service IP which in turn -will be routed to an apiserver. - -The recommended way to authenticate to the apiserver is with a -[service account](service_accounts.md) credential. By default, a pod -is associated with a service account, and a credential (token) for that -service account is placed into the filesystem tree of each container in that pod, -at `/var/run/secrets/kubernetes.io/serviceaccount/token`. - -From within a pod the recommended ways to connect to API are: - - run a kubectl proxy as one of the containers in the pod, or as a background - process within a container. This proxies the - kubernetes API to the localhost interface of the pod, so that other processes - in any container of the pod can access it. See this [example of using kubectl proxy - in a pod](../examples/kubectl-container/). - - use the Go client library, and create a client using the `client.NewInContainer()` factory. - This handles locating and authenticating to the apiserver. -In each case, the credentials of the pod are used to communicate securely with the apiserver. - - -## Accessing services running on the cluster -The previous section was about connecting the Kubernetes API server. This section is about -connecting to other services running on Kubernetes cluster. In kubernetes, the -[nodes](../docs/node.md), [pods](../docs/pods.md) and [services](services.md) all have -their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be -routable, so they will not be reachable from a machine outside the cluster, -such as your desktop machine. - -### Ways to connect -You have several options for connecting to nodes, pods and services from outside the cluster: - - Access services through public IPs. - - Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside - the cluster. See the [services](../docs/services.md) and - [kubectl expose](../docs/kubectl_expose.md) documentation. - - Depending on your cluster environment, this may just expose the service to your corporate network, - or it may expose it to the internet. Think about whether the service being exposed is secure. - Does it do its own authentication? - - Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, - place a unique label on the pod it and create a new service which selects this label. 
- - In most cases, it should not be necessary for application developer to directly access - nodes via their nodeIPs. - - Access services, nodes, or pods using the Proxy Verb. - - Does apiserver authentication and authorization prior to accessing the remote service. - Use this if the services are not secure enough to expose to the internet, or to gain - access to ports on the node IP, or for debugging. - - Proxies may cause problems for some web applications. - - Only works for HTTP/HTTPS. - - Described [here](#apiserverproxy). - - Access from a node or pod in the cluster. - - Run a pod, and then connect to a shell in it using [kubectl exec](../docs/kubectl_exec.md). - Connect to other nodes, pods, and services from that shell. - - Some clusters may allow you to ssh to a node in the cluster. From there you may be able to - access cluster services. This is a non-standard method, and will work on some clusters but - not others. Browsers and other tools may or may not be installed. Cluster DNS may not work. - -### Discovering builtin services - -Typically, there are several services which are started on a cluster by default. Get a list of these -with the `kubectl cluster-info` command: -``` -$ kubectl cluster-info - - Kubernetes master is running at https://104.197.5.247 - elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging - kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kibana-logging - kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kube-dns - grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-grafana - heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-heapster -``` -This shows the proxy-verb URL for accessing each service. -For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached -at `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/` if suitable credentials are passed, or through a kubectl proxy at, for example: -`http://localhost:8080/api/v1/proxy/namespaces/default/services/elasticsearch-logging/`. -(See [above](#api) for how to pass credentials or use kubectl proxy.) - -#### Manually constructing apiserver proxy URLs -As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. 
To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL: -`http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`* - - -##### Examples - * To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy` - * To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true` - ``` - { - "cluster_name" : "kubernetes_logging", - "status" : "yellow", - "timed_out" : false, - "number_of_nodes" : 1, - "number_of_data_nodes" : 1, - "active_primary_shards" : 5, - "active_shards" : 5, - "relocating_shards" : 0, - "initializing_shards" : 0, - "unassigned_shards" : 5 - } - ``` - -#### Using web browsers to access services running on the cluster -You may be able to put an apiserver proxy url into the address bar of a browser. However: - - Web browsers cannot usually pass tokens, so you may need to use basic (password) auth. Apiserver can be configured to accept basic auth, - but your cluster may not be configured to accept basic auth. - - Some web apps may not work, particularly those with client side javascript that construct urls in a - way that is unaware of the proxy path prefix. - -## Requesting redirects -The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead. - -##So Many Proxies -There are several different proxies you may encounter when using kubernetes: - 1. The [kubectl proxy](#kubectlproxy): - - runs on a user's desktop or in a pod - - proxies from a localhost address to the kubernetes apiserver - - client to proxy uses HTTP - - proxy to apiserver uses HTTPS - - locates apiserver - - adds authentication headers - 1. The [apiserver proxy](#apiserverproxy): - - is a bastion built into the apiserver - - connects a user outside of the cluster to cluster IPs which otherwise might not be reachable - - runs in the apiserver processes - - client to proxy uses HTTPS (or http if apiserver so configured) - - proxy to target may use HTTP or HTTPS as chosen by proxy using available information - - can be used to reach a Node, Pod, or Service - - does load balancing when used to reach a Service - 1. The [kube proxy](../docs/services.md#ips-and-vips): - - runs on each node - - proxies UDP and TCP - - does not understand HTTP - - provides load balancing - - is just used to reach services - 1. A Proxy/Load-balancer in front of apiserver(s): - - existence and implementation varies from cluster to cluster (e.g. nginx) - - sits between all clients and one or more apiservers - - acts as load balancer if there are several apiservers. - 1. Cloud Load Balancers on external services: - - are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer) - - are created automatically when the kubernetes service has type `LoadBalancer` - - use UDP/TCP only - - implementation varies by cloud provider. - - - -Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin -will typically ensure that the latter types are setup correctly. 
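-
-As a concrete, hedged illustration of the first two proxy types working together, the following assumes the builtin elasticsearch-logging service from the `kubectl cluster-info` output above exists in your cluster:
-```sh
-# Proxy type 1: kubectl proxy listens locally and authenticates to the apiserver.
-kubectl proxy --port=8080 &
-# Proxy type 2: the apiserver proxy forwards the request on to the service.
-curl http://localhost:8080/api/v1/proxy/namespaces/default/services/elasticsearch-logging/
-```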
- -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing-the-cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/accessing-the-cluster.md?pixel)]() diff --git a/release-0.20.0/docs/accessing_the_api.md b/release-0.20.0/docs/accessing_the_api.md deleted file mode 100644 index f2a2460927f..00000000000 --- a/release-0.20.0/docs/accessing_the_api.md +++ /dev/null @@ -1,81 +0,0 @@ -# Configuring APIserver ports - -This document describes what ports the kubernetes apiserver -may serve on and how to reach them. The audience is -cluster administrators who want to customize their cluster -or understand the details. - -Most questions about accessing the cluster are covered -in [Accessing the cluster](../docs/accessing-the-cluster.md). - - -## Ports and IPs Served On -The Kubernetes API is served by the Kubernetes APIServer process. Typically, -there is one of these running on a single kubernetes-master node. - -By default the Kubernetes APIserver serves HTTP on 2 ports: - 1. Localhost Port - - serves HTTP - - default is port 8080, change with `--insecure-port` flag. - - defaults IP is localhost, change with `--insecure-bind-address` flag. - - no authentication or authorization checks in HTTP - - protected by need to have host access - 2. Secure Port - - default is port 6443, change with `--secure-port` flag. - - default IP is first non-localhost network interface, change with `--bind-address` flag. - - serves HTTPS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag. - - uses token-file or client-certificate based [authentication](./authentication.md). - - uses policy-based [authorization](./authorization.md). - 3. Removed: ReadOnly Port - - For security reasons, this had to be removed. Use the service account feature instead. - -## Proxies and Firewall rules - -Additionally, in some configurations there is a proxy (nginx) running -on the same machine as the apiserver process. The proxy serves HTTPS protected -by Basic Auth on port 443, and proxies to the apiserver on localhost:8080. In -these configurations the secure port is typically set to 6443. - -A firewall rule is typically configured to allow external HTTPS access to port 443. - -The above are defaults and reflect how Kubernetes is deployed to GCE using -kube-up.sh. Other cloud providers may vary. - -## Use Cases vs IP:Ports - -There are three differently configured serving ports because there are a -variety of uses cases: - 1. Clients outside of a Kubernetes cluster, such as human running `kubectl` - on desktop machine. Currently, accesses the Localhost Port via a proxy (nginx) - running on the `kubernetes-master` machine. Proxy uses bearer token authentication. - 2. Processes running in Containers on Kubernetes that need to do read from - the apiserver. Currently, these can use a service account. - 3. Scheduler and Controller-manager processes, which need to do read-write - API operations. Currently, these have to run on the operations on the - apiserver. Currently, these have to run on the same host as the - apiserver and use the Localhost Port. In the future, these will be - switched to using service accounts to avoid the need to be co-located. - 4. Kubelets, which need to do read-write API operations and are necessarily - on different machines than the apiserver. Kubelet uses the Secure Port - to get their pods, to find the services that a pod can see, and to - write events. 
Credentials are distributed to kubelets at cluster - setup time. - -## Expected changes - - Policy will limit the actions kubelets can do via the authed port. - - Kubelets will change from token-based authentication to cert-based-auth. - - Scheduler and Controller-manager will use the Secure Port too. They - will then be able to run on different machines than the apiserver. - - A general mechanism will be provided for [giving credentials to - pods]( - https://github.com/GoogleCloudPlatform/kubernetes/issues/1907). - - Clients, like kubectl, will all support token-based auth, and the - Localhost will no longer be needed, and will not be the default. - However, the localhost port may continue to be an option for - installations that want to do their own auth proxy. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing_the_api.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/accessing_the_api.md?pixel)]() diff --git a/release-0.20.0/docs/admission_controllers.md b/release-0.20.0/docs/admission_controllers.md deleted file mode 100644 index 345178d8fac..00000000000 --- a/release-0.20.0/docs/admission_controllers.md +++ /dev/null @@ -1,112 +0,0 @@ -# Admission Controllers - -## What are they? - -An admission control plug-in is a piece of code that intercepts requests to the Kubernetes -API server prior to persistence of the object, but after the request is authenticated -and authorized. The plug-in code is in the API server process -and must be compiled into the binary in order to be used at this time. - -Each admission control plug-in is run in sequence before a request is accepted into the cluster. If -any of the plug-ins in the sequence reject the request, the entire request is rejected immediately -and an error is returned to the end-user. - -Admission control plug-ins may mutate the incoming object in some cases to apply system configured -defaults. In addition, admission control plug-ins may mutate related resources as part of request -processing to do things like increment quota usage. - -## Why do I need them? - -Many advanced features in Kubernetes require an admission control plug-in to be enabled in order -to properly support the feature. As a result, a Kubernetes API server that is not properly -configured with the right set of admission control plug-ins is an incomplete server and will not -support all the features you expect. - -## How do I turn on an admission control plug-in? - -The Kubernetes API server supports a flag, ```admission_control``` that takes a comma-delimited, -ordered list of admission control choices to invoke prior to modifying objects in the cluster. - -## What does each plug-in do? - -### AlwaysAdmit - -Use this plugin by itself to pass-through all requests. - -### AlwaysDeny - -Rejects all requests. Used for testing. - -### DenyExecOnPrivileged - -This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container. - -If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec -commands in those containers, we strongly encourage enabling this plug-in. - -### ServiceAccount - -This plug-in implements automation for [serviceAccounts]( service_accounts.md). -We strongly recommend using this plug-in if you intend to make use of Kubernetes ```ServiceAccount``` objects. 
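-
-A hedged sketch of what this plug-in enables: each container in an admitted pod carries a token for the pod's service account, which can authenticate requests to the apiserver from inside the pod (`--insecure` is used here only to keep the sketch short; see the cluster access docs for proper certificate handling):
-```sh
-# Run from inside a container of an admitted pod.
-TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
-curl --insecure --header "Authorization: Bearer $TOKEN" https://kubernetes/api
-```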
- -### SecurityContextDeny - -This plug-in will deny any pod with a [SecurityContext](security_context.md) that defines options that were not available on the ```Container```. - -### ResourceQuota - -This plug-in will observe the incoming request and ensure that it does not violate any of the constraints -enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota``` -objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints. - -See the [resourceQuota design doc]( design/admission_control_resource_quota.md). - -It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is -so that quota is not prematurely incremented only for the request to be rejected later in admission control. - -### LimitRanger - -This plug-in will observe the incoming request and ensure that it does not violate any of the constraints -enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in -your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. - -See the [limitRange design doc]( design/admission_control_limit_range.md). - -### NamespaceExists - -This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` -and reject the request if the ```Namespace``` was not previously created. We strongly recommend running -this plug-in to ensure integrity of your data. - -### NamespaceAutoProvision (deprecated) - -This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` -and create a new ```Namespace``` if one did not already exist previously. - -We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```. - -### NamespaceLifecycle - -This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new content created in it. - -A ```Namespace``` deletion kicks off a sequence of operations that remove all content (pods, services, etc.) in that -namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in. - -Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will -be merged into a single plug-in that enforces the life-cycle of a ```Namespace``` in Kubernetes. - -## Is there a recommended set of plug-ins to use? - -Yes. - -For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters): - -```shell ---admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admission_controllers.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/admission_controllers.md?pixel)]() diff --git a/release-0.20.0/docs/annotations.md b/release-0.20.0/docs/annotations.md deleted file mode 100644 index 011aa37e832..00000000000 --- a/release-0.20.0/docs/annotations.md +++ /dev/null @@ -1,31 +0,0 @@ -# Annotations - -We have [labels](labels.md) for identifying metadata. - -It is also useful to be able to attach arbitrary non-identifying metadata, for retrieval by API clients such as tools, libraries, etc. This information may be large, may be structured or unstructured, may include characters not permitted by labels, etc. 
Such information would not be used for object selection and therefore doesn't belong in labels. - -Like labels, annotations are key-value maps. -``` -"annotations": { - "key1" : "value1", - "key2" : "value2" -} -``` - -Possible information that could be recorded in annotations: - -* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging -* build/release/image information (timestamps, release ids, git branch, PR numbers, image hashes, registry address, etc.) -* pointers to logging/monitoring/analytics/audit repos -* client library/tool information (e.g. for debugging purposes -- name, version, build info) -* other user and/or tool/system provenance info, such as URLs of related objects from other ecosystem components -* lightweight rollout tool metadata (config and/or checkpoints) -* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website - -Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/annotations.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/annotations.md?pixel)]() diff --git a/release-0.20.0/docs/api-conventions.md b/release-0.20.0/docs/api-conventions.md deleted file mode 100644 index df6384fab6f..00000000000 --- a/release-0.20.0/docs/api-conventions.md +++ /dev/null @@ -1,598 +0,0 @@ -API Conventions -=============== - -Updated: 4/16/2015 - -*This document is oriented at users who want a deeper understanding of the kubernetes -API structure, and developers wanting to extend the kubernetes API. An introduction to -using resources with kubectl can be found in (working_with_resources.md).* - -The conventions of the [Kubernetes API](api.md) (and related APIs in the ecosystem) are intended to ease client development and ensure that configuration mechanisms can be implemented that work across a diverse set of use cases consistently. - -The general style of the Kubernetes API is RESTful - clients create, update, delete, or retrieve a description of an object via the standard HTTP verbs (POST, PUT, DELETE, and GET) - and those APIs preferentially accept and return JSON. Kubernetes also exposes additional endpoints for non-standard verbs and allows alternative content types. All of the JSON accepted and returned by the server has a schema, identified by the "kind" and "apiVersion" fields. Where relevant HTTP header fields exist, they should mirror the content of JSON fields, but the information should not be represented only in the HTTP header. - -The following terms are defined: - -* **Kind** the name of a particular object schema (e.g. the "Cat" and "Dog" kinds would have different attributes and properties) -* **Resource** a representation of a system entity, sent or retrieved as JSON via HTTP to the server. Resources are exposed via: - * Collections - a list of resources of the same type, which may be queryable - * Elements - an individual resource, addressable via a URL - -Each resource typically accepts and returns data of a single kind. A kind may be accepted or returned by multiple resources that reflect specific use cases. 
For instance, the kind "pod" is exposed as a "pods" resource that allows end users to create, update, and delete pods, while a separate "pod status" resource (that acts on "pod" kind) allows automated processes to update a subset of the fields in that resource. A "restart" resource might be exposed for a number of different resources to allow the same action to have different results for each object. - -Resource collections should be all lowercase and plural, whereas kinds are CamelCase and singular. - - -Types (Kinds) -------------- - -Kinds are grouped into three categories: - -1. **Objects** represent a persistent entity in the system. - - Creating an API object is a record of intent - once created, the system will work to ensure that resource exists. All API objects have common metadata. - - An object may have multiple resources that clients can use to perform specific actions that create, update, delete, or get. - - Examples: `Pods`, `ReplicationControllers`, `Services`, `Namespaces`, `Nodes` - -2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds. - - Lists have a limited set of common metadata. All lists use the "items" field to contain the array of objects they return. - - Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of the full list. Some objects may be singletons (the current user, the system defaults) and may not have lists. - - In addition, all lists that return objects with labels should support label filtering (see [labels.md](labels.md), and most lists should support filtering by fields. - - Examples: PodLists, ServiceLists, NodeLists - - TODO: Describe field filtering below or in a separate doc. - -3. **Simple** kinds are used for specific actions on objects and for non-persistent entities. - - Given their limited scope, they have the same set of limited common metadata as lists. - - The "size" action may accept a simple resource that has only a single field as input (the number of things). The "status" kind is returned when errors occur and is not persisted in the system. - - Examples: Binding, Status - -The standard REST verbs (defined below) MUST return singular JSON objects. Some API endpoints may deviate from the strict REST pattern and return resources that are not singular JSON objects, such as streams of JSON objects or unstructured text log data. - -The term "kind" is reserved for these "top-level" API types. The term "type" should be used for distinguishing sub-categories within objects or subobjects. - -### Resources - -All JSON objects returned by an API MUST have the following fields: - -* kind: a string that identifies the schema this object should have -* apiVersion: a string that identifies the version of the schema the object should have - -These fields are required for proper decoding of the object. They may be populated by the server by default from the specified URL path, but the client likely needs to know the values in order to construct the URL path. - -### Objects - -#### Metadata - -Every object kind MUST have the following metadata in a nested object field called "metadata": - -* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [namespaces.md](namespaces.md) for more. -* name: a string that uniquely identifies this object within the current namespace (see [identifiers.md](identifiers.md)). 
This value is used in the path when retrieving an individual object. -* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [identifiers.md](identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated - -Every object SHOULD have the following metadata in a nested object field called "metadata": - -* resourceVersion: a string that identifies the internal version of this object that can be used by clients to determine when objects have changed. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. (see [concurrency control](#concurrency-control-and-consistency), below, for more details) -* creationTimestamp: a string representing an RFC 3339 date of the date and time an object was created -* deletionTimestamp: a string representing an RFC 3339 date of the date and time after which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. -* labels: a map of string keys and values that can be used to organize and categorize objects (see [labels.md](labels.md)) -* annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object (see [annotations.md](annotations.md)) - -Labels are intended for organizational purposes by end users (select the pods that match this label query). Annotations enable third-party automation and tooling to decorate objects with additional metadata for their own use. - -#### Spec and Status - -By convention, the Kubernetes API makes a distinction between the specification of the desired state of an object (a nested object field called "spec") and the status of the object at the current time (a nested object field called "status"). The specification is a complete description of the desired state, including configuration settings provided by the user, [default values](#defaulting) expanded by the system, and properties initialized or otherwise changed after creation by other ecosystem components (e.g., schedulers, auto-scalers), and is persisted in stable storage with the API object. If the specification is deleted, the object will be purged from the system. The status summarizes the current state of the object in the system, and is usually persisted with the object by an automated processes but may be generated on the fly. At some cost and perhaps some temporary degradation in behavior, the status could be reconstructed by observation if it were lost. - -When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3. 
In other words, the system's behavior is *level-based* rather than *edge-based*. This enables robust behavior in the presence of missed intermediate state changes. - -The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. In order to facilitate level-based operation and expression of declarative configuration, fields in the specification should have declarative rather than imperative names and semantics -- they represent the desired state, not actions intended to yield the desired state. - -The PUT and POST verbs on objects will ignore the "status" values. A `/status` subresource is provided to enable system components to update statuses of resources they manage. - -Otherwise, PUT expects the whole object to be specified. Therefore, if a field is omitted it is assumed that the client wants to clear that field's value. The PUT verb does not accept partial updates. Modification of just part of an object may be achieved by GETting the resource, modifying part of the spec, labels, or annotations, and then PUTting it back. See [concurrency control](#concurrency-control-and-consistency), below, regarding read-modify-write consistency when using this pattern. Some objects may expose alternative resource representations that allow mutation of the status, or performing custom actions on the object. - -All objects that represent a physical resource whose state may vary from the user's desired intent SHOULD have a "spec" and a "status". Objects whose state cannot vary from the user's desired intent MAY have only "spec", and MAY rename "spec" to a more appropriate name. - -Objects that contain both spec and status should not contain additional top-level fields other than the standard metadata fields. - -##### Typical status properties - -* **phase**: The phase is a simple, high-level summary of the phase of the lifecycle of an object. The phase should progress monotonically. Typical phase values are `Pending` (not yet fully physically realized), `Running` or `Active` (fully realized and active, but not necessarily operating correctly), and `Terminated` (no longer active), but may vary slightly for different types of objects. New phase values should not be added to existing objects in the future. Like other status fields, it must be possible to ascertain the lifecycle phase by observation. Additional details regarding the current phase may be contained in other fields. -* **conditions**: Conditions represent orthogonal observations of an object's current state. Objects may report multiple conditions, and new types of conditions may be added in the future. Condition status values may be `True`, `False`, or `Unknown`. Unlike the phase, conditions are not expected to be monotonic -- their values may change back and forth. A typical condition type is `Ready`, which indicates the object was believed to be fully operational at the time it was last probed. Conditions may carry additional information, such as the last probe time or last transition time. - -TODO(@vishh): Reason and Message. - -Phases and conditions are observations and not, themselves, state machines, nor do we define comprehensive state machines for objects with behaviors associated with state transitions. The system is level-based and should assume an Open World. Additionally, new observations and details about these observations may be added over time. 
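-
-As a small, hedged illustration, the phase and conditions described above can simply be read back from an object's "status" stanza; the pod name here is hypothetical:
-```sh
-# Print the full object, including status.phase and status.conditions.
-kubectl get pods nginx -o yaml
-```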
-
-In order to preserve extensibility, in the future, we intend to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from observations.
-
-Note that historical status information (e.g., last transition time, failure counts) is provided only on a best-effort basis and is not guaranteed to be retained.
-
-Status information that may be large (especially unbounded in size, such as lists of references to other objects -- see below) and/or rapidly changing, such as [resource usage](resources.md#usage-data), should be put into separate objects, possibly with a reference from the original object. This helps to ensure that GETs and watch remain reasonably efficient for the majority of clients, which may not need that data.
-
-#### References to related objects
-
-References to loosely coupled sets of objects, such as [pods](pods.md) overseen by a [replication controller](replication-controller.md), are usually best referred to using a [label selector](labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status.
-
-References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type. Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information.
-
-References in the status of the referee to the referrer may be permitted, when the references are one-to-one and do not need to be frequently updated, particularly in an edge-based manner.
-
-#### Lists of named subobjects preferred over maps
-
-Discussed in [#2004](https://github.com/GoogleCloudPlatform/kubernetes/issues/2004) and elsewhere. There are no maps of subobjects in any API objects. Instead, the convention is to use a list of subobjects containing name fields.
-
-For example:
-```yaml
-ports:
-  - name: www
-    containerPort: 80
-```
-vs.
-```yaml
-ports:
-  www:
-    containerPort: 80
-```
-
-This rule maintains the invariant that all JSON/YAML keys are fields in API objects. The only exceptions are pure maps in the API (currently, labels, selectors, and annotations), as opposed to sets of subobjects.
-
-#### Constants
-
-Some fields will have a list of allowed values (enumerations). These values will be strings, and they will be in CamelCase, with an initial uppercase letter. Examples: "ClusterFirst", "Pending", "ClientIP".
-
-### Lists and Simple kinds
-
-Every list or simple kind SHOULD have the following metadata in a nested object field called "metadata":
-
-* resourceVersion: a string that identifies the common version of the objects returned in a list. This value MUST be treated as opaque by clients and passed unmodified back to the server. A resource version is only valid within a single namespace on a single kind of resource.
-
-Every simple kind returned by the server, and any simple kind sent to the server that must support idempotency or optimistic concurrency, should return this value. Since simple resources are often used as input to alternate actions that modify objects, the resource version of the simple resource should correspond to the resource version of the object.
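-
-As an abbreviated, illustrative sketch, a list response carries this common version in its list-level metadata (the item content and the version value below are elided/illustrative):
-
-```
-{
-  "kind": "PodList",
-  "apiVersion": "v1",
-  "metadata": {
-    "resourceVersion": "10245"
-  },
-  "items": [...]
-}
-```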
- - -Differing Representations -------------------------- - -An API may represent a single entity in different ways for different clients, or transform an object after certain transitions in the system occur. In these cases, one request object may have two representations available as different resources, or different kinds. - -An example is a Service, which represents the intent of the user to group a set of pods with common behavior on common ports. When Kubernetes detects a pod matches the service selector, the IP address and port of the pod are added to an Endpoints resource for that Service. The Endpoints resource exists only if the Service exists, but exposes only the IPs and ports of the selected pods. The full service is represented by two distinct resources - under the original Service resource the user created, as well as in the Endpoints resource. - -As another example, a "pod status" resource may accept a PUT with the "pod" kind, with different rules about what fields may be changed. - -Future versions of Kubernetes may allow alternative encodings of objects beyond JSON. - - -Verbs on Resources ------------------- - -API resources should use the traditional REST pattern: - -* GET /<resourceNamePlural> - Retrieve a list of type <resourceName>, e.g. GET /pods returns a list of Pods. -* POST /<resourceNamePlural> - Create a new resource from the JSON object provided by the client. -* GET /<resourceNamePlural>/<name> - Retrieves a single resource with the given name, e.g. GET /pods/first returns a Pod named 'first'. Should be constant time, and the resource should be bounded in size. -* DELETE /<resourceNamePlural>/<name> - Delete the single resource with the given name. DeleteOptions may specify gracePeriodSeconds, the optional duration in seconds before the object should be deleted. Individual kinds may declare fields which provide a default grace period, and different kinds may have differing kind-wide default grace periods. A user provided grace period overrides a default grace period, including the zero grace period ("now"). -* PUT /<resourceNamePlural>/<name> - Update or create the resource with the given name with the JSON object provided by the client. -* PATCH /<resourceNamePlural>/<name> - Selectively modify the specified fields of the resource. See more information [below](#patch). - -Kubernetes by convention exposes additional verbs as new root endpoints with singular names. Examples: - -* GET /watch/<resourceNamePlural> - Receive a stream of JSON objects corresponding to changes made to any resource of the given kind over time. -* GET /watch/<resourceNamePlural>/<name> - Receive a stream of JSON objects corresponding to changes made to the named resource of the given kind over time. - -These are verbs which change the fundamental type of data returned (watch returns a stream of JSON instead of a single JSON object). Support of additional verbs is not required for all object types. - -Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [accessing-the-cluster.md](accessing-the-cluster.md). - -When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. 
An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "scale" are best represented as REST sub-resources that are POSTed to. The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body. - -TODO: more documentation of Watch - -### PATCH operations - -The API supports three different PATCH operations, determined by their corresponding Content-Type header: - -* JSON Patch, `Content-Type: application/json-patch+json` - * As defined in [RFC6902](https://tools.ietf.org/html/rfc6902), a JSON Patch is a sequence of operations that are executed on the resource, e.g. `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`. For more details on how to use JSON Patch, see the RFC. -* Merge Patch, `Content-Type: application/merge-json-patch+json` - * As defined in [RFC7386](https://tools.ietf.org/html/rfc7386), a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC. -* Strategic Merge Patch, `Content-Type: application/strategic-merge-patch+json` - * Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see below. - -#### Strategic Merge Patch - -In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. Let's say we start with the following Pod: - -```yaml -spec: - containers: - - name: nginx - image: nginx-1.0 -``` - -...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod. - -```yaml -PATCH /api/v1/namespaces/default/pods/pod-name -spec: - containers: - - name: log-tailer - image: log-tailer-1.0 -``` - -If we were to use standard Merge Patch, the entire container list would be replaced with the single log-tailer container. However, our intent is for the container lists to merge together based on the `name` field. - -To solve this problem, Strategic Merge Patch uses metadata attached to the API objects to determine what lists should be merged and which ones should not. Currently the metadata is available as struct tags on the API objects themselves, but will become available to clients as Swagger annotations in the future. In the above example, the `patchStrategy` metadata for the `containers` field would be `merge` and the `patchMergeKey` would be `name`. - -Note: If the patch results in merging two lists of scalars, the scalars are first deduplicated and then merged. - -Strategic Merge Patch also supports special operations as listed below. 
- -### List Operations - -To override the container list to be strictly replaced, regardless of the default: - -```yaml -containers: - - name: nginx - image: nginx-1.0 - - $patch: replace # any further $patch operations nested in this list will be ignored -``` - -To delete an element of a list that should be merged: - -```yaml -containers: - - name: nginx - image: nginx-1.0 - - $patch: delete - name: log-tailer # merge key and value goes here -``` - -### Map Operations - -To indicate that a map should not be merged and instead should be taken literally: - -```yaml -$patch: replace # recursive and applies to all fields of the map it's in -containers: -- name: nginx - image: nginx-1.0 -``` - -To delete a field of a map: - -```yaml -name: nginx -image: nginx-1.0 -labels: - live: null # set the value of the map key to null -``` - - -Idempotency ------------ - -All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [identifiers.md](identifiers.md) for details. - -Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it. A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed). The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified, and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout` indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). - -Defaulting ----------- - -Default resource values are API version-specific, and they are applied during -the conversion from API-versioned declarative configuration to internal objects -representing the desired state (`Spec`) of the resource. Subsequent GETs of the -resource will include the default values explicitly. - -Incorporating the default values into the `Spec` ensures that `Spec` depicts the -full desired state so that it is easier for the system to determine how to -achieve the state, and for the user to know what to anticipate. - -API version-specific default values are set by the API server. - -Late Initialization -------------------- -Late initialization is when resource fields are set by a system controller -after an object is created/updated. - -For example, the scheduler sets the `pod.spec.nodeName` field after the pod is created. - -Late-initializers should only make the following types of modifications: - - Setting previously unset fields - - Adding keys to maps - - Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in - the type definition). - -These conventions: - 1. allow a user (with sufficient privilege) to override any system-default behaviors by setting - the fields that would otherwise have been defaulted. - 1. enables updates from users to be merged with changes made during late initialization, using - strategic merge patch, as opposed to clobbering the change. - 1. 
allow the component which does the late-initialization to use strategic merge patch, which - facilitates composition and concurrency of such components. - -Although the apiserver Admission Control stage acts prior to object creation, -Admission Control plugins should follow the Late Initialization conventions -too, to allow their implementation to be later moved to a 'controller', or to client libraries. - -Concurrency Control and Consistency ------------------------------------ - -Kubernetes leverages the concept of *resource versions* to achieve optimistic concurrency. All Kubernetes resources have a "resourceVersion" field as part of their metadata. This resourceVersion is a string that identifies the internal version of an object that can be used by clients to determine when objects have changed. When a record is about to be updated, it's version is checked against a pre-saved value, and if it doesn't match, the update fails with a StatusConflict (HTTP status code 409). - -The resourceVersion is changed by the server every time an object is modified. If resourceVersion is included with the PUT operation the system will verify that there have not been other successful mutations to the resource during a read/modify/write cycle, by verifying that the current value of resourceVersion matches the specified value. - -The resourceVersion is currently backed by [etcd's modifiedIndex](https://coreos.com/docs/distributed-configuration/etcd-api/). However, it's important to note that the application should *not* rely on the implementation details of the versioning system maintained by Kubernetes. We may change the implementation of resourceVersion in the future, such as to change it to a timestamp or per-object counter. - -The only way for a client to know the expected value of resourceVersion is to have received it from the server in response to a prior operation, typically a GET. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. Currently, the value of resourceVersion is set to match etcd's sequencer. You could think of it as a logical clock the API server can use to order requests. However, we expect the implementation of resourceVersion to change in the future, such as in the case we shard the state by kind and/or namespace, or port to another storage system. - -In the case of a conflict, the correct client action at this point is to GET the resource again, apply the changes afresh, and try submitting again. This mechanism can be used to prevent races like the following: - -``` -Client #1 Client #2 -GET Foo GET Foo -Set Foo.Bar = "one" Set Foo.Baz = "two" -PUT Foo PUT Foo -``` - -When these sequences occur in parallel, either the change to Foo.Bar or the change to Foo.Baz can be lost. - -On the other hand, when specifying the resourceVersion, one of the PUTs will fail, since whichever write succeeds changes the resourceVersion for Foo. - -resourceVersion may be used as a precondition for other operations (e.g., GET, DELETE) in the future, such as for read-after-write consistency in the presence of caching. - -"Watch" operations specify resourceVersion using a query parameter. It is used to specify the point at which to begin watching the specified resources. 
This may be used to ensure that no mutations are missed between a GET of a resource (or list of resources) and a subsequent Watch, even if the current version of the resource is more recent. This is currently the main reason that list operations (GET on a collection) return resourceVersion. - - -Serialization Format --------------------- - -APIs may return alternative representations of any resource in response to an Accept header or under alternative endpoints, but the default serialization for input and output of API responses MUST be JSON. - -All dates should be serialized as RFC3339 strings. - - -Units ------ - -Units must either be explicit in the field name (e.g., `timeoutSeconds`), or must be specified as part of the value (e.g., `resource.Quantity`). Which approach is preferred is TBD. - - -Selecting Fields ----------------- - -Some APIs may need to identify which field in a JSON object is invalid, or to reference a value to extract from a separate resource. The current recommendation is to use standard JavaScript syntax for accessing that field, assuming the JSON object was transformed into a JavaScript object. - -Examples: - -* Find the field "current" in the object "state" in the second item in the array "fields": `fields[0].state.current` - -TODO: Plugins, extensions, nested kinds, headers - - -HTTP Status codes ------------------ - -The server will respond with HTTP status codes that match the HTTP spec. See the section below for a breakdown of the types of status codes the server will send. - -The following HTTP status codes may be returned by the API. - -#### Success codes - -* `200 StatusOK` - * Indicates that the request completed successfully. -* `201 StatusCreated` - * Indicates that the request to create kind completed successfully. -* `204 StatusNoContent` - * Indicates that the request completed successfully, and the response contains no body. - * Returned in response to HTTP OPTIONS requests. - -#### Error codes -* `307 StatusTemporaryRedirect` - * Indicates that the address for the requested resource has changed. - * Suggested client recovery behavior - * Follow the redirect. -* `400 StatusBadRequest` - * Indicates the requested is invalid. - * Suggested client recovery behavior: - * Do not retry. Fix the request. -* `401 StatusUnauthorized` - * Indicates that the server can be reached and understood the request, but refuses to take any further action, because the client must provide authorization. If the client has provided authorization, the server is indicating the provided authorization is unsuitable or invalid. - * Suggested client recovery behavior - * If the user has not supplied authorization information, prompt them for the appropriate credentials - * If the user has supplied authorization information, inform them their credentials were rejected and optionally prompt them again. -* `403 StatusForbidden` - * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client. - * Suggested client recovery behavior - * Do not retry. Fix the request. -* `404 StatusNotFound` - * Indicates that the requested resource does not exist. - * Suggested client recovery behavior - * Do not retry. Fix the request. -* `405 StatusMethodNotAllowed` - * Indicates that that the action the client attempted to perform on the resource was not supported by the code. - * Suggested client recovery behavior - * Do not retry. Fix the request. 
-* `409 StatusConflict`
-  * Indicates that either the resource the client attempted to create already exists or the requested update operation cannot be completed due to a conflict.
-  * Suggested client recovery behavior
-    * If creating a new resource:
-      * Either change the identifier and try again, or GET and compare the fields in the pre-existing object and issue a PUT/update to modify the existing object.
-    * If updating an existing resource:
-      * See `Conflict` from the `status` response section below on how to retrieve more information about the nature of the conflict.
-      * GET and compare the fields in the pre-existing object, merge changes (if still valid according to preconditions), and retry with the updated request (including `ResourceVersion`).
-* `422 StatusUnprocessableEntity`
-  * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request.
-  * Suggested client recovery behavior
-    * Do not retry. Fix the request.
-* `429 StatusTooManyRequests`
-  * Indicates that either the client rate limit has been exceeded or the server has received more requests than it can process.
-  * Suggested client recovery behavior:
-    * Read the ```Retry-After``` HTTP header from the response, and wait at least that long before retrying.
-* `500 StatusInternalServerError`
-  * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this may be due to temporary server load or a transient communication issue with another server).
-  * Suggested client recovery behavior:
-    * Retry with exponential backoff.
-* `503 StatusServiceUnavailable`
-  * Indicates that a required service is unavailable.
-  * Suggested client recovery behavior:
-    * Retry with exponential backoff.
-* `504 StatusServerTimeout`
-  * Indicates that the request could not be completed within the given time. Clients can get this response ONLY when they specified a timeout param in the request.
-  * Suggested client recovery behavior:
-    * Increase the value of the timeout param and retry with exponential backoff.
-
-Response Status Kind
---------------------
-
-Kubernetes will always return the ```Status``` kind from any API endpoint when an error occurs.
-Clients SHOULD handle these types of objects when appropriate.
-
-A ```Status``` kind will be returned by the API in two cases:
-  * When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code).
-  * When an HTTP ```DELETE``` call is successful.
-
-The status object is encoded as JSON and provided as the body of the response. The status object contains fields for human and machine consumers of the API to get more detailed information about the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.
- -**Example:** -``` -$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana - -> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1 -> User-Agent: curl/7.26.0 -> Host: 10.240.122.184 -> Accept: */* -> Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc -> - -< HTTP/1.1 404 Not Found -< Content-Type: application/json -< Date: Wed, 20 May 2015 18:10:42 GMT -< Content-Length: 232 -< -{ - "kind": "Status", - "apiVersion": "v1", - "metadata": {}, - "status": "Failure", - "message": "pods \"grafana\" not found", - "reason": "NotFound", - "details": { - "name": "grafana", - "kind": "pods" - }, - "code": 404 -} -``` - -```status``` field contains one of two possible values: -* `Success` -* `Failure` - -`message` may contain human-readable description of the error - -```reason``` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it. - -```details``` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. - -Possible values for the ```reason``` and ```details``` fields: -* `BadRequest` - * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object. - * This is different than `status reason` `Invalid` above which indicates that the API call could possibly succeed, but the data was invalid. - * API calls that return BadRequest can never succeed. - * Http status code: `400 StatusBadRequest` -* `Unauthorized` - * Indicates that the server can be reached and understood the request, but refuses to take any further action without the client providing appropriate authorization. If the client has provided authorization, this error indicates the provided credentials are insufficient or invalid. - * Details (optional): - * `kind string` - * The kind attribute of the unauthorized resource (on some operations may differ from the requested resource). - * `name string` - * The identifier of the unauthorized resource. - * HTTP status code: `401 StatusUnauthorized` -* `Forbidden` - * Indicates that the server can be reached and understood the request, but refuses to take any further action, because it is configured to deny access for some reason to the requested resource by the client. - * Details (optional): - * `kind string` - * The kind attribute of the forbidden resource (on some operations may differ from the requested resource). - * `name string` - * The identifier of the forbidden resource. - * HTTP status code: `403 StatusForbidden` -* `NotFound` - * Indicates that one or more resources required for this operation could not be found. - * Details (optional): - * `kind string` - * The kind attribute of the missing resource (on some operations may differ from the requested resource). - * `name string` - * The identifier of the missing resource. - * HTTP status code: `404 StatusNotFound` -* `AlreadyExists` - * Indicates that the resource you are creating already exists. - * Details (optional): - * `kind string` - * The kind attribute of the conflicting resource. - * `name string` - * The identifier of the conflicting resource. 
-  * HTTP status code: `409 StatusConflict`
-* `Conflict`
-  * Indicates that the requested update operation cannot be completed due to a conflict. The client may need to alter the request. Each resource may define custom details that indicate the nature of the conflict.
-  * HTTP status code: `409 StatusConflict`
-* `Invalid`
-  * Indicates that the requested create or update operation cannot be completed due to invalid data provided as part of the request.
-  * Details (optional):
-    * `kind string`
-      * the kind attribute of the invalid resource
-    * `name string`
-      * the identifier of the invalid resource
-    * `causes`
-      * One or more `StatusCause` entries indicating the data in the provided resource that was invalid. The `reason`, `message`, and `field` attributes will be set.
-  * HTTP status code: `422 StatusUnprocessableEntity`
-* `Timeout`
-  * Indicates that the request could not be completed within the given time. Clients may receive this response if the server has decided to rate limit the client, or if the server is overloaded and cannot process the request at this time.
-  * HTTP status code: `429 TooManyRequests`
-  * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default.
-* `ServerTimeout`
-  * Indicates that the server can be reached and understood the request, but cannot complete the action in a reasonable time. This may be due to temporary server load or a transient communication issue with another server.
-  * Details (optional):
-    * `kind string`
-      * The kind attribute of the resource being acted on.
-    * `name string`
-      * The operation that is being attempted.
-  * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default.
-  * HTTP status code: `504 StatusServerTimeout`
-* `MethodNotAllowed`
-  * Indicates that the action the client attempted to perform on the resource was not supported by the code.
-  * For instance, attempting to delete a resource that can only be created.
-  * API calls that return MethodNotAllowed can never succeed.
-  * HTTP status code: `405 StatusMethodNotAllowed`
-* `InternalError`
-  * Indicates that an unexpected internal error occurred and the outcome of the call is unknown.
-  * Details (optional):
-    * `causes`
-      * The original error.
-  * HTTP status code: `500 StatusInternalServerError`
-
-`code` may contain the suggested HTTP return code for this status.
-
-
-Events
-------
-
-TODO: Document events (refer to another doc for details)
-
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-conventions.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/api-conventions.md?pixel)]()
diff --git a/release-0.20.0/docs/api.md b/release-0.20.0/docs/api.md
deleted file mode 100644
index 267f4f45858..00000000000
--- a/release-0.20.0/docs/api.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# The Kubernetes API
-
-Primary system and API concepts are documented in the [User guide](user-guide.md).
-
-Overall API conventions are described in the [API conventions doc](api-conventions.md).
-
-Complete API details are documented via [Swagger](http://swagger.io/).
The Kubernetes apiserver (aka "master") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swagger-ui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/). - -Remote access to the API is discussed in the [access doc](accessing_the_api.md). - -The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](kubectl.md) command-line tool can be used to create, update, delete, and get API objects. - -Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources. - -Kubernetes itself is decomposed into multiple components, which interact through its API. - -## API changes - -In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy. - -What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md). - -## API versioning - -Fine-grain resource evolution alone makes it difficult to eliminate fields or restructure resource representations. Therefore, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true. - -Distinct API versions present more clear, consistent views of system resources and behavior than intermingled, independently evolved resources. They also provide a more straightforward mechanism for controlling access to end-of-lifed and/or experimental APIs. - -The [API and release versioning proposal](versioning.md) describes the current thinking on the API version evolution process. - -## v1beta1, v1beta2, and v1beta3 are deprecated; please move to v1 ASAP - -As of June 4, 2015, the Kubernetes v1 API has been enabled by default. The v1beta1 and v1beta2 APIs were deleted on June 1, 2015. v1beta3 is planned to be deleted on July 6, 2015. - -### v1 conversion tips (from v1beta3) - -We're working to convert all documentation and examples to v1. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec. - -Changes to services are the most significant difference between v1beta3 and v1. - -* The `service.spec.portalIP` property is renamed to `service.spec.clusterIP`. -* The `service.spec.createExternalLoadBalancer` property is removed. Specify `service.spec.type: "LoadBalancer"` to create an external load balancer instead. 
-* The `service.spec.publicIPs` property is deprecated and now called `service.spec.deprecatedPublicIPs`. This property will be removed entirely when v1beta3 is removed. The vast majority of users of this field were using it to expose services on ports on the node. Those users should specify `service.spec.type: "NodePort"` instead. Read [External Services](services.md#external-services) for more info. If this is not sufficient for your use case, please file an issue or contact @thockin. - -Some other difference between v1beta3 and v1: - -* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](security_context.md). -* The `pod.spec.host` property is renamed to `pod.spec.nodeName`. -* The `endpoints.subsets[*].addresses.IP` property is renamed to `endpoints.subsets[*].addresses.ip`. -* The `pod.status.containerStatuses[*].state.termination` and `pod.status.containerStatuses[*].lastState.termination` properties are renamed to `pod.status.containerStatuses[*].state.terminated` and `pod.status.containerStatuses[*].lastState.terminated` respectively. -* The `pod.status.Condition` property is renamed to `pod.status.conditions`. -* The `status.details.id` property is renamed to `status.details.name`. - -### v1beta3 conversion tips (from v1beta1/2) - -Some important differences between v1beta1/2 and v1beta3: - -* The resource `id` is now called `name`. -* `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata` -* `desiredState` is now called `spec`, and `currentState` is now called `status` -* `/minions` has been moved to `/nodes`, and the resource has kind `Node` -* The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}`. If you were not using a namespace before, use `default` here. -* The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`. -* To watch for changes to a resource, open an HTTP or Websocket connection to the collection query and provide the `?watch=true` query parameter along with the desired `resourceVersion` parameter to watch from. -* The `labels` query parameter has been renamed to `labelSelector`. -* The `fields` query parameter has been renamed to `fieldSelector`. -* The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`. -* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores). -* Restart policy is represented simply as a string (e.g., `"Always"`) rather than as a nested map (`always{}`). -* Pull policies changed from `PullAlways`, `PullNever`, and `PullIfNotPresent` to `Always`, `Never`, and `IfNotPresent`. -* The volume `source` is inlined into `volume` rather than nested. -* Host volumes have been changed from `hostDir` to `hostPath` to better reflect that they can be files or directories. 
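-
-As a rough sketch of the `metadata` and `spec` restructuring described above (abbreviated, with illustrative names and values):
-
-```yaml
-# v1beta1/v1beta2 style (abbreviated)
-id: my-controller
-kind: ReplicationController
-labels:
-  name: my-app
-desiredState:
-  replicas: 2
-```
-
-```yaml
-# v1beta3 style (abbreviated)
-kind: ReplicationController
-metadata:
-  name: my-controller
-  labels:
-    name: my-app
-spec:
-  replicas: 2
-```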
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/api.md?pixel)]()
diff --git a/release-0.20.0/docs/application-troubleshooting.md b/release-0.20.0/docs/application-troubleshooting.md
deleted file mode 100644
index edc90c83f5e..00000000000
--- a/release-0.20.0/docs/application-troubleshooting.md
+++ /dev/null
@@ -1,149 +0,0 @@
-# Application Troubleshooting
-
-This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
-This is *not* a guide for people who want to debug their cluster. For that you should check out
-[this guide](cluster-troubleshooting.md).
-
-## FAQ
-Users are highly encouraged to check out our [FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/User-FAQ).
-
-## Diagnosing the problem
-The first step in troubleshooting is triage. What is the problem? Is it your Pods, your Replication Controller or
-your Service?
- * [Debugging Pods](#debugging-pods)
- * [Debugging Replication Controllers](#debugging-replication-controllers)
- * [Debugging Services](#debugging-services)
-
-### Debugging Pods
-The first step in debugging a Pod is taking a look at it. For the purposes of example, imagine we have a pod
-```my-pod``` which holds two containers ```container-1``` and ```container-2```.
-
-First, describe the pod. This will show the current state of the Pod and recent events.
-
-```sh
-export POD_NAME=my-pod
-kubectl describe pods ${POD_NAME}
-```
-
-Look at the state of the containers in the pod. Are they all ```Running```? Have there been recent restarts?
-
-Depending on the state of the pod, you may want to:
- * [Debug a pending pod](#debugging-pending-pods)
- * [Debug a waiting pod](#debugging-waiting-pods)
- * [Debug a crashing pod](#debugging-crashing-pods-or-otherwise-unhealthy-pods)
-
-#### Debugging Pending Pods
-If a Pod is stuck in ```Pending``` it means that it cannot be scheduled onto a node. Generally this is because
-there are insufficient resources of one type or another that prevent scheduling. Look at the output of the
-```kubectl describe ...``` command above. There should be messages from the scheduler about why it cannot schedule
-your pod. Reasons include:
-
-You don't have enough resources. You may have exhausted the supply of CPU or Memory in your cluster; in this case
-you need to delete Pods, adjust resource requests, or add new nodes to your cluster.
-
-You are using ```hostPort```. When you bind a Pod to a ```hostPort``` there are a limited number of places that pod can be
-scheduled. In most cases, ```hostPort``` is unnecessary; try using a Service object to expose your Pod. If you do require
-```hostPort``` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
-
-
-#### Debugging Waiting Pods
-If a Pod is stuck in the ```Waiting``` state, then it has been scheduled to a worker node, but it can't run on that machine.
-Again, the information from ```kubectl describe ...``` should be informative. The most common cause of ```Waiting``` pods
-is a failure to pull the image. Make sure that you have the name of the image correct. Have you pushed it to the repository?
-Does it work if you run a manual ```docker pull``` on your machine?
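-
-For example (the image name below is illustrative; use whatever image your pod spec actually references):
-
-```sh
-# Check the exact image name and tag the pod is asking for.
-kubectl get pods ${POD_NAME} -o yaml | grep image
-
-# Then try pulling that same image by hand.
-docker pull nginx:1.7.9
-```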
- -#### Debugging Crashing or otherwise unhealthy pods - -Let's suppose that ```container-2``` has been crash looping and you don't know why, you can take a look at the logs of -the current container: - -```sh -kubectl logs ${POD_NAME} ${CONTAINER_NAME} -``` - -If your container has previously crashed, you can access the previous container's crash log with: -```sh -kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} -``` - -Alternately, you can run commands inside that container with ```exec```: - -```sh -kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} -``` - -Note that ```-c ${CONTAINER_NAME}``` is optional and can be omitted for Pods that only contain a single container. - -As an example, to look at the logs from a running Cassandra pod, you might run -```sh -kubectl exec cassandra -- cat /var/log/cassandra/system.log -``` - - -If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host, -but this should generally not be necessary given tools in the Kubernetes API. Indeed if you find yourself needing to ssh into a machine, please file a -feature request on GitHub describing your use case and why these tools are insufficient. - -### Debugging Replication Controllers -Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't -create pods, then please refer to the [instructions above](#debugging-pods) - -You can also use ```kubectl describe rc ${CONTROLLER_NAME}``` to introspect events related to the replication -controller. - -### Debugging Services -Services provide load balancing across a set of pods. There are several common problems that can make Services -not work properly. The following instructions should help debug Service problems. - -#### Verify that there are endpoints for the service -For every Service object, the apiserver makes an ```endpoints`` resource available. - -You can view this resource with: - -``` -kubectl get endpoints ${SERVICE_NAME} -``` - -Make sure that the endpoints match up with the number of containers that you expect to be a member of your service. -For example, if your Service is for an nginx container with 3 replicas, you would expect to see three different -IP addresses in the Service's endpoints. - -#### Missing endpoints -If you are missing endpoints, try listing pods using the labels that Service uses. Imagine that you have -a Service where the labels are: -```yaml -... -spec: - - selector: - name: nginx - type: frontend -``` - -You can use: -``` -kubectl get pods --selector=name=nginx,type=frontend -``` - -to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. - -If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't -have the right ports exposed. If your service has a ```containerPort``` specified, but the Pods that are -selected don't have that port listed, then they won't be added to the endpoints list. - -Verify that the pod's ```containerPort``` matches up with the Service's ```containerPort``` - -#### Network traffic isn't forwarded -If you can connect to the service, but the connection is immediately dropped, and there are endpoints -in the endpoints list, it's likely that the proxy can't contact your pods. - -There are three things to -check: - * Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods) - * Can you connect to your pods directly? 
Get the IP address for the Pod, and try to connect directly to that IP
- * Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the ```containerPort``` field needs to be 8080.
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/application-troubleshooting.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/application-troubleshooting.md?pixel)]()
diff --git a/release-0.20.0/docs/architecture.dia b/release-0.20.0/docs/architecture.dia
deleted file mode 100644
index 26e0eed22e6..00000000000
Binary files a/release-0.20.0/docs/architecture.dia and /dev/null differ
diff --git a/release-0.20.0/docs/architecture.png b/release-0.20.0/docs/architecture.png
deleted file mode 100644
index fa39039aaff..00000000000
Binary files a/release-0.20.0/docs/architecture.png and /dev/null differ
diff --git a/release-0.20.0/docs/architecture.svg b/release-0.20.0/docs/architecture.svg
deleted file mode 100644
index 825c0ace8fb..00000000000
--- a/release-0.20.0/docs/architecture.svg
+++ /dev/null
@@ -1,499 +0,0 @@
-[architecture diagram, SVG text labels only: master components (REST APIs for pods, services, and replication controllers; authentication and authorization; scheduler; replication controller; scheduling actuator; distributed watchable storage implemented via etcd), colocated or spread across machines as dictated by cluster size; worker nodes each running kubelet, proxy, cAdvisor, docker, and pods of containers; kubectl (user commands) reaching the master through a firewall from the Internet]
diff --git a/release-0.20.0/docs/authentication.md b/release-0.20.0/docs/authentication.md
deleted file mode 100644
index 351ab663462..00000000000
--- a/release-0.20.0/docs/authentication.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Authentication Plugins
-
-Kubernetes uses client certificates, tokens, or HTTP basic auth to authenticate users for API calls.
-
-Client certificate authentication is enabled by passing the `--client_ca_file=SOMEFILE`
-option to apiserver. The referenced file must contain one or more certificate authorities
-to use to validate client certificates presented to the apiserver. If a client certificate
-is presented and verified, the common name of the subject is used as the user name for the
-request.
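-
-For example, an HTTP client can present a client certificate directly; the file paths and apiserver address below are illustrative:
-
-```sh
-curl --cacert ca.crt --cert admin.crt --key admin.key \
-  https://<apiserver-address>/api/v1/namespaces/default/pods
-```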
- -Token authentication is enabled by passing the `--token_auth_file=SOMEFILE` option -to apiserver. Currently, tokens last indefinitely, and the token list cannot -be changed without restarting apiserver. We plan in the future for tokens to -be short-lived, and to be generated as needed rather than stored in a file. - -The token file format is implemented in `plugin/pkg/auth/authenticator/token/tokenfile/...` -and is a csv file with 3 columns: token, user name, user uid. - -When using token authentication from an http client the apiserver expects an `Authorization` -header with a value of `Bearer SOMETOKEN`. - -Basic authentication is enabled by passing the `--basic_auth_file=SOMEFILE` -option to apiserver. Currently, the basic auth credentials last indefinitely, -and the password cannot be changed without restarting apiserver. Note that basic -authentication is currently supported for convenience while we finish making the -more secure modes described above easier to use. - -The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...` -and is a csv file with 3 columns: password, user name, user id. - -When using basic authentication from an http client the apiserver expects an `Authorization` header -with a value of `Basic BASE64ENCODEDUSER:PASSWORD`. - -## Plugin Development - -We plan for the Kubernetes API server to issue tokens -after the user has been (re)authenticated by a *bedrock* authentication -provider external to Kubernetes. We plan to make it easy to develop modules -that interface between kubernetes and a bedrock authentication provider (e.g. -github.com, google.com, enterprise directory, kerberos, etc.) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/authentication.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/authentication.md?pixel)]() diff --git a/release-0.20.0/docs/authorization.md b/release-0.20.0/docs/authorization.md deleted file mode 100644 index 39b2bdac5ec..00000000000 --- a/release-0.20.0/docs/authorization.md +++ /dev/null @@ -1,109 +0,0 @@ -# Authorization Plugins - - -In Kubernetes, authorization happens as a separate step from authentication. -See the [authentication documentation](./authentication.md) for an -overview of authentication. - -Authorization applies to all HTTP accesses on the main apiserver port. (The -readonly port is not currently subject to authorization, but is planned to be -removed soon.) - -The authorization check for any request compares attributes of the context of -the request, (such as user, resource, and namespace) with access -policies. An API call must be allowed by some policy in order to proceed. - -The following implementations are available, and are selected by flag: - - `--authorization_mode=AlwaysDeny` - - `--authorization_mode=AlwaysAllow` - - `--authorization_mode=ABAC` - -`AlwaysDeny` blocks all requests (used in tests). -`AlwaysAllow` allows all requests; use if you don't need authorization. -`ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control. - -## ABAC Mode -### Request Attributes - -A request has 4 attributes that can be considered for authorization: - - user (the user-string which a user was authenticated as). - - whether the request is readonly (GETs are readonly) - - what resource is being accessed - - applies only to the API endpoints, such as - `/api/v1/namespaces/default/pods`. 
For miscellaneous endpoints, like `/version`, the
-    resource is the empty string.
-  - the namespace of the object being accessed, or the empty string if the
-    endpoint does not support namespaced objects.
-
-We anticipate adding more attributes to allow finer-grained access control and
-to assist in policy management.
-
-### Policy File Format
-
-For mode `ABAC`, also specify `--authorization_policy_file=SOME_FILENAME`.
-
-The file format is [one JSON object per line](http://jsonlines.org/). There should be no enclosing list or map, just
-one map per line.
-
-Each line is a "policy object". A policy object is a map with the following properties:
-  - `user`, type string; the user-string from `--token_auth_file`
-  - `readonly`, type boolean; when true, means that the policy only applies to GET
-    operations.
-  - `resource`, type string; a resource from a URL, such as `pods`.
-  - `namespace`, type string; a namespace string.
-
-An unset property is the same as a property set to the zero value for its type (e.g. empty string, 0, false).
-However, unset should be preferred for readability.
-
-In the future, policies may be expressed in a JSON format, and managed via a REST
-interface.
-
-### Authorization Algorithm
-
-A request has attributes which correspond to the properties of a policy object.
-
-When a request is received, the attributes are determined. Unknown attributes
-are set to the zero value of their type (e.g. empty string, 0, false).
-
-An unset property will match any value of the corresponding
-attribute. An unset attribute will match any value of the corresponding property.
-
-The tuple of attributes is checked for a match against every policy in the policy file.
-If at least one line matches the request attributes, then the request is authorized (but may fail later validation).
-
-To permit any user to do something, write a policy with the user property unset.
-To permit an action in any namespace, write a policy with the namespace property unset; such a policy applies regardless of namespace.
-
-### Examples
- 1. Alice can do anything: `{"user":"alice"}`
- 2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}`
- 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
- 4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`
-
-[Complete file example](../pkg/auth/authorizer/abac/example_policy_file.jsonl)
-
-## Plugin Development
-
-Other implementations can be developed fairly easily.
-The apiserver calls the Authorizer interface:
-```go
-type Authorizer interface {
-  Authorize(a Attributes) error
-}
-```
-to determine whether or not to allow each API action.
-
-An authorization plugin is a module that implements this interface.
-Authorization plugin code goes in `pkg/auth/authorization/$MODULENAME`.
-
-An authorization module can be completely implemented in Go, or can call out
-to a remote authorization service. Authorization modules can implement
-their own caching to reduce the cost of repeated authorization calls with the
-same or similar arguments. Developers should then consider the interaction between
-caching and revocation of permissions.
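-
-As a minimal sketch of such a module, implementing the ABAC-style matching described above (the `Attributes` accessors and the `policy` struct here are illustrative assumptions, not the actual apiserver types):
-
-```go
-package exampleauthz
-
-import "errors"
-
-// Attributes is sketched here with the request attributes described above;
-// the real interface is supplied by the apiserver.
-type Attributes interface {
-    GetUser() string
-    IsReadOnly() bool
-    GetResource() string
-    GetNamespace() string
-}
-
-// policy mirrors one line of the ABAC policy file.
-type policy struct {
-    User      string
-    Readonly  bool
-    Resource  string
-    Namespace string
-}
-
-// policyAuthorizer authorizes a request if at least one policy matches.
-// Unset (zero-value) policy properties match any attribute value.
-type policyAuthorizer struct {
-    policies []policy
-}
-
-func (pa *policyAuthorizer) Authorize(a Attributes) error {
-    for _, p := range pa.policies {
-        if p.User != "" && p.User != a.GetUser() {
-            continue
-        }
-        // A readonly policy only applies to read (GET) requests.
-        if p.Readonly && !a.IsReadOnly() {
-            continue
-        }
-        if p.Resource != "" && p.Resource != a.GetResource() {
-            continue
-        }
-        if p.Namespace != "" && p.Namespace != a.GetNamespace() {
-            continue
-        }
-        return nil // at least one policy allows this request
-    }
-    return errors.New("no policy matches the request attributes")
-}
-```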
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/authorization.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/authorization.md?pixel)]() diff --git a/release-0.20.0/docs/availability.md b/release-0.20.0/docs/availability.md deleted file mode 100644 index f8972106911..00000000000 --- a/release-0.20.0/docs/availability.md +++ /dev/null @@ -1,136 +0,0 @@ -# Availability - -This document collects advice on reasoning about and provisioning for high-availability when using Kubernetes clusters. - -## Failure modes - -This is an incomplete list of things that could go wrong, and how to deal with them. - -Root causes: - - VM(s) shutdown - - network partition within cluster, or between cluster and users. - - crashes in Kubernetes software - - data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume). - - operator error misconfigures kubernetes software or application software. - -Specific scenarios: - - Apiserver VM shutdown or apiserver crashing - - Results - - unable to stop, update, or start new pods, services, replication controller - - existing pods and services should continue to work normally, unless they depend on the Kubernetes API - - Apiserver backing storage lost - - Results - - apiserver should fail to come up. - - kubelets will not be able to reach it but will continue to run the same pods and provide the same service proxying. - - manual recovery or recreation of apiserver state necessary before apiserver is restarted. - - Supporting services (node controller, replication controller manager, scheduler, etc) VM shutdown or crashes - - currently those are colocated with the apiserver, and their unavailability has similar consequences as apiserver - - in future, these will be replicated as well and may not be co-located - - they do not have own persistent state - - Node (thing that runs kubelet and kube-proxy and pods) shutdown - - Results - - pods on that Node stop running - - Kubelet software fault - - Results - - crashing kubelet cannot start new pods on the node - - kubelet might delete the pods or not - - node marked unhealthy - - replication controllers start new pods elsewhere - - Cluster operator error - - Results: - - loss of pods, services, etc - - lost of apiserver backing store - - users unable to read API - - etc - -Mitigations: -- Action: Use IaaS providers automatic VM restarting feature for IaaS VMs. - - Mitigates: Apiserver VM shutdown or apiserver crashing - - Mitigates: Supporting services VM shutdown or crashes - -- Action use IaaS providers reliable storage (e.g GCE PD or AWS EBS volume) for VMs with apiserver+etcd. - - Mitigates: Apiserver backing storage lost - -- Action: Use Replicated APIserver feature (when complete: feature is planned but not implemented) - - Mitigates: Apiserver VM shutdown or apiserver crashing - - Will tolerate one or more simultaneous apiserver failures. - - Mitigates: Apiserver backing storage lost - - Each apiserver has independent storage. Etcd will recover from loss of one member. Risk of total data loss greatly reduced. 
- -- Action: Snapshot apiserver PDs/EBS-volumes periodically - - Mitigates: Apiserver backing storage lost - - Mitigates: Some cases of operator error - - Mitigates: Some cases of kubernetes software fault - -- Action: use replication controller and services in front of pods - - Mitigates: Node shutdown - - Mitigates: Kubelet software fault - -- Action: applications (containers) designed to tolerate unexpected restarts - - Mitigates: Node shutdown - - Mitigates: Kubelet software fault - -- Action: Multiple independent clusters (and avoid making risky changes to all clusters at once) - - Mitigates: Everything listed above. - -## Choosing Multiple Kubernetes Clusters - -You may want to set up multiple kubernetes clusters, both to -have clusters in different regions to be nearer to your users; and to tolerate failures and/or invasive maintenance. - -### Scope of a single cluster - -On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a -[zone](https://cloud.google.com/compute/docs/zones) or [availability -zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). -We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because: - - compared to having a single global Kubernetes cluster, there are fewer single-points of failure - - compared to a cluster that spans availability zones, it is easier to reason about the availability properties of a - single-zone cluster. - - when the Kubernetes developers are designing the system (e.g. making assumptions about latency, bandwidth, or - correlated failures) they are assuming all the machines are in a single data center, or otherwise closely connected. - -It is okay to have multiple clusters per availability zone, though on balance we think fewer is better. -Reasons to prefer fewer clusters are: - - improved bin packing of Pods in some cases with more nodes in one cluster. - - reduced operational overhead (though the advantage is diminished as ops tooling and processes matures). - - reduced costs for per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage - of overall cluster cost for medium to large clusters). - -Reasons to have multiple clusters include: - - strict security policies requiring isolation of one class of work from another (but, see Partitioning Clusters - below). - - test clusters to canary new Kubernetes releases or other cluster software. - -### Selecting the right number of clusters -The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally. -By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to -load and growth. - -To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run -on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not -be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions. -Call the number of regions to be in `R`. - -Second, decide how many clusters should be able to be unavailable at the same time, while still being available. Call -the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice. 
- -If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then -then you need `R + U` clusters. If it is not (e.g you want to ensure low latency for all users in the event of a -cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone. - -Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then -you may need even more clusters. Our [roadmap](http://docs.k8s.io/roadmap.md) -calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015. - -## Working with multiple clusters - -When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those -service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer), so that -failures of a single cluster are not visible to end users. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/availability.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/availability.md?pixel)]() diff --git a/release-0.20.0/docs/cli-roadmap.md b/release-0.20.0/docs/cli-roadmap.md deleted file mode 100644 index bdb5d957c9d..00000000000 --- a/release-0.20.0/docs/cli-roadmap.md +++ /dev/null @@ -1,84 +0,0 @@ -# Kubernetes CLI/Configuration Roadmap - -See also issues with the following labels: -* [area/config-deployment](https://github.com/GoogleCloudPlatform/kubernetes/labels/area%2Fconfig-deployment) -* [component/CLI](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2FCLI) -* [component/client](https://github.com/GoogleCloudPlatform/kubernetes/labels/component%2Fclient) - -1. Create services before other objects, or at least before objects that depend upon them. Namespace-relative DNS mitigates this some, but most users are still using service environment variables. [#1768](https://github.com/GoogleCloudPlatform/kubernetes/issues/1768) -1. Finish rolling update [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353) - 1. Friendly to auto-scaling [#2863](https://github.com/GoogleCloudPlatform/kubernetes/pull/2863#issuecomment-69701562) - 1. Rollback (make rolling-update reversible, and complete an in-progress rolling update by taking 2 replication controller names rather than always taking a file) - 1. Rollover (replace multiple replication controllers with one, such as to clean up an aborted partial rollout) - 1. Write a ReplicationController generator to derive the new ReplicationController from an old one (e.g., `--image-version=newversion`, which would apply a name suffix, update a label value, and apply an image tag) - 1. Use readiness [#620](https://github.com/GoogleCloudPlatform/kubernetes/issues/620) - 1. Perhaps factor this in a way that it can be shared with [Openshift’s deployment controller](https://github.com/GoogleCloudPlatform/kubernetes/issues/1743) - 1. Rolling update service as a plugin -1. Kind-based filtering on object streams -- only operate on the kinds of objects specified. This would make directory-based kubectl operations much more useful. Users should be able to instantiate the example applications using `kubectl create -f ...` -1. Improved pretty printing of endpoints, such as in the case that there are more than a few endpoints -1. Service address/port lookup command(s) -1. 
List supported resources -1. Swagger lookups [#3060](https://github.com/GoogleCloudPlatform/kubernetes/issues/3060) -1. --name, --name-suffix applied during creation and updates -1. --labels and opinionated label injection: --app=foo, --tier={fe,cache,be,db}, --uservice=redis, --env={dev,test,prod}, --stage={canary,final}, --track={hourly,daily,weekly}, --release=0.4.3c2. Exact ones TBD. We could allow arbitrary values -- the keys are important. The actual label keys would be (optionally?) namespaced with kubectl.kubernetes.io/, or perhaps the user’s namespace. -1. --annotations and opinionated annotation injection: --description, --revision -1. Imperative updates. We'll want to optionally make these safe(r) by supporting preconditions based on the current value and resourceVersion. - 1. annotation updates similar to label updates - 1. other custom commands for common imperative updates - 1. more user-friendly (but still generic) on-command-line json for patch -1. We also want to support the following flavors of more general updates: - 1. whichever we don’t support: - 1. safe update: update the full resource, guarded by resourceVersion precondition (and perhaps selected value-based preconditions) - 1. forced update: update the full resource, blowing away the previous Spec without preconditions; delete and re-create if necessary - 1. diff/dryrun: Compare new config with current Spec [#6284](https://github.com/GoogleCloudPlatform/kubernetes/issues/6284) - 1. submit/apply/reconcile/ensure/merge: Merge user-provided fields with current Spec. Keep track of user-provided fields using an annotation -- see [#1702](https://github.com/GoogleCloudPlatform/kubernetes/issues/1702). Delete all objects with deployment-specific labels. -1. --dry-run for all commands -1. Support full label selection syntax, including support for namespaces. -1. Wait on conditions [#1899](https://github.com/GoogleCloudPlatform/kubernetes/issues/1899) -1. Make kubectl scriptable: make output and exit code behavior consistent and useful for wrapping in workflows and piping back into kubectl and/or xargs (e.g., dump full URLs?, distinguish permanent and retry-able failure, identify objects that should be retried) - 1. Here's [an example](http://techoverflow.net/blog/2013/10/22/docker-remove-all-images-and-containers/) where multiple objects on the command line and an option to dump object names only (`-q`) would be useful in combination. [#5906](https://github.com/GoogleCloudPlatform/kubernetes/issues/5906) -1. Easy generation of clean configuration files from existing objects (including containers -- podex) -- remove readonly fields, status - 1. Export from one namespace, import into another is an important use case -1. Derive objects from other objects - 1. pod clone - 1. rc from pod - 1. --labels-from (services from pods or rcs) -1. Kind discovery (i.e., operate on objects of all kinds) [#5278](https://github.com/GoogleCloudPlatform/kubernetes/issues/5278) -1. A fairly general-purpose way to specify fields on the command line during creation and update, not just from a config file -1. Extensible API-based generator framework (i.e. invoke generators via an API/URL rather than building them into kubectl), so that complex client libraries don’t need to be rewritten in multiple languages, and so that the abstractions are available through all interfaces: API, CLI, UI, logs, ... [#5280](https://github.com/GoogleCloudPlatform/kubernetes/issues/5280) - 1. 
Need schema registry, and some way to invoke generator (e.g., using a container) - 1. Convert run command to API-based generator -1. Transformation framework - 1. More intelligent defaulting of fields (e.g., [#2643](https://github.com/GoogleCloudPlatform/kubernetes/issues/2643)) -1. Update preconditions based on the values of arbitrary object fields. -1. Deployment manager compatibility on GCP: [#3685](https://github.com/GoogleCloudPlatform/kubernetes/issues/3685) -1. Describe multiple objects, multiple kinds of objects [#5905](https://github.com/GoogleCloudPlatform/kubernetes/issues/5905) -1. Support yaml document separator [#5840](https://github.com/GoogleCloudPlatform/kubernetes/issues/5840) - -TODO: -* watch -* attach [#1521](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521) -* image/registry commands -* do any other server paths make sense? validate? generic curl functionality? -* template parameterization -* dynamic/runtime configuration - -Server-side support: - -1. Default selectors from labels [#1698](https://github.com/GoogleCloudPlatform/kubernetes/issues/1698#issuecomment-71048278) -1. Stop [#1535](https://github.com/GoogleCloudPlatform/kubernetes/issues/1535) -1. Deleted objects [#2789](https://github.com/GoogleCloudPlatform/kubernetes/issues/2789) -1. Clone [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170) -1. Resize [#1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629) -1. Useful /operations API: wait for finalization/reification -1. List supported resources [#2057](https://github.com/GoogleCloudPlatform/kubernetes/issues/2057) -1. Reverse label lookup [#1348](https://github.com/GoogleCloudPlatform/kubernetes/issues/1348) -1. Field selection [#1362](https://github.com/GoogleCloudPlatform/kubernetes/issues/1362) -1. Field filtering [#1459](https://github.com/GoogleCloudPlatform/kubernetes/issues/1459) -1. 
Operate on uids - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cli-roadmap.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/cli-roadmap.md?pixel)]() diff --git a/release-0.20.0/docs/client-libraries.md b/release-0.20.0/docs/client-libraries.md deleted file mode 100644 index 8e1f31cff97..00000000000 --- a/release-0.20.0/docs/client-libraries.md +++ /dev/null @@ -1,21 +0,0 @@ -## kubernetes API client libraries - -### Supported - * [Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client) - -### User Contributed -*Note: Libraries provided by outside parties are supported by their authors, not the core Kubernetes team* - - * [Java](https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api) - * [Ruby1](https://github.com/Ch00k/kuber) - * [Ruby2](https://github.com/abonas/kubeclient) - * [PHP](https://github.com/devstub/kubernetes-api-php-client) - * [Node.js](https://github.com/tenxcloud/node-kubernetes-client) - * [Perl](https://metacpan.org/pod/Net::Kubernetes) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/client-libraries.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/client-libraries.md?pixel)]() diff --git a/release-0.20.0/docs/cluster-admin-guide.md b/release-0.20.0/docs/cluster-admin-guide.md deleted file mode 100644 index e59239071c7..00000000000 --- a/release-0.20.0/docs/cluster-admin-guide.md +++ /dev/null @@ -1,80 +0,0 @@ -# Kubernetes Cluster Admin Guide - -The cluster admin guide is for anyone creating or administering a Kubernetes cluster. -It assumes some familiarity with concepts in the [User Guide](user-guide.md). - -## Planning a cluster - -There are many different examples of how to setup a kubernetes cluster. Many of them are listed in this -[matrix](getting-started-guides/README.md). We call each of the combinations in this matrix a *distro*. - -Before choosing a particular guide, here are some things to consider: - - Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both - models are supported, but some distros are better for one case or the other. - - Will you be using a hosted Kubernetes cluster, such as [GKE](https://cloud.google.com/container-engine), or setting - one up yourself? - - Will your cluster be on-premises, or in the cloud (IaaS)? Kubernetes does not directly support hybrid clusters. We - recommend setting up multiple clusters rather than spanning distant locations. - - Will you be running Kubernetes on "bare metal" or virtual machines? Kubernetes supports both, via different distros. - - Do you just want to run a cluster, or do you expect to do active development of kubernetes project code? If the - latter, it is better to pick a distro actively used by other developers. Some distros only use binary releases, but - offer is a greater variety of choices. - - Not all distros are maintained as actively. Prefer ones which are listed as tested on a more recent version of - Kubernetes. - - If you are configuring kubernetes on-premises, you will need to consider what [networking - model](networking.md) fits best. - - If you are designing for very [high-availability](availability.md), you may want multiple clusters in multiple zones. - -## Setting up a cluster - -Pick one of the Getting Started Guides from the [matrix](getting-started-guides/README.md) and follow it. 
-If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides. - -One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](ovs-networking.md)), which -uses OpenVSwitch to set up networking between pods across - Kubernetes nodes. - -If you are modifying an existing guide which uses Salt, this document explains [how Salt is used in the Kubernetes -project.](salt.md). - -## Upgrading a cluster -[Upgrading a cluster](cluster_management.md). - -## Managing nodes - -[Managing nodes](node.md). - -## Optional Cluster Services - -* **DNS Integration with SkyDNS** ([dns.md](dns.md)): - Resolving a DNS name directly to a Kubernetes service. - -* **Logging** with [Kibana](logging.md) - -## Multi-tenant support - -* **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different - projects, teams, or customers to share a kubernetes cluster. - -* **Resource Quota** ([resource_quota_admin.md](resource_quota_admin.md)) - -## Security - -* **Kubernetes Container Environment** ([container-environment.md](container-environment.md)): - Describes the environment for Kubelet managed containers on a Kubernetes - node. - -* **Securing access to the API Server** [accessing the api]( accessing_the_api.md) - -* **Authentication** [authentication]( authentication.md) - -* **Authorization** [authorization]( authorization.md) - -* **Admission Controllers** [admission_controllers]( admission_controllers.md) - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster-admin-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/cluster-admin-guide.md?pixel)]() diff --git a/release-0.20.0/docs/cluster-troubleshooting.md b/release-0.20.0/docs/cluster-troubleshooting.md deleted file mode 100644 index 2b56ec0282a..00000000000 --- a/release-0.20.0/docs/cluster-troubleshooting.md +++ /dev/null @@ -1,33 +0,0 @@ -# Cluster Troubleshooting -Most of the time, if you encounter problems, it is your application that is having problems. For application -problems please see the [application troubleshooting guide](application-troubleshooting.md). - -## Listing your cluster -The first thing to debug in your cluster is if your nodes are all registered correctly. - -Run -``` -kubectl get nodes -``` - -And verify that all of the nodes you expect to see are present and that they are all in the ```Ready``` state. - -## Looking at logs -For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations -of the relevant log files. 
(Note that on systemd-based systems, you may need to use ```journalctl``` instead.)

### Master
 * /var/log/kube-apiserver.log - API Server, responsible for serving the API
 * /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
 * /var/log/kube-controller-manager.log - Controller that manages replication controllers

### Worker Nodes
 * /var/log/kubelet.log - Kubelet, responsible for running containers on the node
 * /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster-troubleshooting.md?pixel)]()


[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/cluster-troubleshooting.md?pixel)]()
diff --git a/release-0.20.0/docs/cluster_management.md b/release-0.20.0/docs/cluster_management.md
deleted file mode 100644
index edde83224e6..00000000000
--- a/release-0.20.0/docs/cluster_management.md
+++ /dev/null
@@ -1,65 +0,0 @@
# Cluster Management

This doc is in progress.

## Upgrading a cluster

The `cluster/kube-push.sh` script will do a rudimentary update; a robust live cluster update system is a 1.0 roadmap item.

## Upgrading to a different API version

There is a sequence of steps to upgrade to a new API version.

1. Turn on the new API version.
2. Upgrade the cluster's storage to use the new version.
3. Upgrade all config files. Identify users of the old API version endpoints.
4. Update existing objects in storage to the new version by running cluster/update-storage-objects.sh.
5. Turn off the old version.

### Turn on or off an API version for your cluster

Specific API versions can be turned on or off by passing the `--runtime-config=api/<version>` flag while bringing up the server. For example, to turn off the v1 API, pass --runtime-config=api/v1=false.
runtime-config also supports two special keys, api/all and api/legacy, to control all and legacy APIs respectively. For example, to turn off all API versions except v1, pass --runtime-config=api/all=false,api/v1=true.

### Switching your cluster's storage API version

The KUBE_API_VERSIONS environment variable controls the API versions that are supported in the cluster. The first version in the list is used as the cluster's storage version. Hence, to set a specific version as the storage version, bring it to the front of the list of versions in the value of KUBE_API_VERSIONS.

### Switching your config files to a new API version

You can use the kube-version-change utility to convert config files between different API versions.

```
$ hack/build-go.sh cmd/kube-version-change
$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
```

### Maintenance on a Node

If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is brief, then when the Kubelet restarts it will attempt to restart the pods scheduled to it. If the reboot takes longer, then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding replication controller, then a new copy of the pod will be started on a different node. So, in the case where all pods are replicated, upgrades can be done without special coordination.

If you want more control over the upgrading process, you may use the following workflow:
 1.
Mark the node to be rebooted as unschedulable: - `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'`. - This keeps new pods from landing on the node while you are trying to get them off. - 1. Get the pods off the machine, via any of the following strategies: - 1. wait for finite-duration pods to complete - 1. delete pods with `kubectl delete pods $PODNAME` - 1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod. - 1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it. - 1. Work on the node - 1. Make the node schedulable again: - `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'`. - If you deleted the node's VM instance and created a new one, then a new schedulable node resource will - be created automatically when you create a new VM instance (if you're using a cloud provider that supports - node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster_management.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/cluster_management.md?pixel)]() diff --git a/release-0.20.0/docs/container-environment.md b/release-0.20.0/docs/container-environment.md deleted file mode 100644 index 37abbda6bd4..00000000000 --- a/release-0.20.0/docs/container-environment.md +++ /dev/null @@ -1,94 +0,0 @@ - -# Kubernetes Container Environment - -## Overview -This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode).  In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.  - -This cluster information makes it possible to build applications that are *cluster aware*.   -Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers.  Container hooks are somewhat analogous to operating system signals in a traditional process model.   However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster.  Containers that participate in this cluster lifecycle become *cluster native*.  - -Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](./images.md) and one or more [volumes](./volumes.md). - - -The following sections describe both the cluster information provided to containers, as well as the hooks and life-cycle that allows containers to interact with the management system. - -## Cluster Information -There are two types of information that are available within the container environment.  There is information about the container itself, and there is information about other objects in the system. - -### Container Information -Currently, the only information about the container that is available to the container is the Pod name for the pod in which the container is running.  
This name is set as the hostname of the container, and is accessible through any call that retrieves the hostname within the container (e.g. the hostname command, or the [gethostname][1] function call in libc). Additionally, user-defined environment variables from the pod definition are also available to the container, as are any environment variables specified statically in the Docker image.

In the future, we anticipate expanding this information with richer information about the container. Examples include available memory, number of restarts, and in general any state that you could get from the call to GET /pods on the API server.

### Cluster Information
Currently, a list of all the services that were running when the container was created (via the Kubernetes Cluster API) is available to the container as environment variables. The set of environment variables matches the syntax of Docker links.

For a service named **foo** that maps to a container port named **bar**, the following variables are defined:

```sh
FOO_SERVICE_HOST=
FOO_SERVICE_PORT=
```

Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns) is enabled). Of course DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.

## Container Hooks
*NB*: Container hooks are under active development; we anticipate adding additional hooks as the Kubernetes container management system evolves.

Container hooks provide information to the container about events in its management lifecycle. For example, immediately after a container is started, it receives a *PostStart* hook. These hooks are broadcast *into* the container with information about the life-cycle of the container. They are different from the events provided by Docker and other systems, which are *output* from the container. Output events provide a log of what has already happened. Input hooks provide real-time notification about things that are happening, but no historical log.

### Hook Details
There are currently two container hooks that are surfaced to containers, and two proposed hooks:

*PreStart* - **Proposed**

This hook is sent immediately before a container is created. It notifies that the container will be created immediately after the call completes. No parameters are passed. *Note*: some event handlers (namely `exec`) are incompatible with this event.

*PostStart*

This hook is sent immediately after a container is created. It notifies the container that it has been created. No parameters are passed to the handler.

*PostRestart* - **Proposed**

This hook is called before the PostStart handler, when a container has been restarted rather than started for the first time. No parameters are passed to the handler.

*PreStop*

This hook is called immediately before a container is terminated. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent.

A single parameter named `reason` is passed to the handler which contains the reason for termination. Currently the valid values for reason are:

* ```Delete``` - indicating an API call to delete the pod containing this container.
* ```Health``` - indicating that a health check of the container failed.
-* ```Dependency``` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted.  Examples include, the pod infra container crashing, or persistent disk failing for a container that mounts PD. - -Eventually, user specified reasons may be [added to the API](https://github.com/GoogleCloudPlatform/kubernetes/issues/137). - - -### Hook Handler Execution -When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook.  These hook handler calls are synchronous in the context of the pod containing the container. Note:this means that hook handler execution blocks any further management of the pod.  If your hook handler blocks, no other management (including health checks) will occur until the hook handler completes.  Blocking hook handlers do *not* affect management of other Pods.  Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop) - -For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs.  The details of this parameter passing is handler implementation dependent (see below). - -### Hook delivery guarantees -Hook delivery is "at least one", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this -correctly. - -We expect double delivery to be rare, but in some cases if the ```kubelet``` restarts in the middle of sending a hook, the hook may be resent after the kubelet comes back up. - -Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down, and unable to take traffic, we do not make any attempts to resend. - -### Hook Handler Implementations -Hook handlers are the way that hooks are surfaced to containers.  Containers can select the type of hook handler they would like to implement.  Kubernetes currently supports two different hook handler types: - - * Exec - Executes a specific command (e.g. pre-stop.sh) inside the cgroup and namespaces of the container.  Resources consumed by the command are counted against the container.  Commands which print "ok" to standard out (stdout) are treated as healthy, any other output is treated as container failures (and will cause kubelet to forcibly restart the container).  Parameters are passed to the command as traditional linux command line flags (e.g. pre-stop.sh --reason=HEALTH) - - * HTTP - Executes an HTTP request against a specific endpoint on the container.  HTTP error codes (5xx) and non-response/failure to connect are treated as container failures. Parameters are passed to the http endpoint as query args (e.g. 
http://some.server.com/some/path?reason=HEALTH) - -[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/container-environment.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/container-environment.md?pixel)]() diff --git a/release-0.20.0/docs/containers.md b/release-0.20.0/docs/containers.md deleted file mode 100644 index 6cca8e7f0ce..00000000000 --- a/release-0.20.0/docs/containers.md +++ /dev/null @@ -1,95 +0,0 @@ -# Containers with Kubernetes - -## Containers and commands - -So far the Pods we've seen have all used the `image` field to indicate what process Kubernetes -should run in a container. In this case, Kubernetes runs the image's default command. If we want -to run a particular command or override the image's defaults, there are two additional fields that -we can use: - -1. `Command`: Controls the actual command run by the image -2. `Args`: Controls the arguments passed to the command - -### How docker handles command and arguments - -Docker images have metadata associated with them that is used to store information about the image. -The image author may use this to define defaults for the command and arguments to run a container -when the user does not supply values. Docker calls the fields for commands and arguments -`Entrypoint` and `Cmd` respectively. The full details for this feature are too complicated to -describe here, mostly due to the fact that the docker API allows users to specify both of these -fields as either a string array or a string and there are subtle differences in how those cases are -handled. We encourage the curious to check out [docker's documentation]() for this feature. - -Kubernetes allows you to override both the image's default command (docker `Entrypoint`) and args -(docker `Cmd`) with the `Command` and `Args` fields of `Container`. The rules are: - -1. If you do not supply a `Command` or `Args` for a container, the defaults defined by the image - will be used -2. If you supply a `Command` but no `Args` for a container, only the supplied `Command` will be - used; the image's default arguments are ignored -3. If you supply only `Args`, the image's default command will be used with the arguments you - supply -4. If you supply a `Command` **and** `Args`, the image's defaults will be ignored and the values - you supply will be used - -Here are examples for these rules in table format - -| Image `Entrypoint` | Image `Cmd` | Container `Command` | Container `Args` | Command Run | -|--------------------|------------------|---------------------|--------------------|------------------| -| `[/ep-1]` | `[foo bar]` | <not set> | <not set> | `[ep-1 foo bar]` | -| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | <not set> | `[ep-2]` | -| `[/ep-1]` | `[foo bar]` | <not set> | `[zoo boo]` | `[ep-1 zoo boo]` | -| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` | - - -## Capabilities - -By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. We can have fine grain control over the capabilities using cap-add and cap-drop.More details [here](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration). 
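As a quick illustration of the underlying Docker mechanism (shown here at the Docker CLI level only; this is a sketch of the capability flags themselves, not Kubernetes-level syntax), a container can be started with one capability added and another dropped:

```
$ docker run --cap-add=NET_ADMIN --cap-drop=MKNOD ubuntu /bin/bash
```

The container above can manage network interfaces (NET_ADMIN) but cannot create device nodes (MKNOD). The table below maps these Docker capability names to the corresponding Linux capabilities.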
- -The relationship between Docker's capabilities and [Linux capabilities](http://man7.org/linux/man-pages/man7/capabilities.7.html) - -| Docker's capabilities | Linux capabilities | -| ---- | ---- | -| SETPCAP | CAP_SETPCAP | -| SYS_MODULE | CAP_SYS_MODULE | -| SYS_RAWIO | CAP_SYS_RAWIO | -| SYS_PACCT | CAP_SYS_PACCT | -| SYS_ADMIN | CAP_SYS_ADMIN | -| SYS_NICE | CAP_SYS_NICE | -| SYS_RESOURCE | CAP_SYS_RESOURCE | -| SYS_TIME | CAP_SYS_TIME | -| SYS_TTY_CONFIG | CAP_SYS_TTY_CONFIG | -| MKNOD | CAP_MKNOD | -| AUDIT_WRITE | CAP_AUDIT_WRITE | -| AUDIT_CONTROL | CAP_AUDIT_CONTROL | -| MAC_OVERRIDE | CAP_MAC_OVERRIDE | -| MAC_ADMIN | CAP_MAC_ADMIN | -| NET_ADMIN | CAP_NET_ADMIN | -| SYSLOG | CAP_SYSLOG | -| CHOWN | CAP_CHOWN | -| NET_RAW | CAP_NET_RAW | -| DAC_OVERRIDE | CAP_DAC_OVERRIDE | -| FOWNER | CAP_FOWNER | -| DAC_READ_SEARCH | CAP_DAC_READ_SEARCH | -| FSETID | CAP_FSETID | -| KILL | CAP_KILL | -| SETGID | CAP_SETGID | -| SETUID | CAP_SETUID | -| LINUX_IMMUTABLE | CAP_LINUX_IMMUTABLE | -| NET_BIND_SERVICE | CAP_NET_BIND_SERVICE | -| NET_BROADCAST | CAP_NET_BROADCAST | -| IPC_LOCK | CAP_IPC_LOCK | -| IPC_OWNER | CAP_IPC_OWNER | -| SYS_CHROOT | CAP_SYS_CHROOT | -| SYS_PTRACE | CAP_SYS_PTRACE | -| SYS_BOOT | CAP_SYS_BOOT | -| LEASE | CAP_LEASE | -| SETFCAP | CAP_SETFCAP | -| WAKE_ALARM | CAP_WAKE_ALARM | -| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND | - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/containers.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/containers.md?pixel)]() diff --git a/release-0.20.0/docs/design/README.md b/release-0.20.0/docs/design/README.md deleted file mode 100644 index f1f1fe0d754..00000000000 --- a/release-0.20.0/docs/design/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# Kubernetes Design Overview - -Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. - -Kubernetes establishes robust declarative primitives for maintaining the desired state requested by the user. We see these primitives as the main value added by Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers require active controllers, not just imperative orchestration. - -Kubernetes is primarily targeted at applications composed of multiple containers, such as elastic, distributed micro-services. It is also designed to facilitate migration of non-containerized application stacks to Kubernetes. It therefore includes abstractions for grouping containers in both loosely coupled and tightly coupled formations, and provides ways for containers to find and communicate with each other in relatively familiar ways. - -Kubernetes enables users to ask a cluster to run a set of containers. The system automatically chooses hosts to run those containers on. While Kubernetes's scheduler is currently very simple, we expect it to grow in sophistication over time. Scheduling is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary. 
- -Kubernetes is intended to run on a number of cloud providers, as well as on physical hosts. - -A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the availability doc](../availability.md) and [cluster federation proposal](../proposals/federation.md) for more details). - -Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner. - -For more about the Kubernetes architecture, see [architecture](architecture.md). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/README.md?pixel)]() diff --git a/release-0.20.0/docs/design/access.md b/release-0.20.0/docs/design/access.md deleted file mode 100644 index 147f2f131db..00000000000 --- a/release-0.20.0/docs/design/access.md +++ /dev/null @@ -1,254 +0,0 @@ -# K8s Identity and Access Management Sketch - -This document suggests a direction for identity and access management in the Kubernetes system. - - -## Background - -High level goals are: - - Have a plan for how identity, authentication, and authorization will fit in to the API. - - Have a plan for partitioning resources within a cluster between independent organizational units. - - Ease integration with existing enterprise and hosted scenarios. - -### Actors -Each of these can act as normal users or attackers. - - External Users: People who are accessing applications running on K8s (e.g. a web site served by webserver running in a container on K8s), but who do not have K8s API access. - - K8s Users : People who access the K8s API (e.g. create K8s API objects like Pods) - - K8s Project Admins: People who manage access for some K8s Users - - K8s Cluster Admins: People who control the machines, networks, or binaries that make up a K8s cluster. - - K8s Admin means K8s Cluster Admins and K8s Project Admins taken together. - -### Threats -Both intentional attacks and accidental use of privilege are concerns. - -For both cases it may be useful to think about these categories differently: - - Application Path - attack by sending network messages from the internet to the IP/port of any application running on K8s. 
May exploit weakness in application or misconfiguration of K8s. - - K8s API Path - attack by sending network messages to any K8s API endpoint. - - Insider Path - attack on K8s system components. Attacker may have privileged access to networks, machines or K8s software and data. Software errors in K8s system components and administrator error are some types of threat in this category. - -This document is primarily concerned with K8s API paths, and secondarily with Internal paths. The Application path also needs to be secure, but is not the focus of this document. - -### Assets to protect - -External User assets: - - Personal information like private messages, or images uploaded by External Users - - web server logs - -K8s User assets: - - External User assets of each K8s User - - things private to the K8s app, like: - - credentials for accessing other services (docker private repos, storage services, facebook, etc) - - SSL certificates for web servers - - proprietary data and code - -K8s Cluster assets: - - Assets of each K8s User - - Machine Certificates or secrets. - - The value of K8s cluster computing resources (cpu, memory, etc). - -This document is primarily about protecting K8s User assets and K8s cluster assets from other K8s Users and K8s Project and Cluster Admins. - -### Usage environments -Cluster in Small organization: - - K8s Admins may be the same people as K8s Users. - - few K8s Admins. - - prefer ease of use to fine-grained access control/precise accounting, etc. - - Product requirement that it be easy for potential K8s Cluster Admin to try out setting up a simple cluster. - -Cluster in Large organization: - - K8s Admins typically distinct people from K8s Users. May need to divide K8s Cluster Admin access by roles. - - K8s Users need to be protected from each other. - - Auditing of K8s User and K8s Admin actions important. - - flexible accurate usage accounting and resource controls important. - - Lots of automated access to APIs. - - Need to integrate with existing enterprise directory, authentication, accounting, auditing, and security policy infrastructure. - -Org-run cluster: - - organization that runs K8s master components is same as the org that runs apps on K8s. - - Nodes may be on-premises VMs or physical machines; Cloud VMs; or a mix. - -Hosted cluster: - - Offering K8s API as a service, or offering a Paas or Saas built on K8s - - May already offer web services, and need to integrate with existing customer account concept, and existing authentication, accounting, auditing, and security policy infrastructure. - - May want to leverage K8s User accounts and accounting to manage their User accounts (not a priority to support this use case.) - - Precise and accurate accounting of resources needed. Resource controls needed for hard limits (Users given limited slice of data) and soft limits (Users can grow up to some limit and then be expanded). - -K8s ecosystem services: - - There may be companies that want to offer their existing services (Build, CI, A/B-test, release automation, etc) for use with K8s. There should be some story for this case. - -Pods configs should be largely portable between Org-run and hosted configurations. - - -# Design -Related discussion: -- https://github.com/GoogleCloudPlatform/kubernetes/issues/442 -- https://github.com/GoogleCloudPlatform/kubernetes/issues/443 - -This doc describes two security profiles: - - Simple profile: like single-user mode. Make it easy to evaluate K8s without lots of configuring accounts and policies. 
Protects from unauthorized users, but does not partition authorized users. - - Enterprise profile: Provide mechanisms needed for large numbers of users. Defense in depth. Should integrate with existing enterprise security infrastructure. - -K8s distribution should include templates of config, and documentation, for simple and enterprise profiles. System should be flexible enough for knowledgeable users to create intermediate profiles, but K8s developers should only reason about those two Profiles, not a matrix. - -Features in this doc are divided into "Initial Feature", and "Improvements". Initial features would be candidates for version 1.00. - -## Identity -###userAccount -K8s will have a `userAccount` API object. -- `userAccount` has a UID which is immutable. This is used to associate users with objects and to record actions in audit logs. -- `userAccount` has a name which is a string and human readable and unique among userAccounts. It is used to refer to users in Policies, to ensure that the Policies are human readable. It can be changed only when there are no Policy objects or other objects which refer to that name. An email address is a suggested format for this field. -- `userAccount` is not related to the unix username of processes in Pods created by that userAccount. -- `userAccount` API objects can have labels - -The system may associate one or more Authentication Methods with a -`userAccount` (but they are not formally part of the userAccount object.) -In a simple deployment, the authentication method for a -user might be an authentication token which is verified by a K8s server. In a -more complex deployment, the authentication might be delegated to -another system which is trusted by the K8s API to authenticate users, but where -the authentication details are unknown to K8s. - -Initial Features: -- there is no superuser `userAccount` -- `userAccount` objects are statically populated in the K8s API store by reading a config file. Only a K8s Cluster Admin can do this. -- `userAccount` can have a default `namespace`. If API call does not specify a `namespace`, the default `namespace` for that caller is assumed. -- `userAccount` is global. A single human with access to multiple namespaces is recommended to only have one userAccount. - -Improvements: -- Make `userAccount` part of a separate API group from core K8s objects like `pod`. Facilitates plugging in alternate Access Management. - -Simple Profile: - - single `userAccount`, used by all K8s Users and Project Admins. One access token shared by all. - -Enterprise Profile: - - every human user has own `userAccount`. - - `userAccount`s have labels that indicate both membership in groups, and ability to act in certain roles. - - each service using the API has own `userAccount` too. (e.g. `scheduler`, `repcontroller`) - - automated jobs to denormalize the ldap group info into the local system list of users into the K8s userAccount file. - -###Unix accounts -A `userAccount` is not a Unix user account. The fact that a pod is started by a `userAccount` does not mean that the processes in that pod's containers run as a Unix user with a corresponding name or identity. - -Initially: -- The unix accounts available in a container, and used by the processes running in a container are those that are provided by the combination of the base operating system and the Docker manifest. -- Kubernetes doesn't enforce any relation between `userAccount` and unix accounts. 
- -Improvements: -- Kubelet allocates disjoint blocks of root-namespace uids for each container. This may provide some defense-in-depth against container escapes. (https://github.com/docker/docker/pull/4572) -- requires docker to integrate user namespace support, and deciding what getpwnam() does for these uids. -- any features that help users avoid use of privileged containers (https://github.com/GoogleCloudPlatform/kubernetes/issues/391) - -###Namespaces -K8s will have a have a `namespace` API object. It is similar to a Google Compute Engine `project`. It provides a namespace for objects created by a group of people co-operating together, preventing name collisions with non-cooperating groups. It also serves as a reference point for authorization policies. - -Namespaces are described in [namespace.md](namespaces.md). - -In the Enterprise Profile: - - a `userAccount` may have permission to access several `namespace`s. - -In the Simple Profile: - - There is a single `namespace` used by the single user. - -Namespaces versus userAccount vs Labels: -- `userAccount`s are intended for audit logging (both name and UID should be logged), and to define who has access to `namespace`s. -- `labels` (see [docs/labels.md](/docs/labels.md)) should be used to distinguish pods, users, and other objects that cooperate towards a common goal but are different in some way, such as version, or responsibilities. -- `namespace`s prevent name collisions between uncoordinated groups of people, and provide a place to attach common policies for co-operating groups of people. - - -## Authentication - -Goals for K8s authentication: -- Include a built-in authentication system with no configuration required to use in single-user mode, and little configuration required to add several user accounts, and no https proxy required. -- Allow for authentication to be handled by a system external to Kubernetes, to allow integration with existing to enterprise authorization systems. The kubernetes namespace itself should avoid taking contributions of multiple authorization schemes. Instead, a trusted proxy in front of the apiserver can be used to authenticate users. - - For organizations whose security requirements only allow FIPS compliant implementations (e.g. apache) for authentication. - - So the proxy can terminate SSL, and isolate the CA-signed certificate from less trusted, higher-touch APIserver. - - For organizations that already have existing SaaS web services (e.g. storage, VMs) and want a common authentication portal. -- Avoid mixing authentication and authorization, so that authorization policies be centrally managed, and to allow changes in authentication methods without affecting authorization code. - -Initially: -- Tokens used to authenticate a user. -- Long lived tokens identify a particular `userAccount`. -- Administrator utility generates tokens at cluster setup. -- OAuth2.0 Bearer tokens protocol, http://tools.ietf.org/html/rfc6750 -- No scopes for tokens. Authorization happens in the API server -- Tokens dynamically generated by apiserver to identify pods which are making API calls. -- Tokens checked in a module of the APIserver. -- Authentication in apiserver can be disabled by flag, to allow testing without authorization enabled, and to allow use of an authenticating proxy. In this mode, a query parameter or header added by the proxy will identify the caller. - -Improvements: -- Refresh of tokens. -- SSH keys to access inside containers. 
- -To be considered for subsequent versions: -- Fuller use of OAuth (http://tools.ietf.org/html/rfc6749) -- Scoped tokens. -- Tokens that are bound to the channel between the client and the api server - - http://www.ietf.org/proceedings/90/slides/slides-90-uta-0.pdf - - http://www.browserauth.net - - -## Authorization - -K8s authorization should: -- Allow for a range of maturity levels, from single-user for those test driving the system, to integration with existing to enterprise authorization systems. -- Allow for centralized management of users and policies. In some organizations, this will mean that the definition of users and access policies needs to reside on a system other than k8s and encompass other web services (such as a storage service). -- Allow processes running in K8s Pods to take on identity, and to allow narrow scoping of permissions for those identities in order to limit damage from software faults. -- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Replication Controllers, Services, and the identities and policies for those Pods and Replication Controllers. -- Be separate as much as practical from Authentication, to allow Authentication methods to change over time and space, without impacting Authorization policies. - -K8s will implement a relatively simple -[Attribute-Based Access Control](http://en.wikipedia.org/wiki/Attribute_Based_Access_Control) model. -The model will be described in more detail in a forthcoming document. The model will -- Be less complex than XACML -- Be easily recognizable to those familiar with Amazon IAM Policies. -- Have a subset/aliases/defaults which allow it to be used in a way comfortable to those users more familiar with Role-Based Access Control. - -Authorization policy is set by creating a set of Policy objects. - -The API Server will be the Enforcement Point for Policy. For each API call that it receives, it will construct the Attributes needed to evaluate the policy (what user is making the call, what resource they are accessing, what they are trying to do that resource, etc) and pass those attributes to a Decision Point. The Decision Point code evaluates the Attributes against all the Policies and allows or denies the API call. The system will be modular enough that the Decision Point code can either be linked into the APIserver binary, or be another service that the apiserver calls for each Decision (with appropriate time-limited caching as needed for performance). - -Policy objects may be applicable only to a single namespace or to all namespaces; K8s Project Admins would be able to create those as needed. Other Policy objects may be applicable to all namespaces; a K8s Cluster Admin might create those in order to authorize a new type of controller to be used by all namespaces, or to make a K8s User into a K8s Project Admin.) - - -## Accounting - -The API should have a `quota` concept (see https://github.com/GoogleCloudPlatform/kubernetes/issues/442). A quota object relates a namespace (and optionally a label selector) to a maximum quantity of resources that may be used (see [resources.md](/docs/resources.md)). - -Initially: -- a `quota` object is immutable. -- for hosted K8s systems that do billing, Project is recommended level for billing accounts. -- Every object that consumes resources should have a `namespace` so that Resource usage stats are roll-up-able to `namespace`. -- K8s Cluster Admin sets quota objects by writing a config file. 
- -Improvements: -- allow one namespace to charge the quota for one or more other namespaces. This would be controlled by a policy which allows changing a billing_namespace= label on an object. -- allow quota to be set by namespace owners for (namespace x label) combinations (e.g. let "webserver" namespace use 100 cores, but to prevent accidents, don't allow "webserver" namespace and "instance=test" use more than 10 cores. -- tools to help write consistent quota config files based on number of nodes, historical namespace usages, QoS needs, etc. -- way for K8s Cluster Admin to incrementally adjust Quota objects. - -Simple profile: - - a single `namespace` with infinite resource limits. - -Enterprise profile: - - multiple namespaces each with their own limits. - -Issues: -- need for locking or "eventual consistency" when multiple apiserver goroutines are accessing the object store and handling pod creations. - - -## Audit Logging - -API actions can be logged. - -Initial implementation: -- All API calls logged to nginx logs. - -Improvements: -- API server does logging instead. -- Policies to drop logging for high rate trusted API calls, or by users performing audit or other sensitive functions. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/access.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/access.md?pixel)]() diff --git a/release-0.20.0/docs/design/admission_control.md b/release-0.20.0/docs/design/admission_control.md deleted file mode 100644 index 0581a190873..00000000000 --- a/release-0.20.0/docs/design/admission_control.md +++ /dev/null @@ -1,85 +0,0 @@ -# Kubernetes Proposal - Admission Control - -**Related PR:** - -| Topic | Link | -| ----- | ---- | -| Separate validation from RESTStorage | https://github.com/GoogleCloudPlatform/kubernetes/issues/2977 | - -## Background - -High level goals: - -* Enable an easy-to-use mechanism to provide admission control to cluster -* Enable a provider to support multiple admission control strategies or author their own -* Ensure any rejected request can propagate errors back to the caller with why the request failed - -Authorization via policy is focused on answering if a user is authorized to perform an action. - -Admission Control is focused on if the system will accept an authorized action. - -Kubernetes may choose to dismiss an authorized action based on any number of admission control strategies. - -This proposal documents the basic design, and describes how any number of admission control plug-ins could be injected. - -Implementation of specific admission control strategies are handled in separate documents. - -## kube-apiserver - -The kube-apiserver takes the following OPTIONAL arguments to enable admission control - -| Option | Behavior | -| ------ | -------- | -| admission_control | Comma-delimited, ordered list of admission control choices to invoke prior to modifying or deleting an object. | -| admission_control_config_file | File with admission control configuration parameters to boot-strap plug-in. | - -An **AdmissionControl** plug-in is an implementation of the following interface: - -```go -package admission - -// Attributes is an interface used by a plug-in to make an admission decision on a individual request. -type Attributes interface { - GetNamespace() string - GetKind() string - GetOperation() string - GetObject() runtime.Object -} - -// Interface is an abstract, pluggable interface for Admission Control decisions. 
-type Interface interface { - // Admit makes an admission decision based on the request attributes - // An error is returned if it denies the request. - Admit(a Attributes) (err error) -} -``` - -A **plug-in** must be compiled with the binary, and is registered as an available option by providing a name, and implementation -of admission.Interface. - -```go -func init() { - admission.RegisterPlugin("AlwaysDeny", func(client client.Interface, config io.Reader) (admission.Interface, error) { return NewAlwaysDeny(), nil }) -} -``` - -Invocation of admission control is handled by the **APIServer** and not individual **RESTStorage** implementations. - -This design assumes that **Issue 297** is adopted, and as a consequence, the general framework of the APIServer request/response flow -will ensure the following: - -1. Incoming request -2. Authenticate user -3. Authorize user -4. If operation=create|update, then validate(object) -5. If operation=create|update|delete, then admission.Admit(requestAttributes) - a. invoke each admission.Interface object in sequence -6. Object is persisted - -If at any step, there is an error, the request is canceled. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/admission_control.md?pixel)]() diff --git a/release-0.20.0/docs/design/admission_control_limit_range.md b/release-0.20.0/docs/design/admission_control_limit_range.md deleted file mode 100644 index 79b3669ccfc..00000000000 --- a/release-0.20.0/docs/design/admission_control_limit_range.md +++ /dev/null @@ -1,138 +0,0 @@ -# Admission control plugin: LimitRanger - -## Background - -This document proposes a system for enforcing min/max limits per resource as part of admission control. - -## Model Changes - -A new resource, **LimitRange**, is introduced to enumerate min/max limits for a resource type scoped to a -Kubernetes namespace. - -```go -const ( - // Limit that applies to all pods in a namespace - LimitTypePod string = "Pod" - // Limit that applies to all containers in a namespace - LimitTypeContainer string = "Container" -) - -// LimitRangeItem defines a min/max usage limit for any resource that matches on kind -type LimitRangeItem struct { - // Type of resource that this limit applies to - Type string `json:"type,omitempty"` - // Max usage constraints on this kind by resource name - Max ResourceList `json:"max,omitempty"` - // Min usage constraints on this kind by resource name - Min ResourceList `json:"min,omitempty"` - // Default usage constraints on this kind by resource name - Default ResourceList `json:"default,omitempty"` -} - -// LimitRangeSpec defines a min/max usage limit for resources that match on kind -type LimitRangeSpec struct { - // Limits is the list of LimitRangeItem objects that are enforced - Limits []LimitRangeItem `json:"limits"` -} - -// LimitRange sets resource usage limits for each kind of resource in a Namespace -type LimitRange struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - // Spec defines the limits enforced - Spec LimitRangeSpec `json:"spec,omitempty"` -} - -// LimitRangeList is a list of LimitRange items. 
-type LimitRangeList struct { - TypeMeta `json:",inline"` - ListMeta `json:"metadata,omitempty"` - - // Items is a list of LimitRange objects - Items []LimitRange `json:"items"` -} -``` - -## AdmissionControl plugin: LimitRanger - -The **LimitRanger** plug-in introspects all incoming admission requests. - -It makes decisions by evaluating the incoming object against all defined **LimitRange** objects in the request context namespace. - -The following min/max limits are imposed: - -**Type: Container** - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Min/Max amount of cpu per container | -| memory | Min/Max amount of memory per container | - -**Type: Pod** - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Min/Max amount of cpu per pod | -| memory | Min/Max amount of memory per pod | - -If a resource specifies a default value, it may get applied on the incoming resource. For example, if a default -value is provided for container cpu, it is set on the incoming container if and only if the incoming container -does not specify a resource requirements limit field. - -If a resource specifies a min value, it may get applied on the incoming resource. For example, if a min -value is provided for container cpu, it is set on the incoming container if and only if the incoming container does -not specify a resource requirements requests field. - -If the incoming object would cause a violation of the enumerated constraints, the request is denied with a set of -messages explaining what constraints were the source of the denial. - -If a constraint is not enumerated by a **LimitRange** it is not tracked. - -## kube-apiserver - -The server is updated to be aware of **LimitRange** objects. - -The constraints are only enforced if the kube-apiserver is started as follows: - -``` -$ kube-apiserver -admission_control=LimitRanger -``` - -## kubectl - -kubectl is modified to support the **LimitRange** resource. - -```kubectl describe``` provides a human-readable output of limits. - -For example, - -```shell -$ kubectl namespace myspace -$ kubectl create -f examples/limitrange/limit-range.json -$ kubectl get limits -NAME -limits -$ kubectl describe limits limits -Name: limits -Type Resource Min Max Default ----- -------- --- --- --- -Pod memory 1Mi 1Gi - -Pod cpu 250m 2 - -Container memory 1Mi 1Gi 1Mi -Container cpu 250m 250m 250m -``` - -## Future Enhancements: Define limits for a particular pod or container. - -In the current proposal, the **LimitRangeItem** matches purely on **LimitRangeItem.Type** - -It is expected we will want to define limits for particular pods or containers by name/uid and label/field selector. - -To make a **LimitRangeItem** more restrictive, we will intend to add these additional restrictions at a future point in time. 
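As a rough illustration of the enforcement described above, the core min/max comparison inside the plug-in might look like the sketch below. Plain `int64` quantities stand in for the real resource quantity type, and the function name and signature are assumptions for illustration only, not the actual plug-in code:

```go
package limitsketch

import "fmt"

// checkLimits verifies that each requested quantity falls inside the
// [min, max] range enumerated by a LimitRangeItem for that resource name.
// Resources that are not enumerated by the LimitRange are not tracked,
// matching the behavior described above.
func checkLimits(requested, min, max map[string]int64) error {
	for name, want := range requested {
		if lo, ok := min[name]; ok && want < lo {
			return fmt.Errorf("%s request %d is below the enumerated minimum %d", name, want, lo)
		}
		if hi, ok := max[name]; ok && want > hi {
			return fmt.Errorf("%s request %d exceeds the enumerated maximum %d", name, want, hi)
		}
	}
	return nil
}
```

A violation would surface to the caller as the denial messages described earlier; applying `Default` values for fields the incoming container did not specify would presumably happen before this check.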
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_limit_range.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/admission_control_limit_range.md?pixel)]() diff --git a/release-0.20.0/docs/design/admission_control_resource_quota.md b/release-0.20.0/docs/design/admission_control_resource_quota.md deleted file mode 100644 index 64dc9c3ed7a..00000000000 --- a/release-0.20.0/docs/design/admission_control_resource_quota.md +++ /dev/null @@ -1,159 +0,0 @@ -# Admission control plugin: ResourceQuota - -## Background - -This document proposes a system for enforcing hard resource usage limits per namespace as part of admission control. - -## Model Changes - -A new resource, **ResourceQuota**, is introduced to enumerate hard resource limits in a Kubernetes namespace. - -A new resource, **ResourceQuotaUsage**, is introduced to support atomic updates of a **ResourceQuota** status. - -```go -// The following identify resource constants for Kubernetes object types -const ( - // Pods, number - ResourcePods ResourceName = "pods" - // Services, number - ResourceServices ResourceName = "services" - // ReplicationControllers, number - ResourceReplicationControllers ResourceName = "replicationcontrollers" - // ResourceQuotas, number - ResourceQuotas ResourceName = "resourcequotas" -) - -// ResourceQuotaSpec defines the desired hard limits to enforce for Quota -type ResourceQuotaSpec struct { - // Hard is the set of desired hard limits for each named resource - Hard ResourceList `json:"hard,omitempty"` -} - -// ResourceQuotaStatus defines the enforced hard limits and observed use -type ResourceQuotaStatus struct { - // Hard is the set of enforced hard limits for each named resource - Hard ResourceList `json:"hard,omitempty"` - // Used is the current observed total usage of the resource in the namespace - Used ResourceList `json:"used,omitempty"` -} - -// ResourceQuota sets aggregate quota restrictions enforced per namespace -type ResourceQuota struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - // Spec defines the desired quota - Spec ResourceQuotaSpec `json:"spec,omitempty"` - - // Status defines the actual enforced quota and its current usage - Status ResourceQuotaStatus `json:"status,omitempty"` -} - -// ResourceQuotaUsage captures system observed quota status per namespace -// It is used to enforce atomic updates of a backing ResourceQuota.Status field in storage -type ResourceQuotaUsage struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - // Status defines the actual enforced quota and its current usage - Status ResourceQuotaStatus `json:"status,omitempty"` -} - -// ResourceQuotaList is a list of ResourceQuota items -type ResourceQuotaList struct { - TypeMeta `json:",inline"` - ListMeta `json:"metadata,omitempty"` - - // Items is a list of ResourceQuota objects - Items []ResourceQuota `json:"items"` -} - -``` - -## AdmissionControl plugin: ResourceQuota - -The **ResourceQuota** plug-in introspects all incoming admission requests. - -It makes decisions by evaluating the incoming object against all defined **ResourceQuota.Status.Hard** resource limits in the request -namespace. If acceptance of the resource would cause the total usage of a named resource to exceed its hard limit, the request is denied. 
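Conceptually, the per-request check the plug-in performs might look like the following sketch. Plain `int64` counts stand in for resource quantities, and the names are illustrative assumptions rather than the actual implementation:

```go
package quotasketch

import "fmt"

// admit denies a request if adding its requested amounts to the currently
// observed usage would exceed any hard limit recorded in
// ResourceQuota.Status.Hard for the namespace.
func admit(requested, used, hard map[string]int64) error {
	for name, limit := range hard {
		if used[name]+requested[name] > limit {
			return fmt.Errorf("%s limited to %d: %d already used, %d requested",
				name, limit, used[name], requested[name])
		}
	}
	return nil
}
```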
- -The following resource limits are imposed as part of core Kubernetes at the namespace level: - -| ResourceName | Description | -| ------------ | ----------- | -| cpu | Total cpu usage | -| memory | Total memory usage | -| pods | Total number of pods | -| services | Total number of services | -| replicationcontrollers | Total number of replication controllers | -| resourcequotas | Total number of resource quotas | - -Any resource that is not part of core Kubernetes must follow the resource naming convention prescribed by Kubernetes. - -This means the resource must have a fully-qualified name (i.e. mycompany.org/shinynewresource). - -If the incoming request does not cause the total usage to exceed any of the enumerated hard resource limits, the plug-in will post a -**ResourceQuotaUsage** document to the server to atomically update the observed usage based on the previously read -**ResourceQuota.ResourceVersion**. This keeps incremental usage atomically consistent, but does introduce a bottleneck (intentionally) -into the system. - -To optimize system performance, it is encouraged that all resource quotas are tracked on the same **ResourceQuota** document. As a result, -it is encouraged to cap the total number of individual quotas tracked in a **Namespace** at 1 by explicitly -capping it in the **ResourceQuota** document. - -## kube-apiserver - -The server is updated to be aware of **ResourceQuota** objects. - -The quota is only enforced if the kube-apiserver is started as follows: - -``` -$ kube-apiserver -admission_control=ResourceQuota -``` - -## kube-controller-manager - -A new controller is defined that runs a sync loop to calculate quota usage across the namespace. - -**ResourceQuota** usage is only calculated if a namespace has a **ResourceQuota** object. - -If the observed usage is different from the recorded usage, the controller sends a **ResourceQuotaUsage** resource -to the server to atomically update. - -The synchronization loop frequency will control how quickly DELETE actions are recorded in the system and usage is ticked down. - -To optimize the synchronization loop, this controller will WATCH on Pod resources to track DELETE events, and in response, recalculate -usage. This is because a Pod deletion will have the most impact on observed cpu and memory usage in the system, and we anticipate -this being the resource most closely running at the prescribed quota limits. - -## kubectl - -kubectl is modified to support the **ResourceQuota** resource. - -```kubectl describe``` provides a human-readable output of quota.
- -For example, - -``` -$ kubectl namespace myspace -$ kubectl create -f examples/resourcequota/resource-quota.json -$ kubectl get quota -NAME -quota -$ kubectl describe quota quota -Name: quota -Resource Used Hard --------- ---- ---- -cpu 0m 20 -memory 0 1Gi -pods 5 10 -replicationcontrollers 5 20 -resourcequotas 1 1 -services 3 5 -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_resource_quota.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/admission_control_resource_quota.md?pixel)]() diff --git a/release-0.20.0/docs/design/architecture.md b/release-0.20.0/docs/design/architecture.md deleted file mode 100644 index 010a811917d..00000000000 --- a/release-0.20.0/docs/design/architecture.md +++ /dev/null @@ -1,50 +0,0 @@ -# Kubernetes architecture - -A running Kubernetes cluster contains node agents (kubelet) and master components (APIs, scheduler, etc), on top of a distributed storage solution. This diagram shows our desired eventual state, though we're still working on a few things, like making kubelet itself (all our components, really) run within containers, and making the scheduler 100% pluggable. - -![Architecture Diagram](../architecture.png?raw=true "Architecture overview") - -## The Kubernetes Node - -When looking at the architecture of the system, we'll break it down to services that run on the worker node and services that compose the cluster-level control plane. - -The Kubernetes node has the services necessary to run application containers and be managed from the master systems. - -Each node runs Docker, of course. Docker takes care of the details of downloading images and running containers. - -### Kubelet -The **Kubelet** manages [pods](../pods.md) and their containers, their images, their volumes, etc. - -### Kube-Proxy - -Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends. - -Service endpoints are currently found via [DNS](../dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy. - -## The Kubernetes Control Plane - -The Kubernetes control plane is split into a set of components. Currently they all run on a single _master_ node, but that is expected to change soon in order to support high-availability clusters. These components work together to provide a unified view of the cluster. - -### etcd - -All persistent master state is stored in an instance of `etcd`. This provides a great way to store configuration data reliably. With `watch` support, coordinating components can be notified very quickly of changes. - -### Kubernetes API Server - -The apiserver serves up the [Kubernetes API](../api.md). It is intended to be a CRUD-y server, with most/all business logic implemented in separate components or in plug-ins. It mainly processes REST operations, validates them, and updates the corresponding objects in `etcd` (and eventually other stores). 
- -### Scheduler - -The scheduler binds unscheduled pods to nodes via the `/binding` API. The scheduler is pluggable, and we expect to support multiple cluster schedulers and even user-provided schedulers in the future. - -### Kubernetes Controller Manager Server - -All other cluster-level functions are currently performed by the Controller Manager. For instance, `Endpoints` objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller. These could eventually be split into separate components to make them independently pluggable. - -The [`replicationcontroller`](../replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/architecture.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/architecture.md?pixel)]() diff --git a/release-0.20.0/docs/design/clustering.md b/release-0.20.0/docs/design/clustering.md deleted file mode 100644 index 693c812500a..00000000000 --- a/release-0.20.0/docs/design/clustering.md +++ /dev/null @@ -1,66 +0,0 @@ -# Clustering in Kubernetes - - -## Overview -The term "clustering" refers to the process of having all members of the kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address. - -Once a cluster is established, the following is true: - -1. **Master -> Node** The master needs to know which nodes can take work and what their current status is wrt capacity. - 1. **Location** The master knows the name and location of all of the nodes in the cluster. - * For the purposes of this doc, location and name should be enough information so that the master can open a TCP connection to the Node. Most probably we will make this either an IP address or a DNS name. It is going to be important to be consistent here (master must be able to reach kubelet on that DNS name) so that we can verify certificates appropriately. - 2. **Target AuthN** A way to securely talk to the kubelet on that node. Currently we call out to the kubelet over HTTP. This should be over HTTPS and the master should know what CA to trust for that node. - 3. **Caller AuthN/Z** This would be the master verifying itself (and permissions) when calling the node. Currently, this is only used to collect statistics as authorization isn't critical. This may change in the future though. -2. **Node -> Master** The nodes currently talk to the master to know which pods have been assigned to them and to publish events. - 1. **Location** The nodes must know where the master is at. - 2. **Target AuthN** Since the master is assigning work to the nodes, it is critical that they verify whom they are talking to. - 3. **Caller AuthN/Z** The nodes publish events and so must be authenticated to the master. Ideally this authentication is specific to each node so that authorization can be narrowly scoped. The details of the work to run (including things like environment variables) might be considered sensitive and should be locked down also. - -**Note:** While the description here refers to a singular Master, in the future we should enable multiple Masters operating in an HA mode. 
While the "Master" is currently the combination of the API Server, Scheduler and Controller Manager, we will restrict ourselves to thinking about the main API and policy engine -- the API Server. - -## Current Implementation - -A central authority (generally the master) is responsible for determining the set of machines which are members of the cluster. Calls to create and remove worker nodes in the cluster are restricted to this single authority, and any other requests to add or remove worker nodes are rejected. (1.i). - -Communication from the master to nodes is currently over HTTP and is not secured or authenticated in any way. (1.ii, 1.iii). - -The location of the master is communicated out of band to the nodes. For GCE, this is done via Salt. Other cluster instructions/scripts use other methods. (2.i) - -Currently most communication from the node to the master is over HTTP. When it is done over HTTPS there is currently no verification of the cert of the master (2.ii). - -Currently, the node/kubelet is authenticated to the master via a token shared across all nodes. This token is distributed out of band (using Salt for GCE) and is optional. If it is not present then the kubelet is unable to publish events to the master. (2.iii) - -Our current mix of out of band communication doesn't meet all of our needs from a security point of view and is difficult to set up and configure. - -## Proposed Solution - -The proposed solution will provide a range of options for setting up and maintaining a secure Kubernetes cluster. We want to both allow for centrally controlled systems (leveraging pre-existing trust and configuration systems) or more ad-hoc automagic systems that are incredibly easy to set up. - -The building blocks of an easier solution: - -* **Move to TLS** We will move to using TLS for all intra-cluster communication. We will explicitly identify the trust chain (the set of trusted CAs) as opposed to trusting the system CAs. We will also use client certificates for all AuthN. -* [optional] **API driven CA** Optionally, we will run a CA in the master that will mint certificates for the nodes/kubelets. There will be pluggable policies that will automatically approve certificate requests here as appropriate. - * **CA approval policy** This is a pluggable policy object that can automatically approve CA signing requests. Stock policies will include `always-reject`, `queue` and `insecure-always-approve`. With `queue` there would be an API for evaluating and accepting/rejecting requests. Cloud providers could implement a policy here that verifies other out of band information and automatically approves/rejects based on other external factors. -* **Scoped Kubelet Accounts** These accounts are per-minion and (optionally) give a minion permission to register itself. - * To start with, we'd have the kubelets generate a cert/account in the form of `kubelet:`. To start we would then hard code policy such that we give that particular account appropriate permissions. Over time, we can make the policy engine more generic. -* [optional] **Bootstrap API endpoint** This is a helper service hosted outside of the Kubernetes cluster that helps with initial discovery of the master. - -### Static Clustering - -In this sequence diagram there is out of band admin entity that is creating all certificates and distributing them. It is also making sure that the kubelets know where to find the master. 
This provides for a lot of control but is more difficult to set up as lots of information must be communicated outside of Kubernetes. - -![Static Sequence Diagram](clustering/static.png) - -### Dynamic Clustering - -This diagram shows dynamic clustering using the bootstrap API endpoint. That API endpoint is used to both find the location of the master and communicate the root CA for the master. - -This flow has the admin manually approving the kubelet signing requests. This is the `queue` policy defined above. This manual intervention could be replaced by code that can verify the signing requests via other means. - -![Dynamic Sequence Diagram](clustering/dynamic.png) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/clustering.md?pixel)]() diff --git a/release-0.20.0/docs/design/clustering/.gitignore b/release-0.20.0/docs/design/clustering/.gitignore deleted file mode 100644 index 67bcd6cb58a..00000000000 --- a/release-0.20.0/docs/design/clustering/.gitignore +++ /dev/null @@ -1 +0,0 @@ -DroidSansMono.ttf diff --git a/release-0.20.0/docs/design/clustering/Dockerfile b/release-0.20.0/docs/design/clustering/Dockerfile deleted file mode 100644 index 3353419d843..00000000000 --- a/release-0.20.0/docs/design/clustering/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM debian:jessie - -RUN apt-get update -RUN apt-get -qy install python-seqdiag make curl - -WORKDIR /diagrams - -RUN curl -sLo DroidSansMono.ttf https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/DroidSansMono.ttf - -ADD . /diagrams - -CMD bash -c 'make >/dev/stderr && tar cf - *.png' \ No newline at end of file diff --git a/release-0.20.0/docs/design/clustering/Makefile b/release-0.20.0/docs/design/clustering/Makefile deleted file mode 100644 index f6aa53ed442..00000000000 --- a/release-0.20.0/docs/design/clustering/Makefile +++ /dev/null @@ -1,29 +0,0 @@ -FONT := DroidSansMono.ttf - -PNGS := $(patsubst %.seqdiag,%.png,$(wildcard *.seqdiag)) - -.PHONY: all -all: $(PNGS) - -.PHONY: watch -watch: - fswatch *.seqdiag | xargs -n 1 sh -c "make || true" - -$(FONT): - curl -sLo $@ https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/$(FONT) - -%.png: %.seqdiag $(FONT) - seqdiag --no-transparency -a -f '$(FONT)' $< - -# Build the stuff via a docker image -.PHONY: docker -docker: - docker build -t clustering-seqdiag . - docker run --rm clustering-seqdiag | tar xvf - - -docker-clean: - docker rmi clustering-seqdiag || true - docker images -q --filter "dangling=true" | xargs docker rmi - -fix-clock-skew: - boot2docker ssh sudo date -u -D "%Y%m%d%H%M.%S" --set "$(shell date -u +%Y%m%d%H%M.%S)" diff --git a/release-0.20.0/docs/design/clustering/README.md b/release-0.20.0/docs/design/clustering/README.md deleted file mode 100644 index bfff9e54853..00000000000 --- a/release-0.20.0/docs/design/clustering/README.md +++ /dev/null @@ -1,31 +0,0 @@ -This directory contains diagrams for the clustering design doc. - -This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with - -```bash -pip install seqdiag -``` - -Just call `make` to regenerate the diagrams. - -## Building with Docker -If you are on a Mac or your pip install is messed up, you can easily build with docker. - -``` -make docker -``` - -The first run will be slow but things should be fast after that.
- -To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`. - -If you are using boot2docker and get warnings about clock skew (or if things aren't building for some reason) then you can fix that up with `make fix-clock-skew`. - -## Automatically rebuild on file changes - -If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/clustering/README.md?pixel)]() diff --git a/release-0.20.0/docs/design/clustering/dynamic.png b/release-0.20.0/docs/design/clustering/dynamic.png deleted file mode 100644 index 92b40fee362..00000000000 Binary files a/release-0.20.0/docs/design/clustering/dynamic.png and /dev/null differ diff --git a/release-0.20.0/docs/design/clustering/dynamic.seqdiag b/release-0.20.0/docs/design/clustering/dynamic.seqdiag deleted file mode 100644 index 95bb395e886..00000000000 --- a/release-0.20.0/docs/design/clustering/dynamic.seqdiag +++ /dev/null @@ -1,24 +0,0 @@ -seqdiag { - activation = none; - - - user[label = "Admin User"]; - bootstrap[label = "Bootstrap API\nEndpoint"]; - master; - kubelet[stacked]; - - user -> bootstrap [label="createCluster", return="cluster ID"]; - user <-- bootstrap [label="returns\n- bootstrap-cluster-uri"]; - - user ->> master [label="start\n- bootstrap-cluster-uri"]; - master => bootstrap [label="setMaster\n- master-location\n- master-ca"]; - - user ->> kubelet [label="start\n- bootstrap-cluster-uri"]; - kubelet => bootstrap [label="get-master", return="returns\n- master-location\n- master-ca"]; - kubelet ->> master [label="signCert\n- unsigned-kubelet-cert", return="retuns\n- kubelet-cert"]; - user => master [label="getSignRequests"]; - user => master [label="approveSignRequests"]; - kubelet <<-- master [label="returns\n- kubelet-cert"]; - - kubelet => master [label="register\n- kubelet-location"] -} diff --git a/release-0.20.0/docs/design/clustering/static.png b/release-0.20.0/docs/design/clustering/static.png deleted file mode 100644 index bcdeca7e6f5..00000000000 Binary files a/release-0.20.0/docs/design/clustering/static.png and /dev/null differ diff --git a/release-0.20.0/docs/design/clustering/static.seqdiag b/release-0.20.0/docs/design/clustering/static.seqdiag deleted file mode 100644 index bdc54b764e2..00000000000 --- a/release-0.20.0/docs/design/clustering/static.seqdiag +++ /dev/null @@ -1,16 +0,0 @@ -seqdiag { - activation = none; - - admin[label = "Manual Admin"]; - ca[label = "Manual CA"] - master; - kubelet[stacked]; - - admin => ca [label="create\n- master-cert"]; - admin ->> master [label="start\n- ca-root\n- master-cert"]; - - admin => ca [label="create\n- kubelet-cert"]; - admin ->> kubelet [label="start\n- ca-root\n- kubelet-cert\n- master-location"]; - - kubelet => master [label="register\n- kubelet-location"]; -} diff --git a/release-0.20.0/docs/design/command_execution_port_forwarding.md b/release-0.20.0/docs/design/command_execution_port_forwarding.md deleted file mode 100644 index f06297f33ce..00000000000 --- a/release-0.20.0/docs/design/command_execution_port_forwarding.md +++ /dev/null @@ -1,149 +0,0 @@ -# Container Command Execution & Port Forwarding in Kubernetes - -## Abstract - -This describes an approach for providing support for: - -- 
executing commands in containers, with stdin/stdout/stderr streams attached -- port forwarding to containers - -## Background - -There are several related issues/PRs: - -- [Support attach](https://github.com/GoogleCloudPlatform/kubernetes/issues/1521) -- [Real container ssh](https://github.com/GoogleCloudPlatform/kubernetes/issues/1513) -- [Provide easy debug network access to services](https://github.com/GoogleCloudPlatform/kubernetes/issues/1863) -- [OpenShift container command execution proposal](https://github.com/openshift/origin/pull/576) - -## Motivation - -Users and administrators are accustomed to being able to access their systems -via SSH to run remote commands, get shell access, and do port forwarding. - -Supporting SSH to containers in Kubernetes is a difficult task. You must -specify a "user" and a hostname to make an SSH connection, and `sshd` requires -real users (resolvable by NSS and PAM). Because a container belongs to a pod, -and the pod belongs to a namespace, you need to specify namespace/pod/container -to uniquely identify the target container. Unfortunately, a -namespace/pod/container is not a real user as far as SSH is concerned. Also, -most Linux systems limit user names to 32 characters, which is unlikely to be -large enough to contain namespace/pod/container. We could devise some scheme to -map each namespace/pod/container to a 32-character user name, adding entries to -`/etc/passwd` (or LDAP, etc.) and keeping those entries fully in sync all the -time. Alternatively, we could write custom NSS and PAM modules that allow the -host to resolve a namespace/pod/container to a user without needing to keep -files or LDAP in sync. - -As an alternative to SSH, we are using a multiplexed streaming protocol that -runs on top of HTTP. There are no requirements about users being real users, -nor is there any limitation on user name length, as the protocol is under our -control. The only downside is that standard tooling that expects to use SSH -won't be able to work with this mechanism, unless adapters can be written. - -## Constraints and Assumptions - -- SSH support is not currently in scope -- CGroup confinement is ultimately desired, but implementing that support is not currently in scope -- SELinux confinement is ultimately desired, but implementing that support is not currently in scope - -## Use Cases - -- As a user of a Kubernetes cluster, I want to run arbitrary commands in a container, attaching my local stdin/stdout/stderr to the container -- As a user of a Kubernetes cluster, I want to be able to connect to local ports on my computer and have them forwarded to ports in the container - -## Process Flow - -### Remote Command Execution Flow -1. The client connects to the Kubernetes Master to initiate a remote command execution -request -2. The Master proxies the request to the Kubelet where the container lives -3. The Kubelet executes nsenter + the requested command and streams stdin/stdout/stderr back and forth between the client and the container - -### Port Forwarding Flow -1. The client connects to the Kubernetes Master to initiate a remote command execution -request -2. The Master proxies the request to the Kubelet where the container lives -3. The client listens on each specified local port, awaiting local connections -4. The client connects to one of the local listening ports -4. The client notifies the Kubelet of the new connection -5. 
The Kubelet executes nsenter + socat and streams data back and forth between the client and the port in the container - - -## Design Considerations - -### Streaming Protocol - -The current multiplexed streaming protocol used is SPDY. This is not the -long-term desire, however. As soon as there is viable support for HTTP/2 in Go, -we will switch to that. - -### Master as First Level Proxy - -Clients should not be allowed to communicate directly with the Kubelet for -security reasons. Therefore, the Master is currently the only suggested entry -point to be used for remote command execution and port forwarding. This is not -necessarily desirable, as it means that all remote command execution and port -forwarding traffic must travel through the Master, potentially impacting other -API requests. - -In the future, it might make more sense to retrieve an authorization token from -the Master, and then use that token to initiate a remote command execution or -port forwarding request with a load balanced proxy service dedicated to this -functionality. This would keep the streaming traffic out of the Master. - -### Kubelet as Backend Proxy - -The kubelet is currently responsible for handling remote command execution and -port forwarding requests. Just like with the Master described above, this means -that all remote command execution and port forwarding streaming traffic must -travel through the Kubelet, which could result in a degraded ability to service -other requests. - -In the future, it might make more sense to use a separate service on the node. - -Alternatively, we could possibly inject a process into the container that only -listens for a single request, expose that process's listening port on the node, -and then issue a redirect to the client such that it would connect to the first -level proxy, which would then proxy directly to the injected process's exposed -port. This would minimize the amount of proxying that takes place. - -### Scalability - -There are at least 2 different ways to execute a command in a container: -`docker exec` and `nsenter`. While `docker exec` might seem like an easier and -more obvious choice, it has some drawbacks. - -#### `docker exec` - -We could expose `docker exec` (i.e. have Docker listen on an exposed TCP port -on the node), but this would require proxying from the edge and securing the -Docker API. `docker exec` calls go through the Docker daemon, meaning that all -stdin/stdout/stderr traffic is proxied through the Daemon, adding an extra hop. -Additionally, you can't isolate 1 malicious `docker exec` call from normal -usage, meaning an attacker could initiate a denial of service or other attack -and take down the Docker daemon, or the node itself. - -We expect remote command execution and port forwarding requests to be long -running and/or high bandwidth operations, and routing all the streaming data -through the Docker daemon feels like a bottleneck we can avoid. - -#### `nsenter` - -The implementation currently uses `nsenter` to run commands in containers, -joining the appropriate container namespaces. `nsenter` runs directly on the -node and is not proxied through any single daemon process. - -### Security - -Authentication and authorization hasn't specifically been tested yet with this -functionality. We need to make sure that users are not allowed to execute -remote commands or do port forwarding to containers they aren't allowed to -access. 
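For reference, the nsenter-based execution discussed above amounts to joining the container's namespaces and running the command with the client's streams attached, roughly as in this sketch (not the actual Kubelet code; how the container's init PID is obtained from the container runtime is out of scope here):

```go
// Hedged sketch: illustrative only, not the Kubelet implementation.
package execsketch

import (
	"os"
	"os/exec"
	"strconv"
)

// runInContainer joins the mount, UTS, IPC, network, and PID namespaces of
// the process identified by containerPid and runs cmd there, wiring the
// caller's stdin/stdout/stderr straight through.
func runInContainer(containerPid int, cmd ...string) error {
	args := []string{"-t", strconv.Itoa(containerPid), "-m", "-u", "-i", "-n", "-p", "--"}
	args = append(args, cmd...)
	c := exec.Command("nsenter", args...)
	c.Stdin = os.Stdin
	c.Stdout = os.Stdout
	c.Stderr = os.Stderr
	return c.Run()
}
```

Port forwarding follows the same pattern, with `socat` standing in for the user-supplied command.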
- -Additional work is required to ensure that multiple command execution or port forwarding connections from different clients are not able to see each other's data. This can most likely be achieved via SELinux labeling and unique process contexts. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/command_execution_port_forwarding.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/command_execution_port_forwarding.md?pixel)]() diff --git a/release-0.20.0/docs/design/event_compression.md b/release-0.20.0/docs/design/event_compression.md deleted file mode 100644 index 2aa84becab5..00000000000 --- a/release-0.20.0/docs/design/event_compression.md +++ /dev/null @@ -1,84 +0,0 @@ -# Kubernetes Event Compression - -This document captures the design of event compression. - - -## Background - -Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate ```image_not_existing``` and ```container_is_waiting``` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)). - -## Proposal -Each binary that generates events (for example, ```kubelet```) should keep track of previously generated events so that it can collapse recurring events into a single event instead of creating a new instance for each new event. - -Event compression should be best effort (not guaranteed). Meaning, in the worst case, ```n``` identical (minus timestamp) events may still result in ```n``` event entries. - -## Design -Instead of a single Timestamp, each event object [contains](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/types.go#L1111) the following fields: - * ```FirstTimestamp util.Time``` - * The date/time of the first occurrence of the event. - * ```LastTimestamp util.Time``` - * The date/time of the most recent occurrence of the event. - * On first occurrence, this is equal to the FirstTimestamp. - * ```Count int``` - * The number of occurrences of this event between FirstTimestamp and LastTimestamp - * On first occurrence, this is 1. - -Each binary that generates events: - * Maintains a historical record of previously generated events: - * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [```pkg/client/record/events_cache.go```](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client/record/events_cache.go). - * The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event: - * ```event.Source.Component``` - * ```event.Source.Host``` - * ```event.InvolvedObject.Kind``` - * ```event.InvolvedObject.Namespace``` - * ```event.InvolvedObject.Name``` - * ```event.InvolvedObject.UID``` - * ```event.InvolvedObject.APIVersion``` - * ```event.Reason``` - * ```event.Message``` - * The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. 
Instead, after 4096 unique events are generated, the oldest events are evicted from the cache. - * When an event is generated, the previously generated events cache is checked (see [```pkg/client/record/event.go```](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/record/event.go)). - * If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd: - * The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count. - * The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update). - * If the key for the new event does not match the key for any previously generated event (meaning none of the above fields match between the new event and any previously generated events), then the event is considered to be new/unique and a new event entry is created in etcd: - * The usual POST/create event API is called to create a new event entry in etcd. - * An entry for the event is also added to the previously generated events cache. - -## Issues/Risks - * Compression is not guaranteed, because each component keeps track of event history in memory - * An application restart causes event history to be cleared, meaning event history is not preserved across application restarts and compression will not occur across component restarts. - * Because an LRU cache is used to keep track of previously generated events, if too many unique events are generated, old events will be evicted from the cache, so events will only be compressed until they age out of the events cache, at which point any new instance of the event will cause a new entry to be created in etcd. - -## Example -Sample kubectl output -``` -FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE -Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet. -Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-1.c.saad-dev-vms.internal} Starting kubelet. -Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-3.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-3.c.saad-dev-vms.internal} Starting kubelet. -Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-2.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-2.c.saad-dev-vms.internal} Starting kubelet. 
-Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-influx-grafana-controller-0133o Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 elasticsearch-logging-controller-fplln Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 kibana-logging-controller-gziey Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 skydns-ls6k1 Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods -Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest" -Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal - -``` - -This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries. - -## Related Pull Requests/Issues - * Issue [#4073](https://github.com/GoogleCloudPlatform/kubernetes/issues/4073): Compress duplicate events - * PR [#4157](https://github.com/GoogleCloudPlatform/kubernetes/issues/4157): Add "Update Event" to Kubernetes API - * PR [#4206](https://github.com/GoogleCloudPlatform/kubernetes/issues/4206): Modify Event struct to allow compressing multiple recurring events in to a single event - * PR [#4306](https://github.com/GoogleCloudPlatform/kubernetes/issues/4306): Compress recurring events in to a single event to optimize etcd storage - * PR [#4444](https://github.com/GoogleCloudPlatform/kubernetes/pull/4444): Switch events history to use LRU cache instead of map - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/event_compression.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/event_compression.md?pixel)]() diff --git a/release-0.20.0/docs/design/expansion.md b/release-0.20.0/docs/design/expansion.md deleted file mode 100644 index d1de3152061..00000000000 --- a/release-0.20.0/docs/design/expansion.md +++ /dev/null @@ -1,391 +0,0 @@ -# Variable expansion in pod command, args, and env - -## Abstract - -A proposal for the expansion of environment variables using a simple `$(var)` syntax. - -## Motivation - -It is extremely common for users to need to compose environment variables or pass arguments to -their commands using the values of environment variables. Kubernetes should provide a facility for -the 80% cases in order to decrease coupling and the use of workarounds. - -## Goals - -1. Define the syntax format -2. Define the scoping and ordering of substitutions -3. Define the behavior for unmatched variables -4. 
Define the behavior for unexpected/malformed input - -## Constraints and Assumptions - -* This design should describe the simplest possible syntax to accomplish the use-cases -* Expansion syntax will not support more complicated shell-like behaviors such as default values - (viz: `$(VARIABLE_NAME:"default")`), inline substitution, etc. - -## Use Cases - -1. As a user, I want to compose new environment variables for a container using a substitution - syntax to reference other variables in the container's environment and service environment - variables -1. As a user, I want to substitute environment variables into a container's command -1. As a user, I want to do the above without requiring the container's image to have a shell -1. As a user, I want to be able to specify a default value for a service variable which may - not exist -1. As a user, I want to see an event associated with the pod if an expansion fails (ie, references - variable names that cannot be expanded) - -### Use Case: Composition of environment variables - -Currently, containers are injected with docker-style environment variables for the services in -their pod's namespace. There are several variables for each service, but users routinely need -to compose URLs based on these variables because there is not a variable for the exact format -they need. Users should be able to build new environment variables with the exact format they need. -Eventually, it should also be possible to turn off the automatic injection of the docker-style -variables into pods and let the users consume the exact information they need via the downward API -and composition. - -#### Expanding expanded variables - -It should be possible to reference an variable which is itself the result of an expansion, if the -referenced variable is declared in the container's environment prior to the one referencing it. -Put another way -- a container's environment is expanded in order, and expanded variables are -available to subsequent expansions. - -### Use Case: Variable expansion in command - -Users frequently need to pass the values of environment variables to a container's command. -Currently, Kubernetes does not perform any expansion of variables. The workaround is to invoke a -shell in the container's command and have the shell perform the substitution, or to write a wrapper -script that sets up the environment and runs the command. This has a number of drawbacks: - -1. Solutions that require a shell are unfriendly to images that do not contain a shell -2. Wrapper scripts make it harder to use images as base images -3. Wrapper scripts increase coupling to kubernetes - -Users should be able to do the 80% case of variable expansion in command without writing a wrapper -script or adding a shell invocation to their containers' commands. - -### Use Case: Images without shells - -The current workaround for variable expansion in a container's command requires the container's -image to have a shell. This is unfriendly to images that do not contain a shell (`scratch` images, -for example). Users should be able to perform the other use-cases in this design without regard to -the content of their images. - -### Use Case: See an event for incomplete expansions - -It is possible that a container with incorrect variable values or command line may continue to run -for a long period of time, and that the end-user would have no visual or obvious warning of the -incorrect configuration. 
If the kubelet creates an event when an expansion references a variable -that cannot be expanded, it will help users quickly detect problems with expansions. - -## Design Considerations - -### What features should be supported? - -In order to limit complexity, we want to provide the right amount of functionality so that the 80% -cases can be realized and nothing more. We felt that the essentials boiled down to: - -1. Ability to perform direct expansion of variables in a string -2. Ability to specify default values via a prioritized mapping function but without support for - defaults as a syntax-level feature - -### What should the syntax be? - -The exact syntax for variable expansion has a large impact on how users perceive and relate to the -feature. We considered implementing a very restrictive subset of the shell `${var}` syntax. This -syntax is an attractive option on some level, because many people are familiar with it. However, -this syntax also has a large number of lesser known features such as the ability to provide -default values for unset variables, perform inline substitution, etc. - -In the interest of preventing conflation of the expansion feature in Kubernetes with the shell -feature, we chose a different syntax similar to the one in Makefiles, `$(var)`. We also chose not -to support the bar `$var` format, since it is not required to implement the required use-cases. - -Nested references, ie, variable expansion within variable names, are not supported. - -#### How should unmatched references be treated? - -Ideally, it should be extremely clear when a variable reference couldn't be expanded. We decided -the best experience for unmatched variable references would be to have the entire reference, syntax -included, show up in the output. As an example, if the reference `$(VARIABLE_NAME)` cannot be -expanded, then `$(VARIABLE_NAME)` should be present in the output. - -#### Escaping the operator - -Although the `$(var)` syntax does overlap with the `$(command)` form of command substitution -supported by many shells, because unexpanded variables are present verbatim in the output, we -expect this will not present a problem to many users. If there is a collision between a variable -name and command substitution syntax, the syntax can be escaped with the form `$$(VARIABLE_NAME)`, -which will evaluate to `$(VARIABLE_NAME)` whether `VARIABLE_NAME` can be expanded or not. - -## Design - -This design encompasses the variable expansion syntax and specification and the changes needed to -incorporate the expansion feature into the container's environment and command. - -### Syntax and expansion mechanics - -This section describes the expansion syntax, evaluation of variable values, and how unexpected or -malformed inputs are handled. - -#### Syntax - -The inputs to the expansion feature are: - -1. A utf-8 string (the input string) which may contain variable references -2. A function (the mapping function) that maps the name of a variable to the variable's value, of - type `func(string) string` - -Variable references in the input string are indicated exclusively with the syntax -`$()`. The syntax tokens are: - -- `$`: the operator -- `(`: the reference opener -- `)`: the reference closer - -The operator has no meaning unless accompanied by the reference opener and closer tokens. The -operator can be escaped using `$$`. One literal `$` will be emitted for each `$$` in the input. - -The reference opener and closer characters have no meaning when not part of a variable reference. 
-If a variable reference is malformed, viz: `$(VARIABLE_NAME` without a closing expression, the -operator and expression opening characters are treated as ordinary characters without special -meanings. - -#### Scope and ordering of substitutions - -The scope in which variable references are expanded is defined by the mapping function. Within the -mapping function, any arbitrary strategy may be used to determine the value of a variable name. -The most basic implementation of a mapping function is to use a `map[string]string` to lookup the -value of a variable. - -In order to support default values for variables like service variables presented by the kubelet, -which may not be bound because the service that provides them does not yet exist, there should be a -mapping function that uses a list of `map[string]string` like: - -```go -func MakeMappingFunc(maps ...map[string]string) func(string) string { - return func(input string) string { - for _, context := range maps { - val, ok := context[input] - if ok { - return val - } - } - - return "" - } -} - -// elsewhere -containerEnv := map[string]string{ - "FOO": "BAR", - "ZOO": "ZAB", - "SERVICE2_HOST": "some-host", -} - -serviceEnv := map[string]string{ - "SERVICE_HOST": "another-host", - "SERVICE_PORT": "8083", -} - -// single-map variation -mapping := MakeMappingFunc(containerEnv) - -// default variables not found in serviceEnv -mappingWithDefaults := MakeMappingFunc(serviceEnv, containerEnv) -``` - -### Implementation changes - -The necessary changes to implement this functionality are: - -1. Add a new interface, `ObjectEventRecorder`, which is like the `EventRecorder` interface, but - scoped to a single object, and a function that returns an `ObjectEventRecorder` given an - `ObjectReference` and an `EventRecorder` -2. Introduce `third_party/golang/expansion` package that provides: - 1. An `Expand(string, func(string) string) string` function - 2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) string` function -3. Make the kubelet expand environment correctly -4. Make the kubelet expand command correctly - -#### Event Recording - -In order to provide an event when an expansion references undefined variables, the mapping function -must be able to create an event. In order to facilitate this, we should create a new interface in -the `api/client/record` package which is similar to `EventRecorder`, but scoped to a single object: - -```go -// ObjectEventRecorder knows how to record events about a single object. -type ObjectEventRecorder interface { - // Event constructs an event from the given information and puts it in the queue for sending. - // 'reason' is the reason this event is generated. 'reason' should be short and unique; it will - // be used to automate handling of events, so imagine people writing switch statements to - // handle them. You want to make that easy. - // 'message' is intended to be human readable. - // - // The resulting event will be created in the same namespace as the reference object. - Event(reason, message string) - - // Eventf is just like Event, but with Sprintf for the message field. - Eventf(reason, messageFmt string, args ...interface{}) - - // PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field. 
- PastEventf(timestamp util.Time, reason, messageFmt string, args ...interface{}) -} -``` - -There should also be a function that can construct an `ObjectEventRecorder` from a `runtime.Object` -and an `EventRecorder`: - -```go -type objectRecorderImpl struct { - object runtime.Object - recorder EventRecorder -} - -func (r *objectRecorderImpl) Event(reason, message string) { - r.recorder.Event(r.object, reason, message) -} - -func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) ObjectEventRecorder { - return &objectRecorderImpl{object, recorder} -} -``` - -#### Expansion package - -The expansion package should provide two methods: - -```go -// MappingFuncFor returns a mapping function for use with Expand that -// implements the expansion semantics defined in the expansion spec; it -// returns the input string wrapped in the expansion syntax if no mapping -// for the input is found. If no expansion is found for a key, an event -// is raised on the given recorder. -func MappingFuncFor(recorder record.ObjectEventRecorder, context ...map[string]string) func(string) string { - // ... -} - -// Expand replaces variable references in the input string according to -// the expansion spec using the given mapping function to resolve the -// values of variables. -func Expand(input string, mapping func(string) string) string { - // ... -} -``` - -#### Kubelet changes - -The Kubelet should be made to correctly expand variables references in a container's environment, -command, and args. Changes will need to be made to: - -1. The `makeEnvironmentVariables` function in the kubelet; this is used by - `GenerateRunContainerOptions`, which is used by both the docker and rkt container runtimes -2. The docker manager `setEntrypointAndCommand` func has to be changed to perform variable - expansion -3. The rkt runtime should be made to support expansion in command and args when support for it is - implemented - -### Examples - -#### Inputs and outputs - -These examples are in the context of the mapping: - -| Name | Value | -|-------------|------------| -| `VAR_A` | `"A"` | -| `VAR_B` | `"B"` | -| `VAR_C` | `"C"` | -| `VAR_REF` | `$(VAR_A)` | -| `VAR_EMPTY` | `""` | - -No other variables are defined. 
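Before the full table of cases, here is a quick usage sketch of the proposed API under this mapping. The inline closure stands in for `MappingFuncFor` (no event recording), and the import path is assumed from the package name given above:

```go
package main

import (
	"fmt"

	// Assumed import path for the proposed third_party/golang/expansion package.
	"github.com/GoogleCloudPlatform/kubernetes/third_party/golang/expansion"
)

func main() {
	vars := map[string]string{
		"VAR_A": "A", "VAR_B": "B", "VAR_C": "C",
		"VAR_REF": "$(VAR_A)", "VAR_EMPTY": "",
	}
	// Stand-in for MappingFuncFor: return the value if known, otherwise echo
	// the reference back wrapped in the expansion syntax.
	mapping := func(name string) string {
		if val, ok := vars[name]; ok {
			return val
		}
		return "$(" + name + ")"
	}

	fmt.Println(expansion.Expand("$(VAR_A)_$(VAR_B)_$(VAR_C)", mapping)) // A_B_C
	fmt.Println(expansion.Expand("$$(VAR_B)_$(VAR_A)", mapping))         // $(VAR_B)_A
	fmt.Println(expansion.Expand("$(VAR_DNE)", mapping))                 // $(VAR_DNE)
}
```

The full set of inputs and expected results follows.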
- -| Input | Result | -|--------------------------------|----------------------------| -| `"$(VAR_A)"` | `"A"` | -| `"___$(VAR_B)___"` | `"___B___"` | -| `"___$(VAR_C)"` | `"___C"` | -| `"$(VAR_A)-$(VAR_A)"` | `"A-A"` | -| `"$(VAR_A)-1"` | `"A-1"` | -| `"$(VAR_A)_$(VAR_B)_$(VAR_C)"` | `"A_B_C"` | -| `"$$(VAR_B)_$(VAR_A)"` | `"$(VAR_B)_A"` | -| `"$$(VAR_A)_$$(VAR_B)"` | `"$(VAR_A)_$(VAR_B)"` | -| `"f000-$$VAR_A"` | `"f000-$VAR_A"` | -| `"foo\\$(VAR_C)bar"` | `"foo\Cbar"` | -| `"foo\\\\$(VAR_C)bar"` | `"foo\\Cbar"` | -| `"foo\\\\\\\\$(VAR_A)bar"` | `"foo\\\\Abar"` | -| `"$(VAR_A$(VAR_B))"` | `"$(VAR_A$(VAR_B))"` | -| `"$(VAR_A$(VAR_B)"` | `"$(VAR_A$(VAR_B)"` | -| `"$(VAR_REF)"` | `"$(VAR_A)"` | -| `"%%$(VAR_REF)--$(VAR_REF)%%"` | `"%%$(VAR_A)--$(VAR_A)%%"` | -| `"foo$(VAR_EMPTY)bar"` | `"foobar"` | -| `"foo$(VAR_Awhoops!"` | `"foo$(VAR_Awhoops!"` | -| `"f00__(VAR_A)__"` | `"f00__(VAR_A)__"` | -| `"$?_boo_$!"` | `"$?_boo_$!"` | -| `"$VAR_A"` | `"$VAR_A"` | -| `"$(VAR_DNE)"` | `"$(VAR_DNE)"` | -| `"$$$$$$(BIG_MONEY)"` | `"$$$(BIG_MONEY)"` | -| `"$$$$$$(VAR_A)"` | `"$$$(VAR_A)"` | -| `"$$$$$$$(GOOD_ODDS)"` | `"$$$$(GOOD_ODDS)"` | -| `"$$$$$$$(VAR_A)"` | `"$$$A"` | -| `"$VAR_A)"` | `"$VAR_A)"` | -| `"${VAR_A}"` | `"${VAR_A}"` | -| `"$(VAR_B)_______$(A"` | `"B_______$(A"` | -| `"$(VAR_C)_______$("` | `"C_______$("` | -| `"$(VAR_A)foobarzab$"` | `"Afoobarzab$"` | -| `"foo-\\$(VAR_A"` | `"foo-\$(VAR_A"` | -| `"--$($($($($--"` | `"--$($($($($--"` | -| `"$($($($($--foo$("` | `"$($($($($--foo$("` | -| `"foo0--$($($($("` | `"foo0--$($($($("` | -| `"$(foo$$var)` | `$(foo$$var)` | - -#### In a pod: building a URL - -Notice the `$(var)` syntax. - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: expansion-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: PUBLIC_URL - value: "http://$(GITSERVER_SERVICE_HOST):$(GITSERVER_SERVICE_PORT)" - restartPolicy: Never -``` - -#### In a pod: building a URL using downward API - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: expansion-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: "metadata.namespace" - - name: PUBLIC_URL - value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)" - restartPolicy: Never -``` - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/expansion.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/expansion.md?pixel)]() diff --git a/release-0.20.0/docs/design/identifiers.md b/release-0.20.0/docs/design/identifiers.md deleted file mode 100644 index 09a8aa27084..00000000000 --- a/release-0.20.0/docs/design/identifiers.md +++ /dev/null @@ -1,96 +0,0 @@ -# Identifiers and Names in Kubernetes - -A summarization of the goals and recommendations for identifiers in Kubernetes. Described in [GitHub issue #199](https://github.com/GoogleCloudPlatform/kubernetes/issues/199). - - -## Definitions - -UID -: A non-empty, opaque, system-generated value guaranteed to be unique in time and space; intended to distinguish between historical occurrences of similar entities. 
- -Name -: A non-empty string guaranteed to be unique within a given scope at a particular time; used in resource URLs; provided by clients at creation time and encouraged to be human friendly; intended to facilitate creation idempotence and space-uniqueness of singleton objects, distinguish distinct entities, and reference particular entities across operations. - -[rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) label (DNS_LABEL) -: An alphanumeric (a-z, and 0-9) string, with a maximum length of 63 characters, with the '-' character allowed anywhere except the first or last character, suitable for use as a hostname or segment in a domain name - -[rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) subdomain (DNS_SUBDOMAIN) -: One or more lowercase rfc1035/rfc1123 labels separated by '.' with a maximum length of 253 characters - -[rfc4122](http://www.ietf.org/rfc/rfc4122.txt) universally unique identifier (UUID) -: A 128 bit generated value that is extremely unlikely to collide across time and space and requires no central coordination - - -## Objectives for names and UIDs - -1. Uniquely identify (via a UID) an object across space and time - -2. Uniquely name (via a name) an object across space - -3. Provide human-friendly names in API operations and/or configuration files - -4. Allow idempotent creation of API resources (#148) and enforcement of space-uniqueness of singleton objects - -5. Allow DNS names to be automatically generated for some objects - - -## General design - -1. When an object is created via an API, a Name string (a DNS_SUBDOMAIN) must be specified. Name must be non-empty and unique within the apiserver. This enables idempotent and space-unique creation operations. Parts of the system (e.g. replication controller) may join strings (e.g. a base name and a random suffix) to create a unique Name. For situations where generating a name is impractical, some or all objects may support a param to auto-generate a name. Generating random names will defeat idempotency. - * Examples: "guestbook.user", "backend-x4eb1" - -2. When an object is created via an API, a Namespace string (a DNS_SUBDOMAIN? format TBD via #1114) may be specified. Depending on the API receiver, namespaces might be validated (e.g. apiserver might ensure that the namespace actually exists). If a namespace is not specified, one will be assigned by the API receiver. This assignment policy might vary across API receivers (e.g. apiserver might have a default, kubelet might generate something semi-random). - * Example: "api.k8s.example.com" - -3. Upon acceptance of an object via an API, the object is assigned a UID (a UUID). UID must be non-empty and unique across space and time. - * Example: "01234567-89ab-cdef-0123-456789abcdef" - - -## Case study: Scheduling a pod - -Pods can be placed onto a particular node in a number of ways. This case -study demonstrates how the above design can be applied to satisfy the -objectives. - -### A pod scheduled by a user through the apiserver - -1. A user submits a pod with Namespace="" and Name="guestbook" to the apiserver. - -2. The apiserver validates the input. - 1. A default Namespace is assigned. - 2. The pod name must be space-unique within the Namespace. - 3. Each container within the pod has a name which must be space-unique within the pod. - -3. The pod is accepted. - 1. A new UID is assigned. - -4. The pod is bound to a node. - 1. 
The kubelet on the node is passed the pod's UID, Namespace, and Name. - -5. Kubelet validates the input. - -6. Kubelet runs the pod. - 1. Each container is started up with enough metadata to distinguish the pod from whence it came. - 2. Each attempt to run a container is assigned a UID (a string) that is unique across time. - * This may correspond to Docker's container ID. - -### A pod placed by a config file on the node - -1. A config file is stored on the node, containing a pod with UID="", Namespace="", and Name="cadvisor". - -2. Kubelet validates the input. - 1. Since UID is not provided, kubelet generates one. - 2. Since Namespace is not provided, kubelet generates one. - 1. The generated namespace should be deterministic and cluster-unique for the source, such as a hash of the hostname and file path. - * E.g. Namespace="file-f4231812554558a718a01ca942782d81" - -3. Kubelet runs the pod. - 1. Each container is started up with enough metadata to distinguish the pod from whence it came. - 2. Each attempt to run a container is assigned a UID (a string) that is unique across time. - 1. This may correspond to Docker's container ID. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/identifiers.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/identifiers.md?pixel)]() diff --git a/release-0.20.0/docs/design/namespaces.md b/release-0.20.0/docs/design/namespaces.md deleted file mode 100644 index d2057e9e4cc..00000000000 --- a/release-0.20.0/docs/design/namespaces.md +++ /dev/null @@ -1,340 +0,0 @@ -# Namespaces - -## Abstract - -A Namespace is a mechanism to partition resources created by users into -a logically named group. - -## Motivation - -A single cluster should be able to satisfy the needs of multiple user communities. - -Each user community wants to be able to work in isolation from other communities. - -Each user community has its own: - -1. resources (pods, services, replication controllers, etc.) -2. policies (who can or cannot perform actions in their community) -3. constraints (this community is allowed this much quota, etc.) - -A cluster operator may create a Namespace for each unique user community. - -The Namespace provides a unique scope for: - -1. named resources (to avoid basic naming collisions) -2. delegated management authority to trusted users -3. ability to limit community resource consumption - -## Use cases - -1. As a cluster operator, I want to support multiple user communities on a single cluster. -2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users - in those communities. -3. As a cluster operator, I want to limit the amount of resources each community can consume in order - to limit the impact to other communities using the cluster. -4. As a cluster user, I want to interact with resources that are pertinent to my user community in - isolation of what other user communities are doing on the cluster. - -## Design - -### Data Model - -A *Namespace* defines a logically named group for multiple *Kind*s of resources. - -``` -type Namespace struct { - TypeMeta `json:",inline"` - ObjectMeta `json:"metadata,omitempty"` - - Spec NamespaceSpec `json:"spec,omitempty"` - Status NamespaceStatus `json:"status,omitempty"` -} -``` - -A *Namespace* name is a DNS compatible label. - -A *Namespace* must exist prior to associating content with it. - -A *Namespace* must not be deleted if there is content associated with it. 
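For illustration, a minimal *Namespace* object following this model might look like the following sketch (the name is arbitrary and matches the examples later in this document):

```
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development"
  }
}
```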
-
-To associate a resource with a *Namespace*, the following conditions must be satisfied:
-
-1. The resource's *Kind* must be registered as having *RESTScopeNamespace* with the server
-2. The resource's *TypeMeta.Namespace* field must have a value that references an existing *Namespace*
-
-The *Name* of a resource associated with a *Namespace* is unique to that *Kind* in that *Namespace*.
-
-It is intended to be used in resource URLs; provided by clients at creation time, and encouraged to be human friendly; intended to facilitate idempotent creation, space-uniqueness of singleton objects, distinguish distinct entities, and reference particular entities across operations.
-
-### Authorization
-
-A *Namespace* provides an authorization scope for accessing content associated with the *Namespace*.
-
-See [Authorization plugins](../authorization.md)
-
-### Limit Resource Consumption
-
-A *Namespace* provides a scope to limit resource consumption.
-
-A *LimitRange* defines min/max constraints on the amount of resources a single entity can consume in a *Namespace*.
-
-See [Admission control: Limit Range](admission_control_limit_range.md)
-
-A *ResourceQuota* tracks aggregate usage of resources in the *Namespace* and allows cluster operators to define *Hard* resource usage limits that a *Namespace* may consume.
-
-See [Admission control: Resource Quota](admission_control_resource_quota.md)
-
-### Finalizers
-
-Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects.
-
-```
-type FinalizerName string
-
-// These are internal finalizers to Kubernetes; they must be qualified names unless defined here
-const (
-  FinalizerKubernetes FinalizerName = "kubernetes"
-)
-
-// NamespaceSpec describes the attributes on a Namespace
-type NamespaceSpec struct {
-  // Finalizers is an opaque list of values that must be empty to permanently remove object from storage
-  Finalizers []FinalizerName
-}
-```
-
-A *FinalizerName* is a qualified name.
-
-The API Server enforces that a *Namespace* can be deleted from storage if and only if its *Namespace.Spec.Finalizers* list is empty.
-
-A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation.
-
-Each *Namespace* created has *kubernetes* as an item in its list of initial *Namespace.Spec.Finalizers* set by default.
-
-### Phases
-
-A *Namespace* may exist in the following phases.
-
-```
-type NamespacePhase string
-const(
-  NamespaceActive NamespacePhase = "Active"
-  NamespaceTerminating NamespacePhase = "Terminating"
-)
-
-type NamespaceStatus struct {
-  ...
-  Phase NamespacePhase
-}
-```
-
-A *Namespace* is in the **Active** phase if it does not have an *ObjectMeta.DeletionTimestamp*.
-
-A *Namespace* is in the **Terminating** phase if it has an *ObjectMeta.DeletionTimestamp*.
-
-**Active**
-
-Upon creation, a *Namespace* enters the *Active* phase. This means that content may be associated with the namespace, and all normal interactions with the namespace are allowed to occur in the cluster.
-
-If a DELETE request occurs for a *Namespace*, the *Namespace.ObjectMeta.DeletionTimestamp* is set to the current server time. A *namespace controller* observes the change and sets the *Namespace.Status.Phase* to *Terminating*.
-
-**Terminating**
-
-A *namespace controller* watches for *Namespace* objects that have a *Namespace.ObjectMeta.DeletionTimestamp* value set in order to know when to initiate graceful termination of the content associated with the *Namespace* that is known to the cluster.
- -The *namespace controller* enumerates each known resource type in that namespace and deletes it one by one. - -Admission control blocks creation of new resources in that namespace in order to prevent a race-condition -where the controller could believe all of a given resource type had been deleted from the namespace, -when in fact some other rogue client agent had created new objects. Using admission control in this -scenario allows each of registry implementations for the individual objects to not need to take into account Namespace life-cycle. - -Once all objects known to the *namespace controller* have been deleted, the *namespace controller* -executes a *finalize* operation on the namespace that removes the *kubernetes* value from -the *Namespace.Spec.Finalizers* list. - -If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and -whose *Namespace.Spec.Finalizers* list is empty, it will signal the server to permanently remove -the *Namespace* from storage by sending a final DELETE action to the API server. - -### REST API - -To interact with the Namespace API: - -| Action | HTTP Verb | Path | Description | -| ------ | --------- | ---- | ----------- | -| CREATE | POST | /api/{version}/namespaces | Create a namespace | -| LIST | GET | /api/{version}/namespaces | List all namespaces | -| UPDATE | PUT | /api/{version}/namespaces/{namespace} | Update namespace {namespace} | -| DELETE | DELETE | /api/{version}/namespaces/{namespace} | Delete namespace {namespace} | -| FINALIZE | POST | /api/{version}/namespaces/{namespace}/finalize | Finalize namespace {namespace} | -| WATCH | GET | /api/{version}/watch/namespaces | Watch all namespaces | - -This specification reserves the name *finalize* as a sub-resource to namespace. - -As a consequence, it is invalid to have a *resourceType* managed by a namespace whose kind is *finalize*. - -To interact with content associated with a Namespace: - -| Action | HTTP Verb | Path | Description | -| ---- | ---- | ---- | ---- | -| CREATE | POST | /api/{version}/namespaces/{namespace}/{resourceType}/ | Create instance of {resourceType} in namespace {namespace} | -| GET | GET | /api/{version}/namespaces/{namespace}/{resourceType}/{name} | Get instance of {resourceType} in namespace {namespace} with {name} | -| UPDATE | PUT | /api/{version}/namespaces/{namespace}/{resourceType}/{name} | Update instance of {resourceType} in namespace {namespace} with {name} | -| DELETE | DELETE | /api/{version}/namespaces/{namespace}/{resourceType}/{name} | Delete instance of {resourceType} in namespace {namespace} with {name} | -| LIST | GET | /api/{version}/namespaces/{namespace}/{resourceType} | List instances of {resourceType} in namespace {namespace} | -| WATCH | GET | /api/{version}/watch/namespaces/{namespace}/{resourceType} | Watch for changes to a {resourceType} in namespace {namespace} | -| WATCH | GET | /api/{version}/watch/{resourceType} | Watch for changes to a {resourceType} across all namespaces | -| LIST | GET | /api/{version}/list/{resourceType} | List instances of {resourceType} across all namespaces | - -The API server verifies the *Namespace* on resource creation matches the *{namespace}* on the path. - -The API server will associate a resource with a *Namespace* if not populated by the end-user based on the *Namespace* context -of the incoming request. If the *Namespace* of the resource being created, or updated does not match the *Namespace* on the request, -then the API server will reject the request. 
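Putting the phases, the *finalize* operation, and the final DELETE together, the *namespace controller*'s behavior can be sketched as follows. This is a simplified illustration, not the actual controller implementation; the types and helper functions are stand-ins for the real API types and client calls.

```go
package main

import "fmt"

// Namespace is a simplified stand-in for the API object described above.
type Namespace struct {
	Name              string
	DeletionTimestamp *string  // nil while the namespace is Active
	Finalizers        []string // e.g. ["kubernetes"]
	Phase             string   // "Active" or "Terminating"
}

// syncNamespace mirrors the termination flow: mark the namespace Terminating,
// delete its known content, drop the "kubernetes" finalizer via a finalize
// operation, and only then request permanent removal from storage.
func syncNamespace(ns *Namespace) {
	if ns.DeletionTimestamp == nil {
		return // still Active; nothing to do
	}
	ns.Phase = "Terminating"

	deleteAllContent(ns.Name)  // enumerate each known resource type and delete it
	finalize(ns, "kubernetes") // remove our finalizer via the finalize sub-resource

	if len(ns.Finalizers) == 0 {
		deleteFromStorage(ns.Name) // final DELETE to the API server
	}
}

func deleteAllContent(namespace string) { /* stand-in for deleting namespaced resources */ }

func finalize(ns *Namespace, finalizer string) {
	kept := ns.Finalizers[:0]
	for _, f := range ns.Finalizers {
		if f != finalizer {
			kept = append(kept, f)
		}
	}
	ns.Finalizers = kept
}

func deleteFromStorage(name string) { /* stand-in for DELETE /api/{version}/namespaces/{name} */ }

func main() {
	deletionTime := "2015-06-01T00:00:00Z"
	ns := &Namespace{Name: "development", DeletionTimestamp: &deletionTime, Finalizers: []string{"kubernetes"}}
	syncNamespace(ns)
	fmt.Println(ns.Phase, len(ns.Finalizers)) // Terminating 0
}
```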
- -### Storage - -A namespace provides a unique identifier space and therefore must be in the storage path of a resource. - -In etcd, we want to continue to still support efficient WATCH across namespaces. - -Resources that persist content in etcd will have storage paths as follows: - -/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name} - -This enables consumers to WATCH /registry/{resourceType} for changes across namespace of a particular {resourceType}. - -### Kubelet - -The kubelet will register pod's it sources from a file or http source with a namespace associated with the -*cluster-id* - -### Example: OpenShift Origin managing a Kubernetes Namespace - -In this example, we demonstrate how the design allows for agents built on-top of -Kubernetes that manage their own set of resource types associated with a *Namespace* -to take part in Namespace termination. - -OpenShift creates a Namespace in Kubernetes - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - }, - "spec": { - "finalizers": ["openshift.com/origin", "kubernetes"], - }, - "status": { - "phase": "Active", - }, - "labels": { - "name": "development" - }, -} -``` - -OpenShift then goes and creates a set of resources (pods, services, etc) associated -with the "development" namespace. It also creates its own set of resources in its -own storage associated with the "development" namespace unknown to Kubernetes. - -User deletes the Namespace in Kubernetes, and Namespace now has following state: - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - "deletionTimestamp": "..." - }, - "spec": { - "finalizers": ["openshift.com/origin", "kubernetes"], - }, - "status": { - "phase": "Terminating", - }, - "labels": { - "name": "development" - }, -} -``` - -The Kubernetes *namespace controller* observes the namespace has a *deletionTimestamp* -and begins to terminate all of the content in the namespace that it knows about. Upon -success, it executes a *finalize* action that modifies the *Namespace* by -removing *kubernetes* from the list of finalizers: - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - "deletionTimestamp": "..." - }, - "spec": { - "finalizers": ["openshift.com/origin"], - }, - "status": { - "phase": "Terminating", - }, - "labels": { - "name": "development" - }, -} -``` - -OpenShift Origin has its own *namespace controller* that is observing cluster state, and -it observes the same namespace had a *deletionTimestamp* assigned to it. It too will go -and purge resources from its own storage that it manages associated with that namespace. -Upon completion, it executes a *finalize* action and removes the reference to "openshift.com/origin" -from the list of finalizers. - -This results in the following state: - -``` -{ - "apiVersion":"v1", - "kind": "Namespace", - "metadata": { - "name": "development", - "deletionTimestamp": "..." - }, - "spec": { - "finalizers": [], - }, - "status": { - "phase": "Terminating", - }, - "labels": { - "name": "development" - }, -} -``` - -At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace -has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all -content associated from that namespace has been purged. It performs a final DELETE action -to remove that Namespace from the storage. 
- -At this point, all content associated with that Namespace, and the Namespace itself are gone. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/namespaces.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/namespaces.md?pixel)]() diff --git a/release-0.20.0/docs/design/networking.md b/release-0.20.0/docs/design/networking.md deleted file mode 100644 index 159dd570da6..00000000000 --- a/release-0.20.0/docs/design/networking.md +++ /dev/null @@ -1,114 +0,0 @@ -# Networking - -## Model and motivation - -Kubernetes deviates from the default Docker networking model. The goal is for each pod to have an IP in a flat shared networking namespace that has full communication with other physical computers and containers across the network. IP-per-pod creates a clean, backward-compatible model where pods can be treated much like VMs or physical hosts from the perspectives of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. - -OTOH, dynamic port allocation requires supporting both static ports (e.g., for externally accessible services) and dynamically allocated ports, requires partitioning centrally allocated and locally acquired dynamic ports, complicates scheduling (since ports are a scarce resource), is inconvenient for users, complicates application configuration, is plagued by port conflicts and reuse and exhaustion, requires non-standard approaches to naming (e.g., etcd rather than DNS), requires proxies and/or redirection for programs using standard naming/addressing mechanisms (e.g., web browsers), requires watching and cache invalidation for address/port changes for instances in addition to watching group membership changes, and obstructs container/pod migration (e.g., using CRIU). NAT introduces additional complexity by fragmenting the addressing space, which breaks self-registration mechanisms, among other problems. - -With the IP-per-pod model, all user containers within a pod behave as if they are on the same host with regard to networking. They can all reach each other’s ports on localhost. Ports which are published to the host interface are done so in the normal Docker way. All containers in all pods can talk to all other containers in all other pods by their 10-dot addresses. - -In addition to avoiding the aforementioned problems with dynamic port allocation, this approach reduces friction for applications moving from the world of uncontainerized apps on physical or virtual hosts to containers within pods. People running application stacks together on the same host have already figured out how to make ports not conflict (e.g., by configuring them through environment variables) and have arranged for clients to find them. - -The approach does reduce isolation between containers within a pod — ports could conflict, and there couldn't be private ports across containers within a pod, but applications requiring their own port spaces could just run as separate pods and processes requiring private communication could run within the same container. Besides, the premise of pods is that containers within a pod share some resources (volumes, cpu, ram, etc.) and therefore expect and tolerate reduced isolation. Additionally, the user can control what containers belong to the same pod whereas, in general, they don't control what pods land together on a host. 
- -When any container calls SIOCGIFADDR, it sees the IP that any peer container would see them coming from — each pod has its own IP address that other pods can know. By making IP addresses and ports the same within and outside the containers and pods, we create a NAT-less, flat address space. "ip addr show" should work as expected. This would enable all existing naming/discovery mechanisms to work out of the box, including self-registration mechanisms and applications that distribute IP addresses. (We should test that with etcd and perhaps one other option, such as Eureka (used by Acme Air) or Consul.) We should be optimizing for inter-pod network communication. Within a pod, containers are more likely to use communication through volumes (e.g., tmpfs) or IPC. - -This is different from the standard Docker model. In that mode, each container gets an IP in the 172-dot space and would only see that 172-dot address from SIOCGIFADDR. If these containers connect to another container the peer would see the connect coming from a different IP than the container itself knows. In short — you can never self-register anything from a container, because a container can not be reached on its private IP. - -An alternative we considered was an additional layer of addressing: pod-centric IP per container. Each container would have its own local IP address, visible only within that pod. This would perhaps make it easier for containerized applications to move from physical/virtual hosts to pods, but would be more complex to implement (e.g., requiring a bridge per pod, split-horizon/VP DNS) and to reason about, due to the additional layer of address translation, and would break self-registration and IP distribution mechanisms. - -## Current implementation - -For the Google Compute Engine cluster configuration scripts, [advanced routing](https://developers.google.com/compute/docs/networking#routing) is set up so that each VM has an extra 256 IP addresses that get routed to it. This is in addition to the 'main' IP address assigned to the VM that is NAT-ed for Internet access. The networking bridge (called `cbr0` to differentiate it from `docker0`) is set up outside of Docker proper and only does NAT for egress network traffic that isn't aimed at the virtual network. - -Ports mapped in from the 'main IP' (and hence the internet if the right firewall rules are set up) are proxied in user mode by Docker. In the future, this should be done with `iptables` by either the Kubelet or Docker: [Issue #15](https://github.com/GoogleCloudPlatform/kubernetes/issues/15). - -We start Docker with: - DOCKER_OPTS="--bridge cbr0 --iptables=false" - -We set up this bridge on each node with SaltStack, in [container_bridge.py](cluster/saltbase/salt/_states/container_bridge.py). - - cbr0: - container_bridge.ensure: - - cidr: {{ grains['cbr-cidr'] }} - ... - grains: - roles: - - kubernetes-pool - cbr-cidr: $MINION_IP_RANGE - -We make these addresses routable in GCE: - - gcloud compute routes add "${MINION_NAMES[$i]}" \ - --project "${PROJECT}" \ - --destination-range "${MINION_IP_RANGES[$i]}" \ - --network "${NETWORK}" \ - --next-hop-instance "${MINION_NAMES[$i]}" \ - --next-hop-instance-zone "${ZONE}" & - -The minion IP ranges are /24s in the 10-dot space. - -GCE itself does not know anything about these IPs, though. - -These are not externally routable, though, so containers that need to communicate with the outside world need to use host networking. 
If we set up an external IP that forwards to the VM, it will only forward to the VM's primary IP (which is assigned to no pod). So we use docker's -p flag to map published ports to the main interface. This has the side effect of disallowing two pods from exposing the same port. (More discussion on this in [Issue #390](https://github.com/GoogleCloudPlatform/kubernetes/issues/390).)
-
-We create a container to use for the pod network namespace — a single loopback device and a single veth device. All the user's containers get their network namespaces from this pod networking container.
-
-Docker allocates IP addresses from a bridge we create on each node, using its "container" networking mode.
-
-1. Create a normal (in the networking sense) container which uses a minimal image and runs a command that blocks forever. This is not a user-defined container, and gets a special well-known name.
-  - creates a new network namespace (netns) and loopback device
-  - creates a new pair of veth devices and binds them to the netns
-  - auto-assigns an IP from docker's IP range
-
-2. Create the user containers and specify the name of the pod infra container as their "POD" argument. Docker finds the PID of the command running in the pod infra container and attaches to the netns and ipcns of that PID.
-
-### Other networking implementation examples
-With the primary aim of providing the IP-per-pod model, other implementations exist to serve the purpose outside of GCE.
-  - [OpenVSwitch with GRE/VxLAN](../ovs-networking.md)
-  - [Flannel](https://github.com/coreos/flannel#flannel)
-
-## Challenges and future work
-
-### Docker API
-
-Right now, docker inspect doesn't show the networking configuration of the containers, since they derive it from another container. That information should be exposed somehow.
-
-### External IP assignment
-
-We want to be able to assign IP addresses externally from Docker ([Docker issue #6743](https://github.com/dotcloud/docker/issues/6743)) so that we don't need to statically allocate fixed-size IP ranges to each node, so that IP addresses can be made stable across pod infra container restarts ([Docker issue #2801](https://github.com/dotcloud/docker/issues/2801)), and to facilitate pod migration. Right now, if the pod infra container dies, all the user containers must be stopped and restarted because the netns of the pod infra container will change on restart, and any subsequent user container restart will join that new netns, thereby not being able to see its peers. Additionally, a change in IP address would encounter DNS caching/TTL problems. External IP assignment would also simplify DNS support (see below).
-
-### Naming, discovery, and load balancing
-
-In addition to enabling self-registration with 3rd-party discovery mechanisms, we'd like to set up DDNS automatically ([Issue #146](https://github.com/GoogleCloudPlatform/kubernetes/issues/146)). hostname, $HOSTNAME, etc. should return a name for the pod ([Issue #298](https://github.com/GoogleCloudPlatform/kubernetes/issues/298)), and gethostbyname should be able to resolve names of other pods. Probably we need to set up a DNS resolver to do the latter ([Docker issue #2267](https://github.com/dotcloud/docker/issues/2267)), so that we don't need to keep /etc/hosts files up to date dynamically.
-
-[Service](http://docs.k8s.io/services.md) endpoints are currently found through environment variables.
Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't yet require applications to identify all clients at configuration time. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service IP](http://docs.k8s.io/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale, and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service's IP in DNS, and for that to become the preferred resolution protocol.
-
-We'd also like to accommodate other load-balancing solutions (e.g., HAProxy), non-load-balanced services ([Issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260)), and other types of groups (worker pools, etc.). Providing the ability to Watch a label selector applied to pod addresses would enable efficient monitoring of group membership, which could be directly consumed or synced with a discovery mechanism. Event hooks ([Issue #140](https://github.com/GoogleCloudPlatform/kubernetes/issues/140)) for join/leave events would probably make this even easier.
-
-### External routability
-
-We want traffic between containers to use the pod IP addresses across nodes. Say we have Node A with a container IP space of 10.244.1.0/24 and Node B with a container IP space of 10.244.2.0/24. And we have Container A1 at 10.244.1.1 and Container B1 at 10.244.2.1. We want Container A1 to talk to Container B1 directly with no NAT. B1 should see the "source" in the IP packets of 10.244.1.1 — not the "primary" host IP for Node A. That means that we want to turn off NAT for traffic between containers (and also between VMs and containers).
-
-We'd also like to make pods directly routable from the external internet. However, we can't yet support the extra container IPs that we've provisioned talking to the internet directly. So, we don't map external IPs to the container IPs. Instead, we solve that problem by having traffic that isn't to the internal network (! 10.0.0.0/8) get NATed through the primary host IP address so that it can get 1:1 NATed by the GCE networking when talking to the internet. Similarly, incoming traffic from the internet has to get NATed/proxied through the host IP.
-
-So we end up with 3 cases:
-
-1. Container -> Container or Container <-> VM. These should use 10. addresses directly and there should be no NAT.
-
-2. Container -> Internet. These have to get mapped to the primary host IP so that GCE knows how to egress that traffic. There are actually 2 layers of NAT here: Container IP -> Internal Host IP -> External Host IP. The first level happens in the guest with iptables and the second happens as part of GCE networking. The first one (Container IP -> internal host IP) does dynamic port allocation while the second maps ports 1:1.
-
-3. Internet -> Container. This also has to go through the primary host IP and also has 2 levels of NAT, ideally. However, the path currently is a proxy with (External Host IP -> Internal Host IP -> Docker) -> (Docker -> Container IP).
Once [issue #15](https://github.com/GoogleCloudPlatform/kubernetes/issues/15) is closed, it should be External Host IP -> Internal Host IP -> Container IP. But to get that second arrow we have to set up the port forwarding iptables rules per mapped port. - -Another approach could be to create a new host interface alias for each pod, if we had a way to route an external IP to it. This would eliminate the scheduling constraints resulting from using the host's IP address. - -### IPv6 - -IPv6 would be a nice option, also, but we can't depend on it yet. Docker support is in progress: [Docker issue #2974](https://github.com/dotcloud/docker/issues/2974), [Docker issue #6923](https://github.com/dotcloud/docker/issues/6923), [Docker issue #6975](https://github.com/dotcloud/docker/issues/6975). Additionally, direct ipv6 assignment to instances doesn't appear to be supported by major cloud providers (e.g., AWS EC2, GCE) yet. We'd happily take pull requests from people running Kubernetes on bare metal, though. :-) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/networking.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/networking.md?pixel)]() diff --git a/release-0.20.0/docs/design/persistent-storage.md b/release-0.20.0/docs/design/persistent-storage.md deleted file mode 100644 index 3c6f2ed645b..00000000000 --- a/release-0.20.0/docs/design/persistent-storage.md +++ /dev/null @@ -1,220 +0,0 @@ -# Persistent Storage - -This document proposes a model for managing persistent, cluster-scoped storage for applications requiring long lived data. - -### tl;dr - -Two new API kinds: - -A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. - -A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod. - -One new system component: - -`PersistentVolumeClaimBinder` is a singleton running in master that watches all PersistentVolumeClaims in the system and binds them to the closest matching available PersistentVolume. The volume manager watches the API for newly created volumes to manage. - -One new volume: - -`PersistentVolumeClaimVolumeSource` references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A `PersistentVolumeClaimVolumeSource` is, essentially, a wrapper around another type of volume that is owned by someone else (the system). - -Kubernetes makes no guarantees at runtime that the underlying storage exists or is available. High availability is left to the storage provider. - -### Goals - -* Allow administrators to describe available storage -* Allow pod authors to discover and request persistent volumes to use with pods -* Enforce security through access control lists and securing storage to the same namespace as the pod volume -* Enforce quotas through admission control -* Enforce scheduler rules by resource counting -* Ensure developers can rely on storage being available without being closely bound to a particular disk, server, network, or storage device. - - -#### Describe available storage - -Cluster administrators use the API to manage *PersistentVolumes*. A custom store ```NewPersistentVolumeOrderedIndex``` will index volumes by access modes and sort by storage capacity. 
The ```PersistentVolumeClaimBinder``` watches for new claims for storage and binds them to an available volume by matching the volume's characteristics (AccessModes and storage size) to the user's request.
-
-PVs are system objects and, thus, have no namespace.
-
-Many means of dynamic provisioning will eventually be implemented for various storage types.
-
-##### PersistentVolume API
-
-| Action | HTTP Verb | Path | Description |
-| ---- | ---- | ---- | ---- |
-| CREATE | POST | /api/{version}/persistentvolumes/ | Create instance of PersistentVolume |
-| GET | GET | /api/{version}/persistentvolumes/{name} | Get instance of PersistentVolume with {name} |
-| UPDATE | PUT | /api/{version}/persistentvolumes/{name} | Update instance of PersistentVolume with {name} |
-| DELETE | DELETE | /api/{version}/persistentvolumes/{name} | Delete instance of PersistentVolume with {name} |
-| LIST | GET | /api/{version}/persistentvolumes | List instances of PersistentVolume |
-| WATCH | GET | /api/{version}/watch/persistentvolumes | Watch for changes to a PersistentVolume |
-
-#### Request Storage
-
-Kubernetes users request persistent storage for their pod by creating a ```PersistentVolumeClaim```. Their request for storage is described by their requirements for resources and mount capabilities.
-
-Requests for volumes are bound to available volumes by the volume manager, if a suitable match is found. Requests for resources can go unfulfilled.
-
-Users attach their claim to their pod using a new ```PersistentVolumeClaimVolumeSource``` volume source.
-
-##### PersistentVolumeClaim API
-
-| Action | HTTP Verb | Path | Description |
-| ---- | ---- | ---- | ---- |
-| CREATE | POST | /api/{version}/namespaces/{ns}/persistentvolumeclaims/ | Create instance of PersistentVolumeClaim in namespace {ns} |
-| GET | GET | /api/{version}/namespaces/{ns}/persistentvolumeclaims/{name} | Get instance of PersistentVolumeClaim in namespace {ns} with {name} |
-| UPDATE | PUT | /api/{version}/namespaces/{ns}/persistentvolumeclaims/{name} | Update instance of PersistentVolumeClaim in namespace {ns} with {name} |
-| DELETE | DELETE | /api/{version}/namespaces/{ns}/persistentvolumeclaims/{name} | Delete instance of PersistentVolumeClaim in namespace {ns} with {name} |
-| LIST | GET | /api/{version}/namespaces/{ns}/persistentvolumeclaims | List instances of PersistentVolumeClaim in namespace {ns} |
-| WATCH | GET | /api/{version}/watch/namespaces/{ns}/persistentvolumeclaims | Watch for changes to PersistentVolumeClaim in namespace {ns} |
-
-#### Scheduling constraints
-
-Scheduling constraints are to be handled similarly to pod resource constraints. Pods will need to be annotated or decorated with the number of resources they require on a node. Similarly, a node will need to list how many it has used or available.
-
-TBD
-
-#### Events
-
-The implementation of persistent storage will not require events to communicate the state of a claim to the user. The CLI for bound claims contains a reference to the backing persistent volume. This is always present in the API and CLI, making an additional event to communicate the same information unnecessary.
-
-Events that communicate the state of a mounted volume are left to the volume plugins.
-
-### Example
-
-#### Admin provisions storage
-
-An administrator provisions storage by posting PVs to the API. Various ways to automate this task can be scripted. Dynamic provisioning is a future feature that can maintain levels of PVs.
- -``` -POST: - -kind: PersistentVolume -apiVersion: v1 -metadata: - name: pv0001 -spec: - capacity: - storage: 10 - persistentDisk: - pdName: "abc123" - fsType: "ext4" - --------------------------------------------------- - -kubectl get pv - -NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM -pv0001 map[] 10737418240 RWO Pending - - -``` - -#### Users request storage - -A user requests storage by posting a PVC to the API. Their request contains the AccessModes they wish their volume to have and the minimum size needed. - -The user must be within a namespace to create PVCs. - -``` - -POST: -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: myclaim-1 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 3 - --------------------------------------------------- - -kubectl get pvc - - -NAME LABELS STATUS VOLUME -myclaim-1 map[] pending - -``` - - -#### Matching and binding - - The ```PersistentVolumeClaimBinder``` attempts to find an available volume that most closely matches the user's request. If one exists, they are bound by putting a reference on the PV to the PVC. Requests can go unfulfilled if a suitable match is not found. - -``` - -kubectl get pv - -NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM -pv0001 map[] 10737418240 RWO Bound myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e - - -kubectl get pvc - -NAME LABELS STATUS VOLUME -myclaim-1 map[] Bound b16e91d6-c0ef-11e4-8be4-80e6500a981e - - -``` - -#### Claim usage - -The claim holder can use their claim as a volume. The ```PersistentVolumeClaimVolumeSource``` knows to fetch the PV backing the claim and mount its volume for a pod. - -The claim holder owns the claim and its data for as long as the claim exists. The pod using the claim can be deleted, but the claim remains in the user's namespace. It can be used again and again by many pods. - -``` -POST: - -kind: Pod -apiVersion: v1 -metadata: - name: mypod -spec: - containers: - - image: nginx - name: myfrontend - volumeMounts: - - mountPath: "/var/www/html" - name: mypd - volumes: - - name: mypd - source: - persistentVolumeClaim: - accessMode: ReadWriteOnce - claimRef: - name: myclaim-1 - -``` - -#### Releasing a claim and Recycling a volume - -When a claim holder is finished with their data, they can delete their claim. - -``` - -kubectl delete pvc myclaim-1 - -``` - -The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and change the PVs status to 'Released'. - -Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/persistent-storage.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/persistent-storage.md?pixel)]() diff --git a/release-0.20.0/docs/design/principles.md b/release-0.20.0/docs/design/principles.md deleted file mode 100644 index 548de192c81..00000000000 --- a/release-0.20.0/docs/design/principles.md +++ /dev/null @@ -1,61 +0,0 @@ -# Design Principles - -Principles to follow when extending Kubernetes. - -## API - -See also the [API conventions](../api-conventions.md). - -* All APIs should be declarative. -* API objects should be complementary and composable, not opaque wrappers. -* The control plane should be transparent -- there are no hidden internal APIs. -* The cost of API operations should be proportional to the number of objects intentionally operated upon. 
Therefore, common filtered lookups must be indexed. Beware of patterns of multiple API calls that would incur quadratic behavior.
-* Object status must be 100% reconstructable by observation. Any history kept must be just an optimization and not required for correct operation.
-* Cluster-wide invariants are difficult to enforce correctly. Try not to add them. If you must have them, don't enforce them atomically in master components; that is contention-prone and doesn't provide a recovery path in the case of a bug allowing the invariant to be violated. Instead, provide a series of checks to reduce the probability of a violation, and make every component involved able to recover from an invariant violation.
-* Low-level APIs should be designed for control by higher-level systems. Higher-level APIs should be intent-oriented (think SLOs) rather than implementation-oriented (think control knobs).
-
-## Control logic
-
-* Functionality must be *level-based*, meaning the system must operate correctly given the desired state and the current/observed state, regardless of how many intermediate state updates may have been missed. Edge-triggered behavior must be just an optimization.
-* Assume an open world: continually verify assumptions and gracefully adapt to external events and/or actors. Example: we allow users to kill pods under control of a replication controller; it just replaces them.
-* Do not define comprehensive state machines for objects with behaviors associated with state transitions and/or "assumed" states that cannot be ascertained by observation.
-* Don't assume a component's decisions will not be overridden or rejected, nor expect the component to always understand why. For example, etcd may reject writes. Kubelet may reject pods. The scheduler may not be able to schedule pods. Retry, but back off and/or make alternative decisions.
-* Components should be self-healing. For example, if you must keep some state (e.g., a cache), the content needs to be periodically refreshed, so that if an item does get erroneously stored or a deletion event is missed, etc., it will soon be fixed, ideally on timescales that are shorter than what will attract attention from humans.
-* Component behavior should degrade gracefully. Prioritize actions so that the most important activities can continue to function even when overloaded and/or in states of partial failure.
-
-## Architecture
-
-* Only the apiserver should communicate with etcd/store, and not other components (scheduler, kubelet, etc.).
-* Compromising a single node shouldn't compromise the cluster.
-* Components should continue to do what they were last told in the absence of new instructions (e.g., due to network partition or component outage).
-* All components should keep all relevant state in memory all the time. The apiserver should write through to etcd/store; other components should write through to the apiserver, and they should watch for updates made by other clients.
-* Watch is preferred over polling.
-
-## Extensibility
-
-TODO: pluggability
-
-## Bootstrapping
-
-* [Self-hosting](https://github.com/GoogleCloudPlatform/kubernetes/issues/246) of all components is a goal.
-* Minimize the number of dependencies, particularly those required for steady-state operation.
-* Stratify the dependencies that remain via principled layering.
-* Break any circular dependencies by converting hard dependencies to soft dependencies.
- * Also accept that data from other components from another source, such as local files, which can then be manually populated at bootstrap time and then continuously updated once those other components are available. - * State should be rediscoverable and/or reconstructable. - * Make it easy to run temporary, bootstrap instances of all components in order to create the runtime state needed to run the components in the steady state; use a lock (master election for distributed components, file lock for local components like Kubelet) to coordinate handoff. We call this technique "pivoting". - * Have a solution to restart dead components. For distributed components, replication works well. For local components such as Kubelet, a process manager or even a simple shell loop works. - -## Availability - -TODO - -## General principles - -* [Eric Raymond's 17 UNIX rules](https://en.wikipedia.org/wiki/Unix_philosophy#Eric_Raymond.E2.80.99s_17_Unix_Rules) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/principles.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/principles.md?pixel)]() diff --git a/release-0.20.0/docs/design/secrets.md b/release-0.20.0/docs/design/secrets.md deleted file mode 100644 index 533ce73c2fa..00000000000 --- a/release-0.20.0/docs/design/secrets.md +++ /dev/null @@ -1,582 +0,0 @@ - -## Abstract - -A proposal for the distribution of secrets (passwords, keys, etc) to the Kubelet and to -containers inside Kubernetes using a custom volume type. - -## Motivation - -Secrets are needed in containers to access internal resources like the Kubernetes master or -external resources such as git repositories, databases, etc. Users may also want behaviors in the -kubelet that depend on secret data (credentials for image pull from a docker registry) associated -with pods. - -Goals of this design: - -1. Describe a secret resource -2. Define the various challenges attendant to managing secrets on the node -3. Define a mechanism for consuming secrets in containers without modification - -## Constraints and Assumptions - -* This design does not prescribe a method for storing secrets; storage of secrets should be - pluggable to accommodate different use-cases -* Encryption of secret data and node security are orthogonal concerns -* It is assumed that node and master are secure and that compromising their security could also - compromise secrets: - * If a node is compromised, the only secrets that could potentially be exposed should be the - secrets belonging to containers scheduled onto it - * If the master is compromised, all secrets in the cluster may be exposed -* Secret rotation is an orthogonal concern, but it should be facilitated by this proposal -* A user who can consume a secret in a container can know the value of the secret; secrets must - be provisioned judiciously - -## Use Cases - -1. As a user, I want to store secret artifacts for my applications and consume them securely in - containers, so that I can keep the configuration for my applications separate from the images - that use them: - 1. As a cluster operator, I want to allow a pod to access the Kubernetes master using a custom - `.kubeconfig` file, so that I can securely reach the master - 2. As a cluster operator, I want to allow a pod to access a Docker registry using credentials - from a `.dockercfg` file, so that containers can push images - 3. 
As a cluster operator, I want to allow a pod to access a git repository using SSH keys, - so that I can push and fetch to and from the repository -2. As a user, I want to allow containers to consume supplemental information about services such - as username and password which should be kept secret, so that I can share secrets about a - service amongst the containers in my application securely -3. As a user, I want to associate a pod with a `ServiceAccount` that consumes a secret and have - the kubelet implement some reserved behaviors based on the types of secrets the service account - consumes: - 1. Use credentials for a docker registry to pull the pod's docker image - 2. Present kubernetes auth token to the pod or transparently decorate traffic between the pod - and master service -4. As a user, I want to be able to indicate that a secret expires and for that secret's value to - be rotated once it expires, so that the system can help me follow good practices - -### Use-Case: Configuration artifacts - -Many configuration files contain secrets intermixed with other configuration information. For -example, a user's application may contain a properties file than contains database credentials, -SaaS API tokens, etc. Users should be able to consume configuration artifacts in their containers -and be able to control the path on the container's filesystems where the artifact will be -presented. - -### Use-Case: Metadata about services - -Most pieces of information about how to use a service are secrets. For example, a service that -provides a MySQL database needs to provide the username, password, and database name to consumers -so that they can authenticate and use the correct database. Containers in pods consuming the MySQL -service would also consume the secrets associated with the MySQL service. - -### Use-Case: Secrets associated with service accounts - -[Service Accounts](http://docs.k8s.io/design/service_accounts.md) are proposed as a -mechanism to decouple capabilities and security contexts from individual human users. A -`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is -associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and -other system components to take action based on the secret's type. - -#### Example: service account consumes auth token secret - -As an example, the service account proposal discusses service accounts consuming secrets which -contain kubernetes auth tokens. When a Kubelet starts a pod associated with a service account -which consumes this type of secret, the Kubelet may take a number of actions: - -1. Expose the secret in a `.kubernetes_auth` file in a well-known location in the container's - file system -2. Configure that node's `kube-proxy` to decorate HTTP requests from that pod to the - `kubernetes-master` service with the auth token, e. g. by adding a header to the request - (see the [LOAS Daemon](https://github.com/GoogleCloudPlatform/kubernetes/issues/2209) proposal) - -#### Example: service account consumes docker registry credentials - -Another example use case is where a pod is associated with a secret containing docker registry -credentials. The Kubelet could use these credentials for the docker pull to retrieve the image. - -### Use-Case: Secret expiry and rotation - -Rotation is considered a good practice for many types of secret data. 
It should be possible to -express that a secret has an expiry date; this would make it possible to implement a system -component that could regenerate expired secrets. As an example, consider a component that rotates -expired secrets. The rotator could periodically regenerate the values for expired secrets of -common types and update their expiry dates. - -## Deferral: Consuming secrets as environment variables - -Some images will expect to receive configuration items as environment variables instead of files. -We should consider what the best way to allow this is; there are a few different options: - -1. Force the user to adapt files into environment variables. Users can store secrets that need to - be presented as environment variables in a format that is easy to consume from a shell: - - $ cat /etc/secrets/my-secret.txt - export MY_SECRET_ENV=MY_SECRET_VALUE - - The user could `source` the file at `/etc/secrets/my-secret` prior to executing the command for - the image either inline in the command or in an init script, - -2. Give secrets an attribute that allows users to express the intent that the platform should - generate the above syntax in the file used to present a secret. The user could consume these - files in the same manner as the above option. - -3. Give secrets attributes that allow the user to express that the secret should be presented to - the container as an environment variable. The container's environment would contain the - desired values and the software in the container could use them without accommodation the - command or setup script. - -For our initial work, we will treat all secrets as files to narrow the problem space. There will -be a future proposal that handles exposing secrets as environment variables. - -## Flow analysis of secret data with respect to the API server - -There are two fundamentally different use-cases for access to secrets: - -1. CRUD operations on secrets by their owners -2. Read-only access to the secrets needed for a particular node by the kubelet - -### Use-Case: CRUD operations by owners - -In use cases for CRUD operations, the user experience for secrets should be no different than for -other API resources. - -#### Data store backing the REST API - -The data store backing the REST API should be pluggable because different cluster operators will -have different preferences for the central store of secret data. Some possibilities for storage: - -1. An etcd collection alongside the storage for other API resources -2. A collocated [HSM](http://en.wikipedia.org/wiki/Hardware_security_module) -3. A secrets server like [Vault](https://www.vaultproject.io/) or [Keywhiz](https://square.github.io/keywhiz/) -4. An external datastore such as an external etcd, RDBMS, etc. - -#### Size limit for secrets - -There should be a size limit for secrets in order to: - -1. Prevent DOS attacks against the API server -2. Allow kubelet implementations that prevent secret data from touching the node's filesystem - -The size limit should satisfy the following conditions: - -1. Large enough to store common artifact types (encryption keypairs, certificates, small - configuration files) -2. Small enough to avoid large impact on node resource consumption (storage, RAM for tmpfs, etc) - -To begin discussion, we propose an initial value for this size limit of **1MB**. 
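A minimal sketch of how such a limit might be enforced during validation at the API server (the constant and function names here are illustrative, not the actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// maxSecretSize mirrors the proposed 1MB limit; the name is illustrative only.
const maxSecretSize = 1 * 1024 * 1024

// validateSecretSize sums the sizes of all data values in a secret and rejects
// the secret if the total exceeds the proposed limit.
func validateSecretSize(data map[string][]byte) error {
	total := 0
	for _, value := range data {
		total += len(value)
	}
	if total > maxSecretSize {
		return errors.New("secret data exceeds the maximum allowed size")
	}
	return nil
}

func main() {
	small := map[string][]byte{"token": []byte("abc123")}
	fmt.Println(validateSecretSize(small)) // <nil>

	big := map[string][]byte{"blob": make([]byte, 2*1024*1024)}
	fmt.Println(validateSecretSize(big)) // secret data exceeds the maximum allowed size
}
```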
- -#### Other limitations on secrets - -Defining a policy for limitations on how a secret may be referenced by another API resource and how -constraints should be applied throughout the cluster is tricky due to the number of variables -involved: - -1. Should there be a maximum number of secrets a pod can reference via a volume? -2. Should there be a maximum number of secrets a service account can reference? -3. Should there be a total maximum number of secrets a pod can reference via its own spec and its - associated service account? -4. Should there be a total size limit on the amount of secret data consumed by a pod? -5. How will cluster operators want to be able to configure these limits? -6. How will these limits impact API server validations? -7. How will these limits affect scheduling? - -For now, we will not implement validations around these limits. Cluster operators will decide how -much node storage is allocated to secrets. It will be the operator's responsibility to ensure that -the allocated storage is sufficient for the workload scheduled onto a node. - -For now, kubelets will only attach secrets to api-sourced pods, and not file- or http-sourced -ones. Doing so would: - - confuse the secrets admission controller in the case of mirror pods. - - create an apiserver-liveness dependency -- avoiding this dependency is a main reason to use non-api-source pods. - -### Use-Case: Kubelet read of secrets for node - -The use-case where the kubelet reads secrets has several additional requirements: - -1. Kubelets should only be able to receive secret data which is required by pods scheduled onto - the kubelet's node -2. Kubelets should have read-only access to secret data -3. Secret data should not be transmitted over the wire insecurely -4. Kubelets must ensure pods do not have access to each other's secrets - -#### Read of secret data by the Kubelet - -The Kubelet should only be allowed to read secrets which are consumed by pods scheduled onto that -Kubelet's node and their associated service accounts. Authorization of the Kubelet to read this -data would be delegated to an authorization plugin and associated policy rule. - -#### Secret data on the node: data at rest - -Consideration must be given to whether secret data should be allowed to be at rest on the node: - -1. If secret data is not allowed to be at rest, the size of secret data becomes another draw on - the node's RAM - should it affect scheduling? -2. If secret data is allowed to be at rest, should it be encrypted? - 1. If so, how should be this be done? - 2. If not, what threats exist? What types of secret are appropriate to store this way? - -For the sake of limiting complexity, we propose that initially secret data should not be allowed -to be at rest on a node; secret data should be stored on a node-level tmpfs filesystem. This -filesystem can be subdivided into directories for use by the kubelet and by the volume plugin. - -#### Secret data on the node: resource consumption - -The Kubelet will be responsible for creating the per-node tmpfs file system for secret storage. -It is hard to make a prescriptive declaration about how much storage is appropriate to reserve for -secrets because different installations will vary widely in available resources, desired pod to -node density, overcommit policy, and other operation dimensions. That being the case, we propose -for simplicity that the amount of secret storage be controlled by a new parameter to the kubelet -with a default value of **64MB**. 
It is the cluster operator's responsibility to handle choosing -the right storage size for their installation and configuring their Kubelets correctly. - -Configuring each Kubelet is not the ideal story for operator experience; it is more intuitive that -the cluster-wide storage size be readable from a central configuration store like the one proposed -in [#1553](https://github.com/GoogleCloudPlatform/kubernetes/issues/1553). When such a store -exists, the Kubelet could be modified to read this configuration item from the store. - -When the Kubelet is modified to advertise node resources (as proposed in -[#4441](https://github.com/GoogleCloudPlatform/kubernetes/issues/4441)), the capacity calculation -for available memory should factor in the potential size of the node-level tmpfs in order to avoid -memory overcommit on the node. - -#### Secret data on the node: isolation - -Every pod will have a [security context](http://docs.k8s.io/design/security_context.md). -Secret data on the node should be isolated according to the security context of the container. The -Kubelet volume plugin API will be changed so that a volume plugin receives the security context of -a volume along with the volume spec. This will allow volume plugins to implement setting the -security context of volumes they manage. - -## Community work: - -Several proposals / upstream patches are notable as background for this proposal: - -1. [Docker vault proposal](https://github.com/docker/docker/issues/10310) -2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277) -3. [Kubernetes service account proposal](http://docs.k8s.io/design/service_accounts.md) -4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075) -5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697) - -## Proposed Design - -We propose a new `Secret` resource which is mounted into containers with a new volume type. Secret -volumes will be handled by a volume plugin that does the actual work of fetching the secret and -storing it. Secrets contain multiple pieces of data that are presented as different files within -the secret volume (example: SSH key pair). - -In order to remove the burden from the end user in specifying every file that a secret consists of, -it should be possible to mount all files provided by a secret with a single ```VolumeMount``` entry -in the container specification. - -### Secret API Resource - -A new resource for secrets will be added to the API: - -```go -type Secret struct { - TypeMeta - ObjectMeta - - // Data contains the secret data. Each key must be a valid DNS_SUBDOMAIN. - // The serialized form of the secret data is a base64 encoded string, - // representing the arbitrary (possibly non-string) data value here. - Data map[string][]byte `json:"data,omitempty"` - - // Used to facilitate programmatic handling of secret data. - Type SecretType `json:"type,omitempty"` -} - -type SecretType string - -const ( - SecretTypeOpaque SecretType = "Opaque" // Opaque (arbitrary data; default) - SecretTypeKubernetesAuthToken SecretType = "KubernetesAuth" // Kubernetes auth token - SecretTypeDockerRegistryAuth SecretType = "DockerRegistryAuth" // Docker registry auth - // FUTURE: other type values -) - -const MaxSecretSize = 1 * 1024 * 1024 -``` - -A Secret can declare a type in order to provide type information to system components that work -with secrets. The default type is `opaque`, which represents arbitrary user-owned data. 
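For illustration only, a system component might branch on the declared type roughly as follows; this sketch reuses the types defined above and is not part of the proposed API or implementation.

```go
// handleSecretType is an illustrative sketch only: it shows how a component
// such as the Kubelet might dispatch on a secret's declared type. The
// concrete behaviors are described in the use cases earlier in this proposal.
func handleSecretType(secret *Secret) {
	switch secret.Type {
	case SecretTypeDockerRegistryAuth:
		// Use the stored registry credentials when pulling the pod's images.
	case SecretTypeKubernetesAuthToken:
		// Expose the auth token to the pod or decorate traffic between the
		// pod and the master service.
	default:
		// SecretTypeOpaque: simply present the data as files in the secret volume.
	}
}
```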
- -Secrets are validated against `MaxSecretSize`. The keys in the `Data` field must be valid DNS -subdomains. - -A new REST API and registry interface will be added to accompany the `Secret` resource. The -default implementation of the registry will store `Secret` information in etcd. Future registry -implementations could store the `TypeMeta` and `ObjectMeta` fields in etcd and store the secret -data in another data store entirely, or store the whole object in another data store. - -#### Other validations related to secrets - -Initially there will be no validations for the number of secrets a pod references, or the number of -secrets that can be associated with a service account. These may be added in the future as the -finer points of secrets and resource allocation are fleshed out. - -### Secret Volume Source - -A new `SecretSource` type of volume source will be added to the ```VolumeSource``` struct in the -API: - -```go -type VolumeSource struct { - // Other fields omitted - - // SecretSource represents a secret that should be presented in a volume - SecretSource *SecretSource `json:"secret"` -} - -type SecretSource struct { - Target ObjectReference -} -``` - -Secret volume sources are validated to ensure that the specified object reference actually points -to an object of type `Secret`. - -In the future, the `SecretSource` will be extended to allow: - -1. Fine-grained control over which pieces of secret data are exposed in the volume -2. The paths and filenames for how secret data are exposed - -### Secret Volume Plugin - -A new Kubelet volume plugin will be added to handle volumes with a secret source. This plugin will -require access to the API server to retrieve secret data and therefore the volume `Host` interface -will have to change to expose a client interface: - -```go -type Host interface { - // Other methods omitted - - // GetKubeClient returns a client interface - GetKubeClient() client.Interface -} -``` - -The secret volume plugin will be responsible for: - -1. Returning a `volume.Builder` implementation from `NewBuilder` that: - 1. Retrieves the secret data for the volume from the API server - 2. Places the secret data onto the container's filesystem - 3. Sets the correct security attributes for the volume based on the pod's `SecurityContext` -2. Returning a `volume.Cleaner` implementation from `NewClear` that cleans the volume from the - container's filesystem - -### Kubelet: Node-level secret storage - -The Kubelet must be modified to accept a new parameter for the secret storage size and to create -a tmpfs file system of that size to store secret data. Rough accounting of specific changes: - -1. The Kubelet should have a new field added called `secretStorageSize`; units are megabytes -2. `NewMainKubelet` should accept a value for secret storage size -3. The Kubelet server should have a new flag added for secret storage size -4. The Kubelet's `setupDataDirs` method should be changed to create the secret storage - -### Kubelet: New behaviors for secrets associated with service accounts - -For use-cases where the Kubelet's behavior is affected by the secrets associated with a pod's -`ServiceAccount`, the Kubelet will need to be changed. For example, if secrets of type -`docker-reg-auth` affect how the pod's images are pulled, the Kubelet will need to be changed -to accommodate this. Subsequent proposals can address this on a type-by-type basis. - -## Examples - -For clarity, let's examine some detailed examples of some common use-cases in terms of the -suggested changes. 
All of these examples are assumed to be created in a namespace called -`example`. - -### Use-Case: Pod with ssh keys - -To create a pod that uses an ssh key stored as a secret, we first need to create a secret: - -```json -{ - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "ssh-key-secret" - }, - "data": { - "id-rsa": "dmFsdWUtMg0KDQo=", - "id-rsa.pub": "dmFsdWUtMQ0K" - } -} -``` - -**Note:** The serialized JSON and YAML values of secret data are encoded as -base64 strings. Newlines are not valid within these strings and must be -omitted. - -Now we can create a pod which references the secret with the ssh key and consumes it in a volume: - -```json -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "secret-test-pod", - "labels": { - "name": "secret-test" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "ssh-key-secret" - } - } - ], - "containers": [ - { - "name": "ssh-test-container", - "image": "mySshImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } -} -``` - -When the container's command runs, the pieces of the key will be available in: - - /etc/secret-volume/id-rsa.pub - /etc/secret-volume/id-rsa - -The container is then free to use the secret data to establish an ssh connection. - -### Use-Case: Pods with pod / test credentials - -This example illustrates a pod which consumes a secret containing prod -credentials and another pod which consumes a secret with test environment -credentials. - -The secrets: - -```json -{ - "apiVersion": "v1", - "kind": "List", - "items": - [{ - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "prod-db-secret" - }, - "data": { - "password": "dmFsdWUtMg0KDQo=", - "username": "dmFsdWUtMQ0K" - } - }, - { - "kind": "Secret", - "apiVersion": "v1", - "metadata": { - "name": "test-db-secret" - }, - "data": { - "password": "dmFsdWUtMg0KDQo=", - "username": "dmFsdWUtMQ0K" - } - }] -} -``` - -The pods: - -```json -{ - "apiVersion": "v1", - "kind": "List", - "items": - [{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "prod-db-client-pod", - "labels": { - "name": "prod-db-client" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "prod-db-secret" - } - } - ], - "containers": [ - { - "name": "db-client-container", - "image": "myClientImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } - }, - { - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "test-db-client-pod", - "labels": { - "name": "test-db-client" - } - }, - "spec": { - "volumes": [ - { - "name": "secret-volume", - "secret": { - "secretName": "test-db-secret" - } - } - ], - "containers": [ - { - "name": "db-client-container", - "image": "myClientImage", - "volumeMounts": [ - { - "name": "secret-volume", - "readOnly": true, - "mountPath": "/etc/secret-volume" - } - ] - } - ] - } - }] -} -``` - -The specs for the two pods differ only in the value of the object referred to by the secret volume -source. 
Both containers will have the following files present on their filesystems: - - /etc/secret-volume/username - /etc/secret-volume/password - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/secrets.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/secrets.md?pixel)]() diff --git a/release-0.20.0/docs/design/security.md b/release-0.20.0/docs/design/security.md deleted file mode 100644 index 52f25225dbf..00000000000 --- a/release-0.20.0/docs/design/security.md +++ /dev/null @@ -1,123 +0,0 @@ -# Security in Kubernetes - -Kubernetes should define a reasonable set of security best practices that allows processes to be isolated from each other, from the cluster infrastructure, and which preserves important boundaries between those who manage the cluster, and those who use the cluster. - -While Kubernetes today is not primarily a multi-tenant system, the long term evolution of Kubernetes will increasingly rely on proper boundaries between users and administrators. The code running on the cluster must be appropriately isolated and secured to prevent malicious parties from affecting the entire cluster. - - -## High Level Goals - -1. Ensure a clear isolation between the container and the underlying host it runs on -2. Limit the ability of the container to negatively impact the infrastructure or other containers -3. [Principle of Least Privilege](http://en.wikipedia.org/wiki/Principle_of_least_privilege) - ensure components are only authorized to perform the actions they need, and limit the scope of a compromise by limiting the capabilities of individual components -4. Reduce the number of systems that have to be hardened and secured by defining clear boundaries between components -5. Allow users of the system to be cleanly separated from administrators -6. Allow administrative functions to be delegated to users where necessary -7. Allow applications to be run on the cluster that have "secret" data (keys, certs, passwords) which is properly abstracted from "public" data. - - -## Use cases - -### Roles: - -We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories: - -1. k8s admin - administers a kubernetes cluster and has access to the underlying components of the system -2. k8s project administrator - administrates the security of a small subset of the cluster -3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources - -Automated process users fall into the following categories: - -1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources independent of the human users attached to a project -2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles - - -### Description of roles: - -* Developers: - * write pod specs. - * making some of their own images, and using some "community" docker images - * know which pods need to talk to which other pods - * decide which pods should share files with other pods, and which should not. - * reason about application level security, such as containing the effects of a local-file-read exploit in a webserver pod. - * do not often reason about operating system or organizational security. 
- * are not necessarily comfortable reasoning about the security properties of a system at the level of detail of Linux Capabilities, SELinux, AppArmor, etc. - -* Project Admins: - * allocate identity and roles within a namespace - * reason about organizational security within a namespace - * don't give a developer permissions that are not needed for role. - * protect files on shared storage from unnecessary cross-team access - * are less focused about application security - -* Administrators: - * are less focused on application security. Focused on operating system security. - * protect the node from bad actors in containers, and properly-configured innocent containers from bad actors in other containers. - * comfortable reasoning about the security properties of a system at the level of detail of Linux Capabilities, SELinux, AppArmor, etc. - * decides who can use which Linux Capabilities, run privileged containers, use hostDir, etc. - * e.g. a team that manages Ceph or a mysql server might be trusted to have raw access to storage devices in some organizations, but teams that develop the applications at higher layers would not. - - -## Proposed Design - -A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to is limited by that *service account*. - - -1. The API should authenticate and authorize user actions [authn and authz](http://docs.k8s.io/design/access.md) -2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API. -3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd) -4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](http://docs.k8s.io/design/service_accounts.md) - 1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption - 2. If the user who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk - 3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action -5. When container processes run on the cluster, they should run in a [security context](http://docs.k8s.io/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions. - 1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID - 2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID - 3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions - 4. Project administrators should be able to run pods within a namespace under different security contexts, and developers must be able to specify which of the available security contexts they may use - 5. Developers should be able to run their own images or images from the community and expect those images to run correctly - 6. 
Developers may need to ensure their images work within higher security requirements specified by administrators - 7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met. - 8. When application developers want to share filesystem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes -6. Developers should be able to define [secrets](http://docs.k8s.io/design/secrets.md) that are automatically added to the containers when pods are run - 1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples: - 1. An SSH private key for git cloning remote data - 2. A client certificate for accessing a remote system - 3. A private key and certificate for a web server - 4. A .kubeconfig file with embedded cert / token data for accessing the Kubernetes master - 5. A .dockercfg file for pulling images from a protected registry - 2. Developers should be able to define the pod spec so that a secret lands in a specific location - 3. Project administrators should be able to limit developers within a namespace from viewing or modifying secrets (anyone who can launch an arbitrary pod can view secrets) - 4. Secrets are generally not copied from one namespace to another when a developer's application definitions are copied - - -### Related design discussion - -* Authorization and authentication http://docs.k8s.io/design/access.md -* Secret distribution via files https://github.com/GoogleCloudPlatform/kubernetes/pull/2030 -* Docker secrets https://github.com/docker/docker/pull/6697 -* Docker vault https://github.com/docker/docker/issues/10310 -* Service Accounts: http://docs.k8s.io/design/service_accounts.md -* Secret volumes https://github.com/GoogleCloudPlatform/kubernetes/4126 - -## Specific Design Points - -### TODO: authorization, authentication - -### Isolate the data store from the nodes and supporting infrastructure - -Access to the central data store (etcd) in Kubernetes allows an attacker to run arbitrary containers on hosts, to gain access to any protected information stored in either volumes or in pods (such as access tokens or shared secrets provided as environment variables), to intercept and redirect traffic from running services by inserting middlemen, or to simply delete the entire history of the cluster. - -As a general principle, access to the central data store should be restricted to the components that need full control over the system and which can apply appropriate authorization and authentication of change requests. In the future, etcd may offer granular access control, but that granularity will require an administrator to understand the schema of the data to properly apply security. An administrator must be able to properly secure Kubernetes at a policy level, rather than at an implementation level, and schema changes over time should not risk unintended security leaks. - -Both the Kubelet and Kube Proxy need information related to their specific roles - for the Kubelet, the set of pods it should be running, and for the Proxy, the set of services and endpoints to load balance. The Kubelet also needs to provide information about running pods and historical termination data. The access pattern for both Kubelet and Proxy to load their configuration is an efficient "wait for changes" request over HTTP. It should be possible to limit the Kubelet and Proxy to only access the information they need to perform their roles and no more.
- -The controller manager for Replication Controllers and other future controllers act on behalf of a user via delegation to perform automated maintenance on Kubernetes resources. Their ability to access or modify resource state should be strictly limited to their intended duties and they should be prevented from accessing information not pertinent to their role. For example, a replication controller needs only to create a copy of a known pod configuration, to determine the running state of an existing pod, or to delete an existing pod that it created - it does not need to know the contents or current state of a pod, nor have access to any data in the pods attached volumes. - -The Kubernetes pod scheduler is responsible for reading data from the pod to fit it onto a node in the cluster. At a minimum, it needs access to view the ID of a pod (to craft the binding), its current state, any resource information necessary to identify placement, and other data relevant to concerns like anti-affinity, zone or region preference, or custom logic. It does not need the ability to modify pods or see other resources, only to create bindings. It should not need the ability to delete bindings unless the scheduler takes control of relocating components on failed hosts (which could be implemented by a separate component that can delete bindings but not create them). The scheduler may need read access to user or project-container information to determine preferential location (underspecified at this time). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/security.md?pixel)]() diff --git a/release-0.20.0/docs/design/security_context.md b/release-0.20.0/docs/design/security_context.md deleted file mode 100644 index 0ca308241d7..00000000000 --- a/release-0.20.0/docs/design/security_context.md +++ /dev/null @@ -1,163 +0,0 @@ -# Security Contexts -## Abstract -A security context is a set of constraints that are applied to a container in order to achieve the following goals (from [security design](security.md)): - -1. Ensure a clear isolation between container and the underlying host it runs on -2. Limit the ability of the container to negatively impact the infrastructure or other containers - -## Background - -The problem of securing containers in Kubernetes has come up [before](https://github.com/GoogleCloudPlatform/kubernetes/issues/398) and the potential problems with container security are [well known](http://opensource.com/business/14/7/docker-security-selinux). Although it is not possible to completely isolate Docker containers from their hosts, new features like [user namespaces](https://github.com/docker/libcontainer/pull/304) make it possible to greatly reduce the attack surface. - -## Motivation - -### Container isolation - -In order to improve container isolation from host and other containers running on the host, containers should only be -granted the access they need to perform their work. To this end it should be possible to take advantage of Docker -features such as the ability to [add or remove capabilities](https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration) and [assign MCS labels](https://docs.docker.com/reference/run/#security-configuration) -to the container process. 
- -Support for user namespaces has recently been [merged](https://github.com/docker/libcontainer/pull/304) into Docker's libcontainer project and should soon surface in Docker itself. It will make it possible to assign a range of unprivileged uids and gids from the host to each container, improving the isolation between host and container and between containers. - -### External integration with shared storage -In order to support external integration with shared storage, processes running in a Kubernetes cluster -should be able to be uniquely identified by their Unix UID, such that a chain of ownership can be established. -Processes in pods will need to have consistent UID/GID/SELinux category labels in order to access shared disks. - -## Constraints and Assumptions -* It is out of the scope of this document to prescribe a specific set - of constraints to isolate containers from their host. Different use cases need different - settings. -* The concept of a security context should not be tied to a particular security mechanism or platform - (ie. SELinux, AppArmor) -* Applying a different security context to a scope (namespace or pod) requires a solution such as the one proposed for - [service accounts](./service_accounts.md). - -## Use Cases - -In order of increasing complexity, following are example use cases that would -be addressed with security contexts: - -1. Kubernetes is used to run a single cloud application. In order to protect - nodes from containers: - * All containers run as a single non-root user - * Privileged containers are disabled - * All containers run with a particular MCS label - * Kernel capabilities like CHOWN and MKNOD are removed from containers - -2. Just like case #1, except that I have more than one application running on - the Kubernetes cluster. - * Each application is run in its own namespace to avoid name collisions - * For each application a different uid and MCS label is used - -3. Kubernetes is used as the base for a PAAS with - multiple projects, each project represented by a namespace. - * Each namespace is associated with a range of uids/gids on the node that - are mapped to uids/gids on containers using linux user namespaces. - * Certain pods in each namespace have special privileges to perform system - actions such as talking back to the server for deployment, run docker - builds, etc. - * External NFS storage is assigned to each namespace and permissions set - using the range of uids/gids assigned to that namespace. - -## Proposed Design - -### Overview -A *security context* consists of a set of constraints that determine how a container -is secured before getting created and run. A security context resides on the container and represents the runtime parameters that will -be used to create and run the container via container APIs. A *security context provider* is passed to the Kubelet so it can have a chance -to mutate Docker API calls in order to apply the security context. - -It is recommended that this design be implemented in two phases: - -1. Implement the security context provider extension point in the Kubelet - so that a default security context can be applied on container run and creation. -2. Implement a security context structure that is part of a service account. The - default context provider can then be used to apply a security context based - on the service account associated with the pod. - -### Security Context Provider - -The Kubelet will have an interface that points to a `SecurityContextProvider`. 
The `SecurityContextProvider` is invoked before creating and running a given container: - -```go -type SecurityContextProvider interface { - // ModifyContainerConfig is called before the Docker createContainer call. - // The security context provider can make changes to the Config with which - // the container is created. - // An error is returned if it's not possible to secure the container as - // requested with a security context. - ModifyContainerConfig(pod *api.Pod, container *api.Container, config *docker.Config) - - // ModifyHostConfig is called before the Docker runContainer call. - // The security context provider can make changes to the HostConfig, affecting - // security options, whether the container is privileged, volume binds, etc. - // An error is returned if it's not possible to secure the container as requested - // with a security context. - ModifyHostConfig(pod *api.Pod, container *api.Container, hostConfig *docker.HostConfig) -} -``` - -If the value of the SecurityContextProvider field on the Kubelet is nil, the kubelet will create and run the container as it does today. - -### Security Context - -A security context resides on the container and represents the runtime parameters that will -be used to create and run the container via container APIs. Following is an example of an initial implementation: - -```go -type Container struct { - ... other fields omitted ... - // Optional: SecurityContext defines the security options the pod should be run with - SecurityContext *SecurityContext -} - -// SecurityContext holds security configuration that will be applied to a container. SecurityContext -// contains duplication of some existing fields from the Container resource. These duplicate fields -// will be populated based on the Container configuration if they are not set. Defining them on -// both the Container AND the SecurityContext will result in an error. -type SecurityContext struct { - // Capabilities are the capabilities to add/drop when running the container - Capabilities *Capabilities - - // Run the container in privileged mode - Privileged *bool - - // SELinuxOptions are the labels to be applied to the container - // and volumes - SELinuxOptions *SELinuxOptions - - // RunAsUser is the UID to run the entrypoint of the container process. - RunAsUser *int64 -} - -// SELinuxOptions are the labels to be applied to the container. -type SELinuxOptions struct { - // SELinux user label - User string - - // SELinux role label - Role string - - // SELinux type label - Type string - - // SELinux level label. - Level string -} -``` -### Admission - -It is up to an admission plugin to determine if the security context is acceptable or not. At the -time of writing, the admission control plugin for security contexts will only allow a context that -has defined capabilities or privileged. Contexts that attempt to define a UID or SELinux options -will be denied by default. In the future the admission plugin will base this decision upon -configurable policies that reside within the [service account](https://github.com/GoogleCloudPlatform/kubernetes/pull/2297).
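### Example: applying a security context (illustrative sketch)

To tie the provider interface and the `SecurityContext` structure together, here is a minimal sketch of what an implementation could look like. It is illustrative only: the import paths, the `go-dockerclient` field names (`User`, `Privileged`, `CapAdd`, `CapDrop`), and the layout of the `Capabilities` type are assumptions, not part of this proposal.

```go
// Illustrative sketch of a SecurityContextProvider. Field names on
// docker.Config, docker.HostConfig, and Capabilities are assumptions.
package securitycontext

import (
	"strconv"

	docker "github.com/fsouza/go-dockerclient"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
)

type simpleProvider struct{}

// ModifyContainerConfig applies the UID from the security context, if any.
func (p simpleProvider) ModifyContainerConfig(pod *api.Pod, container *api.Container, config *docker.Config) {
	sc := container.SecurityContext
	if sc == nil || sc.RunAsUser == nil {
		return
	}
	config.User = strconv.FormatInt(*sc.RunAsUser, 10)
}

// ModifyHostConfig applies privileged mode and capability add/drop lists.
func (p simpleProvider) ModifyHostConfig(pod *api.Pod, container *api.Container, hostConfig *docker.HostConfig) {
	sc := container.SecurityContext
	if sc == nil {
		return
	}
	if sc.Privileged != nil {
		hostConfig.Privileged = *sc.Privileged
	}
	if sc.Capabilities != nil {
		// Assumes Capabilities carries Add and Drop lists, as in the existing
		// container-level capabilities fields.
		for _, c := range sc.Capabilities.Add {
			hostConfig.CapAdd = append(hostConfig.CapAdd, string(c))
		}
		for _, c := range sc.Capabilities.Drop {
			hostConfig.CapDrop = append(hostConfig.CapDrop, string(c))
		}
	}
}
```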
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security_context.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/security_context.md?pixel)]() diff --git a/release-0.20.0/docs/design/service_accounts.md b/release-0.20.0/docs/design/service_accounts.md deleted file mode 100644 index cc9f72d18de..00000000000 --- a/release-0.20.0/docs/design/service_accounts.md +++ /dev/null @@ -1,170 +0,0 @@ -#Service Accounts - -## Motivation - -Processes in Pods may need to call the Kubernetes API. For example: - - scheduler - - replication controller - - node controller - - a map-reduce type framework which has a controller that then tries to make a dynamically determined number of workers and watch them - - continuous build and push system - - monitoring system - -They also may interact with services other than the Kubernetes API, such as: - - an image repository, such as docker -- both when the images are pulled to start the containers, and for writing - images in the case of pods that generate images. - - accessing other cloud services, such as blob storage, in the context of a large, integrated, cloud offering (hosted - or private). - - accessing files in an NFS volume attached to the pod - -## Design Overview -A service account binds together several things: - - a *name*, understood by users, and perhaps by peripheral systems, for an identity - - a *principal* that can be authenticated and [authorized](../authorization.md) - - a [security context](./security_context.md), which defines the Linux Capabilities, User IDs, Groups IDs, and other - capabilities and controls on interaction with the file system and OS. - - a set of [secrets](./secrets.md), which a container may use to - access various networked resources. - -## Design Discussion - -A new object Kind is added: -```go -type ServiceAccount struct { - TypeMeta `json:",inline" yaml:",inline"` - ObjectMeta `json:"metadata,omitempty" yaml:"metadata,omitempty"` - - username string - securityContext ObjectReference // (reference to a securityContext object) - secrets []ObjectReference // (references to secret objects -} -``` - -The name ServiceAccount is chosen because it is widely used already (e.g. by Kerberos and LDAP) -to refer to this type of account. Note that it has no relation to kubernetes Service objects. - -The ServiceAccount object does not include any information that could not be defined separately: - - username can be defined however users are defined. - - securityContext and secrets are only referenced and are created using the REST API. - -The purpose of the serviceAccount object is twofold: - - to bind usernames to securityContexts and secrets, so that the username can be used to refer succinctly - in contexts where explicitly naming securityContexts and secrets would be inconvenient - - to provide an interface to simplify allocation of new securityContexts and secrets. -These features are explained later. - -### Names - -From the standpoint of the Kubernetes API, a `user` is any principal which can authenticate to kubernetes API. -This includes a human running `kubectl` on her desktop and a container in a Pod on a Node making API calls. - -There is already a notion of a username in kubernetes, which is populated into a request context after authentication. -However, there is no API object representing a user. 
While this may evolve, it is expected that in mature installations, -the canonical storage of user identifiers will be handled by a system external to kubernetes. - -Kubernetes does not dictate how to divide up the space of user identifier strings. User names can be -simple Unix-style short usernames (e.g. `alice`), or may be qualified to allow for federated identity ( -`alice@example.com` vs `alice@example.org`.) Naming convention may distinguish service accounts from user -accounts (e.g. `alice@example.com` vs `build-service-account-a3b7f0@foo-namespace.service-accounts.example.com`), -but Kubernetes does not require this. - -Kubernetes also does not require that there be a distinction between human and Pod users. It will be possible -to set up a cluster where Alice the human talks to the kubernetes API as username `alice` and starts pods that -also talk to the API as user `alice` and write files to NFS as user `alice`. But, this is not recommended. - -Instead, it is recommended that Pods and Humans have distinct identities, and reference implementations will -make this distinction. - -The distinction is useful for a number of reasons: - - the requirements for humans and automated processes are different: - - Humans need a wide range of capabilities to do their daily activities. Automated processes often have more narrowly-defined activities. - - Humans may better tolerate the exceptional conditions created by expiration of a token. Remembering to handle - this in a program is more annoying. So, either long-lasting credentials or automated rotation of credentials is - needed. - - A Human typically keeps credentials on a machine that is not part of the cluster and so not subject to automatic - management. A VM with a role/service-account can have its credentials automatically managed. - - the identity of a Pod cannot in general be mapped to a single human. - - If policy allows, it may be created by one human, and then updated by another, and another, until its behavior cannot be attributed to a single human. - -**TODO**: consider getting rid of separate serviceAccount object and just rolling its parts into the SecurityContext or -Pod Object. - -The `secrets` field is a list of references to /secret objects that a process started as that service account should -have access to in order to be able to assert that role. - -The secrets are not inline with the serviceAccount object. This way, most or all users can have permission to `GET /serviceAccounts` so they can remind themselves -what serviceAccounts are available for use. - -Nothing will prevent creation of a serviceAccount with two secrets of type `SecretTypeKubernetesAuth`, or secrets of two -different types. Kubelet and client libraries will have some behavior, TBD, to handle the case of multiple secrets of a -given type (pick first or provide all and try each in order, etc). - -When a serviceAccount and a matching secret exist, then a `User.Info` for the serviceAccount and a `BearerToken` from the secret -are added to the map of tokens used by the authentication process in the apiserver, and similarly for other types. (We -might have some types that do not do anything on apiserver but just get pushed to the kubelet.) - -### Pods -The `PodSpec` is extended to have a `Pods.Spec.ServiceAccountUsername` field. If this is unset, then a -default value is chosen. If it is set, then the corresponding value of `Pods.Spec.SecurityContext` is set by the -Service Account Finalizer (see below).
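A rough sketch of that extension is shown below; only the `ServiceAccountUsername` field name comes from this proposal, while the JSON tag and its placement within `PodSpec` are assumptions.

```go
// Illustrative sketch of the proposed PodSpec extension. Only the
// ServiceAccountUsername field name is taken from this proposal; the JSON tag
// is an assumption.
type PodSpec struct {
	// ... existing fields omitted ...

	// ServiceAccountUsername names the service account this pod runs as.
	// If unset, a default is chosen; if set, the Service Account Finalizer
	// fills in the corresponding SecurityContext (see below).
	ServiceAccountUsername string `json:"serviceAccountUsername,omitempty"`
}
```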
- -TBD: how policy limits which users can make pods with which service accounts. - -### Authorization -Kubernetes API Authorization Policies refer to users. Pods created with a `Pods.Spec.ServiceAccountUsername` typically -get a `Secret` which allows them to authenticate to the Kubernetes APIserver as a particular user. So any -policy that is desired can be applied to them. - -A higher level workflow is needed to coordinate creation of serviceAccounts, secrets and relevant policy objects. -Users are free to extend kubernetes to put this business logic wherever is convenient for them, though the -Service Account Finalizer is one place where this can happen (see below). - -### Kubelet - -The kubelet will treat as "not ready to run" (needing a finalizer to act on it) any Pod which has an empty -SecurityContext. - -The kubelet will set a default, restrictive, security context for any pods created from non-Apiserver config -sources (http, file). - -Kubelet watches apiserver for secrets which are needed by pods bound to it. - -**TODO**: how to only let kubelet see secrets it needs to know. - -### The service account finalizer - -There are several ways to use Pods with SecurityContexts and Secrets. - -One way is to explicitly specify the securityContext and all secrets of a Pod when the pod is initially created, -like this: - -**TODO**: example of pod with explicit refs. - -Another way is with the *Service Account Finalizer*, a plugin process which is optional, and which handles -business logic around service accounts. - -The Service Account Finalizer watches Pods, Namespaces, and ServiceAccount definitions. - -First, if it finds pods which have a `Pod.Spec.ServiceAccountUsername` but no `Pod.Spec.SecurityContext` set, -then it copies in the referenced securityContext and secrets references for the corresponding `serviceAccount`. - -Second, if ServiceAccount definitions change, it may take some actions. -**TODO**: decide what actions it takes when a serviceAccount definition changes. Does it stop pods, or just -allow someone to list ones that are out of spec? In general, people may want to customize this? - -Third, if a new namespace is created, it may create a new serviceAccount for that namespace. This may include -a new username (e.g. `NAMESPACE-default-service-account@serviceaccounts.$CLUSTERID.kubernetes.io`), a new -securityContext, a newly generated secret to authenticate that serviceAccount to the Kubernetes API, and default -policies for that service account. -**TODO**: more concrete example. What are typical default permissions for default service account (e.g. readonly access -to services in the same namespace and read-write access to events in that namespace?) - -Finally, it may provide an interface to automate creation of new serviceAccounts. In that case, the user may want -to GET serviceAccounts to see what has been created.
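To make the first of these behaviors concrete, a rough sketch of the finalizer's reconciliation step is shown below. Everything in it is hypothetical: the helper signature, the lookup function, and the destination fields for the copied references are not specified by this proposal.

```go
// finalizePod is an illustrative sketch of the finalizer's first behavior:
// copying the security context and secret references from a pod's service
// account onto the pod. All names and fields here are hypothetical.
func finalizePod(pod *api.Pod, lookup func(username string) (*api.ServiceAccount, error)) error {
	if pod.Spec.ServiceAccountUsername == "" || pod.Spec.SecurityContext != nil {
		// Nothing to do: no service account requested, or already finalized.
		return nil
	}
	sa, err := lookup(pod.Spec.ServiceAccountUsername)
	if err != nil {
		return err
	}
	// Copy in the referenced securityContext and secret references; the exact
	// destination fields on the pod are left open by this proposal.
	pod.Spec.SecurityContext = sa.SecurityContext
	pod.Spec.Secrets = append(pod.Spec.Secrets, sa.Secrets...)
	return nil
}
```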
- - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/service_accounts.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/service_accounts.md?pixel)]() diff --git a/release-0.20.0/docs/design/simple-rolling-update.md b/release-0.20.0/docs/design/simple-rolling-update.md deleted file mode 100644 index 0f6c4a269f7..00000000000 --- a/release-0.20.0/docs/design/simple-rolling-update.md +++ /dev/null @@ -1,97 +0,0 @@ -## Simple rolling update -This is a lightweight design document for simple rolling update in ```kubectl``` - -Complete execution flow can be found [here](#execution-details). - -### Lightweight rollout -Assume that we have a current replication controller named ```foo``` and it is running image ```image:v1``` - -```kubectl rolling-update rc foo [foo-v2] --image=myimage:v2``` - -If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to -the name of the original replication controller. - -Obviously there is a race here, where if you kill the client between delete foo, and creating the new version of 'foo' you might be surprised about what is there, but I think that's ok. -See [Recovery](#recovery) below - -If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name, -and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` replication controllers. -The value of that label is the hash of the complete JSON representation of the```foo-next``` or```foo``` replication controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag. - -#### Recovery -If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out. -To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the ```kubernetes.io/``` annotation namespace: - * ```desired-replicas``` The desired number of replicas for this replication controller (either N or zero) - * ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax `````` the namespace is assumed to be identical to the namespace of this replication controller.) - -Recovery is achieved by issuing the same command again: - -``` -kubectl rolling-update rc foo [foo-v2] --image=myimage:v2 -``` - -Whenever the rolling update command executes, the kubectl client looks for replication controllers called ```foo``` and ```foo-next```, if they exist, an attempt is -made to roll ```foo``` to ```foo-next```. If ```foo-next``` does not exist, then it is created, and the rollout is a new rollout. If ```foo``` doesn't exist, then -it is assumed that the rollout is nearly completed, and ```foo-next``` is renamed to ```foo```. Details of the execution flow are given below. - - -### Aborting a rollout -Abort is assumed to want to reverse a rollout in progress. 
- -```kubectl rolling-update rc foo [foo-v2] --rollback``` - -This is really just semantic sugar for: - -```kubectl rolling-update rc foo-v2 foo``` - -With the added detail that it moves the ```desired-replicas``` annotation from ```foo-v2``` to ```foo``` - - -### Execution Details - -For the purposes of this example, assume that we are rolling from ```foo``` to ```foo-next``` where the only change is an image update from `v1` to `v2` - -If the user doesn't specify a ```foo-next``` name, then it is either discovered from the ```update-partner``` annotation on ```foo```. If that annotation doesn't exist, -then ```foo-next``` is synthesized using the pattern ```-``` - -#### Initialization - * If ```foo``` and ```foo-next``` do not exist: - * Exit, and indicate an error to the user, that the specified controller doesn't exist. - * If ```foo``` exists, but ```foo-next``` does not: - * Create ```foo-next``` populate it with the ```v2``` image, set ```desired-replicas``` to ```foo.Spec.Replicas``` - * Goto Rollout - * If ```foo-next``` exists, but ```foo``` does not: - * Assume that we are in the rename phase. - * Goto Rename - * If both ```foo``` and ```foo-next``` exist: - * Assume that we are in a partial rollout - * If ```foo-next``` is missing the ```desired-replicas``` annotation - * Populate the ```desired-replicas``` annotation to ```foo-next``` using the current size of ```foo``` - * Goto Rollout - -#### Rollout - * While size of ```foo-next``` < ```desired-replicas``` annotation on ```foo-next``` - * increase size of ```foo-next``` - * if size of ```foo``` > 0 - decrease size of ```foo``` - * Goto Rename - -#### Rename - * delete ```foo``` - * create ```foo``` that is identical to ```foo-next``` - * delete ```foo-next``` - -#### Abort - * If ```foo-next``` doesn't exist - * Exit and indicate to the user that they may want to simply do a new rollout with the old version - * If ```foo``` doesn't exist - * Exit and indicate not found to the user - * Otherwise, ```foo-next``` and ```foo``` both exist - * Set ```desired-replicas``` annotation on ```foo``` to match the annotation on ```foo-next``` - * Goto Rollout with ```foo``` and ```foo-next``` trading places. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/simple-rolling-update.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/design/simple-rolling-update.md?pixel)]() diff --git a/release-0.20.0/docs/devel/README.md b/release-0.20.0/docs/devel/README.md deleted file mode 100644 index e97d71a892d..00000000000 --- a/release-0.20.0/docs/devel/README.md +++ /dev/null @@ -1,37 +0,0 @@ -# Developing Kubernetes - -Docs in this directory relate to developing Kubernetes. - -* **On Collaborative Development** ([collab.md](collab.md)): info on pull requests and code reviews. - -* **Development Guide** ([development.md](development.md)): Setting up your environment tests. - -* **Making release notes** ([making-release-notes.md](making-release-notes.md)): Generating release nodes for a new release. - -* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests. - Here's how to run your tests many times. - -* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are reviewed and prioritized. - -* **Logging Conventions** ([logging.md](logging.md)]: Glog levels. - -* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed. 
- -* **Releasing Kubernetes** ([releasing.md](releasing.md)): How to create a Kubernetes release (as in version) - and how the version information gets embedded into the built binaries. - -* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes. - -* **Instrumenting Kubernetes with a new metric** - ([instrumentation.md](instrumentation.md)): How to add a new metrics to the - Kubernetes code base. - -* **Coding Conventions** ([coding-conventions.md](coding-conventions.md)): - Coding style advice for contributors. - -* **Faster PR reviews** ([faster_reviews.md](faster_reviews.md)): How to get faster PR reviews. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/README.md?pixel)]() diff --git a/release-0.20.0/docs/devel/api_changes.md b/release-0.20.0/docs/devel/api_changes.md deleted file mode 100644 index e6f7b9040a9..00000000000 --- a/release-0.20.0/docs/devel/api_changes.md +++ /dev/null @@ -1,348 +0,0 @@ -# So you want to change the API? - -The Kubernetes API has two major components - the internal structures and -the versioned APIs. The versioned APIs are intended to be stable, while the -internal structures are implemented to best reflect the needs of the Kubernetes -code itself. - -What this means for API changes is that you have to be somewhat thoughtful in -how you approach changes, and that you have to touch a number of pieces to make -a complete change. This document aims to guide you through the process, though -not all API changes will need all of these steps. - -## Operational overview - -It is important to have a high level understanding of the API system used in -Kubernetes in order to navigate the rest of this document. - -As mentioned above, the internal representation of an API object is decoupled -from any one API version. This provides a lot of freedom to evolve the code, -but it requires robust infrastructure to convert between representations. There -are multiple steps in processing an API operation - even something as simple as -a GET involves a great deal of machinery. - -The conversion process is logically a "star" with the internal form at the -center. Every versioned API can be converted to the internal form (and -vice-versa), but versioned APIs do not convert to other versioned APIs directly. -This sounds like a heavy process, but in reality we do not intend to keep more -than a small number of versions alive at once. While all of the Kubernetes code -operates on the internal structures, they are always converted to a versioned -form before being written to storage (disk or etcd) or being sent over a wire. -Clients should consume and operate on the versioned APIs exclusively. - -To demonstrate the general process, here is a (hypothetical) example: - - 1. A user POSTs a `Pod` object to `/api/v7beta1/...` - 2. The JSON is unmarshalled into a `v7beta1.Pod` structure - 3. Default values are applied to the `v7beta1.Pod` - 4. The `v7beta1.Pod` is converted to an `api.Pod` structure - 5. The `api.Pod` is validated, and any errors are returned to the user - 6. The `api.Pod` is converted to a `v6.Pod` (because v6 is the latest stable - version) - 7. The `v6.Pod` is marshalled into JSON and written to etcd - -Now that we have the `Pod` object stored, a user can GET that object in any -supported api version. For example: - - 1. 
A user GETs the `Pod` from `/api/v5/...` - 2. The JSON is read from etcd and unmarshalled into a `v6.Pod` structure - 3. Default values are applied to the `v6.Pod` - 4. The `v6.Pod` is converted to an `api.Pod` structure - 5. The `api.Pod` is converted to a `v5.Pod` structure - 6. The `v5.Pod` is marshalled into JSON and sent to the user - -The implication of this process is that API changes must be done carefully and -backward-compatibly. - -## On compatibility - -Before talking about how to make API changes, it is worthwhile to clarify what -we mean by API compatibility. An API change is considered backward-compatible -if it: - * adds new functionality that is not required for correct behavior - * does not change existing semantics - * does not change existing defaults - -Put another way: - -1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before - your change must work the same after your change. -2. Any API call that uses your change must not cause problems (e.g. crash or - degrade behavior) when issued against servers that do not include your change. -3. It must be possible to round-trip your change (convert to different API - versions and back) with no loss of information. - -If your change does not meet these criteria, it is not considered strictly -compatible. There are times when this might be OK, but mostly we want changes -that meet this definition. If you think you need to break compatibility, you -should talk to the Kubernetes team first. - -Let's consider some examples. In a hypothetical API (assume we're at version -v6), the `Frobber` struct looks something like this: - -```go -// API v6. -type Frobber struct { - Height int `json:"height"` - Param string `json:"param"` -} -``` - -You want to add a new `Width` field. It is generally safe to add new fields -without changing the API version, so you can simply change it to: - -```go -// Still API v6. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` -} -``` - -The onus is on you to define a sane default value for `Width` such that rule #1 -above is true - API calls and stored objects that used to work must continue to -work. - -For your next change you want to allow multiple `Param` values. You can not -simply change `Param string` to `Params []string` (without creating a whole new -API version) - that fails rules #1 and #2. You can instead do something like: - -```go -// Still API v6, but kind of clumsy. -type Frobber struct { - Height int `json:"height"` - Width int `json:"width"` - Param string `json:"param"` // the first param - ExtraParams []string `json:"params"` // additional params -} -``` - -Now you can satisfy the rules: API calls that provide the old style `Param` -will still work, while servers that don't understand `ExtraParams` can ignore -it. This is somewhat unsatisfying as an API, but it is strictly compatible. - -Part of the reason for versioning APIs and for using internal structs that are -distinct from any one version is to handle growth like this. The internal -representation can be implemented as: - -```go -// Internal, soon to be v7beta1. -type Frobber struct { - Height int - Width int - Params []string -} -``` - -The code that converts to/from versioned APIs can decode this into the somewhat -uglier (but compatible!) structures. Eventually, a new API version, let's call -it v7beta1, will be forked and it can use the clean internal structure. - -We've seen how to satisfy rules #1 and #2. 
Rule #3 means that you can not -extend one versioned API without also extending the others. For example, an -API call might POST an object in API v7beta1 format, which uses the cleaner -`Params` field, but the API server might store that object in trusty old v6 -form (since v7beta1 is "beta"). When the user reads the object back in the -v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This -means that, even though it is ugly, a compatible change must be made to the v6 -API. - -As another interesting example, enumerated values provide a unique challenge. -Adding a new value to an enumerated set is *not* a compatible change. Clients -which assume they know how to handle all possible values of a given field will -not be able to handle the new values. However, removing value from an -enumerated set *can* be a compatible change, if handled properly (treat the -removed value as deprecated but allowed). - -## Changing versioned APIs - -For most changes, you will probably find it easiest to change the versioned -APIs first. This forces you to think about how to make your change in a -compatible way. Rather than doing each step in every version, it's usually -easier to do each versioned API one at a time, or to do all of one version -before starting "all the rest". - -### Edit types.go - -The struct definitions for each API are in `pkg/api//types.go`. Edit -those files to reflect the change you want to make. Note that all non-online -fields in versioned APIs must have description tags - these are used to generate -documentation. - -### Edit defaults.go - -If your change includes new fields for which you will need default values, you -need to add cases to `pkg/api//defaults.go`. Of course, since you -have added code, you have to add a test: `pkg/api//defaults_test.go`. - -Do use pointers to scalars when you need to distinguish between an unset value -and an an automatic zero value. For example, -`PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` the go type -definition. A zero value means 0 seconds, and a nil value asks the system to -pick a default. - -Don't forget to run the tests! - -### Edit conversion.go - -Given that you have not yet changed the internal structs, this might feel -premature, and that's because it is. You don't yet have anything to convert to -or from. We will revisit this in the "internal" section. If you're doing this -all in a different order (i.e. you started with the internal structs), then you -should jump to that topic below. In the very rare case that you are making an -incompatible change you might or might not want to do this now, but you will -have to do more later. The files you want are -`pkg/api//conversion.go` and `pkg/api//conversion_test.go`. - -## Changing the internal structures - -Now it is time to change the internal structs so your versioned changes can be -used. - -### Edit types.go - -Similar to the versioned APIs, the definitions for the internal structs are in -`pkg/api/types.go`. Edit those files to reflect the change you want to make. -Keep in mind that the internal structs must be able to express *all* of the -versioned APIs. - -## Edit validation.go - -Most changes made to the internal structs need some form of input validation. -Validation is currently done on internal objects in -`pkg/api/validation/validation.go`. 
This validation is the one of the first -opportunities we have to make a great user experience - good error messages and -thorough validation help ensure that users are giving you what you expect and, -when they don't, that they know why and how to fix it. Think hard about the -contents of `string` fields, the bounds of `int` fields and the -requiredness/optionalness of fields. - -Of course, code needs tests - `pkg/api/validation/validation_test.go`. - -## Edit version conversions - -At this point you have both the versioned API changes and the internal -structure changes done. If there are any notable differences - field names, -types, structural change in particular - you must add some logic to convert -versioned APIs to and from the internal representation. If you see errors from -the `serialization_test`, it may indicate the need for explicit conversions. - -Performance of conversions very heavily influence performance of apiserver. -Thus, we are auto-generating conversion functions that are much more efficient -than the generic ones (which are based on reflections and thus are highly -inefficient). - -The conversion code resides with each versioned API. There are two files: - - `pkg/api//conversion.go` containing manually written conversion - functions - - `pkg/api//conversion_generated.go` containing auto-generated - conversion functions - -Since auto-generated conversion functions are using manually written ones, -those manually written should be named with a defined convention, i.e. a function -converting type X in pkg a to type Y in pkg b, should be named: -`convert_a_X_To_b_Y`. - -Also note that you can (and for efficiency reasons should) use auto-generated -conversion functions when writing your conversion functions. - -Once all the necessary manually written conversions are added, you need to -regenerate auto-generated ones. To regenerate them: - - run -``` - $ hack/update-generated-conversions.sh -``` - -If running the above script is impossible due to compile errors, the easiest -workaround is to comment out the code causing errors and let the script to -regenerate it. If the auto-generated conversion methods are not used by the -manually-written ones, it's fine to just remove the whole file and let the -generator to create it from scratch. - -Unsurprisingly, adding manually written conversion also requires you to add tests to -`pkg/api//conversion_test.go`. - -## Update the fuzzer - -Part of our testing regimen for APIs is to "fuzz" (fill with random values) API -objects and then convert them to and from the different API versions. This is -a great way of exposing places where you lost information or made bad -assumptions. If you have added any fields which need very careful formatting -(the test does not run validation) or if you have made assumptions such as -"this slice will always have at least 1 element", you may get an error or even -a panic from the `serialization_test`. If so, look at the diff it produces (or -the backtrace in case of a panic) and figure out what you forgot. Encode that -into the fuzzer's custom fuzz functions. Hint: if you added defaults for a field, -that field will need to have a custom fuzz function that ensures that the field is -fuzzed to a non-empty value. - -The fuzzer can be found in `pkg/api/testing/fuzzer.go`. - -## Update the semantic comparisons - -VERY VERY rarely is this needed, but when it hits, it hurts. In some rare -cases we end up with objects (e.g. 
resource quantities) that have morally -equivalent values with different bitwise representations (e.g. value 10 with a -base-2 formatter is the same as value 0 with a base-10 formatter). The only way -Go knows how to do deep-equality is through field-by-field bitwise comparisons. -This is a problem for us. - -The first thing you should do is try not to do that. If you really can't avoid -this, I'd like to introduce you to our semantic DeepEqual routine. It supports -custom overrides for specific types - you can find that in `pkg/api/helpers.go`. - -There's one other time when you might have to touch this: unexported fields. -You see, while Go's `reflect` package is allowed to touch unexported fields, us -mere mortals are not - this includes semantic DeepEqual. Fortunately, most of -our API objects are "dumb structs" all the way down - all fields are exported -(start with a capital letter) and there are no unexported fields. But sometimes -you want to include an object in our API that does have unexported fields -somewhere in it (for example, `time.Time` has unexported fields). If this hits -you, you may have to touch the semantic DeepEqual customization functions. - -## Implement your change - -Now you have the API all changed - go implement whatever it is that you're -doing! - -## Write end-to-end tests - -This is, sadly, still sort of painful. Talk to us and we'll try to help you -figure out the best way to make sure your cool feature keeps working forever. - -## Examples and docs - -At last, your change is done, all unit tests pass, e2e passes, you're done, -right? Actually, no. You just changed the API. If you are touching an -existing facet of the API, you have to try *really* hard to make sure that -*all* the examples and docs are updated. There's no easy way to do this, due -in part to JSON and YAML silently dropping unknown fields. You're clever - -you'll figure it out. Put `grep` or `ack` to good use. - -If you added functionality, you should consider documenting it and/or writing -an example to illustrate your change. - -Make sure you update the swagger API spec by running: - -```shell -$ hack/update-swagger-spec.sh -``` - -The API spec changes should be in a commit separate from your other changes. - -## Incompatible API changes -If your change is going to be backward incompatible or might be a breaking change for API -consumers, please send an announcement to `kubernetes-dev@googlegroups.com` before -the change gets in. If you are unsure, ask. Also make sure that the change gets documented in -`CHANGELOG.md` for the next release. - -## Adding new REST objects - -TODO(smarterclayton): write this. 
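To tie the `Frobber` walkthrough together, here is a minimal, self-contained sketch of what the manually written conversions between the clumsy-but-compatible v6 shape and the clean internal shape might look like, following the `convert_a_X_To_b_Y` naming convention described above. The types, package layout, and `main` driver are invented for illustration, and the signatures are simplified (the real functions live alongside each versioned API and typically also take a conversion scope argument), so treat this as a sketch of the pattern rather than the project's actual code.

```go
package main

import "fmt"

// Hypothetical v6 form: compatible but clumsy (Param plus ExtraParams).
type v6Frobber struct {
	Height      int      `json:"height"`
	Width       int      `json:"width"`
	Param       string   `json:"param"`  // the first param
	ExtraParams []string `json:"params"` // additional params
}

// Hypothetical internal form: clean, soon to be v7beta1.
type apiFrobber struct {
	Height int
	Width  int
	Params []string
}

// convert_v6_Frobber_To_api_Frobber folds the old single Param and the
// newer ExtraParams into the internal Params slice.
func convert_v6_Frobber_To_api_Frobber(in *v6Frobber, out *apiFrobber) error {
	out.Height = in.Height
	out.Width = in.Width
	out.Params = nil
	if in.Param != "" {
		out.Params = append(out.Params, in.Param)
	}
	out.Params = append(out.Params, in.ExtraParams...)
	return nil
}

// convert_api_Frobber_To_v6_Frobber splits Params back into the v6 shape so
// that a round trip through v6 loses no information (rule #3).
func convert_api_Frobber_To_v6_Frobber(in *apiFrobber, out *v6Frobber) error {
	out.Height = in.Height
	out.Width = in.Width
	out.Param = ""
	out.ExtraParams = nil
	if len(in.Params) > 0 {
		out.Param = in.Params[0]
		out.ExtraParams = append(out.ExtraParams, in.Params[1:]...)
	}
	return nil
}

func main() {
	in := v6Frobber{Height: 3, Width: 4, Param: "a", ExtraParams: []string{"b", "c"}}
	var internal apiFrobber
	var back v6Frobber
	_ = convert_v6_Frobber_To_api_Frobber(&in, &internal)
	_ = convert_api_Frobber_To_v6_Frobber(&internal, &back)
	fmt.Printf("internal: %+v\nround-tripped: %+v\n", internal, back)
}
```

A real change would also wire functions like these into the auto-generated conversions and add cases to `conversion_test.go`, as described earlier.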
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/api_changes.md?pixel)]() diff --git a/release-0.20.0/docs/devel/coding-conventions.md b/release-0.20.0/docs/devel/coding-conventions.md deleted file mode 100644 index 6affb41696f..00000000000 --- a/release-0.20.0/docs/devel/coding-conventions.md +++ /dev/null @@ -1,13 +0,0 @@ -Coding style advice for contributors - - Bash - - https://google-styleguide.googlecode.com/svn/trunk/shell.xml - - Go - - https://github.com/golang/go/wiki/CodeReviewComments - - https://gist.github.com/lavalamp/4bd23295a9f32706a48f - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/coding-conventions.md?pixel)]() diff --git a/release-0.20.0/docs/devel/collab.md b/release-0.20.0/docs/devel/collab.md deleted file mode 100644 index ce2bba6aff0..00000000000 --- a/release-0.20.0/docs/devel/collab.md +++ /dev/null @@ -1,46 +0,0 @@ -# On Collaborative Development - -Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. - -## Patches welcome - -First and foremost: as a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a PR. - -## Code reviews - -All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably (for non-trivial changes obligatorily) from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should not be committed until relevant parties (e.g. owners of the subsystem affected by the PR) have had a reasonable chance to look at PR in their local business hours. - -Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe). - -If a PR has gone 2 work days without an owner emerging, please poke the PR thread and ask for a reviewer to be assigned. - -Except for rare cases, such as trivial changes (e.g. typos, comments) or emergencies (e.g. broken builds), maintainers should not merge their own changes. - -Expect reviewers to request that you avoid [common go style mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs. - -## Assigned reviews - -Maintainers can assign reviews to other maintainers, when appropriate. The assignee becomes the shepherd for that PR and is responsible for merging the PR once they are satisfied with it or else closing it. The assignee might request reviews from non-maintainers. 
- -## Merge hours - -Maintainers will do merges of appropriately reviewed-and-approved changes during their local "business hours" (typically 7:00 am Monday to 5:00 pm (17:00h) Friday). PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met. Concretely this means that nobody should merge changes immediately before going to bed for the night. - -There may be discussion an even approvals granted outside of the above hours, but merges will generally be deferred. - -If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24 -hours before merging. Of course "complex" and "controversial" are left to the judgment of the people involved, but we trust that part of being a committer is the judgment required to evaluate such things honestly, and not be -motivated by your desire (or your cube-mate's desire) to get their code merged. Also see "Holds" below, any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review. - -PRs that are incorrectly judged to be merge-able, may be reverted and subject to re-review, if subsequent reviewers believe that they in fact are controversial or complex. - - -## Holds - -Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/collab.md?pixel)]() diff --git a/release-0.20.0/docs/devel/developer-guides/vagrant.md b/release-0.20.0/docs/devel/developer-guides/vagrant.md deleted file mode 100644 index fa6aed032b8..00000000000 --- a/release-0.20.0/docs/devel/developer-guides/vagrant.md +++ /dev/null @@ -1,341 +0,0 @@ -## Getting started with Vagrant - -Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). - -### Prerequisites -1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html -2. Install one of: - 1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads - 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) - 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) - 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) -3. Get or build a [binary release](/docs/getting-started-guides/binary_release.md) - -### Setup - -By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. 
Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: - -```sh -cd kubernetes - -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. - -If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: - -```sh -export VAGRANT_DEFAULT_PROVIDER=parallels -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. - -By default, each VM in the cluster is running Fedora, and all of the Kubernetes services are installed into systemd. - -To access the master or any minion: - -```sh -vagrant ssh master -vagrant ssh minion-1 -``` - -If you are running more than one minion, you can access the others by: - -```sh -vagrant ssh minion-2 -vagrant ssh minion-3 -``` - -To view the service status and/or logs on the kubernetes-master: -```sh -vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver - -[vagrant@kubernetes-master ~] $ sudo systemctl status kube-controller-manager -[vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-controller-manager - -[vagrant@kubernetes-master ~] $ sudo systemctl status etcd -[vagrant@kubernetes-master ~] $ sudo systemctl status nginx -``` - -To view the services on any of the kubernetes-minion(s): -```sh -vagrant ssh minion-1 -[vagrant@kubernetes-minion-1] $ sudo systemctl status docker -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker -[vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet -[vagrant@kubernetes-minion-1] $ sudo journalctl -r -u kubelet -``` - -### Interacting with your Kubernetes cluster with Vagrant. - -With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. - -To push updates to new Kubernetes code after making source changes: -```sh -./cluster/kube-push.sh -``` - -To stop and then restart the cluster: -```sh -vagrant halt -./cluster/kube-up.sh -``` - -To destroy the cluster: -```sh -vagrant destroy -``` - -Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. - -You may need to build the binaries first, you can do this with ```make``` - -```sh -$ ./cluster/kubectl.sh get minions - -NAME LABELS -10.245.1.4 -10.245.1.5 -10.245.1.3 -``` - -### Interacting with your Kubernetes cluster with the `kube-*` scripts. - -Alternatively to using the vagrant commands, you can also use the `cluster/kube-*.sh` scripts to interact with the vagrant based provider just like any other hosting platform for kubernetes. 
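One small convenience before walking through the individual scripts (a suggestion on our part, not something the scripts require): alias the `kubectl.sh` wrapper once per shell so the remaining examples are shorter to type.

```sh
# Run this from the root of the kubernetes checkout so the path resolves.
alias kubectl="$PWD/cluster/kubectl.sh"

# Then, for example:
kubectl get minions
```

The examples below keep the explicit `cluster/...` form so they also work without the alias.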
- -All of these commands assume you have set `KUBERNETES_PROVIDER` appropriately: - -```sh -export KUBERNETES_PROVIDER=vagrant -``` - -Bring up a vagrant cluster - -```sh -./cluster/kube-up.sh -``` - -Destroy the vagrant cluster - -```sh -./cluster/kube-down.sh -``` - -Update the vagrant cluster after you make changes (only works when building your own releases locally): - -```sh -./cluster/kube-push.sh -``` - -Interact with the cluster - -```sh -./cluster/kubectl.sh -``` - -### Authenticating with your master - -When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. - -```sh -cat ~/.kubernetes_vagrant_auth -{ "User": "vagrant", - "Password": "vagrant" - "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", - "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", - "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" -} -``` - -You should now be set to use the `cluster/kubectl.sh` script. For example try to list the minions that you have started with: - -```sh -./cluster/kubectl.sh get minions -``` - -### Running containers - -Your cluster is running, you can list the minions in your cluster: - -```sh -$ ./cluster/kubectl.sh get minions - -NAME LABELS -10.245.2.4 -10.245.2.3 -10.245.2.2 -``` - -Now start running some containers! - -You can now use any of the cluster/kube-*.sh commands to interact with your VM machines. -Before starting a container there will be no pods, services and replication controllers. - -``` -$ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS - -$ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT - -$ cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -``` - -Start a container running nginx with a replication controller and three replicas - -``` -$ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 -``` - -When listing the pods, you will see that three containers have been started and are in Waiting state: - -``` -$ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Waiting -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Waiting -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Waiting -``` - -You need to wait for the provisioning to complete, you can monitor the minions by doing: - -```sh -$ sudo salt '*minion-1' cmd.run 'docker images' -kubernetes-minion-1: - REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE - 96864a7d2df3 26 hours ago 204.4 MB - kubernetes/pause latest 6c4579af347b 8 weeks ago 239.8 kB -``` - -Once the docker image for nginx has been downloaded, the container will start and you can list it: - -```sh -$ sudo salt '*minion-1' cmd.run 'docker ps' -kubernetes-minion-1: - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - dbe79bf6e25b nginx:latest "nginx" 21 seconds ago Up 19 seconds k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f - fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b -``` - -Going back to listing the pods, services and replicationcontrollers, you now have: - -``` -$ cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS 
-781191ff-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.4/10.245.2.4 name=myNginx Running -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running - -$ cluster/kubectl.sh get services -NAME LABELS SELECTOR IP PORT - -$ cluster/kubectl.sh get replicationcontrollers -NAME IMAGE(S SELECTOR REPLICAS -myNginx nginx name=my-nginx 3 -``` - -We did not start any services, hence there are none listed. But we see three replicas displayed properly. -Check the [guestbook](/examples/guestbook/README.md) application to learn how to create a service. -You can already play with scaling the replicas with: - -```sh -$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 -$ ./cluster/kubectl.sh get pods -NAME IMAGE(S) HOST LABELS STATUS -7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running -78140853-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.3/10.245.2.3 name=myNginx Running -``` - -Congratulations! - -### Testing - -The following will run all of the end-to-end testing scenarios assuming you set your environment in `cluster/kube-env.sh`: - -```sh -NUM_MINIONS=3 hack/e2e-test.sh -``` - -### Troubleshooting - -#### I keep downloading the same (large) box all the time! - -By default the Vagrantfile will download the box from S3. You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh` - -```sh -export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box -export KUBERNETES_BOX_URL=path_of_your_kuber_box -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -#### I just created the cluster, but I am getting authorization errors! - -You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact. - -```sh -rm ~/.kubernetes_vagrant_auth -``` - -After using kubectl.sh make sure that the correct credentials are set: - -```sh -cat ~/.kubernetes_vagrant_auth -{ - "User": "vagrant", - "Password": "vagrant" -} -``` - -#### I just created the cluster, but I do not see my container running! - -If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned. - -#### I changed Kubernetes code, but it's not running! - -Are you sure there was no build error? After running `$ vagrant provision`, scroll up and ensure that each Salt state was completed successfully on each box in the cluster. -It's very likely you see a build error due to an error in your source files! - -#### I have brought Vagrant up but the minions won't validate! - -Are you sure you built a release first? Did you install `net-tools`? For more clues, login to one of the minions (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`). - -#### I want to change the number of minions! - -You can control the number of minions that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough minions to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this, by setting `NUM_MINIONS` to 1 like so: - -```sh -export NUM_MINIONS=1 -``` - -#### I want my VMs to have more memory! 
- -You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable. -Just set it to the number of megabytes you would like the machines to have. For example: - -```sh -export KUBERNETES_MEMORY=2048 -``` - -If you need more granular control, you can set the amount of memory for the master and minions independently. For example: - -```sh -export KUBERNETES_MASTER_MEMORY=1536 -export KUBERNETES_MINION_MEMORY=2048 -``` - -#### I ran vagrant suspend and nothing works! -```vagrant suspend``` seems to mess up the network. It's not supported at this time. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/developer-guides/vagrant.md?pixel)]() diff --git a/release-0.20.0/docs/devel/development.md b/release-0.20.0/docs/devel/development.md deleted file mode 100644 index feeb1d553aa..00000000000 --- a/release-0.20.0/docs/devel/development.md +++ /dev/null @@ -1,292 +0,0 @@ -# Development Guide - -# Releases and Official Builds - -Official releases are built in Docker containers. Details are [here](../../build/README.md). You can do simple builds and development with just a local Docker installation. If want to build go locally outside of docker, please continue below. - -## Go development environment - -Kubernetes is written in [Go](http://golang.org) programming language. If you haven't set up Go development environment, please follow [this instruction](http://golang.org/doc/code.html) to install go tool and set up GOPATH. Ensure your version of Go is at least 1.3. - -## Git Setup - -Below, we outline one of the more common git workflows that core developers use. Other git workflows are also valid. - -### Visual overview -![Git workflow](git_workflow.png) - -### Fork the main repository - -1. Go to https://github.com/GoogleCloudPlatform/kubernetes -2. Click the "Fork" button (at the top right) - -### Clone your fork - -The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. - -``` -$ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ -$ cd $GOPATH/src/github.com/GoogleCloudPlatform/ -# Replace "$YOUR_GITHUB_USERNAME" below with your github username -$ git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git -$ cd kubernetes -$ git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' -``` - -### Create a branch and make changes - -``` -$ git checkout -b myfeature -# Make your code changes -``` - -### Keeping your development fork in sync - -``` -$ git fetch upstream -$ git rebase upstream/master -``` - -Note: If you have write access to the main repository at github.com/GoogleCloudPlatform/kubernetes, you should modify your git configuration so that you can't accidentally push to upstream: - -``` -git remote set-url --push upstream no_push -``` - -### Commiting changes to your fork - -``` -$ git commit -$ git push -f origin myfeature -``` - -### Creating a pull request -1. Visit http://github.com/$YOUR_GITHUB_USERNAME/kubernetes -2. Click the "Compare and pull request" button next to your "myfeature" branch. - - -## godep and dependency management - -Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. 
It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. - -### Installing godep -There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: - -1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial -source control system). Use ```apt-get install mercurial``` or ```yum install mercurial``` on Linux, or [brew.sh](http://brew.sh) on OS X, or download -directly from mercurial. - -2) Create a new GOPATH for your tools and install godep: -``` -export GOPATH=$HOME/go-tools -mkdir -p $GOPATH -go get github.com/tools/godep -``` - -3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: -``` -export GOPATH=$HOME/go-tools -export PATH=$PATH:$GOPATH/bin -``` - -### Using godep -Here's a quick walkthrough of one way to use godeps to add or update a Kubernetes dependency into Godeps/_workspace. For more details, please see the instructions in [godep's documentation](https://github.com/tools/godep). - -1) Devote a directory to this endeavor: -``` -export KPATH=$HOME/code/kubernetes -mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -git clone https://path/to/your/fork . -# Or copy your existing local repo here. IMPORTANT: making a symlink doesn't work. -``` - -2) Set up your GOPATH. -``` -# Option A: this will let your builds see packages that exist elsewhere on your system. -export GOPATH=$KPATH:$GOPATH -# Option B: This will *not* let your local builds see packages that exist elsewhere on your system. -export GOPATH=$KPATH -# Option B is recommended if you're going to mess with the dependencies. -``` - -3) Populate your new GOPATH. -``` -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -godep restore -``` - -4) Next, you can either add a new dependency or update an existing one. -``` -# To add a new dependency, do: -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -go get path/to/dependency -# Change code in Kubernetes to use the dependency. -godep save ./... - -# To update an existing dependency, do: -cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes -go get -u path/to/dependency -# Change code in Kubernetes accordingly if necessary. -godep update path/to/dependency -``` - -5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: ```godep restore``` - -It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes. - -Please send dependency updates in separate commits within your PR, for easier reviewing. - -## Hooks - -Before committing any changes, please link/copy these hooks into your .git -directory. This will keep you from accidentally committing non-gofmt'd go code. - -``` -cd kubernetes/.git/hooks/ -ln -s ../../hooks/pre-commit . -``` - -## Unit tests - -``` -cd kubernetes -hack/test-go.sh -``` - -Alternatively, you could also run: - -``` -cd kubernetes -godep go test ./... -``` - -If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet: - -``` -$ cd kubernetes # step into kubernetes' directory. 
-$ cd pkg/kubelet -$ godep go test -# some output from unit tests -PASS -ok github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet 0.317s -``` - -## Coverage - -Currently, collecting coverage is only supported for the Go unit tests. - -To run all unit tests and generate an HTML coverage report, run the following: - -``` -cd kubernetes -KUBE_COVER=y hack/test-go.sh -``` - -At the end of the run, an the HTML report will be generated with the path printed to stdout. - -To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example: -``` -cd kubernetes -KUBE_COVER=y hack/test-go.sh pkg/kubectl -``` - -Multiple arguments can be passed, in which case the coverage results will be combined for all tests run. - -Coverage results for the project can also be viewed on [Coveralls](https://coveralls.io/r/GoogleCloudPlatform/kubernetes), and are continuously updated as commits are merged. Additionally, all pull requests which spawn a Travis build will report unit test coverage results to Coveralls. - -## Integration tests - -You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path, please make sure it is installed and in your ``$PATH``. -``` -cd kubernetes -hack/test-integration.sh -``` - -## End-to-End tests - -You can run an end-to-end test which will bring up a master and two minions, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce". -``` -cd kubernetes -hack/e2e-test.sh -``` - -Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with this command: -``` -go run hack/e2e.go --down -``` - -### Flag options -See the flag definitions in `hack/e2e.go` for more options, such as reusing an existing cluster, here is an overview: - -```sh -# Build binaries for testing -go run hack/e2e.go --build - -# Create a fresh cluster. Deletes a cluster first, if it exists -go run hack/e2e.go --up - -# Create a fresh cluster at a specific release version. -go run hack/e2e.go --up --version=0.7.0 - -# Test if a cluster is up. -go run hack/e2e.go --isup - -# Push code to an existing cluster -go run hack/e2e.go --push - -# Push to an existing cluster, or bring up a cluster if it's down. -go run hack/e2e.go --pushup - -# Run all tests -go run hack/e2e.go --test - -# Run tests matching the regex "Pods.*env" -go run hack/e2e.go -v -test --test_args="--ginkgo.focus=Pods.*env" - -# Alternately, if you have the e2e cluster up and no desire to see the event stream, you can run ginkgo-e2e.sh directly: -hack/ginkgo-e2e.sh --ginkgo.focus=Pods.*env -``` - -### Combining flags -```sh -# Flags can be combined, and their actions will take place in this order: -# -build, -push|-up|-pushup, -test|-tests=..., -down -# e.g.: -go run hack/e2e.go -build -pushup -test -down - -# -v (verbose) can be added if you want streaming output instead of only -# seeing the output of failed commands. - -# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for -# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing -# kubectl output. 
-go run hack/e2e.go -v -ctl='get events' -go run hack/e2e.go -v -ctl='delete pod foobar' -``` - -## Conformance testing -End-to-end testing, as described above, is for [development -distributions](../../docs/devel/writing-a-getting-started-guide.md). A conformance test is used on -a [versioned distro](../../docs/devel/writing-a-getting-started-guide.md). - -The conformance test runs a subset of the e2e-tests against a manually-created cluster. It does not -require support for up/push/down and other operations. To run a conformance test, you need to know the -IP of the master for your cluster and the authorization arguments to use. The conformance test is -intended to run against a cluster at a specific binary release of Kubernetes. -See [conformance-test.sh](../../hack/conformance-test.sh). - -## Testing out flaky tests -[Instructions here](flaky-tests.md) - -## Regenerating the CLI documentation - -``` -hack/run-gendocs.sh -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/development.md?pixel)]() diff --git a/release-0.20.0/docs/devel/faster_reviews.md b/release-0.20.0/docs/devel/faster_reviews.md deleted file mode 100644 index 64728cf3962..00000000000 --- a/release-0.20.0/docs/devel/faster_reviews.md +++ /dev/null @@ -1,183 +0,0 @@ -# How to get faster PR reviews - -Most of what is written here is not at all specific to Kubernetes, but it bears -being written down in the hope that it will occasionally remind people of "best -practices" around code reviews. - -You've just had a brilliant idea on how to make Kubernetes better. Let's call -that idea "FeatureX". Feature X is not even that complicated. You have a -pretty good idea of how to implement it. You jump in and implement it, fixing a -bunch of stuff along the way. You send your PR - this is awesome! And it sits. -And sits. A week goes by and nobody reviews it. Finally someone offers a few -comments, which you fix up and wait for more review. And you wait. Another -week or two goes by. This is horrible. - -What went wrong? One particular problem that comes up frequently is this - your -PR is too big to review. You've touched 39 files and have 8657 insertions. -When your would-be reviewers pull up the diffs they run away - this PR is going -to take 4 hours to review and they don't have 4 hours right now. They'll get to it -later, just as soon as they have more free time (ha!). - -Let's talk about how to avoid this. - -## 1. Don't build a cathedral in one PR - -Are you sure FeatureX is something the Kubernetes team wants or will accept, or -that it is implemented to fit with other changes in flight? Are you willing to -bet a few days or weeks of work on it? If you have any doubt at all about the -usefulness of your feature or the design - make a proposal doc or a sketch PR -or both. Write or code up just enough to express the idea and the design and -why you made those choices, then get feedback on this. Now, when we ask you to -change a bunch of facets of the design, you don't have to re-write it all. - -## 2. Smaller diffs are exponentially better - -Small PRs get reviewed faster and are more likely to be correct than big ones. -Let's face it - attention wanes over time. If your PR takes 60 minutes to -review, I almost guarantee that the reviewer's eye for details is not as keen in -the last 30 minutes as it was in the first. 
This leads to multiple rounds of -review when one might have sufficed. In some cases the review is delayed in its -entirety by the need for a large contiguous block of time to sit and read your -code. - -Whenever possible, break up your PRs into multiple commits. Making a series of -discrete commits is a powerful way to express the evolution of an idea or the -different ideas that make up a single feature. There's a balance to be struck, -obviously. If your commits are too small they become more cumbersome to deal -with. Strive to group logically distinct ideas into commits. - -For example, if you found that FeatureX needed some "prefactoring" to fit in, -make a commit that JUST does that prefactoring. Then make a new commit for -FeatureX. Don't lump unrelated things together just because you didn't think -about prefactoring. If you need to, fork a new branch, do the prefactoring -there and send a PR for that. If you can explain why you are doing seemingly -no-op work ("it makes the FeatureX change easier, I promise") we'll probably be -OK with it. - -Obviously, a PR with 25 commits is still very cumbersome to review, so use -common sense. - -## 3. Multiple small PRs are often better than multiple commits - -If you can extract whole ideas from your PR and send those as PRs of their own, -you can avoid the painful problem of continually rebasing. Kubernetes is a -fast-moving codebase - lock in your changes ASAP, and make merges be someone -else's problem. - -Obviously, we want every PR to be useful on its own, so you'll have to use -common sense in deciding what can be a PR vs what should be a commit in a larger -PR. Rule of thumb - if this commit or set of commits is directly related to -FeatureX and nothing else, it should probably be part of the FeatureX PR. If -you can plausibly imagine someone finding value in this commit outside of -FeatureX, try it as a PR. - -Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs -than 10 unreviewable monoliths. - -## 4. Don't rename, reformat, comment, etc in the same PR - -Often, as you are implementing FeatureX, you find things that are just wrong. -Bad comments, poorly named functions, bad structure, weak type-safety. You -should absolutely fix those things (or at least file issues, please) - but not -in this PR. See the above points - break unrelated changes out into different -PRs or commits. Otherwise your diff will have WAY too many changes, and your -reviewer won't see the forest because of all the trees. - -## 5. Comments matter - -Read up on GoDoc - follow those general rules. If you're writing code and you -think there is any possible chance that someone might not understand why you did -something (or that you won't remember what you yourself did), comment it. If -you think there's something pretty obvious that we could follow up on, add a -TODO. Many code-review comments are about this exact issue. - -## 5. Tests are almost always required - -Nothing is more frustrating than doing a review, only to find that the tests are -inadequate or even entirely absent. Very few PRs can touch code and NOT touch -tests. If you don't know how to test FeatureX - ask! We'll be happy to help -you design things for easy testing or to suggest appropriate test cases. - -## 6. Look for opportunities to generify - -If you find yourself writing something that touches a lot of modules, think hard -about the dependencies you are introducing between packages. 
Can some of what -you're doing be made more generic and moved up and out of the FeatureX package? -Do you need to use a function or type from an otherwise unrelated package? If -so, promote! We have places specifically for hosting more generic code. - -Likewise if FeatureX is similar in form to FeatureW which was checked in last -month and it happens to exactly duplicate some tricky stuff from FeatureW, -consider prefactoring core logic out and using it in both FeatureW and FeatureX. -But do that in a different commit or PR, please. - -## 7. Fix feedback in a new commit - -Your reviewer has finally sent you some feedback on FeatureX. You make a bunch -of changes and ... what? You could patch those into your commits with git -"squash" or "fixup" logic. But that makes your changes hard to verify. Unless -your whole PR is pretty trivial, you should instead put your fixups into a new -commit and re-push. Your reviewer can then look at that commit on its own - so -much faster to review than starting over. - -We might still ask you to clean up your commits at the very end, for the sake -of a more readable history. - -## 8. KISS, YAGNI, MVP, etc - -Sometimes we need to remind each other of core tenets of software design - Keep -It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding -features "because we might need it later" is antithetical to software that -ships. Add the things you need NOW and (ideally) leave room for things you -might need later - but don't implement them now. - -## 9. Push back - -We understand that it is hard to imagine, but sometimes we make mistakes. It's -OK to push back on changes requested during a review. If you have a good reason -for doing something a certain way, you are absolutely allowed to debate the -merits of a requested change. You might be overruled, but you might also -prevail. We're mostly pretty reasonable people. Mostly. - -## 10. I'm still getting stalled - help?! - -So, you've done all that and you still aren't getting any PR love? Here's some -things you can do that might help kick a stalled process along: - - * Make sure that your PR has an assigned reviewer (assignee in GitHub). If - this is not the case, reply to the PR comment stream asking for one to be - assigned. - - * Ping the assignee (@username) on the PR comment stream asking for an - estimate of when they can get to it. - - * Ping the assignee by email (many of us have email addresses that are well - published or are the same as our GitHub handle @google.com or @redhat.com). - -If you think you have fixed all the issues in a round of review, and you haven't -heard back, you should ping the reviewer (assignee) on the comment stream with a -"please take another look" (PTAL) or similar comment indicating you are done and -you think it is ready for re-review. In fact, this is probably a good habit for -all PRs. - -One phenomenon of open-source projects (where anyone can comment on any issue) -is the dog-pile - your PR gets so many comments from so many people it becomes -hard to follow. In this situation you can ask the primary reviewer -(assignee) whether they want you to fork a new PR to clear out all the comments. -Remember: you don't HAVE to fix every issue raised by every person who feels -like commenting, but you should at least answer reasonable comments with an -explanation. - -## Final: Use common sense - -Obviously, none of these points are hard rules. There is no document that can -take the place of common sense and good taste. 
Use your best judgment, but put -a bit of thought into how your work can be made easier to review. If you do -these things your PRs will flow much more easily. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/faster_reviews.md?pixel)]() diff --git a/release-0.20.0/docs/devel/flaky-tests.md b/release-0.20.0/docs/devel/flaky-tests.md deleted file mode 100644 index 2f76cc9714c..00000000000 --- a/release-0.20.0/docs/devel/flaky-tests.md +++ /dev/null @@ -1,68 +0,0 @@ -# Hunting flaky tests in Kubernetes -Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. - -We have a goal of 99.9% flake free tests. This means that there is only one flake in one thousand runs of a test. - -Running a test 1000 times on your own machine can be tedious and time consuming. Fortunately, there is a better way to achieve this using Kubernetes. - -_Note: these instructions are mildly hacky for now, as we get run once semantics and logging they will get better_ - -There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix. - -Create a replication controller with the following config: -```yaml -apiVersion: v1 -kind: ReplicationController -metadata: - name: flakecontroller -spec: - replicas: 24 - template: - metadata: - labels: - name: flake - spec: - containers: - - name: flake - image: brendanburns/flake - env: - - name: TEST_PACKAGE - value: pkg/tools - - name: REPO_SPEC - value: https://github.com/GoogleCloudPlatform/kubernetes -``` -Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default. - -``` -kubectl create -f controller.yaml -``` - -This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test. -You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently. -You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes: - -```sh -echo "" > output.txt -for i in {1..4}; do - echo "Checking kubernetes-minion-${i}" - echo "kubernetes-minion-${i}:" >> output.txt - gcloud compute ssh "kubernetes-minion-${i}" --command="sudo docker ps -a" >> output.txt -done -grep "Exited ([^0])" output.txt -``` - -Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running: - -```sh -kubectl stop replicationcontroller flakecontroller -``` - -If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller. - -Happy flake hunting! 
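One last optional trick: if you would rather see a rough number than eyeball the grep output, here is a small sketch. It assumes the `output.txt` produced by the checking script above and the `Exited (N)` status text that `docker ps -a` prints.

```sh
# Rough flake-rate estimate from output.txt. Runs that exited -1 are ignored,
# since that is what happens when the replication controller is stopped.
passes=$(grep -c 'Exited (0)' output.txt)
failures=$(grep -E 'Exited \([^0)]' output.txt | grep -vc 'Exited (-1)')
echo "passes: ${passes}  failures: ${failures}"
```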
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/flaky-tests.md?pixel)]() diff --git a/release-0.20.0/docs/devel/git_workflow.png b/release-0.20.0/docs/devel/git_workflow.png deleted file mode 100644 index e3bd70da02c..00000000000 Binary files a/release-0.20.0/docs/devel/git_workflow.png and /dev/null differ diff --git a/release-0.20.0/docs/devel/instrumentation.md b/release-0.20.0/docs/devel/instrumentation.md deleted file mode 100644 index 81027edf792..00000000000 --- a/release-0.20.0/docs/devel/instrumentation.md +++ /dev/null @@ -1,39 +0,0 @@ -Instrumenting Kubernetes with a new metric -=================== - -The following is a step-by-step guide for adding a new metric to the Kubernetes code base. - -We use the Prometheus monitoring system's golang client library for instrumenting our code. Once you've picked out a file that you want to add a metric to, you should: - -1. Import "github.com/prometheus/client_golang/prometheus". - -2. Create a top-level var to define the metric. For this, you have to: - 1. Pick the type of metric. Use a Gauge for things you want to set to a particular value, a Counter for things you want to increment, or a Histogram or Summary for histograms/distributions of values (typically for latency). Histograms are better if you're going to aggregate the values across jobs, while summaries are better if you just want the job to give you a useful summary of the values. - 2. Give the metric a name and description. - 3. Pick whether you want to distinguish different categories of things using labels on the metric. If so, add "Vec" to the name of the type of metric you want and add a slice of the label names to the definition. - - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53 - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31 - -3. Register the metric so that prometheus will know to export it. - - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74 - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78 - -4. 
Use the metric by calling the appropriate method for your metric type (Set, Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), first calling WithLabelValues if your metric has any labels - - https://github.com/GoogleCloudPlatform/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384 - https://github.com/GoogleCloudPlatform/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87 - - -These are the metric type definitions if you're curious to learn about them or need more information: -https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go -https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go -https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go -https://github.com/prometheus/client_golang/blob/master/prometheus/summary.go - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/instrumentation.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/instrumentation.md?pixel)]() diff --git a/release-0.20.0/docs/devel/issues.md b/release-0.20.0/docs/devel/issues.md deleted file mode 100644 index 188e9ba1c88..00000000000 --- a/release-0.20.0/docs/devel/issues.md +++ /dev/null @@ -1,25 +0,0 @@ -GitHub Issues for the Kubernetes Project -======================================== - -A list quick overview of how we will review and prioritize incoming issues at https://github.com/GoogleCloudPlatform/kubernetes/issues - -Priorities ----------- - -We will use GitHub issue labels for prioritization. The absence of a priority label means the bug has not been reviewed and prioritized yet. - -Definitions ------------ -* P0 - something broken for users, build broken, or critical security issue. Someone must drop everything and work on it. -* P1 - must fix for earliest possible binary release (every two weeks) -* P2 - should be fixed in next major release version -* P3 - default priority for lower importance bugs that we still want to track and plan to fix at some point -* design - priority/design is for issues that are used to track design discussions -* support - priority/support is used for issues tracking user support requests -* untriaged - anything without a priority/X label will be considered untriaged - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/issues.md?pixel)]() diff --git a/release-0.20.0/docs/devel/logging.md b/release-0.20.0/docs/devel/logging.md deleted file mode 100644 index 0b03c43c36a..00000000000 --- a/release-0.20.0/docs/devel/logging.md +++ /dev/null @@ -1,32 +0,0 @@ -Logging Conventions -=================== - -The following conventions for the glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally preferred to [log](http://golang.org/pkg/log/) for better runtime control. - -* glog.Errorf() - Always an error -* glog.Warningf() - Something unexpected, but probably not an error -* glog.Infof() has multiple levels: - * glog.V(0) - Generally useful for this to ALWAYS be visible to an operator - * Programmer errors - * Logging extra info about a panic - * CLI argument handling - * glog.V(1) - A reasonable default log level if you don't want verbosity. 
- * Information about config (listening on X, watching Y) - * Errors that repeat frequently that relate to conditions that can be corrected (pod detected as unhealthy) - * glog.V(2) - Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems. - * Logging HTTP requests and their exit code - * System state changing (killing pod) - * Controller state change events (starting pods) - * Scheduler log messages - * glog.V(3) - Extended information about changes - * More info about system state changes - * glog.V(4) - Debug level verbosity (for now) - * Logging in particularly thorny parts of code where you may want to come back later and check it - -As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/logging.md?pixel)]() diff --git a/release-0.20.0/docs/devel/making-release-notes.md b/release-0.20.0/docs/devel/making-release-notes.md deleted file mode 100644 index 1725616c717..00000000000 --- a/release-0.20.0/docs/devel/making-release-notes.md +++ /dev/null @@ -1,36 +0,0 @@ -## Making release notes -This documents the process for making release notes for a release. - -### 1) Note the PR number of the previous release -Find the PR that was merged with the previous release. Remember this number -_TODO_: Figure out a way to record this somewhere to save the next release engineer time. - -### 2) Build the release-notes tool -```bash -${KUBERNETES_ROOT}/build/make-release-notes.sh -``` - -### 3) Trim the release notes -This generates a list of the entire set of PRs merged since the last release. It is likely long -and many PRs aren't worth mentioning. - -Open up ```candidate-notes.md``` in your favorite editor. - -Remove, regroup, organize to your hearts content. - - -### 4) Update CHANGELOG.md -With the final markdown all set, cut and paste it to the top of ```CHANGELOG.md``` - -### 5) Update the Release page - * Switch to the [releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page. - * Open up the release you are working on. - * Cut and paste the final markdown from above into the release notes - * Press Save. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/making-release-notes.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/making-release-notes.md?pixel)]() diff --git a/release-0.20.0/docs/devel/profiling.md b/release-0.20.0/docs/devel/profiling.md deleted file mode 100644 index cb2acd1d2a9..00000000000 --- a/release-0.20.0/docs/devel/profiling.md +++ /dev/null @@ -1,40 +0,0 @@ -# Profiling Kubernetes - -This document explain how to plug in profiler and how to profile Kubernetes services. - -## Profiling library - -Go comes with inbuilt 'net/http/pprof' profiling library and profiling web service. The way service works is binding debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. 
The output can be processed offline by the tool of your choice, or used as input to the handy 'go tool pprof', which can represent the result graphically. - -## Adding profiling to the APIserver - -TL;DR: Add the lines: -``` - m.mux.HandleFunc("/debug/pprof/", pprof.Index) - m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) - m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) -``` -to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package. - -In most cases it's enough to do 'import _ net/http/pprof', which automatically registers the profiler handlers in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/master/server/server.go' more servers are created and started as separate goroutines. The one that usually serves external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is to add the profiler handler functions to this multiplexer. This is exactly what the lines in the TL;DR above do. - -## Connecting to the profiler -Even with the profiler running, using 'go tool pprof' against it is not entirely straightforward. The problem is that, at least for dev purposes, the certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic, it isn't straightforward to connect to the service. The best workaround is to create an ssh tunnel from the open unsecured port on kubernetes_master to some external server, and to use that server as a proxy. To save everyone a search for the correct ssh flags, it is done by running: -``` - ssh kubernetes_master -L:localhost:8080 -``` -or the analogous command for your cloud provider. Afterwards you can, for example, run -``` -go tool pprof http://localhost:/debug/pprof/profile -``` -to get a 30-second CPU profile. - -## Contention profiling - -To enable contention profiling you need to add the line ```rt.SetBlockProfileRate(1)``` in addition to the ```m.mux.HandleFunc(...)``` lines added before (```rt``` stands for ```runtime``` in ```master.go```). This enables the 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/profiling.md?pixel)]()
diff --git a/release-0.20.0/docs/devel/pull-requests.md b/release-0.20.0/docs/devel/pull-requests.md deleted file mode 100644 index c71bbce946f..00000000000 --- a/release-0.20.0/docs/devel/pull-requests.md +++ /dev/null @@ -1,34 +0,0 @@ -Pull Request Process -==================== - -An overview of how we will manage old or out-of-date pull requests. - -Process ------- - -We will close any pull requests older than two weeks. - -Exceptions can be made for PRs that have active review comments, or that are awaiting other dependent PRs. Closed pull requests are easy to recreate, and little work is lost by closing a pull request that subsequently needs to be reopened.
- -We want to limit the total number of PRs in flight to: -* Maintain a clean project -* Remove old PRs that would be difficult to rebase as the underlying code has changed over time -* Encourage code velocity - -RC to v1.0 Pull Requests ------------------------- - -Between the first RC build (~6/22) and v1.0, we will adopt a higher bar for PR merges. For v1.0 to be a stable release, we need to ensure that any fixes going in are very well tested and have a low risk of breaking anything. Refactors and complex changes will be rejected in favor of more strategic and smaller workarounds. - -These PRs require: -* A risk assessment by the code author in the PR. This should outline which parts of the code are being touched, the risk of regression, and complexity of the code. -* Two LGTMs from experienced reviewers. - -Once those requirements are met, they will be labeled [ok-to-merge](https://github.com/GoogleCloudPlatform/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Aok-to-merge) and can be merged. - -These restrictions will be relaxed after v1.0 is released. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/pull-requests.md?pixel)]() diff --git a/release-0.20.0/docs/devel/releasing.dot b/release-0.20.0/docs/devel/releasing.dot deleted file mode 100644 index fe8124c36da..00000000000 --- a/release-0.20.0/docs/devel/releasing.dot +++ /dev/null @@ -1,113 +0,0 @@ -// Build it with: -// $ dot -Tsvg releasing.dot >releasing.svg - -digraph tagged_release { - size = "5,5" - // Arrows go up. - rankdir = BT - subgraph left { - // Group the left nodes together. - ci012abc -> pr101 -> ci345cde -> pr102 - style = invis - } - subgraph right { - // Group the right nodes together. - version_commit -> dev_commit - style = invis - } - { // Align the version commit and the info about it. - rank = same - // Align them with pr101 - pr101 - version_commit - // release_info shows the change in the commit. - release_info - } - { // Align the dev commit and the info about it. - rank = same - // Align them with 345cde - ci345cde - dev_commit - dev_info - } - // Join the nodes from subgraph left. - pr99 -> ci012abc - pr102 -> pr100 - // Do the version node. 
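    // These edges encode the flow described in releasing.md: the version commit
    // (678fed, gitVersion = "v0.5") comes after the merge of PR #99, the dev commit
    // (456dcb, gitVersion = "v0.5-dev") follows it, both reach master via the merge
    // of PR #100, and the annotated tag v0.5 points at the version commit.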
- pr99 -> version_commit - dev_commit -> pr100 - tag -> version_commit - pr99 [ - label = "Merge PR #99" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - ci012abc [ - label = "012abc" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr101 [ - label = "Merge PR #101" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - ci345cde [ - label = "345cde" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr102 [ - label = "Merge PR #102" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - version_commit [ - label = "678fed" - shape = circle - fillcolor = "#ccffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - dev_commit [ - label = "456dcb" - shape = circle - fillcolor = "#ffffcc" - style = "filled" - fontname = "Consolas, Liberation Mono, Menlo, Courier, monospace" - ]; - pr100 [ - label = "Merge PR #100" - shape = box - fillcolor = "#ccccff" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - release_info [ - label = "pkg/version/base.go:\ngitVersion = \"v0.5\";" - shape = none - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - dev_info [ - label = "pkg/version/base.go:\ngitVersion = \"v0.5-dev\";" - shape = none - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; - tag [ - label = "$ git tag -a v0.5" - fillcolor = "#ffcccc" - style = "filled" - fontname = "Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif" - ]; -} - diff --git a/release-0.20.0/docs/devel/releasing.md b/release-0.20.0/docs/devel/releasing.md deleted file mode 100644 index 36140b2fd8f..00000000000 --- a/release-0.20.0/docs/devel/releasing.md +++ /dev/null @@ -1,171 +0,0 @@ -# Releasing Kubernetes - -This document explains how to create a Kubernetes release (as in version) and -how the version information gets embedded into the built binaries. - -## Origin of the Sources - -Kubernetes may be built from either a git tree (using `hack/build-go.sh`) or -from a tarball (using either `hack/build-go.sh` or `go install`) or directly by -the Go native build system (using `go get`). - -When building from git, we want to be able to insert specific information about -the build tree at build time. In particular, we want to use the output of `git -describe` to generate the version of Kubernetes and the status of the build -tree (add a `-dirty` prefix if the tree was modified.) - -When building from a tarball or using the Go build system, we will not have -access to the information about the git tree, but we still want to be able to -tell whether this build corresponds to an exact release (e.g. v0.3) or is -between releases (e.g. at some point in development between v0.3 and v0.4). - -## Version Number Format - -In order to account for these use cases, there are some specific formats that -may end up representing the Kubernetes version. 
Here are a few examples: - -- **v0.5**: This is official version 0.5 and this version will only be used - when building from a clean git tree at the v0.5 git tag, or from a tree - extracted from the tarball corresponding to that specific release. -- **v0.5-15-g0123abcd4567**: This is the `git describe` output and it indicates - that we are 15 commits past the v0.5 release and that the SHA1 of the commit - where the binaries were built was `0123abcd4567`. It is only possible to have - this level of detail in the version information when building from git, not - when building from a tarball. -- **v0.5-15-g0123abcd4567-dirty** or **v0.5-dirty**: The extra `-dirty` prefix - means that the tree had local modifications or untracked files at the time of - the build, so there's no guarantee that the source code matches exactly the - state of the tree at the `0123abcd4567` commit or at the `v0.5` git tag - (resp.) -- **v0.5-dev**: This means we are building from a tarball or using `go get` or, - if we have a git tree, we are using `go install` directly, so it is not - possible to inject the git version into the build information. Additionally, - this is not an official release, so the `-dev` prefix indicates that the - version we are building is after `v0.5` but before `v0.6`. (There is actually - an exception where a commit with `v0.5-dev` is not present on `v0.6`, see - later for details.) - -## Injecting Version into Binaries - -In order to cover the different build cases, we start by providing information -that can be used when using only Go build tools or when we do not have the git -version information available. - -To be able to provide a meaningful version in those cases, we set the contents -of variables in a Go source file that will be used when no overrides are -present. - -We are using `pkg/version/base.go` as the source of versioning in absence of -information from git. Here is a sample of that file's contents: - -``` - var ( - gitVersion string = "v0.4-dev" // version from git, output of $(git describe) - gitCommit string = "" // sha1 from git, output of $(git rev-parse HEAD) - ) -``` - -This means a build with `go install` or `go get` or a build from a tarball will -yield binaries that will identify themselves as `v0.4-dev` and will not be able -to provide you with a SHA1. - -To add the extra versioning information when building from git, the -`hack/build-go.sh` script will gather that information (using `git describe` and -`git rev-parse`) and then create a `-ldflags` string to pass to `go install` and -tell the Go linker to override the contents of those variables at build time. It -can, for instance, tell it to override `gitVersion` and set it to -`v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which -is the complete SHA1 of the (dirty) tree used at build time. - -## Handling Official Versions - -Handling official versions from git is easy, as long as there is an annotated -git tag pointing to a specific version then `git describe` will return that tag -exactly which will match the idea of an official version (e.g. `v0.5`). - -Handling it on tarballs is a bit harder since the exact version string must be -present in `pkg/version/base.go` for it to get embedded into the binaries. But -simply creating a commit with `v0.5` on its own would mean that the commits -coming after it would also get the `v0.5` version when built from tarball or `go -get` while in fact they do not match `v0.5` (the one that was tagged) exactly. 
- -To handle that case, creating a new release should involve creating two adjacent -commits where the first of them will set the version to `v0.5` and the second -will set it to `v0.5-dev`. In that case, even in the presence of merges, there -will be a single commit where the exact `v0.5` version will be used and all -others around it will either have `v0.4-dev` or `v0.5-dev`. - -The diagram below illustrates it. - -![Diagram of git commits involved in the release](./releasing.png) - -After working on `v0.4-dev` and merging PR 99 we decide it is time to release -`v0.5`. So we start a new branch, create one commit to update -`pkg/version/base.go` to include `gitVersion = "v0.5"` and `git commit` it. - -We test it and make sure everything is working as expected. - -Before sending a PR for it, we create a second commit on that same branch, -updating `pkg/version/base.go` to include `gitVersion = "v0.5-dev"`. That will -ensure that further builds (from tarball or `go install`) on that tree will -always include the `-dev` prefix and will not have a `v0.5` version (since they -do not match the official `v0.5` exactly.) - -We then send PR 100 with both commits in it. - -Once the PR is accepted, we can use `git tag -a` to create an annotated tag -*pointing to the one commit* that has `v0.5` in `pkg/version/base.go` and push -it to GitHub. (Unfortunately GitHub tags/releases are not annotated tags, so -this needs to be done from a git client and pushed to GitHub using SSH.) - -## Parallel Commits - -While we are working on releasing `v0.5`, other development takes place and -other PRs get merged. For instance, in the example above, PRs 101 and 102 get -merged to the master branch before the versioning PR gets merged. - -This is not a problem, it is only slightly inaccurate that checking out the tree -at commit `012abc` or commit `345cde` or at the commit of the merges of PR 101 -or 102 will yield a version of `v0.4-dev` *but* those commits are not present in -`v0.5`. - -In that sense, there is a small window in which commits will get a -`v0.4-dev` or `v0.4-N-gXXX` label and while they're indeed later than `v0.4` -but they are not really before `v0.5` in that `v0.5` does not contain those -commits. - -Unfortunately, there is not much we can do about it. On the other hand, other -projects seem to live with that and it does not really become a large problem. - -As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is -not present in Docker `v1.2.0`: - -``` - $ git describe a327d9b91edf - v1.1.1-822-ga327d9b91edf - - $ git log --oneline v1.2.0..a327d9b91edf - a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB - - (Non-empty output here means the commit is not present on v1.2.0.) -``` - -## Release Notes - -No official release should be made final without properly matching release notes. - -There should be made available, per release, a small summary, preamble, of the -major changes, both in terms of feature improvements/bug fixes and notes about -functional feature changes (if any) regarding the previous released version so -that the BOM regarding updating to it gets as obvious and trouble free as possible. - -After this summary, preamble, all the relevant PRs/issues that got in that -version should be listed and linked together with a small summary understandable -by plain mortals (in a perfect world PR/issue's title would be enough but often -it is just too cryptic/geeky/domain-specific that it isn't). 
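
To make the linker-override mechanism described earlier concrete, here is a minimal sketch of the idea. This is not the real `hack/build-go.sh`; the `./cmd/...` build target is illustrative, only `gitVersion` and `gitCommit` come from `pkg/version/base.go`, and the two-argument `-X symbol value` form is the pre-Go-1.5 linker syntax in use at the time:

```bash
#!/usr/bin/env bash
# Sketch only -- not the actual contents of hack/build-go.sh.
# Gather version information from git and override the variables
# declared in pkg/version/base.go at link time.
KUBE_VERSION_PKG="github.com/GoogleCloudPlatform/kubernetes/pkg/version"

GIT_VERSION=$(git describe --dirty)   # e.g. v0.5-15-g0123abcd4567[-dirty]
GIT_COMMIT=$(git rev-parse HEAD)      # full SHA1 of the tree used for the build

# Pre-Go-1.5 linker syntax: -X <symbol> <value>
go install -ldflags "-X ${KUBE_VERSION_PKG}.gitVersion ${GIT_VERSION} -X ${KUBE_VERSION_PKG}.gitCommit ${GIT_COMMIT}" ./cmd/...
```

A binary built this way reports the full `git describe` string instead of the `-dev` default compiled into `base.go`.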
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/releasing.md?pixel)]() diff --git a/release-0.20.0/docs/devel/releasing.png b/release-0.20.0/docs/devel/releasing.png deleted file mode 100644 index 935628deddc..00000000000 Binary files a/release-0.20.0/docs/devel/releasing.png and /dev/null differ diff --git a/release-0.20.0/docs/devel/releasing.svg b/release-0.20.0/docs/devel/releasing.svg deleted file mode 100644 index f703e6e2ac9..00000000000 --- a/release-0.20.0/docs/devel/releasing.svg +++ /dev/null @@ -1,113 +0,0 @@ - - - - - - -tagged_release - - -ci012abc - -012abc - - -pr101 - -Merge PR #101 - - -ci012abc->pr101 - - - - -ci345cde - -345cde - - -pr101->ci345cde - - - - -pr102 - -Merge PR #102 - - -ci345cde->pr102 - - - - -pr100 - -Merge PR #100 - - -pr102->pr100 - - - - -version_commit - -678fed - - -dev_commit - -456dcb - - -version_commit->dev_commit - - - - -dev_commit->pr100 - - - - -release_info -pkg/version/base.go: -gitVersion = "v0.5"; - - -dev_info -pkg/version/base.go: -gitVersion = "v0.5-dev"; - - -pr99 - -Merge PR #99 - - -pr99->ci012abc - - - - -pr99->version_commit - - - - -tag - -$ git tag -a v0.5 - - -tag->version_commit - - - - - diff --git a/release-0.20.0/docs/devel/writing-a-getting-started-guide.md b/release-0.20.0/docs/devel/writing-a-getting-started-guide.md deleted file mode 100644 index 5653dbef96e..00000000000 --- a/release-0.20.0/docs/devel/writing-a-getting-started-guide.md +++ /dev/null @@ -1,105 +0,0 @@ -# Writing a Getting Started Guide -This page gives some advice for anyone planning to write or update a Getting Started Guide for Kubernetes. -It also gives some guidelines which reviewers should follow when reviewing a pull request for a -guide. - -A Getting Started Guide is instructions on how to create a Kubernetes cluster on top of a particular -type(s) of infrastructure. Infrastructure includes: the IaaS provider for VMs; -the node OS; inter-node networking; and node Configuration Management system. -A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call -the combination of all these things needed to run on a particular type of infrastructure a -**distro**. - -[The Matrix](../../docs/getting-started-guides/README.md) lists the distros. If there is already a guide -which is similar to the one you have planned, consider improving that one. - - -Distros fall into two categories: - - **versioned distros** are tested to work with a particular binary release of Kubernetes. These - come in a wide variety, reflecting a wide range of ideas and preferences in how to run a cluster. - - **development distros** are tested work with the latest Kubernetes source code. But, there are - relatively few of these and the bar is much higher for creating one. - -There are different guidelines for each. - -## Versioned Distro Guidelines -These guidelines say *what* to do. See the Rationale section for *why*. - - Send us a PR. - - Put the instructions in `docs/getting-started-guides/...`. Scripts go there too. This helps devs easily - search for uses of flags by guides. - - We may ask that you host binary assets or large amounts of code in our `contrib` directory or on your - own repo. - - Setup a cluster and run the [conformance test](../../docs/devel/conformance-test.md) against it, and report the - results in your PR. 
- - Add or update a row in [The Matrix](../../docs/getting-started-guides/README.md). - - State the binary version of kubernetes that you tested clearly in your Guide doc and in The Matrix. - - Even if you are just updating the binary version used, please still do a conformance test. - - If it worked before and now fails, you can ask on IRC, - check the release notes since your last tested version, or look at git -logs for files in other distros - that are updated to the new version. - - Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer - distros. - - If a versioned distro has not been updated for many binary releases, it may be dropped from the Matrix. - -If you have a cluster partially working, but doing all the above steps seems like too much work, -we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page. -Just file an issue or chat us on IRC and one of the committers will link to it from the wiki. - -## Development Distro Guidelines -These guidelines say *what* to do. See the Rationale section for *why*. - - the main reason to add a new development distro is to support a new IaaS provider (VM and - network management). This means implementing a new `pkg/cloudprovider/$IAAS_NAME`. - - Development distros should use Saltstack for Configuration Management. - - development distros need to support automated cluster creation, deletion, upgrading, etc. - This mean writing scripts in `cluster/$IAAS_NAME`. - - all commits to the tip of this repo need to not break any of the development distros - - the author of the change is responsible for making changes necessary on all the cloud-providers if the - change affects any of them, and reverting the change if it breaks any of the CIs. - - a development distro needs to have an organization which owns it. This organization needs to: - - Setting up and maintaining Continuous Integration that runs e2e frequently (multiple times per day) against the - Distro at head, and which notifies all devs of breakage. - - being reasonably available for questions and assisting with - refactoring and feature additions that affect code for their IaaS. - -## Rationale - - We want want people to create Kubernetes clusters with whatever IaaS, Node OS, - configuration management tools, and so on, which they are familiar with. The - guidelines for **versioned distros** are designed for flexibility. - - We want developers to be able to work without understanding all the permutations of - IaaS, NodeOS, and configuration management. The guidelines for **developer distros** are designed - for consistency. - - We want users to have a uniform experience with Kubernetes whenever they follow instructions anywhere - in our Github repository. So, we ask that versioned distros pass a **conformance test** to make sure - really work. - - We ask versioned distros to **clearly state a version**. People pulling from Github may - expect any instructions there to work at Head, so stuff that has not been tested at Head needs - to be called out. We are still changing things really fast, and, while the REST API is versioned, - it is not practical at this point to version or limit changes that affect distros. We still change - flags at the Kubernetes/Infrastructure interface. - - We want to **limit the number of development distros** for several reasons. Developers should - only have to change a limited number of places to add a new feature. 
Also, since we will - gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat - flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines. - - We do not require versioned distros to do **CI** for several reasons. It is a steep - learning curve to understand our our automated testing scripts. And it is considerable effort - to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone - has the time and money to run CI. We do not want to - discourage people from writing and sharing guides because of this. - - Versioned distro authors are free to run their own CI and let us know if there is breakage, but we - will not include them as commit hooks -- there cannot be so many commit checks that it is impossible - to pass them all. - - We prefer a single Configuration Management tool for development distros. If there were more - than one, the core developers would have to learn multiple tools and update config in multiple - places. **Saltstack** happens to be the one we picked when we started the project. We - welcome versioned distros that use any tool; there are already examples of - CoreOS Fleet, Ansible, and others. - - You can still run code from head or your own branch - if you use another Configuration Management tool -- you just have to do some manual steps - during testing and deployment. - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/devel/writing-a-getting-started-guide.md?pixel)]() diff --git a/release-0.20.0/docs/developer-guide.md b/release-0.20.0/docs/developer-guide.md deleted file mode 100644 index a2dc2e62a0c..00000000000 --- a/release-0.20.0/docs/developer-guide.md +++ /dev/null @@ -1,41 +0,0 @@ -# Kubernetes Developer Guide - -The developer guide is for anyone wanting to either write code which directly accesses the -kubernetes API, or to contribute directly to the kubernetes project. -It assumes some familiarity with concepts in the [User Guide](user-guide.md) and the [Cluster Admin -Guide](cluster-admin-guide.md). - - -## Developing against the Kubernetes API - -* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/). - -* **Annotations** ([annotations.md](annotations.md)): are for attaching arbitrary non-identifying metadata to objects. - Programs that automate Kubernetes objects may use annotations to store small amounts of their state. - -* **API Conventions** ([api-conventions.md](api-conventions.md)): - Defining the verbs and resources used in the Kubernetes API. - -* **API Client Libraries** ([client-libraries.md](client-libraries.md)): - A list of existing client libraries, both supported and user-contributed. - -## Writing Plugins - -* **Authentication Plugins** ([authentication.md](authentication.md)): - The current and planned states of authentication tokens. - -* **Authorization Plugins** ([authorization.md](authorization.md)): - Authorization applies to all HTTP requests on the main apiserver port. - This doc explains the available authorization implementations. - -* **Admission Control Plugins** ([admission_control](design/admission_control.md)) - -## Contributing to the Kubernetes Project - -See this [README](../docs/devel/README.md). 
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/developer-guide.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/developer-guide.md?pixel)]() diff --git a/release-0.20.0/docs/dns.md b/release-0.20.0/docs/dns.md deleted file mode 100644 index c03c02b46ed..00000000000 --- a/release-0.20.0/docs/dns.md +++ /dev/null @@ -1,44 +0,0 @@ -# DNS Integration with Kubernetes - -As of kubernetes 0.8, DNS is offered as a [cluster add-on](../cluster/addons/README.md). -If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be -configured to tell individual containers to use the DNS Service's IP. - -Every Service defined in the cluster (including the DNS server itself) will be -assigned a DNS name. By default, a client Pod's DNS search list will -include the Pod's own namespace and the cluster's default domain. This is best -illustrated by example: - -Assume a Service named `foo` in the kubernetes namespace `bar`. A Pod running -in namespace `bar` can look up this service by simply doing a DNS query for -`foo`. A Pod running in namespace `quux` can look up this service by doing a -DNS query for `foo.bar`. - -The cluster DNS server ([SkyDNS](https://github.com/skynetservices/skydns)) -supports forward lookups (A records) and service lookups (SRV records). - -## How it Works - -The DNS pod that runs holds 3 containers - skydns, etcd (which skydns uses), -and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process -watches the kubernetes master for changes in Services, and then writes the -information to etcd, which skydns reads. This etcd instance is not linked to -any other etcd clusters that might exist, including the kubernetes master. - -## Issues - -The skydns service is reachable directly from kubernetes nodes (outside -of any container) and DNS resolution works if the skydns service is targeted -explicitly. However, nodes are not configured to use the cluster DNS service or -to search the cluster's DNS domain by default. This may be resolved at a later -time. - -## For more information - -See [the docs for the DNS cluster addon](../cluster/addons/dns/README.md). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/dns.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/dns.md?pixel)]() diff --git a/release-0.20.0/docs/downward_api.md b/release-0.20.0/docs/downward_api.md deleted file mode 100644 index 737519827f8..00000000000 --- a/release-0.20.0/docs/downward_api.md +++ /dev/null @@ -1,53 +0,0 @@ -# Downward API - -The downward API allows containers to consume information about the system without coupling to the -kubernetes client or REST API. - -### Capabilities - -Containers can consume the following information via the downward API: - -* Their pod's name -* Their pod's namespace - -### Consuming information about a pod in a container - -Containers consume information from the downward API using environment variables. In the future, -containers will also be able to consume the downward API via a volume plugin. The `valueFrom` -field of an environment variable allows you to specify an `ObjectFieldSelector` to select fields -from the pod's definition. The `ObjectFieldSelector` has an `apiVersion` field and a `fieldPath` -field. The `fieldPath` field is an expression designating a field on the pod. 
The `apiVersion` -field is the version of the API schema that the `fieldPath` is written in terms of. If the -`apiVersion` field is not specified it is defaulted to the API version of the enclosing object. - -### Example: consuming the downward API - -This is an example of a pod that consumes its name and namespace via the downward API: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: dapi-test-pod -spec: - containers: - - name: test-container - image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - restartPolicy: Never -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/downward_api.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/downward_api.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/README.md b/release-0.20.0/docs/getting-started-guides/README.md deleted file mode 100644 index 35ffc8dac15..00000000000 --- a/release-0.20.0/docs/getting-started-guides/README.md +++ /dev/null @@ -1,66 +0,0 @@ -If you are not sure what OSes and infrastructure is supported, the table below lists all the combinations which have -been tested recently. - -For the easiest "kick the tires" experience, please try the [local docker](docker.md) guide. - -If you are considering contributing a new guide, please read the -[guidelines](../../docs/devel/writing-a-getting-started-guide.md). - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conformance | Support Level | Notes --------------------- | ------------ | ------ | ---------- | ------------------------------------------------------------------------------ | ----------- | ---------------------------- | ----- -GKE | | | GCE | [docs](https://cloud.google.com/container-engine) | | Commercial | Uses K8s version 0.15.0 -Vagrant | Saltstack | Fedora | OVS | [docs](../../docs/getting-started-guides/vagrant.md) | | Project | Uses latest via https://get.k8s.io/ -GCE | Saltstack | Debian | GCE | [docs](../../docs/getting-started-guides/gce.md) | | Project | Tested with 0.15.0 by @robertbailey -Azure | CoreOS | CoreOS | Weave | [docs](../../docs/getting-started-guides/coreos/azure/README.md) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin)) | Uses K8s version 0.17.0 -Docker Single Node | custom | N/A | local | [docs](docker.md) | | Project (@brendandburns) | Tested @ 0.14.1 | -Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | | Project (@brendandburns) | Tested @ 0.14.1 | -Bare-metal | Ansible | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/fedora_ansible_config.md) | | Project | Uses K8s v0.13.2 -Bare-metal | custom | Fedora | _none_ | [docs](../../docs/getting-started-guides/fedora/fedora_manual_config.md) | | Project | Uses K8s v0.13.2 -Bare-metal | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0 -libvirt | custom | Fedora | flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0 -KVM | custom | Fedora | 
flannel | [docs](../../docs/getting-started-guides/fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))| Tested with 0.15.0 -Mesos/GCE | | | | [docs](../../docs/getting-started-guides/mesos.md) | | [Community](https://github.com/mesosphere/kubernetes-mesos) ([@jdef](https://github.com/jdef)) | Uses K8s v0.11.2 -AWS | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community | Uses K8s version 0.17.0 -GCE | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community (@kelseyhightower) | Uses K8s version 0.15.0 -Vagrant | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community ( [@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles) ) | Uses K8s version 0.15.0 -Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos/bare_metal_offline.md) | | Community([@jeffbean](https://github.com/jeffbean)) | Uses K8s version 0.15.0 -CloudStack | Ansible | CoreOS | flannel | [docs](../../docs/getting-started-guides/cloudstack.md) | | Community (@runseb) | Uses K8s version 0.9.1 -Vmware | | Debian | OVS | [docs](../../docs/getting-started-guides/vsphere.md) | | Community (@pietern) | Uses K8s version 0.9.1 -Bare-metal | custom | CentOS | _none_ | [docs](../../docs/getting-started-guides/centos/centos_manual_config.md) | | Community(@coolsvap) | Uses K8s v0.9.1 -AWS | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1 -OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1 -Joyent | Juju | Ubuntu | flannel | [docs](../../docs/getting-started-guides/juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) ) | [Tested](http://reports.vapour.ws/charm-tests-by-charm/kubernetes) K8s v0.8.1 -AWS | Saltstack | Ubuntu | OVS | [docs](../../docs/getting-started-guides/aws.md) | | Community (@justinsb) | Uses K8s version 0.5.0 -Vmware | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/coreos.md) | | Community (@kelseyhightower) | Uses K8s version 0.15.0 -Azure | Saltstack | Ubuntu | OpenVPN | [docs](../../docs/getting-started-guides/azure.md) | | Community | -Bare-metal | custom | Ubuntu | flannel | [docs](../../docs/getting-started-guides/ubuntu.md) | | Community (@resouer @WIZARD-CXY) | use k8s version 0.18.0 -Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | | Community (@preillyme) | -libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](../../docs/getting-started-guides/libvirt-coreos.md) | | Community (@lhuard1A) | -oVirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | | Community (@simon3z) | -Rackspace | CoreOS | CoreOS | flannel | [docs](../../docs/getting-started-guides/rackspace.md) | | Community (@doublerr) | use k8s 
version 0.18.0 - - -*Note*: The above table is ordered by version test/used in notes followed by support level. - -Definition of columns: - - **IaaS Provider** is who/what provides the virtual or physical machines (nodes) that Kubernetes runs on. - - **OS** is the base operating system of the nodes. - - **Config. Mgmt** is the configuration management system that helps install and maintain kubernetes software on the - nodes. - - **Networking** is what implements the [networking model](../../docs/networking.md). Those with networking type - _none_ may not support more than one node, or may support multiple VM nodes only in the same physical node. - - **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance - tests. - - Support Levels - - **Project**: Kubernetes Committers regularly use this configuration, so it usually works with the latest release - of Kubernetes. - - **Commercial**: A commercial offering with its own support arrangements. - - **Community**: Actively supported by community contributions. May not work with more recent releases of kubernetes. - - **Inactive**: No active maintainer. Not recommended for first-time K8s users, and may be deleted soon. - - **Notes** is relevant information such as version k8s used. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/README.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/all-lines.png b/release-0.20.0/docs/getting-started-guides/all-lines.png deleted file mode 100644 index 7de0438af63..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/all-lines.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/aws-coreos.md b/release-0.20.0/docs/getting-started-guides/aws-coreos.md deleted file mode 100644 index 513878e9f7f..00000000000 --- a/release-0.20.0/docs/getting-started-guides/aws-coreos.md +++ /dev/null @@ -1,220 +0,0 @@ -# Getting started on Amazon EC2 with CoreOS - -The example below creates an elastic Kubernetes cluster with a custom number of worker nodes and a master. - -**Warning:** contrary to the [supported procedure](aws.md), the examples below provision Kubernetes with an insecure API server (plain HTTP, -no security tokens, no basic auth). For demonstration purposes only. 
- -## Highlights - -* Cluster bootstrapping using [cloud-config](https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/) -* Cross container networking with [flannel](https://github.com/coreos/flannel#flannel) -* Auto worker registration with [kube-register](https://github.com/kelseyhightower/kube-register#kube-register) -* Kubernetes v0.17.0 [official binaries](https://github.com/GoogleCloudPlatform/kubernetes/releases/tag/v0.17.0) - -## Prerequisites - -* [aws CLI](http://aws.amazon.com/cli) -* [CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/) -* [kubectl CLI](aws/kubectl.md) - -## Starting a Cluster - -### CloudFormation - -The [cloudformation-template.json](aws/cloudformation-template.json) can be used to bootstrap a Kubernetes cluster with a single command: - -```bash -aws cloudformation create-stack --stack-name kubernetes --region us-west-2 \ ---template-body file://aws/cloudformation-template.json \ ---parameters ParameterKey=KeyPair,ParameterValue= \ - ParameterKey=ClusterSize,ParameterValue= \ - ParameterKey=VpcId,ParameterValue= \ - ParameterKey=SubnetId,ParameterValue= \ - ParameterKey=SubnetAZ,ParameterValue= -``` - -It will take a few minutes for the entire stack to come up. You can monitor the stack progress with the following command: - -```bash -aws cloudformation describe-stack-events --stack-name kubernetes -``` - -Record the Kubernetes Master IP address: - -```bash -aws cloudformation describe-stacks --stack-name kubernetes -``` - -[Skip to kubectl client configuration](#configure-the-kubectl-ssh-tunnel) - -### AWS CLI - -The following commands shall use the latest CoreOS alpha AMI for the `us-west-2` region. For a list of different regions and corresponding AMI IDs see the [CoreOS EC2 cloud provider documentation](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel). - -#### Create the Kubernetes Security Group - -```bash -aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group" -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes -``` - -#### Save the master and node cloud-configs - -* [master.yaml](aws/cloud-configs/master.yaml) -* [node.yaml](aws/cloud-configs/node.yaml) - -#### Launch the master - -*Attention:* replace `` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). - -```bash -aws ec2 run-instances --image-id --key-name \ ---region us-west-2 --security-groups kubernetes --instance-type m3.medium \ ---user-data file://master.yaml -``` - -Record the `InstanceId` for the master. - -Gather the public and private IPs for the master node: - -```bash -aws ec2 describe-instances --instance-id -``` - -``` -{ - "Reservations": [ - { - "Instances": [ - { - "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com", - "RootDeviceType": "ebs", - "State": { - "Code": 16, - "Name": "running" - }, - "PublicIpAddress": "54.68.97.117", - "PrivateIpAddress": "172.31.9.9", -... -``` - -#### Update the node.yaml cloud-config - -Edit `node.yaml` and replace all instances of `` with the **private** IP address of the master node. 
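
If you would rather script the substitution than edit the file by hand, something along these lines works. The `<master-private-ip>` placeholder name is purely illustrative (use whatever placeholder your copy of `node.yaml` actually contains), and `172.31.9.9` is the private IP from the example output above:

```bash
# Replace the master placeholder in node.yaml with the master's private IP.
# The placeholder name is illustrative -- match it to your copy of node.yaml.
sed -i.bak 's/<master-private-ip>/172.31.9.9/g' node.yaml
```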
- -### Launch 3 worker nodes - -*Attention:* Replace `` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel). - -```bash -aws ec2 run-instances --count 3 --image-id --key-name \ ---region us-west-2 --security-groups kubernetes --instance-type m3.medium \ ---user-data file://node.yaml -``` - -### Add additional worker nodes - -*Attention:* replace `` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/#choosing-a-channel). - -```bash -aws ec2 run-instances --count 1 --image-id --key-name \ ---region us-west-2 --security-groups kubernetes --instance-type m3.medium \ ---user-data file://node.yaml -``` - -### Configure the kubectl SSH tunnel - -This command enables secure communication between the kubectl client and the Kubernetes API. - -```bash -ssh -f -nNT -L 8080:127.0.0.1:8080 core@ -``` - -### Listing worker nodes - -Once the worker instances have fully booted, they will be automatically registered with the Kubernetes API server by the kube-register service running on the master node. It may take a few mins. - -```bash -kubectl get nodes -``` - -## Starting a simple pod - -Create a pod manifest: `pod.json` - -```json -{ - "apiVersion": "v1", - "kind": "Pod", - "metadata": { - "name": "hello", - "labels": { - "name": "hello", - "environment": "testing" - } - }, - "spec": { - "containers": [{ - "name": "hello", - "image": "quay.io/kelseyhightower/hello", - "ports": [{ - "containerPort": 80, - "hostPort": 80 - }] - }] - } -} -``` - -### Create the pod using the kubectl command line tool - -```bash -kubectl create -f pod.json -``` - -### Testing - -```bash -kubectl get pods -``` - -Record the **Host** of the pod, which should be the private IP address. - -Gather the public IP address for the worker node. - -```bash -aws ec2 describe-instances --filters 'Name=private-ip-address,Values=' -``` - -``` -{ - "Reservations": [ - { - "Instances": [ - { - "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com", - "RootDeviceType": "ebs", - "State": { - "Code": 16, - "Name": "running" - }, - "PublicIpAddress": "54.68.97.117", -... -``` - -Visit the public IP address in your browser to view the running pod. - -### Delete the pod - -```bash -kubectl delete pods hello -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws-coreos.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/aws-coreos.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/aws.md b/release-0.20.0/docs/getting-started-guides/aws.md deleted file mode 100644 index 418cdda1c79..00000000000 --- a/release-0.20.0/docs/getting-started-guides/aws.md +++ /dev/null @@ -1,102 +0,0 @@ -Getting started on AWS EC2 --------------------------- - -**Table of Contents** - -- [Prerequisites](#prerequisites) -- [Cluster turnup](#cluster-turnup) - - [Supported procedure: `get-kube`](#supported-procedure-get-kube) - - [Alternatives](#alternatives) -- [Getting started with your cluster](#getting-started-with-your-cluster) - - [Command line administration tool: `kubectl`](#command-line-administration-tool-kubectl) - - [Examples](#examples) -- [Tearing down the cluster](#tearing-down-the-cluster) -- [Further reading](#further-reading) - -## Prerequisites - -1. You need an AWS account. 
Visit [http://aws.amazon.com](http://aws.amazon.com) to get started -2. Install and configure [AWS Command Line Interface](http://aws.amazon.com/cli) -3. You need an AWS [instance profile and role](http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) with EC2 full access. - -## Cluster turnup -### Supported procedure: `get-kube` -```bash -#Using wget -export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash - -#Using cURL -export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash -``` - -NOTE: This script calls [cluster/kube-up.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/kube-up.sh) -which in turn calls [cluster/aws/util.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/util.sh) -using [cluster/aws/config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh). - -This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed, -as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security -tokens are written in `~/.kube/kubeconfig`, they will be necessary to use the CLI or the HTTP Basic Auth. - -By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu. -You can override the variables defined in [config-default.sh](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh) to change this behavior as follows: - -```bash -export KUBE_AWS_ZONE=eu-west-1c -export NUM_MINIONS=2 -export MINION_SIZE=m3.medium -export AWS_S3_REGION=eu-west-1 -export AWS_S3_BUCKET=mycompany-kubernetes-artifacts -export INSTANCE_PREFIX=k8s -... -``` - -It will also try to create or reuse a keypair called "kubernetes", and IAM profiles called "kubernetes-master" and "kubernetes-minion". -If these already exist, make sure you want them to be used here. - -NOTE: If using an existing keypair named "kubernetes" then you must set the `AWS_SSH_KEY` key to point to your private key. - -### Alternatives -A contributed [example](aws-coreos.md) allows you to setup a Kubernetes cluster based on [CoreOS](http://www.coreos.com), either using -AWS CloudFormation or EC2 with user data (cloud-config). - -## Getting started with your cluster -### Command line administration tool: `kubectl` -Copy the appropriate `kubectl` binary to any location defined in your `PATH` environment variable, for example: - -```bash -# OS X -sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl - -# Linux -sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl -``` - -An up-to-date documentation page for this tool is available here: [kubectl manual](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md) - -By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API. -For more information, please read [kubeconfig files](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md) - -### Examples -See [a simple nginx example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/simple-nginx.md) to try out your new cluster. 
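
As a quick smoke test before moving on to the larger examples, something along these lines should work (a sketch only; double-check the flags against `kubectl help run` for the kubectl version shipped with this release):

```bash
# Start two nginx replicas, check that the pods and the replication
# controller show up, then clean up.
kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl get pods
kubectl get rc
kubectl stop rc my-nginx
```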
- -The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook) - -For more complete applications, please look in the [examples directory](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples) - -## Tearing down the cluster -Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the -`kubernetes` directory: - -```bash -cluster/kube-down.sh -``` - -## Further reading -Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering -and using a Kubernetes cluster. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/aws.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/aws/cloud-configs/master.yaml b/release-0.20.0/docs/getting-started-guides/aws/cloud-configs/master.yaml deleted file mode 100644 index af8d61078a7..00000000000 --- a/release-0.20.0/docs/getting-started-guides/aws/cloud-configs/master.yaml +++ /dev/null @@ -1,177 +0,0 @@ -#cloud-config - -write_files: - - path: /opt/bin/waiter.sh - owner: root - permissions: 0755 - content: | - #! /usr/bin/bash - until curl http://127.0.0.1:2379/v2/machines; do sleep 2; done - -coreos: - etcd2: - name: master - initial-cluster-token: k8s_etcd - initial-cluster: master=http://$private_ipv4:2380 - listen-peer-urls: http://$private_ipv4:2380,http://localhost:2380 - initial-advertise-peer-urls: http://$private_ipv4:2380 - listen-client-urls: http://$private_ipv4:2379,http://localhost:2379 - advertise-client-urls: http://$private_ipv4:2379 - fleet: - etcd_servers: http://localhost:2379 - metadata: k8srole=master - flannel: - etcd_endpoints: http://localhost:2379 - locksmithd: - endpoint: http://localhost:2379 - units: - - name: etcd2.service - command: start - - name: fleet.service - command: start - - name: etcd2-waiter.service - command: start - content: | - [Unit] - Description=etcd waiter - Wants=network-online.target - Wants=etcd2.service - After=etcd2.service - After=network-online.target - Before=flanneld.service fleet.service locksmithd.service - - [Service] - ExecStart=/usr/bin/bash /opt/bin/waiter.sh - RemainAfterExit=true - Type=oneshot - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Service] - ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker-cache.service - command: start - content: | - [Unit] - Description=Docker cache proxy - Requires=early-docker.service - After=early-docker.service - Before=early-docker.target - - [Service] - Restart=always - TimeoutStartSec=0 - RestartSec=5 - Environment=TMPDIR=/var/tmp/ - Environment=DOCKER_HOST=unix:///var/run/early-docker.sock - ExecStartPre=-/usr/bin/docker kill docker-registry - ExecStartPre=-/usr/bin/docker rm docker-registry - ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest - # GUNICORN_OPTS is an workaround for - # https://github.com/docker/docker-registry/issues/892 - ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \ - -e STANDALONE=false \ - -e GUNICORN_OPTS=[--preload] \ - -e 
MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io \ - -e MIRROR_TAGS_CACHE_TTL=1800 \ - quay.io/devops/docker-registry:latest - - name: docker.service - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - # making sure that docker-cache is up and that flanneld finished - # startup, otherwise containers won't land in flannel's network... - Requires=docker-cache.service - After=docker-cache.service - - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://$private_ipv4:5000' - - name: get-kubectl.service - command: start - content: | - [Unit] - Description=Get kubectl client tool - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=network-online.target - After=network-online.target - - [Service] - ExecStart=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl - ExecStart=/usr/bin/chmod +x /opt/bin/kubectl - Type=oneshot - RemainAfterExit=true - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2-waiter.service - After=etcd2-waiter.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --insecure-bind-address=0.0.0.0 \ - --service-cluster-ip-range=10.100.0.0/16 \ - --etcd-servers=http://localhost:2379 - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler \ - --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-register.service - command: start - content: | - [Unit] - Description=Kubernetes Registration Service - Documentation=https://github.com/kelseyhightower/kube-register - Requires=kube-apiserver.service fleet.service - After=kube-apiserver.service fleet.service - - [Service] - ExecStartPre=-/usr/bin/wget -nc -O /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64 - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register - ExecStart=/opt/bin/kube-register \ - --metadata=k8srole=node \ - --fleet-endpoint=unix:///var/run/fleet.sock \ - --api-endpoint=http://127.0.0.1:8080 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git 
a/release-0.20.0/docs/getting-started-guides/aws/cloud-configs/node.yaml b/release-0.20.0/docs/getting-started-guides/aws/cloud-configs/node.yaml deleted file mode 100644 index 9d3d61d868a..00000000000 --- a/release-0.20.0/docs/getting-started-guides/aws/cloud-configs/node.yaml +++ /dev/null @@ -1,81 +0,0 @@ -#cloud-config - -write_files: - - path: /opt/bin/wupiao - owner: root - permissions: 0755 - content: | - #!/bin/bash - # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen - [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \ - --silent --head --fail \ - http://${1}:${2}; do sleep 1 && echo -n .; done; - exit $? - -coreos: - etcd2: - listen-client-urls: http://localhost:2379 - advertise-client-urls: http://0.0.0.0:2379 - initial-cluster: master=http://:2380 - proxy: on - fleet: - etcd_servers: http://localhost:2379 - metadata: k8srole=node - flannel: - etcd_endpoints: http://localhost:2379 - locksmithd: - endpoint: http://localhost:2379 - units: - - name: etcd2.service - command: start - - name: fleet.service - command: start - - name: flanneld.service - command: start - - name: docker.service - command: start - drop-ins: - - name: 50-docker-mirror.conf - content: | - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://:5000' - - name: kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kubelet \ - --api-servers=:8080 \ - --hostname-override=$private_ipv4 - Restart=always - RestartSec=10 - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kube-proxy \ - --master=http://:8080 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.20.0/docs/getting-started-guides/aws/cloudformation-template.json b/release-0.20.0/docs/getting-started-guides/aws/cloudformation-template.json deleted file mode 100644 index 5aa6ee83443..00000000000 --- a/release-0.20.0/docs/getting-started-guides/aws/cloudformation-template.json +++ /dev/null @@ -1,421 +0,0 @@ -{ - "AWSTemplateFormatVersion": "2010-09-09", - "Description": "Kubernetes 0.18.2 on EC2 powered by CoreOS 681.0.0 (alpha)", - "Mappings": { - "RegionMap": { - "eu-central-1" : { - "AMI" : "ami-4c4f7151" - }, - "ap-northeast-1" : { - "AMI" : "ami-3a35fd3a" - }, - "us-gov-west-1" : { - "AMI" : "ami-57117174" - }, - "sa-east-1" : { - "AMI" : "ami-fbcc4ae6" - }, - "ap-southeast-2" : { - "AMI" : "ami-593c4263" - }, - "ap-southeast-1" : { - "AMI" : "ami-3a083668" - }, - "us-east-1" : { - "AMI" : "ami-40322028" - }, - "us-west-2" : { - "AMI" : "ami-23b58613" - }, - "us-west-1" : { - "AMI" : "ami-15618f51" - }, - "eu-west-1" : { - "AMI" : "ami-8d1164fa" - 
} - } - }, - "Parameters": { - "InstanceType": { - "Description": "EC2 HVM instance type (m3.medium, etc).", - "Type": "String", - "Default": "m3.medium", - "AllowedValues": [ - "m3.medium", - "m3.large", - "m3.xlarge", - "m3.2xlarge", - "c3.large", - "c3.xlarge", - "c3.2xlarge", - "c3.4xlarge", - "c3.8xlarge", - "cc2.8xlarge", - "cr1.8xlarge", - "hi1.4xlarge", - "hs1.8xlarge", - "i2.xlarge", - "i2.2xlarge", - "i2.4xlarge", - "i2.8xlarge", - "r3.large", - "r3.xlarge", - "r3.2xlarge", - "r3.4xlarge", - "r3.8xlarge", - "t2.micro", - "t2.small", - "t2.medium" - ], - "ConstraintDescription": "Must be a valid EC2 HVM instance type." - }, - "ClusterSize": { - "Description": "Number of nodes in cluster (2-12).", - "Default": "2", - "MinValue": "2", - "MaxValue": "12", - "Type": "Number" - }, - "AllowSSHFrom": { - "Description": "The net block (CIDR) that SSH is available to.", - "Default": "0.0.0.0/0", - "Type": "String" - }, - "KeyPair": { - "Description": "The name of an EC2 Key Pair to allow SSH access to the instance.", - "Type": "AWS::EC2::KeyPair::KeyName" - }, - "VpcId": { - "Description": "The ID of the VPC to launch into.", - "Type": "AWS::EC2::VPC::Id" - }, - "SubnetId": { - "Description": "The ID of the subnet to launch into (that must be within the supplied VPC)", - "Type": "AWS::EC2::Subnet::Id" - }, - "SubnetAZ": { - "Description": "The availability zone of the subnet supplied (for example eu-west-1a)", - "Type": "String" - } - }, - "Conditions": { - "UseEC2Classic": {"Fn::Equals": [{"Ref": "VpcId"}, ""]} - }, - "Resources": { - "KubernetesSecurityGroup": { - "Type": "AWS::EC2::SecurityGroup", - "Properties": { - "VpcId": {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, {"Ref": "VpcId"}]}, - "GroupDescription": "Kubernetes SecurityGroup", - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": "22", - "ToPort": "22", - "CidrIp": {"Ref": "AllowSSHFrom"} - } - ] - } - }, - "KubernetesIngress": { - "Type": "AWS::EC2::SecurityGroupIngress", - "Properties": { - "GroupId": {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}, - "IpProtocol": "tcp", - "FromPort": "1", - "ToPort": "65535", - "SourceSecurityGroupId": { - "Fn::GetAtt" : [ "KubernetesSecurityGroup", "GroupId" ] - } - } - }, - "KubernetesIngressUDP": { - "Type": "AWS::EC2::SecurityGroupIngress", - "Properties": { - "GroupId": {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}, - "IpProtocol": "udp", - "FromPort": "1", - "ToPort": "65535", - "SourceSecurityGroupId": { - "Fn::GetAtt" : [ "KubernetesSecurityGroup", "GroupId" ] - } - } - }, - "KubernetesMasterInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "NetworkInterfaces" : [{ - "GroupSet" : [{"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}], - "AssociatePublicIpAddress" : "true", - "DeviceIndex" : "0", - "DeleteOnTermination" : "true", - "SubnetId" : {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, {"Ref": "SubnetId"}]} - }], - "ImageId": {"Fn::FindInMap" : ["RegionMap", {"Ref": "AWS::Region" }, "AMI"]}, - "InstanceType": {"Ref": "InstanceType"}, - "KeyName": {"Ref": "KeyPair"}, - "Tags" : [ - {"Key" : "Name", "Value" : {"Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "k8s-master" ] ]}}, - {"Key" : "KubernetesRole", "Value" : "node"} - ], - "UserData": { "Fn::Base64": {"Fn::Join" : ["", [ - "#cloud-config\n\n", - "write_files:\n", - "- path: /opt/bin/waiter.sh\n", - " owner: root\n", - " content: |\n", - " #! 
/usr/bin/bash\n", - " until curl http://127.0.0.1:2379/v2/machines; do sleep 2; done\n", - "coreos:\n", - " etcd2:\n", - " name: master\n", - " initial-cluster-token: k8s_etcd\n", - " initial-cluster: master=http://$private_ipv4:2380\n", - " listen-peer-urls: http://$private_ipv4:2380,http://localhost:2380\n", - " initial-advertise-peer-urls: http://$private_ipv4:2380\n", - " listen-client-urls: http://$private_ipv4:2379,http://localhost:2379\n", - " advertise-client-urls: http://$private_ipv4:2379\n", - " fleet:\n", - " etcd_servers: http://localhost:2379\n", - " metadata: k8srole=master\n", - " flannel:\n", - " etcd_endpoints: http://localhost:2379\n", - " locksmithd:\n", - " endpoint: http://localhost:2379\n", - " units:\n", - " - name: etcd2.service\n", - " command: start\n", - " - name: fleet.service\n", - " command: start\n", - " - name: etcd2-waiter.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=etcd waiter\n", - " Wants=network-online.target\n", - " Wants=etcd2.service\n", - " After=etcd2.service\n", - " After=network-online.target\n", - " Before=flanneld.service fleet.service locksmithd.service\n\n", - " [Service]\n", - " ExecStart=/usr/bin/bash /opt/bin/waiter.sh\n", - " RemainAfterExit=true\n", - " Type=oneshot\n", - " - name: flanneld.service\n", - " command: start\n", - " drop-ins:\n", - " - name: 50-network-config.conf\n", - " content: |\n", - " [Service]\n", - " ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\": \"10.244.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}}'\n", - " - name: docker-cache.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Docker cache proxy\n", - " Requires=early-docker.service\n", - " After=early-docker.service\n", - " Before=early-docker.target\n\n", - " [Service]\n", - " Restart=always\n", - " TimeoutStartSec=0\n", - " RestartSec=5\n", - " Environment=TMPDIR=/var/tmp/\n", - " Environment=DOCKER_HOST=unix:///var/run/early-docker.sock\n", - " ExecStartPre=-/usr/bin/docker kill docker-registry\n", - " ExecStartPre=-/usr/bin/docker rm docker-registry\n", - " ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest\n", - " # GUNICORN_OPTS is an workaround for\n", - " # https://github.com/docker/docker-registry/issues/892\n", - " ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \\\n", - " -e STANDALONE=false \\\n", - " -e GUNICORN_OPTS=[--preload] \\\n", - " -e MIRROR_SOURCE=https://registry-1.docker.io \\\n", - " -e MIRROR_SOURCE_INDEX=https://index.docker.io \\\n", - " -e MIRROR_TAGS_CACHE_TTL=1800 \\\n", - " quay.io/devops/docker-registry:latest\n", - " - name: get-kubectl.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Get kubectl client tool\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=network-online.target\n", - " After=network-online.target\n\n", - " [Service]\n", - " ExecStart=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl\n", - " ExecStart=/usr/bin/chmod +x /opt/bin/kubectl\n", - " Type=oneshot\n", - " RemainAfterExit=true\n", - " - name: kube-apiserver.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes API Server\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=etcd2-waiter.service\n", - " After=etcd2-waiter.service\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P 
/opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver\n", - " ExecStart=/opt/bin/kube-apiserver \\\n", - " --insecure-bind-address=0.0.0.0 \\\n", - " --service-cluster-ip-range=10.100.0.0/16 \\\n", - " --etcd-servers=http://localhost:2379\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-controller-manager.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Controller Manager\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=kube-apiserver.service\n", - " After=kube-apiserver.service\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager\n", - " ExecStart=/opt/bin/kube-controller-manager \\\n", - " --master=127.0.0.1:8080\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-scheduler.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Scheduler\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=kube-apiserver.service\n", - " After=kube-apiserver.service\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-scheduler\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler\n", - " ExecStart=/opt/bin/kube-scheduler \\\n", - " --master=127.0.0.1:8080\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-register.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Registration Service\n", - " Documentation=https://github.com/kelseyhightower/kube-register\n", - " Requires=kube-apiserver.service fleet.service\n", - " After=kube-apiserver.service fleet.service\n\n", - " [Service]\n", - " ExecStartPre=-/usr/bin/wget -nc -O /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register\n", - " ExecStart=/opt/bin/kube-register \\\n", - " --metadata=k8srole=node \\\n", - " --fleet-endpoint=unix:///var/run/fleet.sock \\\n", - " --api-endpoint=http://127.0.0.1:8080\n", - " Restart=always\n", - " RestartSec=10\n", - " update:\n", - " group: alpha\n", - " reboot-strategy: off\n" - ]]} - } - } - }, - "KubernetesNodeLaunchConfig": { - "Type": "AWS::AutoScaling::LaunchConfiguration", - "Properties": { - "ImageId": {"Fn::FindInMap" : ["RegionMap", {"Ref": "AWS::Region" }, "AMI" ]}, - "InstanceType": {"Ref": "InstanceType"}, - "KeyName": {"Ref": "KeyPair"}, - "AssociatePublicIpAddress" : "true", - "SecurityGroups": [{"Fn::If": [ - "UseEC2Classic", - {"Ref": "KubernetesSecurityGroup"}, - {"Fn::GetAtt": ["KubernetesSecurityGroup", "GroupId"]}] - }], - "UserData": { "Fn::Base64": {"Fn::Join" : ["", [ - "#cloud-config\n\n", - "coreos:\n", - " etcd2:\n", - " listen-client-urls: http://localhost:2379\n", - " initial-cluster: master=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":2380\n", - " proxy: on\n", - " fleet:\n", - " etcd_servers: http://localhost:2379\n", - " metadata: k8srole=node\n", - " flannel:\n", - " etcd_endpoints: http://localhost:2379\n", - " locksmithd:\n", - " endpoint: 
http://localhost:2379\n", - " units:\n", - " - name: etcd2.service\n", - " command: start\n", - " - name: fleet.service\n", - " command: start\n", - " - name: flanneld.service\n", - " command: start\n", - " - name: docker.service\n", - " command: start\n", - " drop-ins:\n", - " - name: 50-docker-mirror.conf\n", - " content: |\n", - " [Service]\n", - " Environment=DOCKER_OPTS='--registry-mirror=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":5000'\n", - " - name: kubelet.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Kubelet\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=network-online.target\n", - " After=network-online.target\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet\n", - " ExecStart=/opt/bin/kubelet \\\n", - " --api-servers=", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":8080 \\\n", - " --hostname-override=$private_ipv4\n", - " Restart=always\n", - " RestartSec=10\n", - " - name: kube-proxy.service\n", - " command: start\n", - " content: |\n", - " [Unit]\n", - " Description=Kubernetes Proxy\n", - " Documentation=https://github.com/GoogleCloudPlatform/kubernetes\n", - " Requires=network-online.target\n", - " After=network-online.target\n\n", - " [Service]\n", - " ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-proxy\n", - " ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy\n", - " ExecStart=/opt/bin/kube-proxy \\\n", - " --master=http://", {"Fn::GetAtt" :["KubernetesMasterInstance" , "PrivateIp"]}, ":8080\n", - " Restart=always\n", - " RestartSec=10\n", - " update:\n", - " group: alpha\n", - " reboot-strategy: off\n" - ]]} - } - } - }, - "KubernetesAutoScalingGroup": { - "Type": "AWS::AutoScaling::AutoScalingGroup", - "Properties": { - "AvailabilityZones": {"Fn::If": ["UseEC2Classic", {"Fn::GetAZs": ""}, [{"Ref": "SubnetAZ"}]]}, - "VPCZoneIdentifier": {"Fn::If": ["UseEC2Classic", {"Ref": "AWS::NoValue"}, [{"Ref": "SubnetId"}]]}, - "LaunchConfigurationName": {"Ref": "KubernetesNodeLaunchConfig"}, - "MinSize": "2", - "MaxSize": "12", - "DesiredCapacity": {"Ref": "ClusterSize"}, - "Tags" : [ - {"Key" : "Name", "Value" : {"Fn::Join" : [ "-", [ {"Ref" : "AWS::StackName"}, "k8s-node" ] ]}, "PropagateAtLaunch" : true}, - {"Key" : "KubernetesRole", "Value" : "node", "PropagateAtLaunch" : true} - ] - } - } - }, - "Outputs": { - "KubernetesMasterPublicIp": { - "Description": "Public Ip of the newly created Kubernetes Master instance", - "Value": {"Fn::GetAtt": ["KubernetesMasterInstance" , "PublicIp"]} - } - } -} diff --git a/release-0.20.0/docs/getting-started-guides/aws/kubectl.md b/release-0.20.0/docs/getting-started-guides/aws/kubectl.md deleted file mode 100644 index 8a8e7f7c4d9..00000000000 --- a/release-0.20.0/docs/getting-started-guides/aws/kubectl.md +++ /dev/null @@ -1,27 +0,0 @@ -# Install and configure kubectl - -## Download the kubectl CLI tool -```bash -### Darwin -wget https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/darwin/amd64/kubectl - -### Linux -wget https://storage.googleapis.com/kubernetes-release/release/v0.17.0/bin/linux/amd64/kubectl -``` - -### Copy kubectl to your path -```bash -chmod +x kubectl -mv kubectl /usr/local/bin/ -``` - -### Create a secure tunnel for API 
communication -```bash -ssh -f -nNT -L 8080:127.0.0.1:8080 core@ -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws/kubectl.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/aws/kubectl.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/azure.md b/release-0.20.0/docs/getting-started-guides/azure.md deleted file mode 100644 index b82d7d7ed8b..00000000000 --- a/release-0.20.0/docs/getting-started-guides/azure.md +++ /dev/null @@ -1,65 +0,0 @@ -Getting started on Microsoft Azure ----------------------------------- - -**Table of Contents** - - - [Prerequisites](#prerequisites) - - [Setup](#setup) - - [Getting started with your cluster](#getting-started-with-your-cluster) - - [Tearing down the cluster](#tearing-down-the-cluster) - - -## Prerequisites - -** Azure Prerequisites** - -1. You need an Azure account. Visit http://azure.microsoft.com/ to get started. -2. Install and configure the Azure cross-platform command-line interface. http://azure.microsoft.com/en-us/documentation/articles/xplat-cli/ -3. Make sure you have a default account set in the Azure cli, using `azure account set` - -**Prerequisites for your workstation** - -1. Be running a Linux or Mac OS X. -2. Get or build a [binary release](binary_release.md) -3. If you want to build your own release, you need to have [Docker -installed](https://docs.docker.com/installation/). On Mac OS X you can use -[boot2docker](http://boot2docker.io/). - -## Setup -The cluster setup scripts can setup Kubernetes for multiple targets. First modify `cluster/kube-env.sh` to specify azure: - - KUBERNETES_PROVIDER="azure" - -Next, specify an existing virtual network and subnet in `cluster/azure/config-default.sh`: - - AZ_VNET= - AZ_SUBNET= - -You can create a virtual network: - - azure network vnet create --subnet= --location "West US" -v - -Now you're ready. - -You can then use the `cluster/kube-*.sh` scripts to manage your azure cluster, start with: - - cluster/kube-up.sh - -The script above will start (by default) a single master VM along with 4 worker VMs. You -can tweak some of these parameters by editing `cluster/azure/config-default.sh`. - -## Getting started with your cluster -See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../examples). - -## Tearing down the cluster -``` -cluster/kube-down.sh -``` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/azure.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/azure.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/bigquery-logging.png b/release-0.20.0/docs/getting-started-guides/bigquery-logging.png deleted file mode 100644 index b7a6f94c288..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/bigquery-logging.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/binary_release.md b/release-0.20.0/docs/getting-started-guides/binary_release.md deleted file mode 100644 index 49a982da2c4..00000000000 --- a/release-0.20.0/docs/getting-started-guides/binary_release.md +++ /dev/null @@ -1,29 +0,0 @@ -## Getting a Binary Release - -You can either build a release from sources or download a pre-built release. 
If you do not plan on developing Kubernetes itself, we suggest a pre-built release. - -### Prebuilt Binary Release - -The list of binary releases is available for download from the [GitHub Kubernetes repo release page](https://github.com/GoogleCloudPlatform/kubernetes/releases). - -Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud. - -### Building from source - -Get the Kubernetes source. If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container. - -Building a release is simple. - -```bash -git clone https://github.com/GoogleCloudPlatform/kubernetes.git -cd kubernetes -make release -``` - -For more details on the release process see the [`build/` directory](../../build) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/binary_release.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/binary_release.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/centos/centos_manual_config.md b/release-0.20.0/docs/getting-started-guides/centos/centos_manual_config.md deleted file mode 100644 index a14ad3842c1..00000000000 --- a/release-0.20.0/docs/getting-started-guides/centos/centos_manual_config.md +++ /dev/null @@ -1,178 +0,0 @@ -Getting started on [CentOS](http://centos.org) ----------------------------------------------- - -**Table of Contents** - - - [Prerequisites](#prerequisites) - - [Starting a cluster](#starting-a-cluster) -## Prerequisites -You need two machines with CentOS installed on them. - -## Starting a cluster -This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc... - -This guide will only get ONE minion working. Multiple minions requires a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious. - -The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the minion and run kubelet, proxy, cadvisor and docker. - -**System Information:** - -Hosts: -``` -centos-master = 192.168.121.9 -centos-minion = 192.168.121.65 -``` - -**Prepare the hosts:** - -* Create virt7-testing repo on all hosts - centos-{master,minion} with following information. - -``` -[virt7-testing] -name=virt7-testing -baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/ -gpgcheck=0 -``` - -* Install kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor. 
- -``` -yum -y install --enablerepo=virt7-testing kubernetes -``` - -*Note:* Use etcd-0.4.6-7 (this is a temporary workaround in this documentation). The newer etcd package currently in the virt7-testing repo causes a service failure. - -If you did not get etcd-0.4.6-7 installed from the virt7-testing repo, remove the installed etcd package: - -``` -yum erase etcd -``` - -Then install the pinned etcd version and kubernetes again: - -``` -yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm -yum -y install --enablerepo=virt7-testing kubernetes -``` - -* Add master and minion to /etc/hosts on all machines (not needed if hostnames already in DNS) - -``` -echo "192.168.121.9 centos-master -192.168.121.65 centos-minion" >> /etc/hosts -``` - -* Edit /etc/kubernetes/config, which will be the same on all hosts, to contain: - -``` -# Comma separated list of nodes in the etcd cluster -KUBE_ETCD_SERVERS="--etcd_servers=http://centos-master:4001" - -# logging to stderr means we get it in the systemd journal -KUBE_LOGTOSTDERR="--logtostderr=true" - -# journal message level, 0 is debug -KUBE_LOG_LEVEL="--v=0" - -# Should this cluster be allowed to run privileged docker containers -KUBE_ALLOW_PRIV="--allow_privileged=false" -``` - -* Disable the firewall on both the master and minion, as docker does not play well with other firewall rule managers - -``` -systemctl disable iptables-services firewalld -systemctl stop iptables-services firewalld -``` - -**Configure the kubernetes services on the master.** - -* Edit /etc/kubernetes/apiserver to appear as such: - -``` -# The address on the local server to listen to. -KUBE_API_ADDRESS="--address=0.0.0.0" - -# The port on the local server to listen on. -KUBE_API_PORT="--port=8080" - -# How the replication controller and scheduler find the kube-apiserver -KUBE_MASTER="--master=http://centos-master:8080" - -# Port minions listen on -KUBELET_PORT="--kubelet_port=10250" - -# Address range to use for services -KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" - -# Add your own! -KUBE_API_ARGS="" -``` - -* Edit /etc/kubernetes/controller-manager to appear as such: -``` -# Comma separated list of minions -KUBELET_ADDRESSES="--machines=centos-minion" -``` - -* Start the appropriate services on the master: - -``` -for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -**Configure the kubernetes services on the minion.** - -***We need to configure the kubelet and start the kubelet and proxy.*** - -* Edit /etc/kubernetes/kubelet to appear as such: - -``` -# The address for the info server to serve on -KUBELET_ADDRESS="--address=0.0.0.0" - -# The port for the info server to serve on -KUBELET_PORT="--port=10250" - -# You may leave this blank to use the actual hostname -KUBELET_HOSTNAME="--hostname_override=centos-minion" - -# Add your own! -KUBELET_ARGS="" -``` - -* Start the appropriate services on the minion (centos-minion). - -``` -for SERVICES in kube-proxy kubelet docker; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -*You should be finished!* - -* Check to make sure the cluster can see the minion (on centos-master) - -``` -kubectl get minions -NAME LABELS STATUS -centos-minion Ready -``` - -**The cluster should be running! 
Launch a test pod.** - -You should have a functional cluster, check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/cloud-logging-console.png b/release-0.20.0/docs/getting-started-guides/cloud-logging-console.png deleted file mode 100644 index fae0aecbc53..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/cloud-logging-console.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/cloudstack.md b/release-0.20.0/docs/getting-started-guides/cloudstack.md deleted file mode 100644 index 52ac5dabeb4..00000000000 --- a/release-0.20.0/docs/getting-started-guides/cloudstack.md +++ /dev/null @@ -1,97 +0,0 @@ -Getting started on [CloudStack](http://cloudstack.apache.org) ------------------------------------------------------------- - -**Table of Contents** - - - [Introduction](#introduction) - - [Prerequisites](#prerequisites) - - [Clone the playbook](#clone-the-playbook) - - [Create a Kubernetes cluster](#create-a-kubernetes-cluster) - -### Introduction - -CloudStack is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. [Exoscale](http://exoscale.ch) for instance makes a [CoreOS](http://coreos.com) template available, so the instructions for deploying Kubernetes on CoreOS can be used. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes. - -[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. - -This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes). -It is completely automated: a single playbook deploys Kubernetes based on the CoreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md). - - -This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an ssh key pair, creates a security group and its associated rules, and finally starts CoreOS instances configured via cloud-init. - -### Prerequisites - - $ sudo apt-get install -y python-pip - $ sudo pip install ansible - $ sudo pip install cs - -[_cs_](http://github.com/exoscale/cs) is a python module for the CloudStack API. - -Set your CloudStack endpoint, API keys and the HTTP method to use. - -You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`. - -Or create a `~/.cloudstack.ini` file: - - [cloudstack] - endpoint = - key = - secret = - method = post - -We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances. 
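Before moving on, it can be worth confirming that these credentials actually work. A minimal sketch, assuming the `cs` command-line tool that ships with the Python package above and the `~/.cloudstack.ini` (or environment variables) just described:

```bash
# Any JSON response here means the endpoint, key and secret are being picked up.
cs listZones

# The same check can be run from environment variables instead of the ini file
# (placeholder values shown; use your own endpoint and keys).
CLOUDSTACK_ENDPOINT=https://cloudstack.example.com/client/api \
CLOUDSTACK_KEY=<your-api-key> \
CLOUDSTACK_SECRET=<your-secret-key> \
CLOUDSTACK_METHOD=post \
cs listZones
```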
- -### Clone the playbook - - $ git clone --recursive https://github.com/runseb/ansible-kubernetes.git - $ cd ansible-kubernetes - -The [ansible-cloudstack](https://github.com/resmo/ansible-cloudstack) module is set up in this repository as a submodule, hence the `--recursive`. - -### Create a Kubernetes cluster - -You simply need to run the playbook. - - $ ansible-playbook k8s.yml - -Some variables can be edited in the `k8s.yml` file. - - vars: - ssh_key: k8s - k8s_num_nodes: 2 - k8s_security_group_name: k8s - k8s_node_prefix: k8s2 - k8s_template: Linux CoreOS alpha 435 64-bit 10GB Disk - k8s_instance_type: Tiny - -This will start a Kubernetes master node and a number of compute nodes (by default 2). -The `instance_type` and `template` defaults are specific to [exoscale](http://exoscale.ch); edit them to specify the template and instance type (i.e. service offering) appropriate to your CloudStack cloud. - -Check the tasks and templates in `roles/k8s` if you want to modify anything. - -Once the playbook has finished, it will print out the IP of the Kubernetes master: - - TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ******** - -SSH to it with the key that was created, as the _core_ user, and you can list the machines in your cluster: - - $ ssh -i ~/.ssh/id_rsa_k8s core@ - $ fleetctl list-machines - MACHINE IP METADATA - a017c422... role=node - ad13bf84... role=master - e9af8293... role=node - - - - - - - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/cloudstack.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/cloudstack.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/coreos.md b/release-0.20.0/docs/getting-started-guides/coreos.md deleted file mode 100644 index fa03e9e66be..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos.md +++ /dev/null @@ -1,18 +0,0 @@ -## Getting started on [CoreOS](http://coreos.com) - -There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com): - -* [Single Node Cluster](coreos/coreos_single_node_cluster.md) -* [Multi-node Cluster](coreos/coreos_multinode_cluster.md) -* [Setup Multi-node Cluster on GCE in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md) -* [Multi-node cluster using cloud-config and Weave on Vagrant](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md) -* [Multi-node cluster using cloud-config and Vagrant](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md) -* [Yet another multi-node cluster using cloud-config and Vagrant](https://github.com/AntonioMeireles/kubernetes-vagrant-coreos-cluster/blob/master/README.md) (similar to the one above but with an increased, more *aggressive* focus on features and flexibility) -* [Multi-node cluster with Vagrant and fleet units using a small OS X App](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md) -* [Resizable multi-node cluster on Azure with Weave](coreos/azure/README.md) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/.gitignore 
b/release-0.20.0/docs/getting-started-guides/coreos/azure/.gitignore deleted file mode 100644 index c2658d7d1b3..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/.gitignore +++ /dev/null @@ -1 +0,0 @@ -node_modules/ diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/README.md b/release-0.20.0/docs/getting-started-guides/coreos/azure/README.md deleted file mode 100644 index b1b9e7a08fe..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/README.md +++ /dev/null @@ -1,210 +0,0 @@ -Kubernetes on Azure with CoreOS and [Weave](http://weave.works) ---------------------------------------------------------------- - -**Table of Contents** - -- [Introduction](#introduction) - [Prerequisites](#prerequisites) -- [Let's go!](#lets-go) -- [Deploying the workload](#deploying-the-workload) -- [Scaling](#scaling) -- [Exposing the app to the outside world](#exposing-the-app-to-the-outside-world) -- [Next steps](#next-steps) -- [Tear down...](#tear-down) - -## Introduction - -In this guide I will demonstrate how to deploy a Kubernetes cluster to the Azure cloud. You will be using CoreOS with Weave, which implements simple and secure networking, in a transparent, yet robust way. The purpose of this guide is to provide an out-of-the-box implementation that can ultimately be taken into production with little change. It will demonstrate how to provision a dedicated Kubernetes master and etcd nodes, and show how to scale the cluster with ease. - -### Prerequisites -1. You need an Azure account. - -## Let's go! - -To get started, you need to check out the code: -``` -git clone https://github.com/GoogleCloudPlatform/kubernetes -cd kubernetes/docs/getting-started-guides/coreos/azure/ -``` - -You will need to have [Node.js installed](http://nodejs.org/download/) on your machine. If you have previously used the Azure CLI, you should have it already. - -First, you need to install some of the dependencies with - -``` -npm install -``` - -Now, all you need to do is: - -``` -./azure-login.js -u -./create-kubernetes-cluster.js -``` - -This script will provision a cluster suitable for production use, where there is a ring of 3 dedicated etcd nodes, a Kubernetes master, and 2 nodes. The `kube-00` VM will be the master; your workloads are only to be deployed on the minion nodes, `kube-01` and `kube-02`. Initially, all VMs are single-core, to ensure a user of the free tier can reproduce it without paying extra. I will show how to add more, bigger VMs later. - -![VMs in Azure](initial_cluster.png) - -Once the creation of Azure VMs has finished, you should see the following: - -``` -... -azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf ` -azure_wrapper/info: The hosts in this deployment are: - [ 'etcd-00', 'etcd-01', 'etcd-02', 'kube-00', 'kube-01', 'kube-02' ] -azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.yml` -``` - -Let's log in to the master node like so: -``` -ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00 -``` -> Note: the config file name will be different; make sure to use the one you see. 
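If you would rather not keep an interactive session open, the same generated SSH config can also run one-off commands on the master. A small sketch, reusing the example config file name printed above (substitute your own):

```bash
# Run a single kubectl command on kube-00 without opening an interactive shell.
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00 kubectl get nodes
```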
- -Check there are 2 nodes in the cluster: -``` -core@kube-00 ~ $ kubectl get nodes -NAME LABELS STATUS -kube-01 environment=production Ready -kube-02 environment=production Ready -``` - -## Deploying the workload - -Let's follow the Guestbook example now: -``` -cd guestbook-example -kubectl create -f redis-master-controller.json -kubectl create -f redis-master-service.json -kubectl create -f redis-slave-controller.json -kubectl create -f redis-slave-service.json -kubectl create -f frontend-controller.json -kubectl create -f frontend-service.json -``` - -You need to wait for the pods to get deployed. Run the following and wait for `STATUS` to change from `Unknown`, through `Pending`, to `Running`. -``` -kubectl get pods --watch -``` -> Note: most of the time will be spent downloading Docker container images on each of the nodes. - -Eventually you should see: -``` -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -frontend-controller-0133o 10.2.1.14 php-redis kubernetes/example-guestbook-php-redis kube-01/172.18.0.13 name=frontend,uses=redisslave,redis-master Running -frontend-controller-ls6k1 10.2.3.10 php-redis kubernetes/example-guestbook-php-redis name=frontend,uses=redisslave,redis-master Running -frontend-controller-oh43e 10.2.2.15 php-redis kubernetes/example-guestbook-php-redis kube-02/172.18.0.14 name=frontend,uses=redisslave,redis-master Running -redis-master 10.2.1.3 master redis kube-01/172.18.0.13 name=redis-master Running -redis-slave-controller-fplln 10.2.2.3 slave brendanburns/redis-slave kube-02/172.18.0.14 name=redisslave,uses=redis-master Running -redis-slave-controller-gziey 10.2.1.4 slave brendanburns/redis-slave kube-01/172.18.0.13 name=redisslave,uses=redis-master Running - -``` - -## Scaling - -Two single-core nodes are certainly not enough for a production system of today, and, as you can see, there is one _unassigned_ pod. Let's scale the cluster by adding a couple of bigger nodes. - -You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/weave-demos/coreos-azure`). - -First, let's set the size of the new VMs: -``` -export AZ_VM_SIZE=Large -``` -Now, run the scale script with the state file of the previous deployment and the number of nodes to add: -``` -./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2 -... -azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf ` -azure_wrapper/info: The hosts in this deployment are: - [ 'etcd-00', - 'etcd-01', - 'etcd-02', - 'kube-00', - 'kube-01', - 'kube-02', - 'kube-03', - 'kube-04' ] -azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.yml` -``` -> Note: this step has created new files in `./output`. - -Back on `kube-00`: -``` -core@kube-00 ~ $ kubectl get nodes -NAME LABELS STATUS -kube-01 environment=production Ready -kube-02 environment=production Ready -kube-03 environment=production Ready -kube-04 environment=production Ready -``` - -You can see that two more nodes joined happily. Let's scale the number of Guestbook instances now. 
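Before bumping the replica counts, it can help to confirm which guestbook pods are still waiting for a node. A small sketch, based on the `kubectl get pods` output format shown earlier:

```bash
# List only the pods that are not yet Running; with two small nodes,
# the unassigned frontend pod should show up here.
kubectl get pods | grep -v Running
```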
- -First, double-check how many replication controllers there are: - -``` -core@kube-00 ~ $ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3 -redis-master master redis name=redis-master 1 -redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 2 -``` -As there are 4 nodes, let's scale proportionally: -``` -core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave -scaled -core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend -scaled -``` -Check what you have now: -``` -core@kube-00 ~ $ kubectl get rc -CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS -frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4 -redis-master master redis name=redis-master 1 -redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 4 -``` - -You will now have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node. - -``` -core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -frontend-controller-0133o 10.2.1.19 php-redis kubernetes/example-guestbook-php-redis kube-01/172.18.0.13 name=frontend,uses=redisslave,redis-master Running -frontend-controller-i7hvs 10.2.4.5 php-redis kubernetes/example-guestbook-php-redis kube-04/172.18.0.21 name=frontend,uses=redisslave,redis-master Running -frontend-controller-ls6k1 10.2.3.18 php-redis kubernetes/example-guestbook-php-redis kube-03/172.18.0.20 name=frontend,uses=redisslave,redis-master Running -frontend-controller-oh43e 10.2.2.22 php-redis kubernetes/example-guestbook-php-redis kube-02/172.18.0.14 name=frontend,uses=redisslave,redis-master Running -``` - -## Exposing the app to the outside world - -To make sure the app is working, you probably want to load it in the browser. To access the Guestbook service from the outside world, an Azure endpoint needs to be created, as shown in the picture below. - -![Creating an endpoint](external_access.png) - -You should then be able to access it from anywhere via the Azure virtual IP for `kube-01`, i.e. `http://104.40.211.194:8000/` as per the screenshot. - -## Next steps - -You now have a full-blown cluster running in Azure, congrats! - -You should probably try deploying other [example apps](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples) or write your own ;) - -## Tear down... - -If you don't want to keep paying the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see. - -``` -./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml -``` - -> Note: make sure to use the _latest state file_, as after scaling there is a new one. 
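Since every scale operation writes a new state file, a small guard like the following (a sketch, assuming the `./output/kube_*_deployment.yml` naming shown above) helps avoid tearing down with a stale one:

```bash
# Pass the most recently written deployment state file to the teardown script.
./destroy-cluster.js "$(ls -t ./output/kube_*_deployment.yml | head -n 1)"
```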
- -By the way, with the scripts shown, you can deploy multiple clusters, if you like :) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/azure/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/azure/README.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/grafana-service.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/grafana-service.yaml deleted file mode 100644 index 76e49087231..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/grafana-service.yaml +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "Grafana" - name: monitoring-grafana -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - name: influxGrafana - diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/heapster-controller.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/heapster-controller.yaml deleted file mode 100644 index bac59a62c7f..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/heapster-controller.yaml +++ /dev/null @@ -1,24 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - labels: - name: heapster - kubernetes.io/cluster-service: "true" - name: monitoring-heapster-controller -spec: - replicas: 1 - selector: - name: heapster - template: - metadata: - labels: - name: heapster - kubernetes.io/cluster-service: "true" - spec: - containers: - - image: gcr.io/google_containers/heapster:v0.12.1 - name: heapster - command: - - /heapster - - --source=kubernetes:http://kubernetes?auth= - - --sink=influxdb:http://monitoring-influxdb:8086 diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml deleted file mode 100644 index 92ee15d0c23..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml +++ /dev/null @@ -1,35 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - labels: - name: influxGrafana - kubernetes.io/cluster-service: "true" - name: monitoring-influx-grafana-controller -spec: - replicas: 1 - selector: - name: influxGrafana - template: - metadata: - labels: - name: influxGrafana - kubernetes.io/cluster-service: "true" - spec: - containers: - - image: gcr.io/google_containers/heapster_influxdb:v0.3 - name: influxdb - ports: - - containerPort: 8083 - hostPort: 8083 - - containerPort: 8086 - hostPort: 8086 - - image: gcr.io/google_containers/heapster_grafana:v0.7 - name: grafana - env: - - name: INFLUXDB_EXTERNAL_URL - value: /api/v1/proxy/namespaces/default/services/monitoring-grafana/db/ - - name: INFLUXDB_HOST - value: monitoring-influxdb - - name: INFLUXDB_PORT - value: "8086" - diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-service.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-service.yaml 
deleted file mode 100644 index 8301d782597..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/cluster-monitoring/influxdb/influxdb-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - labels: - name: influxGrafana - name: monitoring-influxdb -spec: - ports: - - name: http - port: 8083 - targetPort: 8083 - - name: api - port: 8086 - targetPort: 8086 - selector: - name: influxGrafana - diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-controller.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-controller.yaml deleted file mode 100644 index f4cda7b032a..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-controller.yaml +++ /dev/null @@ -1,37 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: elasticsearch-logging-v1 - namespace: default - labels: - k8s-app: elasticsearch-logging - version: v1 - kubernetes.io/cluster-service: "true" -spec: - replicas: 2 - selector: - k8s-app: elasticsearch-logging - version: v1 - template: - metadata: - labels: - k8s-app: elasticsearch-logging - version: v1 - kubernetes.io/cluster-service: "true" - spec: - containers: - - image: gcr.io/google_containers/elasticsearch:1.3 - name: elasticsearch-logging - ports: - - containerPort: 9200 - name: es-port - protocol: TCP - - containerPort: 9300 - name: es-transport-port - protocol: TCP - volumeMounts: - - name: es-persistent-storage - mountPath: /data - volumes: - - name: es-persistent-storage - emptyDir: {} diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-service.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-service.yaml deleted file mode 100644 index 3b7ae06e7aa..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/es-service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: elasticsearch-logging - namespace: default - labels: - k8s-app: elasticsearch-logging - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "Elasticsearch" -spec: - ports: - - port: 9200 - protocol: TCP - targetPort: es-port - selector: - k8s-app: elasticsearch-logging diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-controller.yaml b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-controller.yaml deleted file mode 100644 index 677bc5f664a..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-controller.yaml +++ /dev/null @@ -1,31 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: kibana-logging-v1 - namespace: default - labels: - k8s-app: kibana-logging - version: v1 - kubernetes.io/cluster-service: "true" -spec: - replicas: 1 - selector: - k8s-app: kibana-logging - version: v1 - template: - metadata: - labels: - k8s-app: kibana-logging - version: v1 - kubernetes.io/cluster-service: "true" - spec: - containers: - - name: kibana-logging - image: gcr.io/google_containers/kibana:1.3 - env: - - name: "ELASTICSEARCH_URL" - value: "http://elasticsearch-logging:9200" - ports: - - containerPort: 5601 - name: kibana-port - protocol: TCP diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-service.yaml 
b/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-service.yaml deleted file mode 100644 index ac9aa5ce320..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/addons/fluentd-elasticsearch/kibana-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ - -apiVersion: v1 -kind: Service -metadata: - name: kibana-logging - namespace: default - labels: - k8s-app: kibana-logging - kubernetes.io/cluster-service: "true" - kubernetes.io/name: "Kibana" -spec: - ports: - - port: 5601 - protocol: TCP - targetPort: kibana-port - selector: - k8s-app: kibana-logging diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/azure-login.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/azure-login.js deleted file mode 100755 index 624916b2b56..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/azure-login.js +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env node - -require('child_process').fork('node_modules/azure-cli/bin/azure', ['login'].concat(process.argv)); diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-etcd-node-template.yml b/release-0.20.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-etcd-node-template.yml deleted file mode 100644 index cb1c1b254dd..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-etcd-node-template.yml +++ /dev/null @@ -1,60 +0,0 @@ -## This file is used as input to deployment script, which ammends it as needed. -## More specifically, we need to add peer hosts for each but the elected peer. - -write_files: - - path: /opt/bin/curl-retry.sh - permissions: '0755' - owner: root - content: | - #!/bin/sh -x - until curl $@ - do sleep 1 - done - -coreos: - units: - - name: download-etcd2.service - enable: true - command: start - content: | - [Unit] - After=network-online.target - Before=etcd2.service - Description=Download etcd2 Binaries - Documentation=https://github.com/coreos/etcd/ - Requires=network-online.target - [Service] - Environment=ETCD2_RELEASE_TARBALL=https://github.com/coreos/etcd/releases/download/v2.0.11/etcd-v2.0.11-linux-amd64.tar.gz - ExecStartPre=/bin/mkdir -p /opt/bin - ExecStart=/opt/bin/curl-retry.sh --silent --location $ETCD2_RELEASE_TARBALL --output /tmp/etcd2.tgz - ExecStart=/bin/tar xzvf /tmp/etcd2.tgz -C /opt - ExecStartPost=/bin/ln -s /opt/etcd-v2.0.11-linux-amd64/etcd /opt/bin/etcd2 - ExecStartPost=/bin/ln -s /opt/etcd-v2.0.11-linux-amd64/etcdctl /opt/bin/etcdctl2 - RemainAfterExit=yes - Type=oneshot - [Install] - WantedBy=multi-user.target - - name: etcd2.service - enable: true - command: start - content: | - [Unit] - After=download-etcd2.service - Description=etcd 2 - Documentation=https://github.com/coreos/etcd/ - [Service] - Environment=ETCD_NAME=%H - Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster - Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://%H:2380 - Environment=ETCD_LISTEN_PEER_URLS=http://%H:2380 - Environment=ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001 - Environment=ETCD_ADVERTISE_CLIENT_URLS=http://%H:2379,http://%H:4001 - Environment=ETCD_INITIAL_CLUSTER_STATE=new - ExecStart=/opt/bin/etcd2 - Restart=always - RestartSec=10 - [Install] - WantedBy=multi-user.target - update: - group: stable - reboot-strategy: off diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml 
b/release-0.20.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml deleted file mode 100644 index 16638e87199..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml +++ /dev/null @@ -1,388 +0,0 @@ -## This file is used as input to deployment script, which ammends it as needed. -## More specifically, we need to add environment files for as many nodes as we -## are going to deploy. - -write_files: - - path: /opt/bin/curl-retry.sh - permissions: '0755' - owner: root - content: | - #!/bin/sh -x - until curl $@ - do sleep 1 - done - - - path: /opt/bin/register_minion.sh - permissions: '0755' - owner: root - content: | - #!/bin/sh -xe - minion_id="${1}" - master_url="${2}" - env_label="${3}" - until healthcheck=$(curl --fail --silent "${master_url}/healthz") - do sleep 2 - done - test -n "${healthcheck}" - test "${healthcheck}" = "ok" - printf '{ - "id": "%s", - "kind": "Minion", - "apiVersion": "v1beta1", - "labels": { "environment": "%s" } - }' "${minion_id}" "${env_label}" \ - | /opt/bin/kubectl create -s "${master_url}" -f - - - - path: /etc/kubernetes/manifests/fluentd.manifest - permissions: '0755' - owner: root - content: | - apiVersion: v1 - kind: Pod - metadata: - name: fluentd-elasticsearch - spec: - containers: - - name: fluentd-elasticsearch - image: gcr.io/google_containers/fluentd-elasticsearch:1.5 - env: - - name: "FLUENTD_ARGS" - value: "-qq" - volumeMounts: - - name: varlog - mountPath: /varlog - - name: containers - mountPath: /var/lib/docker/containers - volumes: - - name: varlog - hostPath: - path: /var/log - - name: containers - hostPath: - path: /var/lib/docker/containers - -coreos: - update: - group: stable - reboot-strategy: off - units: - - name: systemd-networkd-wait-online.service - drop-ins: - - name: 50-check-github-is-reachable.conf - content: | - [Service] - ExecStart=/bin/sh -x -c \ - 'until curl --silent --fail https://status.github.com/api/status.json | grep -q \"good\"; do sleep 2; done' - - - name: docker.service - drop-ins: - - name: 50-weave-kubernetes.conf - content: | - [Service] - Environment=DOCKER_OPTS='--bridge="weave" -r="false"' - - - name: weave-network.target - enable: true - content: | - [Unit] - Description=Weave Network Setup Complete - Documentation=man:systemd.special(7) - RefuseManualStart=no - After=network-online.target - [Install] - WantedBy=multi-user.target - WantedBy=kubernetes-master.target - WantedBy=kubernetes-minion.target - - - name: kubernetes-master.target - enable: true - command: start - content: | - [Unit] - Description=Kubernetes Cluster Master - Documentation=http://kubernetes.io/ - RefuseManualStart=no - After=weave-network.target - Requires=weave-network.target - ConditionHost=kube-00 - Wants=apiserver.service - Wants=scheduler.service - Wants=controller-manager.service - [Install] - WantedBy=multi-user.target - - - name: kubernetes-minion.target - enable: true - command: start - content: | - [Unit] - Description=Kubernetes Cluster Minion - Documentation=http://kubernetes.io/ - RefuseManualStart=no - After=weave-network.target - Requires=weave-network.target - ConditionHost=!kube-00 - Wants=proxy.service - Wants=kubelet.service - [Install] - WantedBy=multi-user.target - - - name: 10-weave.network - runtime: false - content: | - [Match] - Type=bridge - Name=weave* - [Network] - - - name: install-weave.service - enable: true - content: | - [Unit] - After=network-online.target - 
Before=weave.service - Before=weave-helper.service - Before=docker.service - Description=Install Weave - Documentation=http://docs.weave.works/ - Requires=network-online.target - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStartPre=/bin/mkdir -p /opt/bin/ - ExecStartPre=/opt/bin/curl-retry.sh \ - --silent \ - --location \ - https://github.com/weaveworks/weave/releases/download/latest_release/weave \ - --output /opt/bin/weave - ExecStartPre=/opt/bin/curl-retry.sh \ - --silent \ - --location \ - https://raw.github.com/errordeveloper/weave-demos/master/poseidon/weave-helper \ - --output /opt/bin/weave-helper - ExecStartPre=/usr/bin/chmod +x /opt/bin/weave - ExecStartPre=/usr/bin/chmod +x /opt/bin/weave-helper - ExecStart=/bin/echo Weave Installed - [Install] - WantedBy=weave-network.target - WantedBy=weave.service - - - name: weave-helper.service - enable: true - content: | - [Unit] - After=install-weave.service - After=docker.service - Description=Weave Network Router - Documentation=http://docs.weave.works/ - Requires=docker.service - Requires=install-weave.service - [Service] - ExecStart=/opt/bin/weave-helper - Restart=always - [Install] - WantedBy=weave-network.target - - - name: weave.service - enable: true - content: | - [Unit] - After=install-weave.service - After=docker.service - Description=Weave Network Router - Documentation=http://docs.weave.works/ - Requires=docker.service - Requires=install-weave.service - [Service] - TimeoutStartSec=0 - EnvironmentFile=/etc/weave.%H.env - ExecStartPre=/opt/bin/weave setup - ExecStartPre=/opt/bin/weave launch $WEAVE_PEERS - ExecStart=/usr/bin/docker attach weave - Restart=on-failure - Restart=always - ExecStop=/opt/bin/weave stop - [Install] - WantedBy=weave-network.target - - - name: weave-create-bridge.service - enable: true - content: | - [Unit] - After=network.target - After=install-weave.service - Before=weave.service - Before=docker.service - Requires=network.target - Requires=install-weave.service - [Service] - Type=oneshot - EnvironmentFile=/etc/weave.%H.env - ExecStart=/opt/bin/weave --local create-bridge - ExecStart=/usr/bin/ip addr add dev weave $BRIDGE_ADDRESS_CIDR - ExecStart=/usr/bin/ip route add $BREAKOUT_ROUTE dev weave scope link - ExecStart=/usr/bin/ip route add 224.0.0.0/4 dev weave - [Install] - WantedBy=multi-user.target - WantedBy=weave-network.target - - - name: download-kubernetes.service - enable: true - content: | - [Unit] - After=network-online.target - Before=apiserver.service - Before=controller-manager.service - Before=kubelet.service - Before=proxy.service - Description=Download Kubernetes Binaries - Documentation=http://kubernetes.io/ - Requires=network-online.target - [Service] - Environment=KUBE_RELEASE_TARBALL=https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.18.0/kubernetes.tar.gz - ExecStartPre=/bin/mkdir -p /opt/ - ExecStart=/opt/bin/curl-retry.sh --silent --location $KUBE_RELEASE_TARBALL --output /tmp/kubernetes.tgz - ExecStart=/bin/tar xzvf /tmp/kubernetes.tgz -C /tmp/ - ExecStart=/bin/tar xzvf /tmp/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /opt - ExecStartPost=/bin/chmod o+rx -R /opt/kubernetes - ExecStartPost=/bin/ln -s /opt/kubernetes/server/bin/kubectl /opt/bin/ - ExecStartPost=/bin/mv /tmp/kubernetes/examples/guestbook /home/core/guestbook-example - ExecStartPost=/bin/chown core. 
-R /home/core/guestbook-example - ExecStartPost=/bin/rm -rf /tmp/kubernetes - ExecStartPost=/bin/sed 's/\("createExternalLoadBalancer":\) true/\1 false/' -i /home/core/guestbook-example/frontend-service.json - RemainAfterExit=yes - Type=oneshot - [Install] - WantedBy=kubernetes-master.target - WantedBy=kubernetes-minion.target - - - name: apiserver.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - Before=controller-manager.service - Before=scheduler.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-apiserver - Description=Kubernetes API Server - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-apiserver \ - --address=0.0.0.0 \ - --port=8080 \ - $ETCD_SERVERS \ - --service-cluster-ip-range=10.1.0.0/16 \ - --cloud_provider=vagrant \ - --logtostderr=true --v=3 - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-master.target - - - name: scheduler.service - enable: true - content: | - [Unit] - After=apiserver.service - After=download-kubernetes.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-scheduler - Description=Kubernetes Scheduler - Documentation=http://kubernetes.io/ - Wants=apiserver.service - ConditionHost=kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-scheduler \ - --logtostderr=true \ - --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-master.target - - - name: controller-manager.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - After=apiserver.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-controller-manager - Description=Kubernetes Controller Manager - Documentation=http://kubernetes.io/ - Wants=apiserver.service - Wants=download-kubernetes.service - ConditionHost=kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-controller-manager \ - --cloud_provider=vagrant \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-master.target - - - name: kubelet.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubelet - Description=Kubernetes Kubelet - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=!kube-00 - [Service] - ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests/ - ExecStart=/opt/kubernetes/server/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=%H \ - --api_servers=http://kube-00:8080 \ - --logtostderr=true \ - --cluster_dns=10.1.0.3 \ - --cluster_domain=kube.local \ - --config=/etc/kubernetes/manifests/ - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-minion.target - - - name: proxy.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - ConditionFileIsExecutable=/opt/kubernetes/server/bin/kube-proxy - Description=Kubernetes Proxy - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=!kube-00 - [Service] - ExecStart=/opt/kubernetes/server/bin/kube-proxy \ - --master=http://kube-00:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - [Install] - WantedBy=kubernetes-minion.target - - - name: kubectl-create-minion.service - enable: true - content: | - [Unit] - After=download-kubernetes.service - Before=proxy.service - Before=kubelet.service - 
ConditionFileIsExecutable=/opt/kubernetes/server/bin/kubectl - ConditionFileIsExecutable=/opt/bin/register_minion.sh - Description=Kubernetes Create Minion - Documentation=http://kubernetes.io/ - Wants=download-kubernetes.service - ConditionHost=!kube-00 - [Service] - ExecStart=/opt/bin/register_minion.sh %H http://kube-00:8080 production - Type=oneshot - [Install] - WantedBy=kubernetes-minion.target diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/create-kubernetes-cluster.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/create-kubernetes-cluster.js deleted file mode 100755 index 70248c596c6..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/create-kubernetes-cluster.js +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env node - -var azure = require('./lib/azure_wrapper.js'); -var kube = require('./lib/deployment_logic/kubernetes.js'); - -azure.create_config('kube', { 'etcd': 3, 'kube': 3 }); - -azure.run_task_queue([ - azure.queue_default_network(), - azure.queue_storage_if_needed(), - azure.queue_machines('etcd', 'stable', - kube.create_etcd_cloud_config), - azure.queue_machines('kube', 'stable', - kube.create_node_cloud_config), -]); diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/destroy-cluster.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/destroy-cluster.js deleted file mode 100755 index ce441e538a5..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/destroy-cluster.js +++ /dev/null @@ -1,7 +0,0 @@ -#!/usr/bin/env node - -var azure = require('./lib/azure_wrapper.js'); - -azure.destroy_cluster(process.argv[2]); - -console.log('The cluster had been destroyed, you can delete the state file now.'); diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/external_access.png b/release-0.20.0/docs/getting-started-guides/coreos/azure/external_access.png deleted file mode 100644 index 6541309b0ac..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/coreos/azure/external_access.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/initial_cluster.png b/release-0.20.0/docs/getting-started-guides/coreos/azure/initial_cluster.png deleted file mode 100644 index 99646a3fd06..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/coreos/azure/initial_cluster.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js deleted file mode 100644 index 8f48b25181a..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/azure_wrapper.js +++ /dev/null @@ -1,271 +0,0 @@ -var _ = require('underscore'); - -var fs = require('fs'); -var cp = require('child_process'); - -var yaml = require('js-yaml'); - -var openssl = require('openssl-wrapper'); - -var clr = require('colors'); -var inspect = require('util').inspect; - -var util = require('./util.js'); - -var coreos_image_ids = { - 'stable': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Stable-647.2.0', - 'beta': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Beta-681.0.0', // untested - 'alpha': '2b171e93f07c4903bcad35bda10acf22__CoreOS-Alpha-695.0.0' // untested -}; - -var conf = {}; - -var hosts = { - collection: [], - ssh_port_counter: 2200, -}; - -var task_queue = []; - -exports.run_task_queue = function (dummy) { - var tasks = { - todo: task_queue, - done: [], - }; - - var pop_task = function() { - 
console.log(clr.yellow('azure_wrapper/task:'), clr.grey(inspect(tasks))); - var ret = {}; - ret.current = tasks.todo.shift(); - ret.remaining = tasks.todo.length; - return ret; - }; - - (function iter (task) { - if (task.current === undefined) { - if (conf.destroying === undefined) { - create_ssh_conf(); - save_state(); - } - return; - } else { - if (task.current.length !== 0) { - console.log(clr.yellow('azure_wrapper/exec:'), clr.blue(inspect(task.current))); - cp.fork('node_modules/azure-cli/bin/azure', task.current) - .on('exit', function (code, signal) { - tasks.done.push({ - code: code, - signal: signal, - what: task.current.join(' '), - remaining: task.remaining, - }); - if (code !== 0 && conf.destroying === undefined) { - console.log(clr.red('azure_wrapper/fail: Exiting due to an error.')); - save_state(); - console.log(clr.cyan('azure_wrapper/info: You probably want to destroy and re-run.')); - process.abort(); - } else { - iter(pop_task()); - } - }); - } else { - iter(pop_task()); - } - } - })(pop_task()); -}; - -var save_state = function () { - var file_name = util.join_output_file_path(conf.name, 'deployment.yml'); - try { - conf.hosts = hosts.collection; - fs.writeFileSync(file_name, yaml.safeDump(conf)); - console.log(clr.yellow('azure_wrapper/info: Saved state into `%s`'), file_name); - } catch (e) { - console.log(clr.red(e)); - } -}; - -var load_state = function (file_name) { - try { - conf = yaml.safeLoad(fs.readFileSync(file_name, 'utf8')); - console.log(clr.yellow('azure_wrapper/info: Loaded state from `%s`'), file_name); - return conf; - } catch (e) { - console.log(clr.red(e)); - } -}; - -var create_ssh_key = function (prefix) { - var opts = { - x509: true, - nodes: true, - newkey: 'rsa:2048', - subj: '/O=Weaveworks, Inc./L=London/C=GB/CN=weave.works', - keyout: util.join_output_file_path(prefix, 'ssh.key'), - out: util.join_output_file_path(prefix, 'ssh.pem'), - }; - openssl.exec('req', opts, function (err, buffer) { - if (err) console.log(clr.red(err)); - fs.chmod(opts.keyout, '0600', function (err) { - if (err) console.log(clr.red(err)); - }); - }); - return { - key: opts.keyout, - pem: opts.out, - } -} - -var create_ssh_conf = function () { - var file_name = util.join_output_file_path(conf.name, 'ssh_conf'); - var ssh_conf_head = [ - "Host *", - "\tHostname " + conf.resources['service'] + ".cloudapp.net", - "\tUser core", - "\tCompression yes", - "\tLogLevel FATAL", - "\tStrictHostKeyChecking no", - "\tUserKnownHostsFile /dev/null", - "\tIdentitiesOnly yes", - "\tIdentityFile " + conf.resources['ssh_key']['key'], - "\n", - ]; - - fs.writeFileSync(file_name, ssh_conf_head.concat(_.map(hosts.collection, function (host) { - return _.template("Host <%= name %>\n\tPort <%= port %>\n")(host); - })).join('\n')); - console.log(clr.yellow('azure_wrapper/info:'), clr.green('Saved SSH config, you can use it like so: `ssh -F ', file_name, '`')); - console.log(clr.yellow('azure_wrapper/info:'), clr.green('The hosts in this deployment are:\n'), _.map(hosts.collection, function (host) { return host.name; })); -}; - -var get_location = function () { - if (process.env['AZ_AFFINITY']) { - return '--affinity-group=' + process.env['AZ_AFFINITY']; - } else if (process.env['AZ_LOCATION']) { - return '--location=' + process.env['AZ_LOCATION']; - } else { - return '--location=West Europe'; - } -} -var get_vm_size = function () { - if (process.env['AZ_VM_SIZE']) { - return '--vm-size=' + process.env['AZ_VM_SIZE']; - } else { - return '--vm-size=Small'; - } -} - -exports.queue_default_network 
= function () { - task_queue.push([ - 'network', 'vnet', 'create', - get_location(), - '--address-space=172.16.0.0', - conf.resources['vnet'], - ]); -} - -exports.queue_storage_if_needed = function() { - if (!process.env['AZURE_STORAGE_ACCOUNT']) { - conf.resources['storage_account'] = util.rand_suffix; - task_queue.push([ - 'storage', 'account', 'create', - '--type=LRS', - get_location(), - conf.resources['storage_account'], - ]); - process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account']; - } else { - // Preserve it for resizing, so we don't create a new one by accedent, - // when the environment variable is unset - conf.resources['storage_account'] = process.env['AZURE_STORAGE_ACCOUNT']; - } -}; - -exports.queue_machines = function (name_prefix, coreos_update_channel, cloud_config_creator) { - var x = conf.nodes[name_prefix]; - var vm_create_base_args = [ - 'vm', 'create', - get_location(), - get_vm_size(), - '--connect=' + conf.resources['service'], - '--virtual-network-name=' + conf.resources['vnet'], - '--no-ssh-password', - '--ssh-cert=' + conf.resources['ssh_key']['pem'], - ]; - - var cloud_config = cloud_config_creator(x, conf); - - var next_host = function (n) { - hosts.ssh_port_counter += 1; - var host = { name: util.hostname(n, name_prefix), port: hosts.ssh_port_counter }; - if (cloud_config instanceof Array) { - host.cloud_config_file = cloud_config[n]; - } else { - host.cloud_config_file = cloud_config; - } - hosts.collection.push(host); - return _.map([ - "--vm-name=<%= name %>", - "--ssh=<%= port %>", - "--custom-data=<%= cloud_config_file %>", - ], function (arg) { return _.template(arg)(host); }); - }; - - task_queue = task_queue.concat(_(x).times(function (n) { - if (conf.resizing && n < conf.old_size) { - return []; - } else { - return vm_create_base_args.concat(next_host(n), [ - coreos_image_ids[coreos_update_channel], 'core', - ]); - } - })); -}; - -exports.create_config = function (name, nodes) { - conf = { - name: name, - nodes: nodes, - weave_salt: util.rand_string(), - resources: { - vnet: [name, 'internal-vnet', util.rand_suffix].join('-'), - service: [name, util.rand_suffix].join('-'), - ssh_key: create_ssh_key(name), - } - }; - -}; - -exports.destroy_cluster = function (state_file) { - load_state(state_file); - if (conf.hosts === undefined) { - console.log(clr.red('azure_wrapper/fail: Nothing to delete.')); - process.abort(); - } - - conf.destroying = true; - task_queue = _.map(conf.hosts, function (host) { - return ['vm', 'delete', '--quiet', '--blob-delete', host.name]; - }); - - task_queue.push(['network', 'vnet', 'delete', '--quiet', conf.resources['vnet']]); - task_queue.push(['storage', 'account', 'delete', '--quiet', conf.resources['storage_account']]); - - exports.run_task_queue(); -}; - -exports.load_state_for_resizing = function (state_file, node_type, new_nodes) { - load_state(state_file); - if (conf.hosts === undefined) { - console.log(clr.red('azure_wrapper/fail: Nothing to look at.')); - process.abort(); - } - conf.resizing = true; - conf.old_size = conf.nodes[node_type]; - conf.old_state_file = state_file; - conf.nodes[node_type] += new_nodes; - hosts.collection = conf.hosts; - hosts.ssh_port_counter += conf.hosts.length; - process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account']; -} diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/cloud_config.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/cloud_config.js deleted file mode 100644 index 75cff6cf2db..00000000000 --- 
a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/cloud_config.js +++ /dev/null @@ -1,43 +0,0 @@ -var _ = require('underscore'); -var fs = require('fs'); -var yaml = require('js-yaml'); -var colors = require('colors/safe'); - - -var write_cloud_config_from_object = function (data, output_file) { - try { - fs.writeFileSync(output_file, [ - '#cloud-config', - yaml.safeDump(data), - ].join("\n")); - return output_file; - } catch (e) { - console.log(colors.red(e)); - } -}; - -exports.generate_environment_file_entry_from_object = function (hostname, environ) { - var data = { - hostname: hostname, - environ_array: _.map(environ, function (value, key) { - return [key.toUpperCase(), JSON.stringify(value.toString())].join('='); - }), - }; - - return { - permissions: '0600', - owner: 'root', - content: _.template("<%= environ_array.join('\\n') %>\n")(data), - path: _.template("/etc/weave.<%= hostname %>.env")(data), - }; -}; - -exports.process_template = function (input_file, output_file, processor) { - var data = {}; - try { - data = yaml.safeLoad(fs.readFileSync(input_file, 'utf8')); - } catch (e) { - console.log(colors.red(e)); - } - return write_cloud_config_from_object(processor(_.clone(data)), output_file); -}; diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/deployment_logic/kubernetes.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/deployment_logic/kubernetes.js deleted file mode 100644 index e497a55708d..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/deployment_logic/kubernetes.js +++ /dev/null @@ -1,76 +0,0 @@ -var _ = require('underscore'); -_.mixin(require('underscore.string').exports()); - -var util = require('../util.js'); -var cloud_config = require('../cloud_config.js'); - - -etcd_initial_cluster_conf_self = function (conf) { - var port = '2380'; - - var data = { - nodes: _(conf.nodes.etcd).times(function (n) { - var host = util.hostname(n, 'etcd'); - return [host, [host, port].join(':')].join('=http://'); - }), - }; - - return { - 'name': 'etcd2.service', - 'drop-ins': [{ - 'name': '50-etcd-initial-cluster.conf', - 'content': _.template("[Service]\nEnvironment=ETCD_INITIAL_CLUSTER=<%= nodes.join(',') %>\n")(data), - }], - }; -}; - -etcd_initial_cluster_conf_kube = function (conf) { - var port = '4001'; - - var data = { - nodes: _(conf.nodes.etcd).times(function (n) { - var host = util.hostname(n, 'etcd'); - return 'http://' + [host, port].join(':'); - }), - }; - - return { - 'name': 'apiserver.service', - 'drop-ins': [{ - 'name': '50-etcd-initial-cluster.conf', - 'content': _.template("[Service]\nEnvironment=ETCD_SERVERS=--etcd_servers=<%= nodes.join(',') %>\n")(data), - }], - }; -}; - -exports.create_etcd_cloud_config = function (node_count, conf) { - var input_file = './cloud_config_templates/kubernetes-cluster-etcd-node-template.yml'; - var output_file = util.join_output_file_path('kubernetes-cluster-etcd-nodes', 'generated.yml'); - - return cloud_config.process_template(input_file, output_file, function(data) { - data.coreos.units.push(etcd_initial_cluster_conf_self(conf)); - return data; - }); -}; - -exports.create_node_cloud_config = function (node_count, conf) { - var elected_node = 0; - - var input_file = './cloud_config_templates/kubernetes-cluster-main-nodes-template.yml'; - var output_file = util.join_output_file_path('kubernetes-cluster-main-nodes', 'generated.yml'); - - var make_node_config = function (n) { - return 
cloud_config.generate_environment_file_entry_from_object(util.hostname(n, 'kube'), { - weave_password: conf.weave_salt, - weave_peers: n === elected_node ? "" : util.hostname(elected_node, 'kube'), - breakout_route: util.ipv4([10, 2, 0, 0], 16), - bridge_address_cidr: util.ipv4([10, 2, n, 1], 24), - }); - }; - - return cloud_config.process_template(input_file, output_file, function(data) { - data.write_files = data.write_files.concat(_(node_count).times(make_node_config)); - data.coreos.units.push(etcd_initial_cluster_conf_kube(conf)); - return data; - }); -}; diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/util.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/util.js deleted file mode 100644 index 2c88b8cff35..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/lib/util.js +++ /dev/null @@ -1,33 +0,0 @@ -var _ = require('underscore'); -_.mixin(require('underscore.string').exports()); - -exports.ipv4 = function (ocets, prefix) { - return { - ocets: ocets, - prefix: prefix, - toString: function () { - return [ocets.join('.'), prefix].join('/'); - } - } -}; - -exports.hostname = function hostname (n, prefix) { - return _.template("<%= pre %>-<%= seq %>")({ - pre: prefix || 'core', - seq: _.pad(n, 2, '0'), - }); -}; - -exports.rand_string = function () { - var crypto = require('crypto'); - var shasum = crypto.createHash('sha256'); - shasum.update(crypto.randomBytes(256)); - return shasum.digest('hex'); -}; - - -exports.rand_suffix = exports.rand_string().substring(50); - -exports.join_output_file_path = function(prefix, suffix) { - return './output/' + [prefix, exports.rand_suffix, suffix].join('_'); -}; diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/package.json b/release-0.20.0/docs/getting-started-guides/coreos/azure/package.json deleted file mode 100644 index 2eb45fd03ff..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/package.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "name": "coreos-azure-weave", - "version": "1.0.0", - "description": "Small utility to bring up a woven CoreOS cluster", - "main": "index.js", - "scripts": { - "test": "echo \"Error: no test specified\" && exit 1" - }, - "author": "Ilya Dmitrichenko ", - "license": "Apache 2.0", - "dependencies": { - "azure-cli": "^0.9.2", - "colors": "^1.0.3", - "js-yaml": "^3.2.5", - "openssl-wrapper": "^0.2.1", - "underscore": "^1.7.0", - "underscore.string": "^3.0.2" - } -} diff --git a/release-0.20.0/docs/getting-started-guides/coreos/azure/scale-kubernetes-cluster.js b/release-0.20.0/docs/getting-started-guides/coreos/azure/scale-kubernetes-cluster.js deleted file mode 100755 index f606898874c..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/azure/scale-kubernetes-cluster.js +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env node - -var azure = require('./lib/azure_wrapper.js'); -var kube = require('./lib/deployment_logic/kubernetes.js'); - -azure.load_state_for_resizing(process.argv[2], 'kube', parseInt(process.argv[3] || 1)); - -azure.run_task_queue([ - azure.queue_machines('kube', 'stable', kube.create_node_cloud_config), -]); diff --git a/release-0.20.0/docs/getting-started-guides/coreos/bare_metal_offline.md b/release-0.20.0/docs/getting-started-guides/coreos/bare_metal_offline.md deleted file mode 100644 index 00182e01562..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/bare_metal_offline.md +++ /dev/null @@ -1,663 +0,0 @@ -Bare Metal CoreOS with Kubernetes (OFFLINE) 
------------------------------------------- -Deploy a CoreOS running Kubernetes environment. This particular guild is made to help those in an OFFLINE system, wither for testing a POC before the real deal, or you are restricted to be totally offline for your applications. - -**Table of Contents** - -- [Prerequisites](#prerequisites) -- [High Level Design](#high-level-design) -- [This Guides variables](#this-guides-variables) -- [Setup PXELINUX CentOS](#setup-pxelinux-centos) -- [Adding CoreOS to PXE](#adding-coreos-to-pxe) -- [DHCP configuration](#dhcp-configuration) -- [Kubernetes](#kubernetes) -- [Cloud Configs](#cloud-configs) - - [master.yml](#masteryml) - - [node.yml](#nodeyml) -- [New pxelinux.cfg file](#new-pxelinuxcfg-file) -- [Specify the pxelinux targets](#specify-the-pxelinux-targets) -- [Creating test pod](#creating-test-pod) -- [Helping commands for debugging](#helping-commands-for-debugging) - - -## Prerequisites -1. Installed *CentOS 6* for PXE server -2. At least two bare metal nodes to work with - -## High Level Design -1. Manage the tftp directory - * /tftpboot/(coreos)(centos)(RHEL) - * /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file -2. Update per install the link for pxelinux -3. Update the DHCP config to reflect the host needing deployment -4. Setup nodes to deploy CoreOS creating a etcd cluster. -5. Have no access to the public [etcd discovery tool](https://discovery.etcd.io/). -6. Installing the CoreOS slaves to become Kubernetes minions. - -## This Guides variables -| Node Description | MAC | IP | -| :---------------------------- | :---------------: | :---------: | -| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 | -| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 | -| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 | - - -## Setup PXELINUX CentOS -To setup CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version. - -1. Install packages needed on CentOS - - sudo yum install tftp-server dhcp syslinux - -2. ```vi /etc/xinetd.d/tftp``` to enable tftp service and change disable to 'no' - disable = no - -3. Copy over the syslinux images we will need. - - su - - mkdir -p /tftpboot - cd /tftpboot - cp /usr/share/syslinux/pxelinux.0 /tftpboot - cp /usr/share/syslinux/menu.c32 /tftpboot - cp /usr/share/syslinux/memdisk /tftpboot - cp /usr/share/syslinux/mboot.c32 /tftpboot - cp /usr/share/syslinux/chain.c32 /tftpboot - - /sbin/service dhcpd start - /sbin/service xinetd start - /sbin/chkconfig tftp on - -4. Setup default boot menu - - mkdir /tftpboot/pxelinux.cfg - touch /tftpboot/pxelinux.cfg/default - -5. Edit the menu ```vi /tftpboot/pxelinux.cfg/default``` - - default menu.c32 - prompt 0 - timeout 15 - ONTIMEOUT local - display boot.msg - - MENU TITLE Main Menu - - LABEL local - MENU LABEL Boot local hard drive - LOCALBOOT 0 - -Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers. - -## Adding CoreOS to PXE -This section describes how to setup the CoreOS images to live alongside a pre-existing PXELINUX environment. - -1. Find or create the TFTP root directory that everything will be based off of. - * For this document we will assume ```/tftpboot/``` is our root directory. -2. Once we know and have our tftp root directory we will create a new directory structure for our CoreOS images. -3. 
Download the CoreOS PXE files provided by the CoreOS team. - - MY_TFTPROOT_DIR=/tftpboot - mkdir -p $MY_TFTPROOT_DIR/images/coreos/ - cd $MY_TFTPROOT_DIR/images/coreos/ - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz - wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig - gpg --verify coreos_production_pxe.vmlinuz.sig - gpg --verify coreos_production_pxe_image.cpio.gz.sig - -4. Edit the menu ```vi /tftpboot/pxelinux.cfg/default``` again - - default menu.c32 - prompt 0 - timeout 300 - ONTIMEOUT local - display boot.msg - - MENU TITLE Main Menu - - LABEL local - MENU LABEL Boot local hard drive - LOCALBOOT 0 - - MENU BEGIN CoreOS Menu - - LABEL coreos-master - MENU LABEL CoreOS Master - KERNEL images/coreos/coreos_production_pxe.vmlinuz - APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///pxe-cloud-config-single-master.yml - - LABEL coreos-slave - MENU LABEL CoreOS Slave - KERNEL images/coreos/coreos_production_pxe.vmlinuz - APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///pxe-cloud-config-slave.yml - MENU END - -This configuration file will now boot from local drive but have the option to PXE image CoreOS. - -## DHCP configuration -This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images. - -1. Add the ```filename``` to the _host_ or _subnet_ sections. - - filename "/tftpboot/pxelinux.0"; - -2. At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments. - - subnet 10.20.30.0 netmask 255.255.255.0 { - next-server 10.20.30.242; - option broadcast-address 10.20.30.255; - filename ""; - - ... - # http://www.syslinux.org/wiki/index.php/PXELINUX - host core_os_master { - hardware ethernet d0:00:67:13:0d:00; - option routers 10.20.30.1; - fixed-address 10.20.30.40; - option domain-name-servers 10.20.30.242; - filename "/pxelinux.0"; - } - host core_os_slave { - hardware ethernet d0:00:67:13:0d:01; - option routers 10.20.30.1; - fixed-address 10.20.30.41; - option domain-name-servers 10.20.30.242; - filename "/pxelinux.0"; - } - host core_os_slave2 { - hardware ethernet d0:00:67:13:0d:02; - option routers 10.20.30.1; - fixed-address 10.20.30.42; - option domain-name-servers 10.20.30.242; - filename "/pxelinux.0"; - } - ... - } - -We will be specifying the node configuration later in the guide. - -## Kubernetes -To deploy our configuration we need to create an ```etcd``` master. To do so we want to pxe CoreOS with a specific cloud-config.yml. There are two options we have here. -1. Is to template the cloud config file and programmatically create new static configs for different cluster setups. -2. Have a service discovery protocol running in our stack to do auto discovery. - -This demo we just make a static single ```etcd``` server to host our Kubernetes and ```etcd``` master servers. - -Since we are OFFLINE here most of the helping processes in CoreOS and Kubernetes are then limited. To do our setup we will then have to download and serve up our binaries for Kubernetes in our local environment. 
- -An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines. - -To get this up and running we are going to setup a simple ```apache``` server to serve our binaries needed to bootstrap Kubernetes. - -This is on the PXE server from the previous section: - - rm /etc/httpd/conf.d/welcome.conf - cd /var/www/html/ - wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64 - wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate - wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate - wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate - -This sets up our binaries we need to run Kubernetes. This would need to be enhanced to download from the Internet for updates in the future. - -Now for the good stuff! - -## Cloud Configs -The following config files are tailored for the OFFLINE version of a Kubernetes deployment. - -These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml) - -To make the setup work, you need to replace a few placeholders: - - - Replace `` with your PXE server ip address (e.g. 10.20.30.242) - - Replace `` with the kubernetes master ip address (e.g. 10.20.30.40) - - If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name. - - If you use a proxy, replace `rproxy.example.com` with your proxy server (and port) - - Add your own SSH public key(s) to the cloud config at the end - -### master.yml -On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-cloud-config-master.yml```. - - - #cloud-config - --- - write_files: - - path: /opt/bin/waiter.sh - owner: root - content: | - #! /usr/bin/bash - until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done - - path: /opt/bin/kubernetes-download.sh - owner: root - permissions: 0755 - content: | - #! /usr/bin/bash - /usr/bin/wget -N -P "/opt/bin" "http:///kubectl" - /usr/bin/wget -N -P "/opt/bin" "http:///kubernetes" - /usr/bin/wget -N -P "/opt/bin" "http:///kubecfg" - chmod +x /opt/bin/* - - path: /etc/profile.d/opt-path.sh - owner: root - permissions: 0755 - content: | - #! 
/usr/bin/bash - PATH=$PATH/opt/bin - coreos: - units: - - name: 10-eno1.network - runtime: true - content: | - [Match] - Name=eno1 - [Network] - DHCP=yes - - name: 20-nodhcp.network - runtime: true - content: | - [Match] - Name=en* - [Network] - DHCP=none - - name: get-kube-tools.service - runtime: true - command: start - content: | - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStart=/opt/bin/kubernetes-download.sh - RemainAfterExit=yes - Type=oneshot - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: etcd.service - command: start - content: | - [Unit] - Description=etcd - Requires=setup-network-environment.service - After=setup-network-environment.service - [Service] - EnvironmentFile=/etc/network-environment - User=etcd - PermissionsStartOnly=true - ExecStart=/usr/bin/etcd \ - --name ${DEFAULT_IPV4} \ - --addr ${DEFAULT_IPV4}:4001 \ - --bind-addr 0.0.0.0 \ - --cluster-active-size 1 \ - --data-dir /var/lib/etcd \ - --http-read-timeout 86400 \ - --peer-addr ${DEFAULT_IPV4}:7001 \ - --snapshot true - Restart=always - RestartSec=10s - - name: fleet.socket - command: start - content: | - [Socket] - ListenStream=/var/run/fleet.sock - - name: fleet.service - command: start - content: | - [Unit] - Description=fleet daemon - Wants=etcd.service - After=etcd.service - Wants=fleet.socket - After=fleet.socket - [Service] - Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001" - Environment="FLEET_METADATA=role=master" - ExecStart=/usr/bin/fleetd - Restart=always - RestartSec=10s - - name: etcd-waiter.service - command: start - content: | - [Unit] - Description=etcd waiter - Wants=network-online.target - Wants=etcd.service - After=etcd.service - After=network-online.target - Before=flannel.service - Before=setup-network-environment.service - [Service] - ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh - ExecStart=/usr/bin/bash /opt/bin/waiter.sh - RemainAfterExit=true - Type=oneshot - - name: flannel.service - command: start - content: | - [Unit] - Wants=etcd-waiter.service - After=etcd-waiter.service - Requires=etcd.service - After=etcd.service - After=network-online.target - Wants=network-online.target - Description=flannel is an etcd backed overlay network for containers - [Service] - Type=notify - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///flanneld - ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld - ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}' - ExecStart=/opt/bin/flanneld - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd.service - After=etcd.service - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --address=0.0.0.0 \ - --port=8080 \ - 
--service-cluster-ip-range=10.100.0.0/16 \ - --etcd_servers=http://127.0.0.1:4001 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-register.service - command: start - content: | - [Unit] - Description=Kubernetes Registration Service - Documentation=https://github.com/kelseyhightower/kube-register - Requires=kube-apiserver.service - After=kube-apiserver.service - Requires=fleet.service - After=fleet.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-register - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register - ExecStart=/opt/bin/kube-register \ - --metadata=role=node \ - --fleet-endpoint=unix:///var/run/fleet.sock \ - --healthz-port=10248 \ - --api-endpoint=http://127.0.0.1:8080 - Restart=always - RestartSec=10 - update: - group: stable - reboot-strategy: off - ssh_authorized_keys: - - ssh-rsa AAAAB3NzaC1yc2EAAAAD... - - -### node.yml -On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-cloud-config-slave.yml```. 
- - #cloud-config - --- - write_files: - - path: /etc/default/docker - content: | - DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"' - coreos: - units: - - name: 10-eno1.network - runtime: true - content: | - [Match] - Name=eno1 - [Network] - DHCP=yes - - name: 20-nodhcp.network - runtime: true - content: | - [Match] - Name=en* - [Network] - DHCP=none - - name: etcd.service - mask: true - - name: docker.service - drop-ins: - - name: 50-insecure-registry.conf - content: | - [Service] - Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com" - - name: fleet.service - command: start - content: | - [Unit] - Description=fleet daemon - Wants=fleet.socket - After=fleet.socket - [Service] - Environment="FLEET_ETCD_SERVERS=http://:4001" - Environment="FLEET_METADATA=role=node" - ExecStart=/usr/bin/fleetd - Restart=always - RestartSec=10s - - name: flannel.service - command: start - content: | - [Unit] - After=network-online.target - Wants=network-online.target - Description=flannel is an etcd backed overlay network for containers - [Service] - Type=notify - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///flanneld - ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld - ExecStart=/opt/bin/flanneld -etcd-endpoints http://:4001 - - name: docker.service - command: start - content: | - [Unit] - After=flannel.service - Wants=flannel.service - Description=Docker Application Container Engine - Documentation=http://docs.docker.io - [Service] - EnvironmentFile=-/etc/default/docker - EnvironmentFile=/run/flannel/subnet.env - ExecStartPre=/bin/mount --make-rprivate / - ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS} - [Install] - WantedBy=multi-user.target - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - ExecStart=/opt/bin/kube-proxy \ - --etcd_servers=http://:4001 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - [Service] - EnvironmentFile=/etc/network-environment - ExecStartPre=/usr/bin/wget -N -P /opt/bin http:///kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - ExecStart=/opt/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=${DEFAULT_IPV4} \ - --api_servers=:8080 \ - --healthz_bind_address=0.0.0.0 \ - --healthz_port=10248 \ - --logtostderr=true - Restart=always - 
RestartSec=10 - update: - group: stable - reboot-strategy: off - ssh_authorized_keys: - - ssh-rsa AAAAB3NzaC1yc2EAAAAD... - - -## New pxelinux.cfg file -Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-slave``` - - default coreos - prompt 1 - timeout 15 - - display boot.msg - - label coreos - menu default - kernel images/coreos/coreos_production_pxe.vmlinuz - append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0 - -And one for the _master_ node: ```vi /tftpboot/pxelinux.cfg/coreos-node-master``` - - default coreos - prompt 1 - timeout 15 - - display boot.msg - - label coreos - menu default - kernel images/coreos/coreos_production_pxe.vmlinuz - append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http:///coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0 - -## Specify the pxelinux targets -Now that we have our new targets setup for master and slave we want to configure the specific hosts to those targets. We will do this by using the pxelinux mechanism of setting a specific MAC addresses to a specific pxelinux.cfg file. - -Refer to the MAC address table in the beginning of this guide. Documentation for more details can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX). - - cd /tftpboot/pxelinux.cfg - ln -s coreos-node-master 01-d0-00-67-13-0d-00 - ln -s coreos-node-slave 01-d0-00-67-13-0d-01 - ln -s coreos-node-slave 01-d0-00-67-13-0d-02 - - -Reboot these servers to get the images PXEd and ready for running containers! - -## Creating test pod -Now that the CoreOS with Kubernetes installed is up and running lets spin up some Kubernetes pods to demonstrate the system. - -See [a simple nginx example](../../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../../examples). 
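As a quick smoke test before moving on to the debugging commands below, you can create a single nginx pod straight from the admin/PXE host. This is only a minimal sketch, not part of the original guide: it assumes the master API is reachable at 10.20.30.40:8080 (per the variables table above), that the `kubectl` you downloaded earlier is on your PATH, and that your private registry (`rdocker.example.com:5000` in this guide's placeholders) mirrors an nginx image, since the cluster has no Internet access. Adjust the `apiVersion` (`v1beta1`, `v1beta3`, or `v1`) to whatever the apiserver build you downloaded actually serves.

```sh
APISERVER=http://10.20.30.40:8080   # Kubernetes master, from the variables table above

# Create one nginx pod from an inline manifest; the image must come from the
# offline registry mirror because the nodes cannot reach Docker Hub.
# apiVersion is an assumption -- match it to your apiserver binaries.
cat <<EOF | kubectl create -s "$APISERVER" -f -
apiVersion: v1beta3
kind: Pod
metadata:
  name: nginx-test
  labels:
    app: nginx-test
spec:
  containers:
  - name: nginx
    image: rdocker.example.com:5000/nginx
    ports:
    - containerPort: 80
EOF

# The pod should get scheduled onto one of the two slaves.
kubectl get pods -s "$APISERVER"
```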
- -## Helping commands for debugging - -List all keys in etcd: - - etcdctl ls --recursive - -List fleet machines - - fleetctl list-machines - -Check system status of services on master node: - - systemctl status kube-apiserver - systemctl status kube-controller-manager - systemctl status kube-scheduler - systemctl status kube-register - -Check system status of services on a minion node: - - systemctl status kube-kubelet - systemctl status docker.service - -List Kubernetes - - kubectl get pods - kubectl get minions - - -Kill all pods: - - for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/master.yaml b/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/master.yaml deleted file mode 100644 index 7310c22582c..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/master.yaml +++ /dev/null @@ -1,180 +0,0 @@ -#cloud-config - ---- -hostname: master -coreos: - etcd2: - name: master - listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 - initial-cluster-token: k8s_etcd - listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 - initial-advertise-peer-urls: http://$private_ipv4:2380 - initial-cluster: master=http://$private_ipv4:2380 - initial-cluster-state: new - fleet: - metadata: "role=master" - units: - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: fleet.service - command: start - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Unit] - Requires=etcd2.service - [Service] - ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker-cache.service - command: start - content: | - [Unit] - Description=Docker cache proxy - Requires=early-docker.service - After=early-docker.service - Before=early-docker.target - - [Service] - Restart=always - TimeoutStartSec=0 - RestartSec=5 - Environment="TMPDIR=/var/tmp/" - Environment="DOCKER_HOST=unix:///var/run/early-docker.sock" - ExecStartPre=-/usr/bin/docker kill docker-registry - ExecStartPre=-/usr/bin/docker rm docker-registry - ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest - # GUNICORN_OPTS is an workaround for - # https://github.com/docker/docker-registry/issues/892 - ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \ - -e STANDALONE=false \ - -e GUNICORN_OPTS=[--preload] \ - -e 
MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io \ - -e MIRROR_TAGS_CACHE_TTL=1800 \ - quay.io/devops/docker-registry:latest - - name: docker.service - content: | - [Unit] - Description=Docker Application Container Engine - Documentation=http://docs.docker.com - After=docker.socket early-docker.target network.target - Requires=docker.socket early-docker.target - - [Service] - Environment=TMPDIR=/var/tmp - EnvironmentFile=-/run/flannel_docker_opts.env - EnvironmentFile=/etc/network-environment - MountFlags=slave - LimitNOFILE=1048576 - LimitNPROC=1048576 - ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// --registry-mirror=http://${DEFAULT_IPV4}:5000 $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ - - [Install] - WantedBy=multi-user.target - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - # making sure that docker-cache is up and that flanneld finished - # startup, otherwise containers won't land in flannel's network... - Requires=docker-cache.service flanneld.service - After=docker-cache.service flanneld.service - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service setup-network-environment.service - After=etcd2.service setup-network-environment.service - - [Service] - EnvironmentFile=/etc/network-environment - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-apiserver -z /opt/bin/kube-apiserver https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --allow_privileged=true \ - --insecure_bind_address=0.0.0.0 \ - --insecure_port=8080 \ - --kubelet_https=true \ - --secure_port=6443 \ - --service-cluster-ip-range=10.100.0.0/16 \ - --etcd_servers=http://127.0.0.1:4001 \ - --public_address_override=${DEFAULT_IPV4} \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-controller-manager -z /opt/bin/kube-controller-manager https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-scheduler -z /opt/bin/kube-scheduler https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-register.service - command: start - content: | - [Unit] - Description=Kubernetes Registration Service - Documentation=https://github.com/kelseyhightower/kube-register - 
Requires=kube-apiserver.service - After=kube-apiserver.service - Requires=fleet.service - After=fleet.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-register -z /opt/bin/kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.3/kube-register-0.0.3-linux-amd64 - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register - ExecStart=/opt/bin/kube-register \ - --metadata=role=node \ - --fleet-endpoint=unix:///var/run/fleet.sock \ - --api-endpoint=http://127.0.0.1:8080 \ - --healthz-port=10248 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/node.yaml b/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/node.yaml deleted file mode 100644 index c13c7a97fc1..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/node.yaml +++ /dev/null @@ -1,105 +0,0 @@ -#cloud-config -write-files: - - path: /opt/bin/wupiao - permissions: '0755' - content: | - #!/bin/bash - # [w]ait [u]ntil [p]ort [i]s [a]ctually [o]pen - [ -n "$1" ] && [ -n "$2" ] && while ! curl --output /dev/null \ - --silent --head --fail \ - http://${1}:${2}; do sleep 1 && echo -n .; done; - exit $? -coreos: - etcd2: - listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - initial-cluster: master=http://:2380 - proxy: on - fleet: - metadata: "role=node" - units: - - name: fleet.service - command: start - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Unit] - Requires=etcd2.service - [Service] - ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker.service - command: start - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - Requires=flanneld.service - After=flanneld.service - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://:5000' - - name: setup-network-environment.service - command: start - content: | - [Unit] - Description=Setup Network Environment - Documentation=https://github.com/kelseyhightower/setup-network-environment - Requires=network-online.target - After=network-online.target - - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/curl -L -o /opt/bin/setup-network-environment -z /opt/bin/setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment - ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment - ExecStart=/opt/bin/setup-network-environment - RemainAfterExit=yes - Type=oneshot - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - - [Service] - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kube-proxy -z /opt/bin/kube-proxy https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kube-proxy \ - --master=:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - 
Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=setup-network-environment.service - After=setup-network-environment.service - - [Service] - EnvironmentFile=/etc/network-environment - ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - # wait for kubernetes master to be up and ready - ExecStartPre=/opt/bin/wupiao 8080 - ExecStart=/opt/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=${DEFAULT_IPV4} \ - --api_servers=:8080 \ - --allow_privileged=true \ - --logtostderr=true \ - --healthz_bind_address=0.0.0.0 \ - --healthz_port=10248 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git a/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/standalone.yaml b/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/standalone.yaml deleted file mode 100644 index 722e5a3c060..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/cloud-configs/standalone.yaml +++ /dev/null @@ -1,168 +0,0 @@ -#cloud-config - ---- -hostname: master -coreos: - etcd2: - name: master - listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - advertise-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 - initial-cluster-token: k8s_etcd - listen-peer-urls: http://0.0.0.0:2380,http://0.0.0.0:7001 - initial-advertise-peer-urls: http://0.0.0.0:2380 - initial-cluster: master=http://0.0.0.0:2380 - initial-cluster-state: new - units: - - name: etcd2.service - command: start - - name: fleet.service - command: start - - name: flanneld.service - command: start - drop-ins: - - name: 50-network-config.conf - content: | - [Unit] - Requires=etcd2.service - [Service] - ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}' - - name: docker-cache.service - command: start - content: | - [Unit] - Description=Docker cache proxy - Requires=early-docker.service - After=early-docker.service - Before=early-docker.target - - [Service] - Restart=always - TimeoutStartSec=0 - RestartSec=5 - Environment="TMPDIR=/var/tmp/" - Environment="DOCKER_HOST=unix:///var/run/early-docker.sock" - ExecStartPre=-/usr/bin/docker kill docker-registry - ExecStartPre=-/usr/bin/docker rm docker-registry - ExecStartPre=/usr/bin/docker pull quay.io/devops/docker-registry:latest - # GUNICORN_OPTS is an workaround for - # https://github.com/docker/docker-registry/issues/892 - ExecStart=/usr/bin/docker run --rm --net host --name docker-registry \ - -e STANDALONE=false \ - -e GUNICORN_OPTS=[--preload] \ - -e MIRROR_SOURCE=https://registry-1.docker.io \ - -e MIRROR_SOURCE_INDEX=https://index.docker.io \ - -e MIRROR_TAGS_CACHE_TTL=1800 \ - quay.io/devops/docker-registry:latest - - name: docker.service - command: start - drop-ins: - - name: 51-docker-mirror.conf - content: | - [Unit] - # making sure that docker-cache is up and that flanneld finished - # startup, otherwise containers won't land in flannel's network... 
- Requires=docker-cache.service flanneld.service - After=docker-cache.service flanneld.service - [Service] - Environment=DOCKER_OPTS='--registry-mirror=http://$private_ipv4:5000' - - name: kube-apiserver.service - command: start - content: | - [Unit] - Description=Kubernetes API Server - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service - After=etcd2.service - - [Service] - ExecStartPre=-/usr/bin/mkdir -p /opt/bin - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-apiserver - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver - ExecStart=/opt/bin/kube-apiserver \ - --allow_privileged=true \ - --insecure_bind_address=0.0.0.0 \ - --insecure_port=8080 \ - --kubelet_https=true \ - --secure_port=6443 \ - --service-cluster-ip-range=10.100.0.0/16 \ - --etcd_servers=http://127.0.0.1:4001 \ - --public_address_override=127.0.0.1 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-controller-manager.service - command: start - content: | - [Unit] - Description=Kubernetes Controller Manager - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-controller-manager - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager - ExecStart=/opt/bin/kube-controller-manager \ - --machines=127.0.0.1 \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-scheduler.service - command: start - content: | - [Unit] - Description=Kubernetes Scheduler - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=kube-apiserver.service - After=kube-apiserver.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-scheduler - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler - ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080 - Restart=always - RestartSec=10 - - name: kube-proxy.service - command: start - content: | - [Unit] - Description=Kubernetes Proxy - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service - After=etcd2.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kube-proxy - ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy - ExecStart=/opt/bin/kube-proxy \ - --master=127.0.0.1:8080 \ - --logtostderr=true - Restart=always - RestartSec=10 - - name: kube-kubelet.service - command: start - content: | - [Unit] - Description=Kubernetes Kubelet - Documentation=https://github.com/GoogleCloudPlatform/kubernetes - Requires=etcd2.service - After=etcd2.service - - [Service] - ExecStartPre=/usr/bin/wget -N -P /opt/bin https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet - ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet - ExecStart=/opt/bin/kubelet \ - --address=0.0.0.0 \ - --port=10250 \ - --hostname_override=127.0.0.1 \ - --api_servers=127.0.0.1:8080 \ - --allow_privileged=true \ - --logtostderr=true \ - --healthz_bind_address=0.0.0.0 \ - --healthz_port=10248 - Restart=always - RestartSec=10 - update: - group: alpha - reboot-strategy: off diff --git 
a/release-0.20.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md b/release-0.20.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md deleted file mode 100644 index 8fab7f6ae70..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md +++ /dev/null @@ -1,142 +0,0 @@ -# CoreOS Multinode Cluster - -Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster. - -> **Attention**: This requires at least CoreOS version **[653.0.0][coreos653]**, as this was the first release to include etcd2. - -[coreos653]: https://coreos.com/releases/#653.0.0 - -## Overview - -* Provision the master node -* Capture the master node private IP address -* Edit node.yaml -* Provision one or more worker nodes - -### AWS - -*Attention:* Replace `````` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). - -#### Provision the Master - -``` -aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group" -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes -``` - -``` -aws ec2 run-instances \ ---image-id \ ---key-name \ ---region us-west-2 \ ---security-groups kubernetes \ ---instance-type m3.medium \ ---user-data file://master.yaml -``` - -#### Capture the private IP address - -``` -aws ec2 describe-instances --instance-id -``` - -#### Edit node.yaml - -Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. - -#### Provision worker nodes - -``` -aws ec2 run-instances \ ---count 1 \ ---image-id \ ---key-name \ ---region us-west-2 \ ---security-groups kubernetes \ ---instance-type m3.medium \ ---user-data file://node.yaml -``` - -### GCE - -*Attention:* Replace `````` below for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/). - -#### Provision the Master - -``` -gcloud compute instances create master \ ---image-project coreos-cloud \ ---image \ ---boot-disk-size 200GB \ ---machine-type n1-standard-1 \ ---zone us-central1-a \ ---metadata-from-file user-data=master.yaml -``` - -#### Capture the private IP address - -``` -gcloud compute instances list -``` - -#### Edit node.yaml - -Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. - -#### Provision worker nodes - -``` -gcloud compute instances create node1 \ ---image-project coreos-cloud \ ---image \ ---boot-disk-size 200GB \ ---machine-type n1-standard-1 \ ---zone us-central1-a \ ---metadata-from-file user-data=node.yaml -``` - -#### Establish network connectivity - -Next, setup an ssh tunnel to the master so you can run kubectl from your local host. -In one terminal, run `gcloud compute ssh master --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second -run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`. 
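With both tunnels up, `kubectl` on your workstation can reach the master through the forwarded port. A quick sanity check (the `-s` flag simply points kubectl at the local end of the tunnel, matching the port forwarded above) might look like this:

```sh
# Query the apiserver through the local end of the SSH tunnel
kubectl -s http://localhost:8080 get nodes
```

The worker nodes you provisioned should eventually appear as `Ready`; if the list is empty, give the registration on the master a minute to catch up.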
- -### VMware Fusion - -#### Create the master config-drive - -``` -mkdir -p /tmp/new-drive/openstack/latest/ -cp master.yaml /tmp/new-drive/openstack/latest/user_data -hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o master.iso /tmp/new-drive -``` - -#### Provision the Master - -Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `master.iso` as a config drive. - -#### Capture the master private IP address - -#### Edit node.yaml - -Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. - -#### Create the node config-drive - -``` -mkdir -p /tmp/new-drive/openstack/latest/ -cp node.yaml /tmp/new-drive/openstack/latest/user_data -hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o node.iso /tmp/new-drive -``` - -#### Provision worker nodes - -Boot one or more the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `node.iso` as a config drive. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md b/release-0.20.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md deleted file mode 100644 index ae95fd56c31..00000000000 --- a/release-0.20.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md +++ /dev/null @@ -1,66 +0,0 @@ -# CoreOS - Single Node Kubernetes Cluster - -Use the [standalone.yaml](cloud-configs/standalone.yaml) cloud-config to provision a single node Kubernetes cluster. - -> **Attention**: This requires at least CoreOS version **[653.0.0][coreos653]**, as this was the first release to include etcd2. - -[coreos653]: https://coreos.com/releases/#653.0.0 - -### CoreOS image versions - -### AWS - -``` -aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group" -aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 -aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes -``` - -*Attention:* Replace `````` bellow for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). - -``` -aws ec2 run-instances \ ---image-id \ ---key-name \ ---region us-west-2 \ ---security-groups kubernetes \ ---instance-type m3.medium \ ---user-data file://standalone.yaml -``` - -### GCE - -*Attention:* Replace `````` bellow for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/). - -``` -gcloud compute instances create standalone \ ---image-project coreos-cloud \ ---image \ ---boot-disk-size 200GB \ ---machine-type n1-standard-1 \ ---zone us-central1-a \ ---metadata-from-file user-data=standalone.yaml -``` - -Next, setup an ssh tunnel to the instance so you can run kubectl from your local host. -In one terminal, run `gcloud compute ssh standalone --ssh-flag="-L 8080:127.0.0.1:8080"` and in a second -run `gcloud compute ssh standalone --ssh-flag="-R 8080:127.0.0.1:8080"`. - - -### VMware Fusion - -Create a [config-drive](https://coreos.com/docs/cluster-management/setup/cloudinit-config-drive) ISO. 
- -``` -mkdir -p /tmp/new-drive/openstack/latest/ -cp standalone.yaml /tmp/new-drive/openstack/latest/user_data -hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o standalone.iso /tmp/new-drive -``` - -Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using the `standalone.iso` as a config drive. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/docker-multinode.md b/release-0.20.0/docs/getting-started-guides/docker-multinode.md deleted file mode 100644 index b9e4bdab70d..00000000000 --- a/release-0.20.0/docs/getting-started-guides/docker-multinode.md +++ /dev/null @@ -1,58 +0,0 @@ -Running Multi-Node Kubernetes Using Docker ------------------------------------------- - -_Note_: -These instructions are somewhat significantly more advanced than the [single node](docker.md) instructions. If you are -interested in just starting to explore Kubernetes, we recommend that you start there. - -**Table of Contents** - -- [Prerequisites](#prerequisites) -- [Overview](#overview) - - [Bootstrap Docker](#bootstrap-docker) -- [Master Node](#master-node) -- [Adding a worker node](#adding-a-worker-node) -- [Testing your cluster](#testing-your-cluster) - -## Prerequisites -1. You need a machine with docker installed. - -## Overview -This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work -and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of -times to create larger clusters. - -Here's a diagram of what the final result will look like: -![Kubernetes Single Node on Docker](k8s-docker.png) - -### Bootstrap Docker -This guide also uses a pattern of running two instances of the Docker daemon - 1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd``` - 2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers - -This pattern is necessary because the ```flannel``` daemon is responsible for setting up and managing the network that interconnects -all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However, -it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this. - -## Master Node -The first step in the process is to initialize the master node. - -See [here](docker-multinode/master.md) for detailed instructions. - -## Adding a worker node - -Once your master is up and running you can add one or more workers on different machines. - -See [here](docker-multinode/worker.md) for detailed instructions. 
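Both the master and worker guides linked above rely on the two-daemon pattern described in the overview. If you need to check which daemon owns which containers on a host, the bootstrap daemon is addressed through its own socket (the socket path below is the one used in the detailed instructions):

```sh
# System containers (etcd, flanneld) run under the bootstrap daemon
sudo docker -H unix:///var/run/docker-bootstrap.sock ps

# Kubernetes components and user pods run under the main daemon
sudo docker ps
```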
- -## Testing your cluster - -Once your cluster has been created you can [test it out](docker-multinode/testing.md) - -For more complete applications, please look in the [examples directory](../../examples) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/docker-multinode/master.md b/release-0.20.0/docs/getting-started-guides/docker-multinode/master.md deleted file mode 100644 index 5638b2bdad2..00000000000 --- a/release-0.20.0/docs/getting-started-guides/docker-multinode/master.md +++ /dev/null @@ -1,149 +0,0 @@ -## Installing a Kubernetes Master Node via Docker -We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}``` - -There are two main phases to installing the master: - * [Setting up ```flanneld``` and ```etcd```](#setting-up-flanneld-and-etcd) - * [Starting the Kubernetes master components](#starting-the-kubernetes-master) - - -## Setting up flanneld and etcd - -### Setup Docker-Bootstrap -We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of -Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with -```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system. - -Run: -```sh -sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' -``` - -_Important Note_: -If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted -across reboots and failures. - - -### Startup etcd for flannel and the API server to use -Run: -``` -sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data -``` - -Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using: - -```sh -sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host gcr.io/google_containers/etcd:2.0.9 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }' -``` - - -### Set up Flannel on the master node -Flannel is a network abstraction layer build by CoreOS, we will use it to provide simplfied networking between our Pods of containers. - -Flannel re-configures the bridge that Docker uses for networking. As a result we need to stop Docker, reconfigure its networking, and then restart Docker. - -#### Bring down Docker -To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker. - -Turning down Docker is system dependent, it may be: - -```sh -sudo /etc/init.d/docker stop -``` - -or - -```sh -sudo systemctl stop docker -``` - -or it may be something else. 
- -#### Run flannel - -Now run flanneld itself: -```sh -sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0 -``` - -The previous command should have printed a really long hash, copy this hash. - -Now get the subnet settings from flannel: -``` -sudo docker -H unix:///var/run/docker-bootstrap.sock exec cat /run/flannel/subnet.env -``` - -#### Edit the docker configuration -You now need to edit the docker configuration to activate new flags. Again, this is system specific. - -This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere. - -Regardless, you need to add the following to the docker command line: -```sh ---bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -``` - -#### Remove the existing Docker bridge -Docker creates a bridge named ```docker0``` by default. You need to remove this: - -```sh -sudo /sbin/ifconfig docker0 down -sudo brctl delbr docker0 -``` - -You may need to install the ```bridge-utils``` package for the ```brctl``` binary. - -#### Restart Docker -Again this is system dependent, it may be: - -```sh -sudo /etc/init.d/docker start -``` - -it may be: -```sh -systemctl start docker -``` - -## Starting the Kubernetes Master -Ok, now that your networking is set up, you can startup Kubernetes, this is the same as the single-node case, we will use the "main" instance of the Docker daemon for the Kubernetes components. - -```sh -sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi -``` - -### Also run the service proxy -```sh -sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 -``` - -### Test it out -At this point, you should have a functioning 1-node cluster. Let's test it out! - -Download the kubectl binary -([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl)) -([linux](http://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl)) - -List the nodes - -```sh -kubectl get nodes -``` - -This should print: -``` -NAME LABELS STATUS -127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready -``` - -If the status of the node is ```NotReady``` or ```Unknown``` please check that all of the containers you created are successfully running. -If all else fails, ask questions on IRC at #google-containers. - - -### Next steps -Move on to [adding one or more workers](worker.md) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/master.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode/master.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/docker-multinode/testing.md b/release-0.20.0/docs/getting-started-guides/docker-multinode/testing.md deleted file mode 100644 index 595e00e0e1b..00000000000 --- a/release-0.20.0/docs/getting-started-guides/docker-multinode/testing.md +++ /dev/null @@ -1,63 +0,0 @@ -## Testing your Kubernetes cluster. 
- -To validate that your node(s) have been added, run: - -```sh -kubectl get nodes -``` - -That should show something like: -``` -NAME LABELS STATUS -10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready -127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready -``` - -If the status of any node is ```Unknown``` or ```NotReady``` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on IRC at -```#google-containers``` for advice. - -### Run an application -```sh -kubectl -s http://localhost:8080 run nginx --image=nginx --port=80 -``` - -now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled. - -### Expose it as a service: -```sh -kubectl expose rc nginx --port=80 -``` - -This should print: -``` -NAME LABELS SELECTOR IP PORT(S) -nginx run=nginx 80/TCP -``` - -Hit the webserver: -```sh -curl -``` - -Note that you will need run this curl command on your boot2docker VM if you are running on OS X. - -### Scaling - -Now try to scale up the nginx you created before: - -```sh -kubectl scale rc nginx --replicas=3 -``` - -And list the pods - -```sh -kubectl get pods -``` - -You should see pods landing on the newly added machine. - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/testing.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode/testing.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/docker-multinode/worker.md b/release-0.20.0/docs/getting-started-guides/docker-multinode/worker.md deleted file mode 100644 index 7ed3a8fb89c..00000000000 --- a/release-0.20.0/docs/getting-started-guides/docker-multinode/worker.md +++ /dev/null @@ -1,114 +0,0 @@ -## Adding a Kubernetes worker node via Docker. - -These instructions are very similar to the master set-up above, but they are duplicated for clarity. -You need to repeat these instructions for each node you want to join the cluster. -We will assume that the IP address of this node is ```${NODE_IP}``` and you have the IP address of the master in ```${MASTER_IP}``` that you created in the [master instructions](master.md). - -For each worker node, there are three steps: - * [Set up ```flanneld``` on the worker node](#set-up-flanneld-on-the-worker-node) - * [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node) - * [Add the worker to the cluster](#add-the-node-to-the-cluster) - -### Set up Flanneld on the worker node -As before, the Flannel daemon is going to provide network connectivity. - -#### Set up a bootstrap docker: -As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking. - -Run: -```sh -sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &' -``` - -_Important Note_: -If you are running this on a long running system, rather than experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart or systemd so that it is restarted -across reboots and failures. - -#### Bring down Docker -To re-configure Docker to use flannel, we need to take docker down, run flannel and then restart Docker. 
- -Turning down Docker is system dependent, it may be: - -```sh -sudo /etc/init.d/docker stop -``` - -or - -```sh -sudo systemctl stop docker -``` - -or it may be something else. - -#### Run flannel - -Now run flanneld itself, this call is slightly different from the above, since we point it at the etcd instance on the master. -```sh -sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001 -``` - -The previous command should have printed a really long hash, copy this hash. - -Now get the subnet settings from flannel: -``` -sudo docker -H unix:///var/run/docker-bootstrap.sock exec cat /run/flannel/subnet.env -``` - - -#### Edit the docker configuration -You now need to edit the docker configuration to activate new flags. Again, this is system specific. - -This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere. - -Regardless, you need to add the following to the docker command line: -```sh ---bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -``` - -#### Remove the existing Docker bridge -Docker creates a bridge named ```docker0``` by default. You need to remove this: - -```sh -sudo /sbin/ifconfig docker0 down -sudo brctl delbr docker0 -``` - -You may need to install the ```bridge-utils``` package for the ```brctl``` binary. - -#### Restart Docker -Again this is system dependent, it may be: - -```sh -sudo /etc/init.d/docker start -``` - -it may be: -```sh -systemctl start docker -``` - -### Start Kubernetes on the worker node -#### Run the kubelet -Again this is similar to the above, but the ```--api_servers``` now points to the master we set up in the beginning. - -```sh -sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i) -``` - -#### Run the service proxy -The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services``` - -```sh -sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2 -``` - -### Next steps - -Move on to [testing your cluster](testing.md) or [add another node](#adding-a-kubernetes-worker-node-via-docker) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/worker.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker-multinode/worker.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/docker.md b/release-0.20.0/docs/getting-started-guides/docker.md deleted file mode 100644 index 03dd6491cea..00000000000 --- a/release-0.20.0/docs/getting-started-guides/docker.md +++ /dev/null @@ -1,105 +0,0 @@ -Running kubernetes locally via Docker -------------------------------------- - -**Table of Contents** - -- [Overview](#setting-up-a-cluster) -- [Prerequisites](#prerequisites) -- [Step One: Run etcd](#step-one-run-etcd) -- [Step Two: Run the master](#step-two-run-the-master) -- [Step Three: Run the service proxy](#step-three-run-the-service-proxy) -- [Test it out](#test-it-out) -- [Run an application](#run-an-application) -- [Expose it as a service:](#expose-it-as-a-service) -- [A note on turning down your 
cluster](#a-note-on-turning-down-your-cluster) - -### Overview - -The following instructions show you how to set up a simple, single node kubernetes cluster using Docker. - -Here's a diagram of what the final result will look like: -![Kubernetes Single Node on Docker](k8s-singlenode-docker.png) - -### Prerequisites -1. You need to have docker installed on one machine. - -### Step One: Run etcd -```sh -docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data -``` - -### Step Two: Run the master -```sh -docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests -``` - -This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components. - -### Step Three: Run the service proxy -*Note, this could be combined with master above, but it requires --privileged for iptables manipulation* -```sh -docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.18.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 -``` - -### Test it out -At this point you should have a running kubernetes cluster. You can test this by downloading the kubectl -binary -([OS X](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/darwin/amd64/kubectl)) -([linux](https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubectl)) - -*Note:* -On OS/X you will need to set up port forwarding via ssh: -```sh -boot2docker ssh -L8080:localhost:8080 -``` - -List the nodes in your cluster by running:: - -```sh -kubectl get nodes -``` - -This should print: -``` -NAME LABELS STATUS -127.0.0.1 Ready -``` - -If you are running different kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster. - -### Run an application -```sh -kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80 -``` - -now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled. - -### Expose it as a service: -```sh -kubectl expose rc nginx --port=80 -``` - -This should print: -``` -NAME LABELS SELECTOR IP PORT(S) -nginx run=nginx 80/TCP -``` - -Hit the webserver: -```sh -curl -``` - -Note that you will need run this curl command on your boot2docker VM if you are running on OS X. - -### A note on turning down your cluster -Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down -the cluster, you need to first kill the kubelet container, and then any other containers. - -You may use ```docker ps -a | awk '{print $1}' | xargs docker kill```, note this removes _all_ containers running under Docker, so use with caution. 
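If you would rather not remove every container on the host, a more targeted teardown is possible. The sketch below is only an illustration: the `grep` filter and the `<kubelet-container-id>` placeholder are assumptions about how the kubelet container appears in `docker ps`, so verify the IDs before killing anything.

```sh
docker ps | grep kubelet            # find the kubelet container ID
docker kill <kubelet-container-id>  # stop the kubelet first so it cannot restart the others
docker ps                           # review what is left and kill the remaining containers by ID
```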
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/docker.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/es-browser.png b/release-0.20.0/docs/getting-started-guides/es-browser.png deleted file mode 100644 index f556fa8c561..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/es-browser.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/fedora/fedora_ansible_config.md b/release-0.20.0/docs/getting-started-guides/fedora/fedora_ansible_config.md deleted file mode 100644 index d4fe85d7593..00000000000 --- a/release-0.20.0/docs/getting-started-guides/fedora/fedora_ansible_config.md +++ /dev/null @@ -1,249 +0,0 @@ -Configuring kubernetes on [Fedora](http://fedoraproject.org) via [Ansible](http://www.ansible.com/home) -------------------------------------------------------------------------------------------------------- - -Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort. - -**Table of Contents** - -- [Prerequisites](#prerequisites) -- [Architecture of the cluster](#architecture-of-the-cluster) -- [Configuring ssh access to the cluster](#configuring-ssh-access-to-the-cluster) -- [Configuring the internal kubernetes network](#configuring-the-internal-kubernetes-network) -- [Setting up the cluster](#setting-up-the-cluster) -- [Testing and using your new cluster](#testing-and-using-your-new-cluster) - -##Prerequisites - -1. Host able to run ansible and able to clone the following repo: [kubernetes-ansible](https://github.com/eparis/kubernetes-ansible) -2. A Fedora 20+ or RHEL7 host to act as cluster master -3. As many Fedora 20+ or RHEL7 hosts as you would like, that act as cluster minions - -The hosts can be virtual or bare metal. The only requirement to make the ansible network setup work is that all of the machines are connected via the same layer 2 network. - -Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc... This example will use one master and two minions. - -## Architecture of the cluster - -A Kubernetes cluster requires etcd, a master, and n minions, so we will create a cluster with three hosts, for example: - -``` - fed1 (master,etcd) = 192.168.121.205 - fed2 (minion) = 192.168.121.84 - fed3 (minion) = 192.168.121.116 -``` - -**Make sure your local machine** - - - has ansible - - has git - -**then we just clone down the kubernetes-ansible repository** - -``` - yum install -y ansible git - git clone https://github.com/eparis/kubernetes-ansible.git - cd kubernetes-ansible -``` - -**Tell ansible about each machine and its role in your cluster.** - -Get the IP addresses from the master and minions. Add those to the `inventory` file (at the root of the repo) on the host running Ansible. - -We will set the kube_ip_addr to '10.254.0.[1-3]', for now. The reason we do this is explained later... It might work for you as a default. 
- -``` -[masters] -192.168.121.205 - -[etcd] -192.168.121.205 - -[minions] -192.168.121.84 kube_ip_addr=[10.254.0.1] -192.168.121.116 kube_ip_addr=[10.254.0.2] -``` - -**Setup ansible access to your nodes** - -If you already are running on a machine which has passwordless ssh access to the fed[1-3] nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `group_vars/all.yaml` to the username which you use to ssh to the nodes (i.e. `fedora`), and proceed to the next step... - -*Otherwise* setup ssh on the machines like so (you will need to know the root password to all machines in the cluster). - -edit: group_vars/all.yml - -``` -ansible_ssh_user: root -``` - -## Configuring ssh access to the cluster - -If you already have ssh access to every machine using ssh public keys you may skip to [configuring the network](#configuring-the-network) - -**Create a password file.** - -The password file should contain the root password for every machine in the cluster. It will be used in order to lay down your ssh public key. Make sure your machines sshd-config allows password logins from root. - -``` -echo "password" > ~/rootpassword -``` - -**Agree to accept each machine's ssh public key** - -After this is completed, ansible is now enabled to ssh into any of the machines you're configuring. - -``` -ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok -``` - -**Push your ssh public key to every machine** - -Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster. -``` -ansible-playbook -i inventory keys.yml -``` - -## Configuring the internal kubernetes network - -If you already have configured your network and docker will use it correctly, skip to [setting up the cluster](#setting-up-the-cluster) - -The ansible scripts are quite hacky configuring the network, you can see the [README](https://github.com/eparis/kubernetes-ansible) for details, or you can simply enter in variants of the 'kube_service_addresses' (in the all.yaml file) as `kube_ip_addr` entries in the minions field, as shown in the next section. - -**Configure the ip addresses which should be used to run pods on each machine** - -The IP address pool used to assign addresses to pods for each minion is the `kube_ip_addr`= option. Choose a /24 to use for each minion and add that to you inventory file. - -For this example, as shown earlier, we can do something like this... - -``` -[minions] -192.168.121.84 kube_ip_addr=10.254.0.1 -192.168.121.116 kube_ip_addr=10.254.0.2 -``` - -**Run the network setup playbook** - -There are two ways to do this: via flannel, or using NetworkManager. - -Flannel is a cleaner mechanism to use, and is the recommended choice. - -- If you are using flannel, you should check the kubernetes-ansible repository above. - -Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run -``` -ansible-playbook -i inventory flannel.yml -``` - -- On the other hand, if using the NetworkManager based setup (i.e. you do not want to use flannel). - -On EACH node, make sure NetworkManager is installed, and the service "NetworkManager" is running, then you can run -the network manager playbook... - -``` -ansible-playbook -i inventory ./old-network-config/hack-network.yml -``` - -## Setting up the cluster - -**Configure the IP addresses used for services** - -Each kubernetes service gets its own IP address. These are not real IPs. 
You need only select a range of IPs which are not in use elsewhere in your environment. This must be done even if you do not use the network setup provided by the ansible scripts. - -edit: group_vars/all.yml - -``` -kube_service_addresses: 10.254.0.0/16 -``` - -**Tell ansible to get to work!** - -This will finally setup your whole kubernetes cluster for you. - -``` -ansible-playbook -i inventory setup.yml -``` - -## Testing and using your new cluster - -That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster. - - -**Show services running on masters and minions.** - -``` -systemctl | grep -i kube -``` - -**Show firewall rules on the masters and minions.** - -``` -iptables -nvL -``` - -**Create the following apache.json file and deploy pod to minion.** - -``` -cat << EOF > apache.json -{ - "kind": "Pod", - "apiVersion": "v1", - "metadata": { - "name": "fedoraapache", - "labels": { - "name": "fedoraapache" - } - }, - "spec": { - "containers": [ - { - "name": "fedoraapache", - "image": "fedora/apache", - "ports": [ - { - "hostPort": 80, - "containerPort": 80 - } - ] - } - ] - } -} -EOF - -/usr/bin/kubectl create -f apache.json - -**Testing your new kube cluster** - -``` - -**Check where the pod was created** - -``` -kubectl get pods -``` - -Important : Note that the IP of the pods IP fields are on the network which you created in the kube_ip_addr file. - -In this example, that was the 10.254 network. - -If you see 172 in the IP fields, networking was not setup correctly, and you may want to re run or dive deeper into the way networking is being setup by looking at the details of the networking scripts used above. - -**Check Docker status on minion.** - -``` -docker ps -docker images -``` - -**After the pod is 'Running' Check web server access on the minion** - -``` -curl http://localhost -``` - -That's it ! - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/fedora/fedora_manual_config.md b/release-0.20.0/docs/getting-started-guides/fedora/fedora_manual_config.md deleted file mode 100644 index 58fad0b57bc..00000000000 --- a/release-0.20.0/docs/getting-started-guides/fedora/fedora_manual_config.md +++ /dev/null @@ -1,199 +0,0 @@ -Getting started on [Fedora](http://fedoraproject.org) ------------------------------------------------------ - -**Table of Contents** - -- [Prerequisites](#prerequisites) -- [Instructions](#instructions) - -## Prerequisites -1. You need 2 or more machines with Fedora installed. - -## Instructions - -This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc... - -This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious. - -The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. 
The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker. - -**System Information:** - -Hosts: -``` -fed-master = 192.168.121.9 -fed-node = 192.168.121.65 -``` - -**Prepare the hosts:** - -* Install kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.15.0 but should work with other versions too. -* The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive. -* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below. - -``` -yum -y install --enablerepo=updates-testing kubernetes -``` -* Install etcd and iptables - -``` -yum -y install etcd iptables -``` - -* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping. - -``` -echo "192.168.121.9 fed-master -192.168.121.65 fed-node" >> /etc/hosts -``` - -* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain: - -``` -# Comma separated list of nodes in the etcd cluster -KUBE_MASTER="--master=http://fed-master:8080" - -# logging to stderr means we get it in the systemd journal -KUBE_LOGTOSTDERR="--logtostderr=true" - -# journal message level, 0 is debug -KUBE_LOG_LEVEL="--v=0" - -# Should this cluster be allowed to run privileged docker containers -KUBE_ALLOW_PRIV="--allow_privileged=false" -``` - -* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install. - -``` -systemctl disable iptables-services firewalld -systemctl stop iptables-services firewalld -``` - -**Configure the kubernetes services on the master.** - -* Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything. - -``` -# The address on the local server to listen to. -KUBE_API_ADDRESS="--address=0.0.0.0" - -# Comma separated list of nodes in the etcd cluster -KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001" - -# Address range to use for services -KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" - -# Add your own! 
-KUBE_API_ARGS="" -``` - -* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused" -``` -ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001" -``` - -* Start the appropriate services on master: - -``` -for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -* Addition of nodes: - -* Create following node.json file on kubernetes master node: - -```json -{ - "apiVersion": "v1", - "kind": "Node", - "metadata": { - "name": "fed-node", - "labels":{ "name": "fed-node-label"} - }, - "spec": { - "externalID": "fed-node" - } -} -``` - -Now create a node object internally in your kubernetes cluster by running: - -``` -$ kubectl create -f node.json - -$ kubectl get nodes -NAME LABELS STATUS -fed-node name=fed-node-label Unknown - -``` - -Please note that in the above, it only creates a representation for the node -_fed-node_ internally. It does not provision the actual _fed-node_. Also, it -is assumed that _fed-node_ (as specified in `name`) can be resolved and is -reachable from kubernetes master node. This guide will discuss how to provision -a kubernetes node (fed-node) below. - -**Configure the kubernetes services on the node.** - -***We need to configure the kubelet on the node.*** - -* Edit /etc/kubernetes/kubelet to appear as such: - -``` -### -# kubernetes kubelet (node) config - -# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) -KUBELET_ADDRESS="--address=0.0.0.0" - -# You may leave this blank to use the actual hostname -KUBELET_HOSTNAME="--hostname_override=fed-node" - -# location of the api-server -KUBELET_API_SERVER="--api_servers=http://fed-master:8080" - -# Add your own! -#KUBELET_ARGS="" -``` - -* Start the appropriate services on the node (fed-node). - -``` -for SERVICES in kube-proxy kubelet docker; do - systemctl restart $SERVICES - systemctl enable $SERVICES - systemctl status $SERVICES -done -``` - -* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_. - -``` -kubectl get nodes -NAME LABELS STATUS -fed-node name=fed-node-label Ready -``` -* Deletion of nodes: - -To delete _fed-node_ from your kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information): - -``` -$ kubectl delete -f node.json -``` - -*You should be finished!* - -**The cluster should be running! Launch a test pod.** - -You should have a functional cluster, check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)! 
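As a quick smoke test, one option (the image name is only an example; on older clients such as v0.18.x the subcommand was `kubectl run-container` rather than `kubectl run`) is to start an nginx pod from fed-master and confirm it lands on fed-node:

```sh
kubectl run nginx --image=nginx --port=80   # start a simple nginx pod
kubectl get pods                            # the pod should be scheduled onto fed-node and reach Running
```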
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md b/release-0.20.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md deleted file mode 100644 index 214ac15d943..00000000000 --- a/release-0.20.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md +++ /dev/null @@ -1,183 +0,0 @@ -Kubernetes multiple nodes cluster with flannel on Fedora --------------------------------------------------------- - -**Table of Contents** - -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Master Setup](#master-setup) -- [Node Setup](#node-setup) -- [**Test the cluster and flannel configuration**](#test-the-cluster-and-flannel-configuration) - -## Introduction - -This document describes how to deploy kubernetes on multiple hosts to set up a multi-node cluster and networking with flannel. Follow fedora [getting started guide](fedora_manual_config.md) to setup 1 master (fed-master) and 2 or more nodes (minions). Make sure that all nodes (minions) have different names (fed-node1, fed-node2 and so on) and labels (fed-node1-label, fed-node2-label, and so on) to avoid any conflict. Also make sure that the kubernetes master host is running etcd, kube-controller-manager, kube-scheduler, and kube-apiserver services, and the nodes (minions) are running docker, kube-proxy and kubelet services. Now install flannel on kubernetes nodes (minions). flannel on each node configures an overlay network that docker uses. flannel runs on each node to setup a unique class-C container network. - -## Prerequisites -1. You need 2 or more machines with Fedora installed. - -## Master Setup - -**Perform following commands on the kubernetes master** - -* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are: - -``` -{ - "Network": "18.16.0.0/16", - "SubnetLen": 24, - "Backend": { - "Type": "vxlan", - "VNI": 1 - } -} -``` -**NOTE:** Choose an IP range that is *NOT* part of the public IP address range. - -* Add the configuration to the etcd server on fed-master. - -``` -# etcdctl set /coreos.com/network/config < flannel-config.json -``` - -* Verify the key exists in the etcd server on fed-master. - -``` -# etcdctl get /coreos.com/network/config -``` - -## Node Setup - -**Perform following commands on all kubernetes nodes** - -* Edit the flannel configuration file /etc/sysconfig/flanneld as follows: - -``` -# Flanneld configuration options - -# etcd url location. Point this to the server where etcd runs -FLANNEL_ETCD="http://fed-master:4001" - -# etcd config key. This is the configuration key that flannel queries -# For address range assignment -FLANNEL_ETCD_KEY="/coreos.com/network" - -# Any additional options that you want to pass -FLANNEL_OPTIONS="" -``` - -**Note:** By default, flannel uses the interface for the default route. If you have multiple interfaces and would like to use an interface other than the default route one, you could add "-iface=" to FLANNEL_OPTIONS. For additional options, run `flanneld --help` on command line. 
- -* Enable the flannel service. - -``` -# systemctl enable flanneld -``` - -* If docker is not running, then starting flannel service is enough and skip the next step. - -``` -# systemctl start flanneld -``` - -* If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`). - -``` -# systemctl stop docker -# ip link delete docker0 -# systemctl start flanneld -# systemctl start docker -``` - -*** - -##**Test the cluster and flannel configuration** - -* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this: - -``` -# ip -4 a|grep inet - inet 127.0.0.1/8 scope host lo - inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0 - inet 18.16.29.0/16 scope global flannel.1 - inet 18.16.29.1/24 scope global docker0 -``` - -* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output. - -``` -# curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool -{ - "node": { - "key": "/coreos.com/network/subnets", - { - "key": "/coreos.com/network/subnets/18.16.29.0-24", - "value": "{\"PublicIP\":\"192.168.122.77\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"46:f1:d0:18:d0:65\"}}" - }, - { - "key": "/coreos.com/network/subnets/18.16.83.0-24", - "value": "{\"PublicIP\":\"192.168.122.36\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"ca:38:78:fc:72:29\"}}" - }, - { - "key": "/coreos.com/network/subnets/18.16.90.0-24", - "value": "{\"PublicIP\":\"192.168.122.127\",\"BackendType\":\"vxlan\",\"BackendData\":{\"VtepMAC\":\"92:e2:80:ba:2d:4d\"}}" - } - } -} -``` - -* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel. - -``` -# cat /run/flannel/subnet.env -FLANNEL_SUBNET=18.16.29.1/24 -FLANNEL_MTU=1450 -FLANNEL_IPMASQ=false -``` - -* At this point, we have etcd running on the kubernetes master, and flannel / docker running on kubernetes nodes. Next steps are for testing cross-host container communication which will confirm that docker and flannel are configured properly. - -* Issue the following commands on any 2 nodes: - -``` -#docker run -it fedora:latest bash -bash-4.3# -``` - -* This will place you inside the container. Install iproute and iputils packages to install ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify capabilities of ping binary to work around "Operation not permitted" error. 
- -``` -bash-4.3# yum -y install iproute iputils -bash-4.3# setcap cap_net_raw-ep /usr/bin/ping -``` - -* Now note the IP address on the first node: - -``` -bash-4.3# ip -4 a l eth0 | grep inet - inet 18.16.29.4/24 scope global eth0 -``` - -* And also note the IP address on the other node: - -``` -bash-4.3# ip a l eth0 | grep inet - inet 18.16.90.4/24 scope global eth0 -``` - -* Now ping from the first node to the other node: - -``` -bash-4.3# ping 18.16.90.4 -PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data. -64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms -64 bytes from 18.16.90.4: icmp_seq=2 ttl=62 time=0.372 ms -``` - -* Now kubernetes multi-node cluster is set up with overlay networking set up by flannel. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/gce.md b/release-0.20.0/docs/getting-started-guides/gce.md deleted file mode 100644 index 87c881554b5..00000000000 --- a/release-0.20.0/docs/getting-started-guides/gce.md +++ /dev/null @@ -1,204 +0,0 @@ -Getting started on Google Compute Engine ----------------------------------------- - -**Table of Contents** - -- [Before you start](#before-you-start) -- [Prerequisites](#prerequisites) -- [Starting a cluster](#starting-a-cluster) -- [Installing the kubernetes command line tools on your workstation](#installing-the-kubernetes-command-line-tools-on-your-workstation) -- [Getting started with your cluster](#getting-started-with-your-cluster) - - [Inspect your cluster](#inspect-your-cluster) - - [Run some examples](#run-some-examples) -- [Tearing down the cluster](#tearing-down-the-cluster) -- [Customizing](#customizing) -- [Troubleshooting](#troubleshooting) - - [Project settings](#project-settings) - - [Cluster initialization hang](#cluster-initialization-hang) - - [SSH](#ssh) - - [Networking](#networking) - - -The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient). - -### Before you start - -If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) for hosted cluster installation and management. - -If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below. - -### Prerequisites - -1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details. -1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/). -1. Then, make sure you have the `gcloud preview` command line component installed. Run `gcloud preview` at the command line - if it asks to install any components, go ahead and install them. If it simply shows help text, you're good to go. This is required as the cluster setup script uses GCE [Instance Groups](https://cloud.google.com/compute/docs/instance-groups/), which are in the gcloud preview namespace. 
You will also need to **enable [`Compute Engine Instance Group Manager API`](https://developers.google.com/console/help/new/#activatingapis)** in the developers console. -1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project `. -1. Make sure you have credentials for GCloud by running ` gcloud auth login`. -1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/quickstart#create_an_instance) part of the GCE Quickstart. -1. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/quickstart#ssh) part of the GCE Quickstart. - -### Starting a cluster - -You can install a client and start a cluster with this command: - -```bash -curl -sS https://get.k8s.io | bash -``` - -Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster. By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](../logging.md), while `heapster` provides [monitoring](../../cluster/addons/cluster-monitoring/README.md) services. - -Alternately, if you prefer, you can download and install the latest Kubernetes release from [this page](https://github.com/GoogleCloudPlatform/kubernetes/releases), then run the `/cluster/kube-up.sh` script to start the cluster: - -```bash -cd kubernetes -cluster/kube-up.sh -``` - -If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the -[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at #google-containers on freenode. - -The next few steps will show you: - -1. how to set up the command line client on your workstation to manage the cluster -1. examples of how to use the cluster -1. how to delete the cluster -1. how to start clusters with non-default options (like larger clusters) - -### Installing the kubernetes command line tools on your workstation - -The cluster startup script will leave you with a running cluster and a ```kubernetes``` directory on your workstation. -The next step is to make sure the `kubectl` tool is in your path. - -The [kubectl](../kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more. -You will use it to look at your new cluster and bring up example apps. - -Add the appropriate binary folder to your ```PATH``` to access kubectl: - -```bash -# OS X -export PATH=/platforms/darwin/amd64:$PATH - -# Linux -export PATH=/platforms/linux/amd64:$PATH -``` - -**Note**: gcloud also ships with ```kubectl```, which by default is added to your path. -However the gcloud bundled kubectl version may be older than the one downloaded by the -get.k8s.io install script. We recommend you use the downloaded binary to avoid -potential issues with client/server version skew. - -### Getting started with your cluster - -#### Inspect your cluster - -Once `kubectl` is in your path, you can use it to look at your cluster. 
E.g., running: - -```shell -$ kubectl get services -``` - -should show a set of [services](../services.md) that look something like this: - -```shell -NAME LABELS SELECTOR IP(S) PORT(S) -elasticsearch-logging k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch k8s-app=elasticsearch-logging 10.0.198.255 9200/TCP -kibana-logging k8s-app=kibana-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Kibana k8s-app=kibana-logging 10.0.56.44 5601/TCP -kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP -kubernetes component=apiserver,provider=kubernetes 10.0.0.1 443/TCP -``` - -Similarly, you can take a look at the set of [pods](../pods.md) that were created during cluster startup. -You can do this via the - -```shell -$ kubectl get pods -``` -command. - -You'll see see a list of pods that looks something like this (the name specifics will be different): - -```shell -NAME READY REASON RESTARTS AGE -elasticsearch-logging-v1-ab87r 1/1 Running 0 1m -elasticsearch-logging-v1-v9lqa 1/1 Running 0 1m -fluentd-elasticsearch-kubernetes-minion-419y 1/1 Running 0 12s -fluentd-elasticsearch-kubernetes-minion-k0xh 1/1 Running 0 1m -fluentd-elasticsearch-kubernetes-minion-oa8l 1/1 Running 0 1m -fluentd-elasticsearch-kubernetes-minion-xuj5 1/1 Running 0 1m -kibana-logging-v1-cx2p8 1/1 Running 0 1m -kube-dns-v3-pa3w9 3/3 Running 0 1m -monitoring-heapster-v1-m1xkz 1/1 Running 0 1m -``` - -Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period. - -#### Run some examples - -Then, see [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../examples). The [guestbook example](../../examples/guestbook) is a good "getting started" walkthrough. - -### Tearing down the cluster -To remove/delete/teardown the cluster, use the `kube-down.sh` script. - -```bash -cd kubernetes -cluster/kube-down.sh -``` - -Likewise, the `kube-up.sh` in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to setup the Kubernetes cluster is now on your workstation. - -### Customizing - -The script above relies on Google Storage to stage the Kubernetes release. It -then will start (by default) a single master VM along with 4 worker VMs. You -can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh` -You can view a transcript of a successful cluster creation -[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea). - -### Troubleshooting - -#### Project settings - -You need to have the Google Cloud Storage API, and the Google Cloud Storage -JSON API enabled. It is activated by default for new projects. Otherwise, it -can be done in the Google Cloud Console. See the [Google Cloud Storage JSON -API Overview](https://cloud.google.com/storage/docs/json_api/) for more -details. - -Also ensure that-- as listed in the [Prerequsites section](#prerequisites)-- you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions. 
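For a quick command-line sanity check of these settings, the sketch below reuses the `gcloud` commands from the prerequisites above; it only verifies the active project and credentials (enabling the APIs themselves still happens in the developers console), and the project id shown is a placeholder.

```bash
# Sketch: confirm gcloud is pointed at the intended project and authenticated
gcloud config list project                # shows the currently active project
gcloud config set project my-project-id   # placeholder; substitute your own project id
gcloud auth login                         # only needed if credentials are missing or stale
```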
- -#### Cluster initialization hang - -If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and minion VMs and looking at logs such as `/var/log/startupscript.log`. - -**Once you fix the issue, you should run `kube-down.sh` to cleanup** after the partial cluster creation, before running `kube-up.sh` to try again. - -#### SSH - -If you're having trouble SSHing into your instances, ensure the GCE firewall -isn't blocking port 22 to your VMs. By default, this should work but if you -have edited firewall rules or created a new non-default network, you'll need to -expose it: `gcloud compute firewall-rules create --network= ---description "SSH allowed from anywhere" --allow tcp:22 default-ssh` - -Additionally, your GCE SSH key must either have no passcode or you need to be -using `ssh-agent`. - -#### Networking - -The instances must be able to connect to each other using their private IP. The -script uses the "default" network which should have a firewall rule called -"default-allow-internal" which allows traffic on any port on the private IPs. -If this rule is missing from the default network or if you change the network -being used in `cluster/config-default.sh` create a new rule with the following -field values: - -* Source Ranges: `10.0.0.0/8` -* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp` - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/gce.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/gce.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/juju.md b/release-0.20.0/docs/getting-started-guides/juju.md deleted file mode 100644 index 9f6dca5e18c..00000000000 --- a/release-0.20.0/docs/getting-started-guides/juju.md +++ /dev/null @@ -1,239 +0,0 @@ -Getting started with Juju -------------------------- - -Juju handles provisioning machines and deploying complex systems to a -wide number of clouds, supporting service orchestration once the bundle of -services has been deployed. - -**Table of Contents** - -- [Prerequisites](#prerequisites) - - [On Ubuntu](#on-ubuntu) - - [With Docker](#with-docker) -- [Launch Kubernetes cluster](#launch-kubernetes-cluster) -- [Exploring the cluster](#exploring-the-cluster) -- [Run some containers!](#run-some-containers) -- [Scale out cluster](#scale-out-cluster) -- [Launch the "k8petstore" example app](#launch-the-k8petstore-example-app) -- [Tear down cluster](#tear-down-cluster) -- [More Info](#more-info) - - [Cloud compatibility](#cloud-compatibility) - - -## Prerequisites - -> Note: If you're running kube-up, on ubuntu - all of the dependencies -> will be handled for you. You may safely skip to the section: -> [Launch Kubernetes Cluster](#launch-kubernetes-cluster) - -### On Ubuntu - -[Install the Juju client](https://juju.ubuntu.com/install) on your -local ubuntu system: - - sudo add-apt-repository ppa:juju/stable - sudo apt-get update - sudo apt-get install juju-core juju-quickstart - - -### With Docker - -If you are not using ubuntu or prefer the isolation of docker, you may -run the following: - - mkdir ~/.juju - sudo docker run -v ~/.juju:/home/ubuntu/.juju -ti whitmo/jujubox:latest - -At this point from either path you will have access to the `juju -quickstart` command. 
- -To set up the credentials for your chosen cloud run: - - juju quickstart --constraints="mem=3.75G" -i - -Follow the dialogue and choose `save` and `use`. Quickstart will now -bootstrap the juju root node and setup the juju web based user -interface. - - -## Launch Kubernetes cluster - -You will need to have the Kubernetes tools compiled before launching the cluster - - make all WHAT=cmd/kubectl - export KUBERNETES_PROVIDER=juju - cluster/kube-up.sh - -If this is your first time running the `kube-up.sh` script, it will install -the required predependencies to get started with Juju, additionally it will -launch a curses based configuration utility allowing you to select your cloud -provider and enter the proper access credentials. - -Next it will deploy the kubernetes master, etcd, 2 minions with flannel based -Software Defined Networking. - - -## Exploring the cluster - -Juju status provides information about each unit in the cluster: - - juju status --format=oneline - - docker/0: 52.4.92.78 (started) - - flannel-docker/0: 52.4.92.78 (started) - - kubernetes/0: 52.4.92.78 (started) - - docker/1: 52.6.104.142 (started) - - flannel-docker/1: 52.6.104.142 (started) - - kubernetes/1: 52.6.104.142 (started) - - etcd/0: 52.5.216.210 (started) 4001/tcp - - juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp - - kubernetes-master/0: 52.6.19.238 (started) 8080/tcp - -You can use `juju ssh` to access any of the units: - - juju ssh kubernetes-master/0 - - -## Run some containers! - -`kubectl` is available on the kubernetes master node. We'll ssh in to -launch some containers, but one could use kubectl locally setting -KUBERNETES_MASTER to point at the ip of `kubernetes-master/0`. - -No pods will be available before starting a container: - - kubectl get pods - POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS - - kubectl get replicationcontrollers - CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS - -We'll follow the aws-coreos example. Create a pod manifest: `pod.json` - -``` -{ - "apiVersion": "v1", - "kind": "Pod", - "metadata": { - "name": "hello", - "labels": { - "name": "hello", - "environment": "testing" - } - }, - "spec": { - "containers": [{ - "name": "hello", - "image": "quay.io/kelseyhightower/hello", - "ports": [{ - "containerPort": 80, - "hostPort": 80 - }] - }] - } -} -``` - -Create the pod with kubectl: - - kubectl create -f pod.json - - -Get info on the pod: - - kubectl get pods - - -To test the hello app, we need to locate which minion is hosting -the container. Better tooling for using juju to introspect container -is in the works but we can use `juju run` and `juju status` to find -our hello app. - -Exit out of our ssh session and run: - - juju run --unit kubernetes/0 "docker ps -n=1" - ... - juju run --unit kubernetes/1 "docker ps -n=1" - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 02beb61339d8 quay.io/kelseyhightower/hello:latest /hello About an hour ago Up About an hour k8s_hello.... 
- - -We see `kubernetes/1` has our container, we can open port 80: - - juju run --unit kubernetes/1 "open-port 80" - juju expose kubernetes - sudo apt-get install curl - curl $(juju status --format=oneline kubernetes/1 | cut -d' ' -f3) - -Finally delete the pod: - - juju ssh kubernetes-master/0 - kubectl delete pods hello - - -## Scale out cluster - -We can add minion units like so: - - juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2 - -## Launch the "k8petstore" example app - -The [k8petstore example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/k8petstore) is available as a -[juju action](https://jujucharms.com/docs/devel/actions). - - juju action do kubernetes-master/0 - -Note: this example includes curl statements to exercise the app, which automatically generates "petstore" transactions written to redis, and allows you to visualize the throughput in your browswer. - -## Tear down cluster - - ./kube-down.sh - -or - - juju destroy-environment --force `juju env` - -## More Info - -Kubernetes Bundle on Github - - - [Bundle Repository](https://github.com/whitmo/bundle-kubernetes) - * [Kubernetes master charm](https://github.com/whitmo/charm-kubernetes-master) - * [Kubernetes mininion charm](https://github.com/whitmo/charm-kubernetes) - - [Bundle Documentation](http://whitmo.github.io/bundle-kubernetes) - - [More about Juju](https://juju.ubuntu.com) - - -### Cloud compatibility - -Juju runs natively against a variety of cloud providers and can be -made to work against many more using a generic manual provider. - -Provider | v0.15.0 --------------- | ------- -AWS | TBD -HPCloud | TBD -OpenStack | TBD -Joyent | TBD -Azure | TBD -Digital Ocean | TBD -MAAS (bare metal) | TBD -GCE | TBD - - -Provider | v0.8.1 --------------- | ------- -AWS | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -HPCloud | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -OpenStack | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -Joyent | [Pass](http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-136) -Azure | TBD -Digital Ocean | TBD -MAAS (bare metal) | TBD -GCE | TBD - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/juju.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/juju.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/k8s-docker.png b/release-0.20.0/docs/getting-started-guides/k8s-docker.png deleted file mode 100644 index 6795e35e83d..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/k8s-docker.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/k8s-singlenode-docker.png b/release-0.20.0/docs/getting-started-guides/k8s-singlenode-docker.png deleted file mode 100644 index 5ebf812682d..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/k8s-singlenode-docker.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/kibana-logs.png b/release-0.20.0/docs/getting-started-guides/kibana-logs.png deleted file mode 100644 index 15b2f6759b3..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/kibana-logs.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/libvirt-coreos.md b/release-0.20.0/docs/getting-started-guides/libvirt-coreos.md deleted file mode 100644 
index d36efab2651..00000000000 --- a/release-0.20.0/docs/getting-started-guides/libvirt-coreos.md +++ /dev/null @@ -1,274 +0,0 @@ -Getting started with libvirt CoreOS ------------------------------------ - -**Table of Contents** - -- [Highlights](#highlights) -- [Prerequisites](#prerequisites) -- [Setup](#setup) -- [Interacting with your Kubernetes cluster with the `kube-*` scripts.](#interacting-with-your-kubernetes-cluster-with-the-kube--scripts) -- [Troubleshooting](#troubleshooting) - - [!!! Cannot find kubernetes-server-linux-amd64.tar.gz](#-cannot-find-kubernetes-server-linux-amd64targz) - - [Can't find virsh in PATH, please fix and retry.](#cant-find-virsh-in-path-please-fix-and-retry) - - [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-no-such-file-or-directory) - - [error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied](#error-failed-to-connect-socket-to-varrunlibvirtlibvirt-sock-permission-denied) - - [error: Out of memory initializing network (virsh net-create...)](#error-out-of-memory-initializing-network-virsh-net-create) - -### Highlights - -* Super-fast cluster boot-up (few seconds instead of several minutes for vagrant) -* Reduced disk usage thanks to [COW](https://en.wikibooks.org/wiki/QEMU/Images#Copy_on_write) -* Reduced memory footprint thanks to [KSM](https://www.kernel.org/doc/Documentation/vm/ksm.txt) - -### Prerequisites - -1. Install [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) -2. Install [ebtables](http://ebtables.netfilter.org/) -3. Install [qemu](http://wiki.qemu.org/Main_Page) -4. Install [libvirt](http://libvirt.org/) -5. Enable and start the libvirt daemon, e.g: - * ``systemctl enable libvirtd`` - * ``systemctl start libvirtd`` -6. [Grant libvirt access to your user¹](https://libvirt.org/aclpolkit.html) -7. Check that your $HOME is accessible to the qemu user² - -#### ¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access. - -You can test it with the following command: -``` -virsh -c qemu:///system pool-list -``` - -If you have access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html . - -In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER` - -``` -sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF -polkit.addRule(function(action, subject) { - if (action.id == "org.libvirt.unix.manage" && - subject.user == "$USER") { - return polkit.Result.YES; - polkit.log("action=" + action); - polkit.log("subject=" + subject); - } -}); -EOF -``` - -If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket: - -``` -ls -l /var/run/libvirt/libvirt-sock -srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock - -usermod -a -G libvirtd $USER -# $USER needs to logout/login to have the new group be taken into account -``` - -(Replace `$USER` with your login name) - -#### ² Qemu will run with a specific user. It must have access to the VMs drives - -All the disk drive resources needed by the VM (CoreOS disk image, kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`. 
- -As we’re using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool. - -If your `$HOME` is world readable, everything is fine. If your $HOME is private, `cluster/kube-up.sh` will fail with an error message like: - -``` -error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied -``` - -In order to fix that issue, you have several possibilities: -* set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` to a directory: - * backed by a filesystem with a lot of free disk space - * writable by your user; - * accessible by the qemu user. -* Grant the qemu user access to the storage pool. - -On Arch: - -``` -setfacl -m g:kvm:--x ~ -``` - -### Setup - -By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes minions. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation. - -To start your local cluster, open a shell and run: - -```shell -cd kubernetes - -export KUBERNETES_PROVIDER=libvirt-coreos -cluster/kube-up.sh -``` - -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. - -The `NUM_MINIONS` environment variable may be set to specify the number of minions to start. If it is not set, the number of minions defaults to 3. - -The `KUBE_PUSH` environment variable may be set to specify which kubernetes binaries must be deployed on the cluster. Its possible values are: - -* `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`. -* `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`. - -You can check that your machines are there and running with: - -``` -virsh -c qemu:///system list - Id Name State ----------------------------------------------------- - 15 kubernetes_master running - 16 kubernetes_minion-01 running - 17 kubernetes_minion-02 running - 18 kubernetes_minion-03 running - ``` - -You can check that the kubernetes cluster is working with: - -``` -$ kubectl get nodes -NAME LABELS STATUS -192.168.10.2 Ready -192.168.10.3 Ready -192.168.10.4 Ready -``` - -The VMs are running [CoreOS](https://coreos.com/). -Your ssh keys have already been pushed to the VM. (It looks for ~/.ssh/id_*.pub) -The user to use to connect to the VM is `core`. -The IP to connect to the master is 192.168.10.1. -The IPs to connect to the minions are 192.168.10.2 and onwards. - -Connect to `kubernetes_master`: -``` -ssh core@192.168.10.1 -``` - -Connect to `kubernetes_minion-01`: -``` -ssh core@192.168.10.2 -``` - -### Interacting with your Kubernetes cluster with the `kube-*` scripts. 
- -All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately: - -``` -export KUBERNETES_PROVIDER=libvirt-coreos -``` - -Bring up a libvirt-CoreOS cluster of 5 minions - -``` -NUM_MINIONS=5 cluster/kube-up.sh -``` - -Destroy the libvirt-CoreOS cluster - -``` -cluster/kube-down.sh -``` - -Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`: - -``` -cluster/kube-push.sh -``` - -Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`: -``` -KUBE_PUSH=local cluster/kube-push.sh -``` - -Interact with the cluster - -``` -kubectl ... -``` - -### Troubleshooting - -#### !!! Cannot find kubernetes-server-linux-amd64.tar.gz - -Build the release tarballs: - -``` -make release -``` - -#### Can't find virsh in PATH, please fix and retry. - -Install libvirt - -On Arch: - -``` -pacman -S qemu libvirt -``` - -On Ubuntu 14.04.1: - -``` -aptitude install qemu-system-x86 libvirt-bin -``` - -On Fedora 21: - -``` -yum install qemu libvirt -``` - -#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory - -Start the libvirt daemon - -On Arch: - -``` -systemctl start libvirtd -``` - -On Ubuntu 14.04.1: - -``` -service libvirt-bin start -``` - -#### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied - -Fix libvirt access permission (Remember to adapt `$USER`) - -On Arch and Fedora 21: - -``` -cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules < - -Mesos allows dynamic sharing of cluster resources between Kubernetes and other first-class Mesos frameworks such as [Hadoop][1], [Spark][2], and [Chronos][3]. -Mesos also ensures applications from different frameworks running on your cluster are isolated and that resources are allocated fairly. - -Running Kubernetes on Mesos allows you to easily move Kubernetes workloads from one cloud provider to another to your own physical datacenter. - -This tutorial will walk you through setting up Kubernetes on a Mesos cluster. -It provides a step by step walk through of adding Kubernetes to a Mesos cluster and running the classic GuestBook demo application. -The walkthrough presented here is based on the v0.4.x series of the Kubernetes-Mesos project, which itself is based on Kubernetes v0.11.0. - -**NOTE:** There are [known issues with the current implementation][11]. -Please [file an issue against the kubernetes-mesos project][12] if you have problems completing the steps below. - -### Prerequisites - -* Understanding of [Apache Mesos][10] -* Mesos cluster on [Google Compute Engine][5] -* A VPN connection to the cluster. - -### Deploy Kubernetes-Mesos - -Log into the master node over SSH, replacing the placeholder below with the correct IP address. - -```bash -ssh jclouds@${ip_address_of_master_node} -``` - -Build Kubernetes-Mesos. - -```bash -$ git clone https://github.com/mesosphere/kubernetes-mesos k8sm -$ mkdir -p bin && sudo docker run --rm -v $(pwd)/bin:/target \ - -v $(pwd)/k8sm:/snapshot -e GIT_BRANCH=release-0.4 \ - mesosphere/kubernetes-mesos:build -``` - -Set some environment variables. -The internal IP address of the master may be obtained via `hostname -i`. 
- -```bash -$ export servicehost=$(hostname -i) -$ export mesos_master=${servicehost}:5050 -$ export KUBERNETES_MASTER=http://${servicehost}:8888 -``` -### Deploy etcd -Start etcd and verify that it is running: - -```bash -$ sudo docker run -d --hostname $(uname -n) --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd -``` - -```bash -$ sudo docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -fd7bac9e2301 coreos/etcd:latest "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd -``` -It's also a good idea to ensure your etcd instance is reachable by testing it -```bash -curl -L http://$servicehost:4001/v2/keys/ -``` -If connectivity is OK, you will see an output of the available keys in etcd (if any). - -### Start Kubernetes-Mesos Services -Start the kubernetes-mesos API server, controller manager, and scheduler on a Mesos master node: - -```bash -$ ./bin/km apiserver \ - --address=${servicehost} \ - --mesos_master=${mesos_master} \ - --etcd_servers=http://${servicehost}:4001 \ - --service-cluster-ip-range=10.10.10.0/24 \ - --port=8888 \ - --cloud_provider=mesos \ - --v=1 >apiserver.log 2>&1 & - -$ ./bin/km controller-manager \ - --master=$servicehost:8888 \ - --mesos_master=${mesos_master} \ - --v=1 >controller.log 2>&1 & - -$ ./bin/km scheduler \ - --address=${servicehost} \ - --mesos_master=${mesos_master} \ - --etcd_servers=http://${servicehost}:4001 \ - --mesos_user=root \ - --api_servers=$servicehost:8888 \ - --v=2 >scheduler.log 2>&1 & -``` - -Also on the master node, we'll start up a proxy instance to act as a -public-facing service router, for testing the web interface a little -later on. - -```bash -$ sudo ./bin/km proxy \ - --bind_address=${servicehost} \ - --etcd_servers=http://${servicehost}:4001 \ - --logtostderr=true >proxy.log 2>&1 & -``` - -Disown your background jobs so that they'll stay running if you log out. - -```bash -$ disown -a -``` -#### Validate KM Services -Interact with the kubernetes-mesos framework via `kubectl`: - -```bash -$ bin/kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -``` - -```bash -$ bin/kubectl get services # your service IPs will likely differ -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.10.10.2 443 -``` -Lastly, use the Mesos CLI tool to validate the Kubernetes scheduler framework has been registered and running: -```bash -$ mesos state | grep "Kubernetes" - "name": "Kubernetes", -``` -Or, look for Kubernetes in the Mesos web GUI by pointing your browser to -`http://${mesos_master}`. Make sure you have an active VPN connection. -Go to the Frameworks tab, and look for an active framework named "Kubernetes". - -## Spin up a pod - -Write a JSON pod description to a local file: - -```bash -$ cat <nginx.json -{ "kind": "Pod", -"apiVersion": "v1beta1", -"id": "nginx-id-01", -"desiredState": { - "manifest": { - "version": "v1beta1", - "containers": [{ - "name": "nginx-01", - "image": "nginx", - "ports": [{ - "containerPort": 80, - "hostPort": 31000 - }], - "livenessProbe": { - "enabled": true, - "type": "http", - "initialDelaySeconds": 30, - "httpGet": { - "path": "/index.html", - "port": "8081" - } - } - }] - } -}, -"labels": { - "name": "foo" -} } -EOPOD -``` - -Send the pod description to Kubernetes using the `kubectl` CLI: - -```bash -$ bin/kubectl create -f nginx.json -nginx-id-01 -``` - -Wait a minute or two while `dockerd` downloads the image layers from the internet. 
-We can use the `kubectl` interface to monitor the status of our pod: - -```bash -$ bin/kubectl get pods -POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS -nginx-id-01 172.17.5.27 nginx-01 nginx 10.72.72.178/10.72.72.178 cluster=gce,name=foo Running -``` - -Verify that the pod task is running in the Mesos web GUI. Click on the -Kubernetes framework. The next screen should show the running Mesos task that -started the Kubernetes pod. - -## Run the Example Guestbook App - -Following the instructions from the kubernetes-mesos [examples/guestbook][6]: - -```bash -$ export ex=k8sm/examples/guestbook -$ bin/kubectl create -f $ex/redis-master.json -$ bin/kubectl create -f $ex/redis-master-service.json -$ bin/kubectl create -f $ex/redis-slave-controller.json -$ bin/kubectl create -f $ex/redis-slave-service.json -$ bin/kubectl create -f $ex/frontend-controller.json - -$ cat </tmp/frontend-service -{ - "id": "frontend", - "kind": "Service", - "apiVersion": "v1beta1", - "port": 9998, - "selector": { - "name": "frontend" - }, - "publicIPs": [ - "${servicehost}" - ] -} -EOS -$ bin/kubectl create -f /tmp/frontend-service -``` - -Watch your pods transition from `Pending` to `Running`: - -```bash -$ watch 'bin/kubectl get pods' -``` - -Review your Mesos cluster's tasks: - -```bash -$ mesos ps - TIME STATE RSS CPU %MEM COMMAND USER ID - 0:00:05 R 41.25 MB 0.5 64.45 none root 0597e78b-d826-11e4-9162-42010acb46e2 - 0:00:08 R 41.58 MB 0.5 64.97 none root 0595b321-d826-11e4-9162-42010acb46e2 - 0:00:10 R 41.93 MB 0.75 65.51 none root ff8fff87-d825-11e4-9162-42010acb46e2 - 0:00:10 R 41.93 MB 0.75 65.51 none root 0597fa32-d826-11e4-9162-42010acb46e2 - 0:00:05 R 41.25 MB 0.5 64.45 none root ff8e01f9-d825-11e4-9162-42010acb46e2 - 0:00:10 R 41.93 MB 0.75 65.51 none root fa1da063-d825-11e4-9162-42010acb46e2 - 0:00:08 R 41.58 MB 0.5 64.97 none root b9b2e0b2-d825-11e4-9162-42010acb46e2 -``` -The number of Kubernetes pods listed earlier (from `bin/kubectl get pods`) should equal to the number active Mesos tasks listed the previous listing (`mesos ps`). - -Next, determine the internal IP address of the front end [service][7]: - -```bash -$ bin/kubectl get services -NAME LABELS SELECTOR IP PORT -kubernetes component=apiserver,provider=kubernetes 10.10.10.2 443 -redismaster name=redis-master 10.10.10.49 10000 -redisslave name=redisslave name=redisslave 10.10.10.109 10001 -frontend name=frontend 10.10.10.149 9998 -``` - -Interact with the frontend application via curl using the front-end service IP address from above: - -```bash -$ curl http://${frontend_service_ip_address}:9998/index.php?cmd=get\&key=messages -{"data": ""} -``` - -Or via the Redis CLI: - -```bash -$ sudo apt-get install redis-tools -$ redis-cli -h ${redis_master_service_ip_address} -p 10000 -10.233.254.108:10000> dump messages -"\x00\x06,world\x06\x00\xc9\x82\x8eHj\xe5\xd1\x12" -``` -#### Test Guestbook App -Or interact with the frontend application via your browser, in 2 steps: - -First, open the firewall on the master machine. 
- -```bash -# determine the internal port for the frontend service -$ sudo iptables-save|grep -e frontend # -- port 36336 in this case --A KUBE-PORTALS-CONTAINER -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 --A KUBE-PORTALS-CONTAINER -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 --A KUBE-PORTALS-HOST -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 --A KUBE-PORTALS-HOST -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336 - -# open up access to the internal port for the frontend service -$ sudo iptables -A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp \ - --dport ${internal_frontend_service_port} -j ACCEPT -``` - -Next, add a firewall rule in the Google Cloud Platform Console. Choose Compute > -Compute Engine > Networks, click on the name of your mesosphere-* network, then -click "New firewall rule" and allow access to TCP port 9998. - -![Google Cloud Platform firewall configuration][8] - -Now, you can visit the guestbook in your browser! - -![Kubernetes Guestbook app running on Mesos][9] - -[1]: http://mesosphere.com/docs/tutorials/run-hadoop-on-mesos-using-installer -[2]: http://mesosphere.com/docs/tutorials/run-spark-on-mesos -[3]: http://mesosphere.com/docs/tutorials/run-chronos-on-mesos -[4]: http://cloud.google.com -[5]: https://cloud.google.com/compute/ -[6]: https://github.com/mesosphere/kubernetes-mesos/tree/v0.4.0/examples/guestbook -[7]: https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.11.0/docs/services.md#ips-and-vips -[8]: mesos/k8s-firewall.png -[9]: mesos/k8s-guestbook.png -[10]: http://mesos.apache.org/ -[11]: https://github.com/mesosphere/kubernetes-mesos/blob/master/docs/issues.md -[12]: https://github.com/mesosphere/kubernetes-mesos/issues - - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/mesos.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/mesos.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/mesos/k8s-firewall.png b/release-0.20.0/docs/getting-started-guides/mesos/k8s-firewall.png deleted file mode 100755 index ed1c57ca7d0..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/mesos/k8s-firewall.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/mesos/k8s-guestbook.png b/release-0.20.0/docs/getting-started-guides/mesos/k8s-guestbook.png deleted file mode 100755 index 07d2458b3b5..00000000000 Binary files a/release-0.20.0/docs/getting-started-guides/mesos/k8s-guestbook.png and /dev/null differ diff --git a/release-0.20.0/docs/getting-started-guides/ovirt.md b/release-0.20.0/docs/getting-started-guides/ovirt.md deleted file mode 100644 index d6225cac0fa..00000000000 --- a/release-0.20.0/docs/getting-started-guides/ovirt.md +++ /dev/null @@ -1,60 +0,0 @@ -Getting started on oVirt ------------------------- - -**Table of Contents** - -- [What is oVirt](#what-is-ovirt) -- [oVirt Cloud Provider Deployment](#ovirt-cloud-provider-deployment) -- [Using the oVirt Cloud Provider](#using-the-ovirt-cloud-provider) -- [oVirt Cloud Provider Screencast](#ovirt-cloud-provider-screencast) - -## What is oVirt - -oVirt is a virtual datacenter manager that delivers powerful 
management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center. - -## oVirt Cloud Provider Deployment - -The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your kubernetes cluster. -At the moment there are no community-supported or pre-loaded VM images including kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes kubernetes may work as well. - -It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to kubernetes. - -Once the kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider. - -[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html -[install]: http://www.ovirt.org/Quick_Start_Guide#Create_Virtual_Machines -[generate a template]: http://www.ovirt.org/Quick_Start_Guide#Using_Templates -[install the ovirt-guest-agent]: http://www.ovirt.org/How_to_install_the_guest_agent_in_Fedora - -## Using the oVirt Cloud Provider - -The oVirt Cloud Provider requires access to the oVirt REST-API to gather the proper information, the required credential should be specified in the `ovirt-cloud.conf` file: - - [connection] - uri = https://localhost:8443/ovirt-engine/api - username = admin@internal - password = admin - -In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to kubernetes: - - [filters] - # Search query used to find nodes - vms = tag=kubernetes - -In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to kubernetes. - -The `ovirt-cloud.conf` file then must be specified in kube-controller-manager: - - kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ... - -## oVirt Cloud Provider Screencast - -This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your kubernetes cluster. - -[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8) - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ovirt.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/ovirt.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/rackspace.md b/release-0.20.0/docs/getting-started-guides/rackspace.md deleted file mode 100644 index 7b7fa4fbea7..00000000000 --- a/release-0.20.0/docs/getting-started-guides/rackspace.md +++ /dev/null @@ -1,71 +0,0 @@ -Getting started on Rackspace ----------------------------- - -**Table of Contents** - -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) -- [Provider: Rackspace](#provider-rackspace) -- [Build](#build) -- [Cluster](#cluster) -- [Some notes:](#some-notes) -- [Network Design](#network-design) - -## Introduction - -* Supported Version: v0.18.1 - -In general, the dev-build-and-up.sh workflow for Rackspace is the similar to GCE. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design. 
- -These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification. - -NOTE: The rackspace scripts do NOT rely on `saltstack` and instead rely on cloud-init for configuration. - -The current cluster design is inspired by: -- [corekube](https://github.com/metral/corekube/) -- [Angus Lees](https://github.com/anguslees/kube-openstack/) - -## Prerequisites -1. Python2.7 -2. You need to have both `nova` and `swiftly` installed. It's recommended to use a python virtualenv to install these packages into. -3. Make sure you have the appropriate environment variables set to interact with the OpenStack APIs. See [Rackspace Documentation](http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/section_gs_install_nova.html) for more details. - -##Provider: Rackspace - -- To build your own released version from source use `export KUBERNETES_PROVIDER=rackspace` and run the `bash hack/dev-build-and-up.sh` -- Note: The get.k8s.io install method is not working yet for our scripts. - * To install the latest released version of kubernetes use `export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | bash` - -## Build -1. The kubernetes binaries will be built via the common build scripts in `build/`. -2. If you've set the ENV `KUBERNETES_PROVIDER=rackspace`, the scripts will upload `kubernetes-server-linux-amd64.tar.gz` to Cloud Files. -2. A cloud files container will be created via the `swiftly` CLI and a temp URL will be enabled on the object. -3. The built `kubernetes-server-linux-amd64.tar.gz` will be uploaded to this container and the URL will be passed to master/minions nodes when booted. - -## Cluster -There is a specific `cluster/rackspace` directory with the scripts for the following steps: -1. A cloud network will be created and all instances will be attached to this network. - - flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network. -2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password). -3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems. -4. We then boot as many nodes as defined via `$NUM_MINIONS`. - -## Some notes: -- The scripts expect `eth2` to be the cloud network that the containers will communicate across. -- A number of the items in `config-default.sh` are overridable via environment variables. -- For older versions please either: - * Sync back to `v0.9` with `git checkout v0.9` - * Download a [snapshot of `v0.9`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.9.tar.gz) - * Sync back to `v0.3` with `git checkout v0.3` - * Download a [snapshot of `v0.3`](https://github.com/GoogleCloudPlatform/kubernetes/archive/v0.3.tar.gz) - -## Network Design -- eth0 - Public Interface used for servers/containers to reach the internet -- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services. -- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface. 
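To confirm that a booted node actually has all three interfaces described above, you can run something like the sketch below on the node (the interface names are assumed to match the defaults listed here):

```bash
# Sketch: print the IPv4 address configured on each expected interface
for iface in eth0 eth1 eth2; do
  echo "== ${iface} =="
  ip -4 addr show "${iface}" | grep inet
done
```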
- - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rackspace.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/rackspace.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/rkt/README.md b/release-0.20.0/docs/getting-started-guides/rkt/README.md deleted file mode 100644 index 6b91dbdb4bf..00000000000 --- a/release-0.20.0/docs/getting-started-guides/rkt/README.md +++ /dev/null @@ -1,95 +0,0 @@ -# Run Kubernetes with rkt - -This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as a container runtime. -We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernetes/issues/8262) to do to make the experience with rkt wonderful, please stay tuned! - -### **Prerequisite** - -- [systemd](http://www.freedesktop.org/wiki/Software/systemd/) should be installed on your machine and should be enabled. The minimum version required at this moment (2015/05/28) is [215](http://lists.freedesktop.org/archives/systemd-devel/2014-July/020903.html). - *(Note that systemd is not required by rkt itself, we are using it here to monitor and manage the pods launched by kubelet.)* - -- Install the latest rkt release according to the instructions [here](https://github.com/coreos/rkt). - The minimum version required for now is [v0.5.6](https://github.com/coreos/rkt/releases/tag/v0.5.6). - -- Make sure the `rkt metadata service` is running because it is necessary for running pod in private network mode. - More details about the networking of rkt can be found in the [documentation](https://github.com/coreos/rkt/blob/master/Documentation/networking.md). - - To start the `rkt metadata service`, you can simply run: - ```shell - $ sudo rkt metadata-service - ``` - - If you want the service to be running as a systemd service, then: - ```shell - $ sudo systemd-run rkt metadata-service - ``` - Alternatively, you can use the [rkt-metadata.service](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.service) and [rkt-metadata.socket](https://github.com/coreos/rkt/blob/master/dist/init/systemd/rkt-metadata.socket) to start the service. - - -### Local cluster - -To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`: -```shell -$ export CONTAINER_RUNTIME=rkt -$ hack/local-up-cluster.sh -``` - -### CoreOS cluster on GCE - -To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image: -```shell -$ export KUBE_OS_DISTRIBUTION=coreos -$ export KUBE_GCE_MINION_IMAGE= -$ export KUBE_GCE_MINION_PROJECT=coreos-cloud -$ export KUBE_CONTAINER_RUNTIME=rkt -``` - -You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`: -```shell -$ export KUBE_RKT_VERSION=0.5.6 -``` - -Then you can launch the cluster by: -````shell -$ kube-up.sh -``` - -Note that we are still working on making all containerized the master components run smoothly in rkt. Before that we are not able to run the master node with rkt yet. 
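Putting the GCE variables together, a complete bring-up might look like the sketch below; `KUBE_GCE_MINION_IMAGE` is a placeholder you must replace with a real CoreOS image name, and the rkt version shown is simply the minimum noted above:

```shell
# Sketch: rkt as the container runtime for a CoreOS cluster on GCE
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MINION_IMAGE=your-coreos-image   # placeholder: substitute a real image
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
$ export KUBE_CONTAINER_RUNTIME=rkt
$ export KUBE_RKT_VERSION=0.5.6                    # optional; minimum supported version
$ kube-up.sh
```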
- -### CoreOS cluster on AWS - -To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution: -```shell -$ export KUBERNETES_PROVIDER=aws -$ export KUBE_OS_DISTRIBUTION=coreos -$ export KUBE_CONTAINER_RUNTIME=rkt -``` - -You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`: -```shell -$ export KUBE_RKT_VERSION=0.5.6 -``` - -You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`: -```shell -$ export COREOS_CHANNEL=stable -``` - -Then you can launch the cluster by: -````shell -$ kube-up.sh -``` - -Note: CoreOS is not supported as the master using the automated launch -scripts. The master node is always Ubuntu. - -### Getting started with your cluster -See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster. - -For more complete applications, please look in the [examples directory](../../examples). - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rkt/README.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/rkt/README.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/ubuntu.md b/release-0.20.0/docs/getting-started-guides/ubuntu.md deleted file mode 100644 index c38bcb61419..00000000000 --- a/release-0.20.0/docs/getting-started-guides/ubuntu.md +++ /dev/null @@ -1,191 +0,0 @@ -Kubernetes Deployment On Bare-metal Ubuntu Nodes ------------------------------------------------- - -- [Introduction](#introduction) -- [Prerequisites](#prerequisites) - - [Starting a Cluster](#starting-a-cluster) - - [Make *kubernetes* , *etcd* and *flanneld* binaries](#make-kubernetes--etcd-and-flanneld-binaries) - - [Configure and start the kubernetes cluster](#configure-and-start-the-kubernetes-cluster) - - [Deploy addons](#deploy-addons) - - [Trouble Shooting](#trouble-shooting) - -## Introduction - -This document describes how to deploy kubernetes on ubuntu nodes, including 1 master node and 3 minion nodes, and people uses this approach can scale to **any number of minion nodes** by changing some settings with ease. The original idea was heavily inspired by @jainvipin 's ubuntu single node work, which has been merge into this document. - -[Cloud team from Zhejiang University](https://github.com/ZJU-SEL) will maintain this work. - -## Prerequisites -*1 The minion nodes have installed docker version 1.2+ and bridge-utils to manipulate linux bridge* - -*2 All machines can communicate with each other, no need to connect Internet (should use private docker registry in this case)* - -*3 These guide is tested OK on Ubuntu 14.04 LTS 64bit server, but it should also work on most Ubuntu versions* - -*4 Dependences of this guide: etcd-2.0.9, flannel-0.4.0, k8s-0.18.0, but it may work with higher versions* - -*5 All the remote servers can be ssh logged in without a password by using key authentication* - - -### Starting a Cluster -#### Make *kubernetes* , *etcd* and *flanneld* binaries - -First clone the kubernetes github repo, `$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git` -then `$ cd kubernetes/cluster/ubuntu`. - -Then run `$ ./build.sh`, this will download all the needed binaries into `./binaries`. 
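Taken together, the steps so far amount to the short sketch below (repository URL and script locations as given in this guide):

```
# Sketch: fetch the source and build the needed binaries
$ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
$ cd kubernetes/cluster/ubuntu
$ ./build.sh    # downloads etcd, flannel and the k8s binaries into ./binaries
```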
- -You can customize your etcd version, flannel version, k8s version by changing variable `ETCD_VERSION` , `FLANNEL_VERSION` and `K8S_VERSION` in build.sh, default etcd version is 2.0.9, flannel version is 0.4.0 and K8s version is 0.18.0. - -Please make sure that there are `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `kubelet`, `kube-proxy`, `etcd`, `etcdctl` and `flannel` in the binaries/master or binaries/minion directory. - -> We used flannel here because we want to use overlay network, but please remember it is not the only choice, and it is also not a k8s' necessary dependence. Actually you can just build up k8s cluster natively, or use flannel, Open vSwitch or any other SDN tool you like, we just choose flannel here as a example. - -#### Configure and start the kubernetes cluster -An example cluster is listed as below: - -| IP Address|Role | -|---------|------| -|10.10.103.223| minion | -|10.10.103.162| minion | -|10.10.103.250| both master and minion| - -First configure the cluster information in cluster/ubuntu/config-default.sh, below is a simple sample. - -``` -export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223" - -export roles=("ai" "i" "i") - -export NUM_MINIONS=${NUM_MINIONS:-3} - -export SERVICE_CLUSTER_IP_RANGE=11.1.1.0/24 - -export FLANNEL_NET=172.16.0.0/16 - - -``` - -The first variable `nodes` defines all your cluster nodes, MASTER node comes first and separated with blank space like ` ` - -Then the `roles ` variable defines the role of above machine in the same order, "ai" stands for machine acts as both master and minion, "a" stands for master, "i" stands for minion. So they are just defined the k8s cluster as the table above described. - -The `NUM_MINIONS` variable defines the total number of minions. - -The `SERVICE_CLUSTER_IP_RANGE` variable defines the kubernetes service IP range. Please make sure that you do have a valid private ip range defined here, because some IaaS provider may reserve private ips. You can use below three private network range according to rfc1918. Besides you'd better not choose the one that conflicts with your own private network range. - - 10.0.0.0 - 10.255.255.255 (10/8 prefix) - - 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) - - 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) - -The `FLANNEL_NET` variable defines the IP range used for flannel overlay network, should not conflict with above `SERVICE_CLUSTER_IP_RANGE`. - -After all the above variable being set correctly. We can use below command in cluster/ directory to bring up the whole cluster. - -`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh` - -The scripts is automatically scp binaries and config files to all the machines and start the k8s service on them. The only thing you need to do is to type the sudo password when promoted. The current machine name is shown below like. So you will not type in the wrong password. - -``` - -Deploying minion on machine 10.10.103.223 - -... - -[sudo] password to copy files and start minion: - -``` - -If all things goes right, you will see the below message from console -`Cluster validation succeeded` indicating the k8s is up. - -**All done !** - -You can also use `kubectl` command to see if the newly created k8s is working correctly. The `kubectl` binary is under the `cluster/ubuntu/binaries` directory. You can move it into your PATH. Then you can use the below command smoothly. - -For example, use `$ kubectl get nodes` to see if all your minion nodes are in ready status. 
It may take some time for the minions ready to use like below. - -``` - -NAME LABELS STATUS - -10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready - -10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready - -10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready - - -``` - -Also you can run kubernetes [guest-example](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook) to build a redis backend cluster on the k8s. - - -#### Deploy addons - -After the previous parts, you will have a working k8s cluster, this part will teach you how to deploy addons like dns onto the existing cluster. - -The configuration of dns is configured in cluster/ubuntu/config-default.sh. - -``` - -ENABLE_CLUSTER_DNS=true - -DNS_SERVER_IP="192.168.3.10" - -DNS_DOMAIN="kubernetes.local" - -DNS_REPLICAS=1 - -``` -The `DNS_SERVER_IP` is defining the ip of dns server which must be in the service_cluster_ip_range. - -The `DNS_REPLICAS` describes how many dns pod running in the cluster. - -After all the above variable have been set. Just type the below command - -``` - -$ cd cluster/ubuntu - -$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh - -``` - -After some time, you can use `$ kubectl get pods` to see the dns pod is running in the cluster. Done! - - -#### Trouble Shooting - -Generally, what this approach did is quite simple: - -1. Download and copy binaries and configuration files to proper directories on every node - -2. Configure `etcd` using IPs based on input from user - -3. Create and start flannel network - -So, if you see a problem, **check etcd configuration first** - -Please try: - -1. Check `/var/log/upstart/etcd.log` for suspicious etcd log - -2. Check `/etc/default/etcd`, as we do not have much input validation, a right config should be like: - ``` - ETCD_OPTS="-name infra1 -initial-advertise-peer-urls -listen-peer-urls -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=,infra2=,infra3= -initial-cluster-state new" - ``` - -3. You can use below command - `$ KUBERNETES_PROVIDER=ubuntu ./kube-down.sh` to bring down the cluster and run - `$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh` again to start again. - -4. You can also customize your own settings in `/etc/default/{component_name}` after configured success. - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ubuntu.md?pixel)]() - - -[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/ubuntu.md?pixel)]() diff --git a/release-0.20.0/docs/getting-started-guides/vagrant.md b/release-0.20.0/docs/getting-started-guides/vagrant.md deleted file mode 100644 index 63e77042226..00000000000 --- a/release-0.20.0/docs/getting-started-guides/vagrant.md +++ /dev/null @@ -1,337 +0,0 @@ -## Getting started with Vagrant - -Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/develop on your local machine (Linux, Mac OS X). 
- -**Table of Contents** - -- [Prerequisites](#prerequisites) -- [Setup](#setup) -- [Interacting with your Kubernetes cluster with Vagrant.](#interacting-with-your-kubernetes-cluster-with-vagrant) -- [Authenticating with your master](#authenticating-with-your-master) -- [Running containers](#running-containers) -- [Troubleshooting](#troubleshooting) - - [I keep downloading the same (large) box all the time!](#i-keep-downloading-the-same-large-box-all-the-time) - - [I just created the cluster, but I am getting authorization errors!](#i-just-created-the-cluster-but-i-am-getting-authorization-errors) - - [I just created the cluster, but I do not see my container running!](#i-just-created-the-cluster-but-i-do-not-see-my-container-running) - - [I want to make changes to Kubernetes code!](#i-want-to-make-changes-to-kubernetes-code) - - [I have brought Vagrant up but the nodes cannot validate!](#i-have-brought-vagrant-up-but-the-nodes-cannot-validate) - - [I want to change the number of nodes!](#i-want-to-change-the-number-of-nodes) - - [I want my VMs to have more memory!](#i-want-my-vms-to-have-more-memory) - - [I ran vagrant suspend and nothing works!](#i-ran-vagrant-suspend-and-nothing-works) - - -### Prerequisites -1. Install latest version >= 1.6.2 of vagrant from http://www.vagrantup.com/downloads.html -2. Install one of: - 1. The latest version of Virtual Box from https://www.virtualbox.org/wiki/Downloads - 2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware) - 3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware) - 4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/) - 5. libvirt with KVM and enable support of hardware virtualisation. [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). For fedora provided official rpm, and possible to use ```yum install vagrant-libvirt``` - -### Setup - -Setting up a cluster is as simple as running: - -```sh -export KUBERNETES_PROVIDER=vagrant -curl -sS https://get.k8s.io | bash -``` - -The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine. - -By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. Each VM will take 1 GB, so make sure you have at least 2GB to 4GB of free memory (plus appropriate free disk space). To start your local cluster, open a shell and run: - -```sh -cd kubernetes - -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes. The initial setup can take a few minutes to complete on each machine. - -If you installed more than one Vagrant provider, Kubernetes will usually pick the appropriate one. However, you can override which one Kubernetes will use by setting the [`VAGRANT_DEFAULT_PROVIDER`](https://docs.vagrantup.com/v2/providers/default.html) environment variable: - -```sh -export VAGRANT_DEFAULT_PROVIDER=parallels -export KUBERNETES_PROVIDER=vagrant -./cluster/kube-up.sh -``` - -By default, each VM in the cluster is running Fedora. 
- -To access the master or any minion: - -```sh -vagrant ssh master -vagrant ssh minion-1 -``` - -If you are running more than one minion, you can access the others by: - -```sh -vagrant ssh minion-2 -vagrant ssh minion-3 -``` - -Each node in the cluster installs the docker daemon and the kubelet. - -The master node instantiates the Kubernetes master components as pods on the machine. - -To view the service status and/or logs on the kubernetes-master: - -```sh -vagrant ssh master -[vagrant@kubernetes-master ~] $ sudo su - -[root@kubernetes-master ~] $ systemctl status kubelet -[root@kubernetes-master ~] $ journalctl -ru kubelet - -[root@kubernetes-master ~] $ systemctl status docker -[root@kubernetes-master ~] $ journalctl -ru docker - -[root@kubernetes-master ~] $ tail -f /var/log/kube-apiserver.log -[root@kubernetes-master ~] $ tail -f /var/log/kube-controller-manager.log -[root@kubernetes-master ~] $ tail -f /var/log/kube-scheduler.log -``` - -To view the services on any of the kubernetes-minion(s): -```sh -vagrant ssh minion-1 -[vagrant@kubernetes-master ~] $ sudo su - -[root@kubernetes-master ~] $ systemctl status kubelet -[root@kubernetes-master ~] $ journalctl -ru kubelet - -[root@kubernetes-master ~] $ systemctl status docker -[root@kubernetes-master ~] $ journalctl -ru docker -``` - -### Interacting with your Kubernetes cluster with Vagrant. - -With your Kubernetes cluster up, you can manage the nodes in your cluster with the regular Vagrant commands. - -To push updates to new Kubernetes code after making source changes: -```sh -./cluster/kube-push.sh -``` - -To stop and then restart the cluster: -```sh -vagrant halt -./cluster/kube-up.sh -``` - -To destroy the cluster: -```sh -vagrant destroy -``` - -Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script. - -You may need to build the binaries first, you can do this with ```make``` - -```sh -$ ./cluster/kubectl.sh get nodes - -NAME LABELS -10.245.1.4 -10.245.1.5 -10.245.1.3 -``` - -### Authenticating with your master - -When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. - -```sh -cat ~/.kubernetes_vagrant_auth -{ "User": "vagrant", - "Password": "vagrant", - "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", - "CertFile": "/home/k8s_user/.kubecfg.vagrant.crt", - "KeyFile": "/home/k8s_user/.kubecfg.vagrant.key" -} -``` - -You should now be set to use the `cluster/kubectl.sh` script. For example try to list the nodes that you have started with: - -```sh -./cluster/kubectl.sh get nodes -``` - -### Running containers - -Your cluster is running, you can list the nodes in your cluster: - -```sh -$ ./cluster/kubectl.sh get nodes - -NAME LABELS -10.245.2.4 -10.245.2.3 -10.245.2.2 -``` - -Now start running some containers! - -You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines. -Before starting a container there will be no pods, services and replication controllers. 
-
-```sh
-$ ./cluster/kubectl.sh get pods
-NAME   IMAGE(S)   HOST   LABELS   STATUS
-
-$ ./cluster/kubectl.sh get services
-NAME   LABELS   SELECTOR   IP   PORT
-
-$ ./cluster/kubectl.sh get replicationcontrollers
-NAME   IMAGE(S)   SELECTOR   REPLICAS
-```
-
-Start a container running nginx with a replication controller and three replicas:
-
-```sh
-$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
-```
-
-When listing the pods, you will see that three containers have been started and are in Waiting state:
-
-```sh
-$ ./cluster/kubectl.sh get pods
-NAME                                   IMAGE(S)   HOST                    LABELS         STATUS
-781191ff-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.4/10.245.2.4   name=myNginx   Waiting
-7813c8bd-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.2/10.245.2.2   name=myNginx   Waiting
-78140853-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.3/10.245.2.3   name=myNginx   Waiting
-```
-
-You need to wait for the provisioning to complete; you can monitor the nodes by running:
-
-```sh
-$ vagrant ssh minion-1 -c 'sudo docker images'
-kubernetes-minion-1:
-    REPOSITORY          TAG      IMAGE ID       CREATED        VIRTUAL SIZE
-                                 96864a7d2df3   26 hours ago   204.4 MB
-    google/cadvisor     latest   e0575e677c50   13 days ago    12.64 MB
-    kubernetes/pause    latest   6c4579af347b   8 weeks ago    239.8 kB
-```
-
-Once the docker image for nginx has been downloaded, the container will start and you can list it:
-
-```sh
-$ vagrant ssh minion-1 -c 'sudo docker ps'
-kubernetes-minion-1:
-    CONTAINER ID   IMAGE                     COMMAND               CREATED          STATUS          PORTS                    NAMES
-    dbe79bf6e25b   nginx:latest              "nginx"               21 seconds ago   Up 19 seconds                            k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
-    fa0e29c94501   kubernetes/pause:latest   "/pause"              8 minutes ago    Up 8 minutes    0.0.0.0:8080->80/tcp     k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
-    aa2ee3ed844a   google/cadvisor:latest    "/usr/bin/cadvisor"   38 minutes ago   Up 38 minutes                            k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
-    65a3a926f357   kubernetes/pause:latest   "/pause"              39 minutes ago   Up 39 minutes   0.0.0.0:4194->8080/tcp   k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
-```
-
-Going back to listing the pods, services, and replication controllers, you now have:
-
-```sh
-$ ./cluster/kubectl.sh get pods
-NAME                                   IMAGE(S)   HOST                    LABELS         STATUS
-781191ff-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.4/10.245.2.4   name=myNginx   Running
-7813c8bd-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.2/10.245.2.2   name=myNginx   Running
-78140853-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.3/10.245.2.3   name=myNginx   Running
-
-$ ./cluster/kubectl.sh get services
-NAME   LABELS   SELECTOR   IP   PORT
-
-$ ./cluster/kubectl.sh get replicationcontrollers
-NAME      IMAGE(S)   SELECTOR        REPLICAS
-myNginx   nginx      name=my-nginx   3
-```
-
-We did not start any services, hence there are none listed. But we see three replicas displayed properly.
-Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
-You can already play with scaling the replicas with:
-
-```sh
-$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
-$ ./cluster/kubectl.sh get pods
-NAME                                   IMAGE(S)   HOST                    LABELS         STATUS
-7813c8bd-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.2/10.245.2.2   name=myNginx   Running
-78140853-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.3/10.245.2.3   name=myNginx   Running
-```
-
-Congratulations!
-
-### Troubleshooting
-
-#### I keep downloading the same (large) box all the time!
-
-By default, the Vagrantfile will download the box from S3.
You can change this (and cache the box locally) by providing a name and an alternate URL when calling `kube-up.sh`:
-
-```sh
-export KUBERNETES_BOX_NAME=choose_your_own_name_for_your_kuber_box
-export KUBERNETES_BOX_URL=path_of_your_kuber_box
-export KUBERNETES_PROVIDER=vagrant
-./cluster/kube-up.sh
-```
-
-#### I just created the cluster, but I am getting authorization errors!
-
-You probably have an incorrect ~/.kubernetes_vagrant_auth file for the cluster you are attempting to contact.
-
-```sh
-rm ~/.kubernetes_vagrant_auth
-```
-
-After using kubectl.sh, make sure that the correct credentials are set:
-
-```sh
-cat ~/.kubernetes_vagrant_auth
-{
-  "User": "vagrant",
-  "Password": "vagrant"
-}
-```
-
-#### I just created the cluster, but I do not see my container running!
-
-If this is your first time creating the cluster, the kubelet on each minion schedules a number of docker pull requests to fetch prerequisite images. This can take some time and as a result may delay your initial pod getting provisioned.
-
-#### I want to make changes to Kubernetes code!
-
-To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md).
-
-#### I have brought Vagrant up but the nodes cannot validate!
-
-Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
-
-#### I want to change the number of nodes!
-
-You can control the number of nodes that are instantiated via the environment variable `NUM_MINIONS` on your host machine. If you plan to work with replicas, we strongly encourage you to work with enough nodes to satisfy your largest intended replica size. If you do not plan to work with replicas, you can save some system resources by running with a single minion. You do this by setting `NUM_MINIONS` to 1 like so:
-
-```sh
-export NUM_MINIONS=1
-```
-
-#### I want my VMs to have more memory!
-
-You can control the memory allotted to virtual machines with the `KUBERNETES_MEMORY` environment variable.
-Just set it to the number of megabytes you would like the machines to have. For example:
-
-```sh
-export KUBERNETES_MEMORY=2048
-```
-
-If you need more granular control, you can set the amount of memory for the master and nodes independently. For example:
-
-```sh
-export KUBERNETES_MASTER_MEMORY=1536
-export KUBERNETES_MINION_MEMORY=2048
-```
-
-#### I ran vagrant suspend and nothing works!
-```vagrant suspend``` seems to mess up the network. This is not supported at this time.
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vagrant.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/vagrant.md?pixel)]()
diff --git a/release-0.20.0/docs/getting-started-guides/vsphere.md b/release-0.20.0/docs/getting-started-guides/vsphere.md
deleted file mode 100644
index 7fb8f07ee43..00000000000
--- a/release-0.20.0/docs/getting-started-guides/vsphere.md
+++ /dev/null
@@ -1,94 +0,0 @@
-Getting started with vSphere
--------------------------------
-
-The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
-
-**Table of Contents**
-
-- [Prerequisites](#prerequisites)
-- [Setup](#setup)
-- [Starting a cluster](#starting-a-cluster)
-- [Extra: debugging deployment failure](#extra-debugging-deployment-failure)
-
-### Prerequisites
-
-1. You need administrator credentials to an ESXi machine or vCenter instance.
-2. You must have Go (version 1.2 or later) installed: [www.golang.org](http://www.golang.org).
-3. You must have your `GOPATH` set up and include `$GOPATH/bin` in your `PATH`.
-
-   ```sh
-   export GOPATH=$HOME/src/go
-   mkdir -p $GOPATH
-   export PATH=$PATH:$GOPATH/bin
-   ```
-
-4. Install the govc tool to interact with ESXi/vCenter:
-
-   ```sh
-   go get github.com/vmware/govmomi/govc
-   ```
-
-5. Get or build a [binary release](binary_release.md).
-
-### Setup
-
-Download a prebuilt Debian 7.7 VMDK that we'll use as a base image:
-
-```sh
-curl --remote-name-all https://storage.googleapis.com/govmomi/vmdk/2014-11-11/kube.vmdk.gz{,.md5}
-md5sum -c kube.vmdk.gz.md5
-gzip -d kube.vmdk.gz
-```
-
-Import this VMDK into your vSphere datastore:
-
-```sh
-export GOVC_URL='user:pass@hostname'
-export GOVC_INSECURE=1 # If the host above uses a self-signed cert
-export GOVC_DATASTORE='target datastore'
-export GOVC_RESOURCE_POOL='resource pool or cluster with access to datastore'
-
-govc import.vmdk kube.vmdk ./kube/
-```
-
-Verify that the VMDK was correctly uploaded and expanded to ~3GiB:
-
-```sh
-govc datastore.ls ./kube/
-```
-
-Take a look at the file `cluster/vsphere/config-common.sh` and fill in the required parameters. The guest login for the image that you imported is `kube:kube`.
-
-### Starting a cluster
-
-Now, let's continue with deploying Kubernetes.
-This process takes about 10 minutes.
-
-```sh
-cd kubernetes # Extracted binary release OR repository root
-export KUBERNETES_PROVIDER=vsphere
-cluster/kube-up.sh
-```
-
-Refer to the top-level README and the getting started guide for Google Compute Engine. Once you have successfully reached this point, your vSphere Kubernetes deployment works just as any other one!
-
-**Enjoy!**
-
-### Extra: debugging deployment failure
-
-The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You can log into any VM as the `kube` user to poke around and figure out what is going on (your SSH key authorizes you automatically; otherwise, use the password `kube`).
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vsphere.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/getting-started-guides/vsphere.md?pixel)]()
diff --git a/release-0.20.0/docs/glossary.md b/release-0.20.0/docs/glossary.md
deleted file mode 100644
index fc470a92220..00000000000
--- a/release-0.20.0/docs/glossary.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-# Glossary and Concept Index
-
-**Authorization**
-: Kubernetes does not currently have an authorization system. Anyone with the cluster password can do anything. We plan to add sophisticated authorization, and to make it pluggable. See the [access control design doc](./design/access.md) and [this issue](https://github.com/GoogleCloudPlatform/kubernetes/issues/1430).
-
-**Annotation**
-: A key/value pair that can hold data that is large (compared to a Label) and possibly not human-readable. Intended to store non-identifying metadata associated with an object, such as provenance information. Not indexed.
-
-**Image**
-: A [Docker Image](https://docs.docker.com/userguide/dockerimages/). See [images](./images.md).
-
-**Label**
-: A key/value pair conveying user-defined identifying attributes of an object, and used to form sets of related objects, such as pods which are replicas in a load-balanced service. Not intended to hold large or non-human-readable data. See [labels](./labels.md).
-
-**Name**
-: A user-provided name for an object. See [identifiers](identifiers.md).
-
-**Namespace**
-: A namespace is like a prefix to the name of an object. You can configure your client to use a particular namespace, so you do not have to type it all the time. Namespaces allow multiple projects to coexist in the same cluster and prevent naming collisions between unrelated teams.
-
-**Pod**
-: A collection of containers which will be scheduled onto the same node, which share an IP and port space, and which can be created/destroyed together. See [pods](./pods.md).
-
-**Replication Controller**
-: A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. This both allows for easy scaling of replicated systems and handles restarting of a Pod when the machine it is on reboots or otherwise fails.
-
-**Resource**
-: CPU, memory, and other things that a pod can request. See [resources](resources.md).
-
-**Secret**
-: An object containing sensitive information, such as authentication tokens, which can be made available to containers upon request. See [secrets](secrets.md).
-
-**Selector**
-: An expression that matches Labels. Can identify related objects, such as pods which are replicas in a load-balanced service. See [labels](labels.md).
-
-**Service**
-: A load-balanced set of `pods` which can be accessed via a single stable IP address. See [services](./services.md).
-
-**UID**
-: An identifier on all Kubernetes objects that is set by the Kubernetes API server. Can be used to distinguish between historical occurrences of same-Name objects. See [identifiers](identifiers.md).
-
-**Volume**
-: A directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes Volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the Volume directory and/or device. See [volumes](volumes.md).
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/glossary.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/glossary.md?pixel)]()
diff --git a/release-0.20.0/docs/identifiers.md b/release-0.20.0/docs/identifiers.md
deleted file mode 100644
index 0e52b44c40e..00000000000
--- a/release-0.20.0/docs/identifiers.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Identifiers
-All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
-
-For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
-
-## Names
-Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
-
-## UIDs
-UIDs are generated by Kubernetes.
Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/identifiers.md?pixel)]()
-
-
-[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/release-0.20.0/docs/identifiers.md?pixel)]()
diff --git a/release-0.20.0/docs/images.md b/release-0.20.0/docs/images.md
deleted file mode 100644
index 3f1e1cb28b6..00000000000
--- a/release-0.20.0/docs/images.md
+++ /dev/null
@@ -1,159 +0,0 @@
-# Images
-Each container in a pod has its own image. Currently, the only type of image supported is a [Docker Image](https://docs.docker.com/userguide/dockerimages/).
-
-You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
-
-The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.
-
-## Updating Images
-
-The default pull policy is `PullIfNotPresent`, which causes the Kubelet to not pull an image if it already exists. If you would like to always force a pull, you must set a pull image policy of `PullAlways` or specify a `:latest` tag on your image.
-
-## Using a Private Registry
-Private registries may require keys to read images from them.
-Credentials can be provided in several ways:
-  - Using Google Container Registry
-    - Per-cluster
-    - automatically configured on GCE/GKE
-    - all pods can read the project's private registry
-  - Configuring Nodes to Authenticate to a Private Registry
-    - all pods can read any configured private registries
-    - requires node configuration by cluster administrator
-  - Pre-pulling Images
-    - all pods can use any images cached on a node
-    - requires root access to all nodes to set up
-  - Specifying ImagePullSecrets on a Pod
-    - only pods which provide their own keys can access the private registry
-Each option is described in more detail below.
-
-
-### Using Google Container Registry
-
-Kubernetes has native support for the [Google Container Registry (GCR)](https://cloud.google.com/tools/container-registry/) when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Container Engine (GKE), simply use the full image name (e.g. gcr.io/my_project/image:tag).
-
-All pods in a cluster will have read access to images in this registry.
-
-The kubelet will authenticate to GCR using the instance's Google service account. The service account on the instance will have the `https://www.googleapis.com/auth/devstorage.read_only` scope, so it can pull from the project's GCR, but not push.
-
-### Configuring Nodes to Authenticate to a Private Registry
-Docker stores keys for private registries in a `.dockercfg` file. Create a config file by running
-`docker login .` and then copy the resulting `.dockercfg` file to the root user's
-`$HOME` directory (e.g. `/root/.dockercfg`) on each node in the cluster.
-
-You must ensure all nodes in the cluster have the same `.dockercfg`. Otherwise, pods will run on some nodes and fail to run on others. For example, if you use node autoscaling, then each instance template needs to include the `.dockercfg` or mount a drive that contains it.
-
-All pods will have read access to images in any private registry with keys in the `.dockercfg`.
-
-### Pre-pulling Images
-
-By default, the kubelet will try to pull each image from the specified registry.
-However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`, -then a local image is used (preferentially or exclusively, respectively). - -If you want to rely on pre-pulled images as a substitute for registry authentication, -you must ensure all nodes in the cluster have the same pre-pulled images. - -This can be used to preload certain images for speed or as an alternative to authenticating to a private registry. - -All pods will have read access to any pre-pulled images. - -### Specifying ImagePullSecrets on a Pod -Kubernetes supports specifying registry keys on a pod. - -First, create a `.dockercfg`, such as running `docker login `. -Then put the resulting `.dockercfg` file into a [secret resource](../docs/secret.md). For example: -``` -cat > dockercfg < secret.json <