Fix capitalization of Kubernetes in the documentation.
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # Configuring APIserver ports
 
-This document describes what ports the kubernetes apiserver
+This document describes what ports the Kubernetes apiserver
 may serve on and how to reach them. The audience is
 cluster administrators who want to customize their cluster
 or understand the details.
@@ -44,7 +44,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md).
 
 ## Ports and IPs Served On
 
-The Kubernetes API is served by the Kubernetes APIServer process. Typically,
+The Kubernetes API is served by the Kubernetes apiserver process. Typically,
 there is one of these running on a single kubernetes-master node.
 
 By default the Kubernetes APIserver serves HTTP on 2 ports:
@@ -69,7 +69,7 @@ with a value of `Basic BASE64ENCODEDUSER:PASSWORD`.
 We plan for the Kubernetes API server to issue tokens
 after the user has been (re)authenticated by a *bedrock* authentication
 provider external to Kubernetes. We plan to make it easy to develop modules
-that interface between kubernetes and a bedrock authentication provider (e.g.
+that interface between Kubernetes and a bedrock authentication provider (e.g.
 github.com, google.com, enterprise directory, kerberos, etc.)
 
 
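The hunk header above quotes the surrounding doc line about sending an `Authorization` header with a value of `Basic BASE64ENCODEDUSER:PASSWORD`. As a small illustrative sketch, the value is just the base64 encoding of `user:password` prefixed with `Basic ` (the credentials below are placeholders, not values from the documentation):

```python
import base64

# Hypothetical credentials, purely for illustration.
user, password = "admin", "secret"

# The header value has the form `Basic BASE64ENCODEDUSER:PASSWORD`,
# i.e. base64 of "user:password" prefixed with "Basic ".
encoded = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {encoded}"}

print(headers)  # {'Authorization': 'Basic YWRtaW46c2VjcmV0'}
```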
@@ -75,7 +75,7 @@ Root causes:
 - Network partition within cluster, or between cluster and users
 - Crashes in Kubernetes software
 - Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
-- Operator error, e.g. misconfigured kubernetes software or application software
+- Operator error, e.g. misconfigured Kubernetes software or application software
 
 Specific scenarios:
 - Apiserver VM shutdown or apiserver crashing
@@ -127,7 +127,7 @@ Mitigations:
 - Action: Snapshot apiserver PDs/EBS-volumes periodically
 - Mitigates: Apiserver backing storage lost
 - Mitigates: Some cases of operator error
-- Mitigates: Some cases of kubernetes software fault
+- Mitigates: Some cases of Kubernetes software fault
 
 - Action: use replication controller and services in front of pods
 - Mitigates: Node shutdown
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # DNS Integration with Kubernetes
 
-As of kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
+As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
 If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
 configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
 
@@ -42,7 +42,7 @@ assigned a DNS name. By default, a client Pod's DNS search list will
 include the Pod's own namespace and the cluster's default domain. This is best
 illustrated by example:
 
-Assume a Service named `foo` in the kubernetes namespace `bar`. A Pod running
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
 in namespace `bar` can look up this service by simply doing a DNS query for
 `foo`. A Pod running in namespace `quux` can look up this service by doing a
 DNS query for `foo.bar`.
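To illustrate the lookup described in this hunk, here is a minimal sketch of what a client in namespace `quux` might do. It assumes the code runs inside a cluster with the DNS add-on enabled; outside a cluster the name simply fails to resolve.

```python
import socket

# From a Pod in namespace `quux`, the Service `foo` in namespace `bar`
# is reachable under the name `foo.bar`; from namespace `bar` itself,
# plain `foo` would work because of the Pod's DNS search list.
try:
    ip = socket.gethostbyname("foo.bar")
    print(f"foo.bar resolved to {ip}")
except socket.gaierror:
    print("foo.bar did not resolve (expected outside a cluster with the DNS add-on)")
```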
@@ -53,14 +53,14 @@ supports forward lookups (A records) and service lookups (SRV records).
 ## How it Works
 
 The running DNS pod holds 3 containers - skydns, etcd (a private instance which skydns uses),
-and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process
-watches the kubernetes master for changes in Services, and then writes the
+and a Kubernetes-to-skydns bridge called kube2sky. The kube2sky process
+watches the Kubernetes master for changes in Services, and then writes the
 information to etcd, which skydns reads. This etcd instance is not linked to
-any other etcd clusters that might exist, including the kubernetes master.
+any other etcd clusters that might exist, including the Kubernetes master.
 
 ## Issues
 
-The skydns service is reachable directly from kubernetes nodes (outside
+The skydns service is reachable directly from Kubernetes nodes (outside
 of any container) and DNS resolution works if the skydns service is targeted
 explicitly. However, nodes are not configured to use the cluster DNS service or
 to search the cluster's DNS domain by default. This may be resolved at a later
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes API server validates and configures data
+The Kubernetes API server validates and configures data
 for the api objects which include pods, services, replicationcontrollers, and
 others. The API Server services REST operations and provides the frontend to the
 cluster's shared state through which all other components interact.
@@ -80,7 +80,7 @@ cluster's shared state through which all other components interact.
 --kubelet_port=0: Kubelet port
 --kubelet_timeout=0: Timeout for kubelet operations
 --long-running-request-regexp="(/|^)((watch|proxy)(/|$)|(logs|portforward|exec)/?$)": A regular expression matching long running requests which should be excluded from maximum inflight request handling.
---master-service-namespace="": The namespace from which the kubernetes master services should be injected into pods
+--master-service-namespace="": The namespace from which the Kubernetes master services should be injected into pods
 --max-requests-inflight=400: The maximum number of requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
 --min-request-timeout=1800: An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
 --old-etcd-prefix="": The previous prefix for all resource paths in etcd, if any.
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes controller manager is a daemon that embeds
+The Kubernetes controller manager is a daemon that embeds
 the core control loops shipped with Kubernetes. In applications of robotics and
 automation, a control loop is a non-terminating loop that regulates the state of
 the system. In Kubernetes, a controller is a control loop that watches the shared
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes network proxy runs on each node. This
+The Kubernetes network proxy runs on each node. This
 reflects services as defined in the Kubernetes API on each node and can do simple
 TCP,UDP stream forwarding or round robin TCP,UDP forwarding across a set of backends.
 Service cluster ips and ports are currently found through Docker-links-compatible
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes scheduler is a policy-rich, topology-aware,
+The Kubernetes scheduler is a policy-rich, topology-aware,
 workload-specific function that significantly impacts availability, performance,
 and capacity. The scheduler needs to take into account individual and collective
 resource requirements, quality of service requirements, hardware/software/policy
@@ -91,7 +91,7 @@ HTTP server: The kubelet can also listen for HTTP and respond to a simple API
 --kubeconfig=: Path to a kubeconfig file, specifying how to authenticate to API server (the master location is set by the api-servers flag).
 --low-diskspace-threshold-mb=0: The absolute free disk space, in MB, to maintain. When disk space falls below this threshold, new pods would be rejected. Default: 256
 --manifest-url="": URL for accessing the container manifest
---master-service-namespace="": The namespace from which the kubernetes master services should be injected into pods
+--master-service-namespace="": The namespace from which the Kubernetes master services should be injected into pods
 --max-pods=40: Number of Pods that can run on this Kubelet.
 --maximum-dead-containers=0: Maximum number of old instances of a containers to retain globally. Each container takes up some disk space. Default: 100.
 --maximum-dead-containers-per-container=0: Maximum number of old instances of a container to retain per container. Each container takes up some disk space. Default: 2.
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # Considerations for running multiple Kubernetes clusters
 
-You may want to set up multiple kubernetes clusters, both to
+You may want to set up multiple Kubernetes clusters, both to
 have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
 This document describes some of the issues to consider when making a decision about doing so.
 
@@ -67,7 +67,7 @@ Reasons to have multiple clusters include:
 
 ## Selecting the right number of clusters
 
-The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
+The selection of the number of Kubernetes clusters may be a relatively static choice, only revisited occasionally.
 By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to
 load and growth.
 
@@ -125,7 +125,7 @@ number of pods that can be scheduled onto the node.
 
 ### Node Info
 
-General information about the node, for instance kernel version, kubernetes version
+General information about the node, for instance kernel version, Kubernetes version
 (kubelet version, kube-proxy version), docker version (if used), OS name.
 The information is gathered by Kubelet from the node.
 
@@ -231,7 +231,7 @@ Normally, nodes register themselves and report their capacity when creating the
 you are doing [manual node administration](#manual-node-administration), then you need to set node
 capacity when adding a node.
 
-The kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
+The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
 checks that the sum of the limits of containers on the node is no greater than than the node capacity. It
 includes all containers started by kubelet, but not containers started directly by docker, nor
 processes not in containers.
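The capacity check described in this hunk amounts to simple arithmetic over container limits. A toy sketch with made-up numbers, not the scheduler's actual code path:

```python
# Hypothetical numbers, in millicores; the real scheduler reads these
# from container limits and the node's reported capacity.
node_capacity_millicpu = 2000
existing_limits_millicpu = [500, 750]   # containers already on the node
new_pod_limit_millicpu = 800

# The pod fits only if the sum of all limits stays within node capacity.
fits = sum(existing_limits_millicpu) + new_pod_limit_millicpu <= node_capacity_millicpu
print("pod fits" if fits else "pod does not fit")
```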
@@ -63,7 +63,7 @@ Neither contention nor changes to quota will affect already-running pods.
 
 ## Enabling Resource Quota
 
-Resource Quota support is enabled by default for many kubernetes distributions. It is
+Resource Quota support is enabled by default for many Kubernetes distributions. It is
 enabled when the apiserver `--admission_control=` flag has `ResourceQuota` as
 one of its arguments.
 
@@ -95,7 +95,7 @@ Key | Value
 ------------- | -------------
 `api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
 `cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
-`cloud` | (Optional) Which IaaS platform is used to host kubernetes, *gce*, *azure*, *aws*, *vagrant*
+`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
 `etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
 `hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
 `node_ip` | (Optional) The IP address to use to address this node
@@ -103,7 +103,7 @@ Key | Value
 `network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
 `networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
 `publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
-`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine.
+`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine.
 
 These keys may be leveraged by the Salt sls files to branch behavior.