Merge pull request #6618 from roberthbailey/no-nginx
Salt reconfiguration to get rid of nginx on GCE
@@ -20,18 +20,20 @@ HTTP on 3 ports:
   - only GET requests are allowed.
   - requests are rate limited
 3. Secure Port
-  - default is port 6443, change with `-secure_port`
+  - default is port 443, change with `-secure_port`
   - default IP is first non-localhost network interface, change with `-public_address_override`
   - serves HTTPS. Set cert with `-tls_cert_file` and key with `-tls_private_key_file`.
-  - uses token-file based [authentication](./authentication.md).
+  - uses token-file or client-certificate based [authentication](./authentication.md).
   - uses policy-based [authorization](./authorization.md).
 
 ## Proxies and Firewall rules
 
-Additionally, in typical configurations (i.e. GCE), there is a proxy (nginx) running
+Additionally, in some configurations there is a proxy (nginx) running
 on the same machine as the apiserver process. The proxy serves HTTPS protected
-by Basic Auth on port 443, and proxies to the apiserver on localhost:8080.
-Typically, firewall rules will allow HTTPS access to port 443.
+by Basic Auth on port 443, and proxies to the apiserver on localhost:8080. In
+these configurations the secure port is typically set to 6443.
+
+A firewall rule is typically configured to allow external HTTPS access to port 443.
+
+The above are defaults and reflect how Kubernetes is deployed to GCE using
+kube-up.sh. Other cloud providers may vary.
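As a sketch of what the two setups above look like from a client, the following builds (but does not execute) example requests. This is a dry run, not from the change itself: the address, credentials, and API path are placeholders.

```shell
# Dry run: build example client commands for the two configurations above
# instead of executing them. MASTER_IP and the credentials are placeholders.
MASTER_IP="203.0.113.10"

# Direct: the apiserver's Secure Port (default 443 after this change), token auth.
DIRECT="curl --cacert ca.crt -H 'Authorization: Bearer \$TOKEN' https://$MASTER_IP:443/api/v1beta1/pods"

# Proxied: nginx terminates TLS with Basic Auth on 443 and forwards to
# localhost:8080; the apiserver's own secure port moves to 6443.
PROXIED="curl --cacert ca.crt -u admin:\$PASSWORD https://$MASTER_IP:443/api/v1beta1/pods"

echo "$DIRECT"
echo "$PROXIED"
```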
@@ -42,15 +44,15 @@ There are three differently configured serving ports because there are a
 variety of use cases:
 1. Clients outside of a Kubernetes cluster, such as a human running `kubectl`
    on a desktop machine. Currently, accesses the Localhost Port via a proxy (nginx)
-   running on the `kubernetes-master` machine. Proxy uses Basic Auth.
+   running on the `kubernetes-master` machine. Proxy uses bearer token authentication.
 2. Processes running in Containers on Kubernetes that need to read from
    the apiserver. Currently, these can use the Readonly Port.
 3. Scheduler and Controller-manager processes, which need to do read-write
-   API operations. Currently, these have to run on the
+   operations on the apiserver. Currently, these have to run on the same
    host as the apiserver and use the Localhost Port.
 4. Kubelets, which need to do read-write API operations and are necessarily
    on different machines than the apiserver. Kubelet uses the Secure Port
    to get their pods, to find the services that a pod can see, and to
    write events. Credentials are distributed to kubelets at cluster
    setup time.
@@ -59,13 +61,14 @@ variety of use cases:
 - Policy will limit the actions kubelets can do via the authed port.
 - Kube-proxy currently uses the readonly port to read services and endpoints,
   but will eventually use the auth port.
-- Kubelets may change from token-based authentication to cert-based-auth.
+- Kubelets will change from token-based authentication to cert-based-auth.
 - Scheduler and Controller-manager will use the Secure Port too. They
   will then be able to run on different machines than the apiserver.
 - A general mechanism will be provided for [giving credentials to
   pods](https://github.com/GoogleCloudPlatform/kubernetes/issues/1907).
-- The Readonly Port will no longer be needed and will be removed.
+- The Readonly Port will no longer be needed and [will be removed](
+  https://github.com/GoogleCloudPlatform/kubernetes/issues/5921).
 - Clients, like kubectl, will all support token-based auth, and the
   Localhost will no longer be needed, and will not be the default.
   However, the localhost port may continue to be an option for
@@ -2,40 +2,40 @@
 
 Client access to a running Kubernetes cluster can be shared by copying
 the `kubectl` client config bundle ([.kubeconfig](kubeconfig-file.md)).
-This config bundle lives in `$HOME/.kube/.kubeconfig`, and is generated
-by `cluster/kube-up.sh`. Sample steps for sharing `.kubeconfig` below.
+This config bundle lives in `$HOME/.kube/config`, and is generated
+by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
 
 **1. Create a cluster**
 ```bash
 cluster/kube-up.sh
 ```
-**2. Copy .kubeconfig to new host**
+**2. Copy `kubeconfig` to new host**
 ```bash
-scp $HOME/.kube/.kubeconfig user@remotehost:/path/to/.kubeconfig
+scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
 ```
 
-**3. On new host, make copied `.kubeconfig` available to `kubectl`**
+**3. On new host, make copied `config` available to `kubectl`**
 
 * Option A: copy to default location
 ```bash
-mv /path/to/.kubeconfig $HOME/.kube/.kubeconfig
+mv /path/to/.kube/config $HOME/.kube/config
 ```
 * Option B: copy to working directory (from which kubectl is run)
 ```bash
-mv /path/to/.kubeconfig $PWD
+mv /path/to/.kube/config $PWD
 ```
-* Option C: manually pass `.kubeconfig` location to `.kubectl`
+* Option C: manually pass `kubeconfig` location to `kubectl`
 ```bash
 # via environment variable
-export KUBECONFIG=/path/to/.kubeconfig
+export KUBECONFIG=/path/to/.kube/config
 
 # via commandline flag
-kubectl ... --kubeconfig=/path/to/.kubeconfig
+kubectl ... --kubeconfig=/path/to/.kube/config
 ```
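The options above can be rehearsed without touching a real cluster config. A minimal sketch of Option A, using a scratch directory in place of `$HOME` (all paths and file contents here are made up):

```shell
# Dry run of Option A using a scratch directory in place of $HOME, so nothing
# real is touched; "fake-config" stands in for the copied kubeconfig contents.
scratch=$(mktemp -d)
mkdir -p "$scratch/copied" "$scratch/home/.kube"
echo "fake-config" > "$scratch/copied/config"      # stand-in for the scp'd file

# Option A: move the copied file to the default location kubectl checks.
mv "$scratch/copied/config" "$scratch/home/.kube/config"
cat "$scratch/home/.kube/config"                   # prints: fake-config
```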
 
-## Manually Generating `.kubeconfig`
+## Manually Generating `kubeconfig`
 
-`.kubeconfig` is generated by `kube-up` but you can generate your own
+`kubeconfig` is generated by `kube-up` but you can generate your own
 using (any desired subset of) the following commands.
 
 ```bash
@@ -46,15 +46,15 @@ kubectl config set-cluster $CLUSTER_NICK
   --embed-certs=true \
 # Or if tls not needed, replace --certificate-authority and --embed-certs with
   --insecure-skip-tls-verify=true
-  --kubeconfig=/path/to/standalone/.kubeconfig
+  --kubeconfig=/path/to/standalone/.kube/config
 
 # create user entry
 kubectl config set-credentials $USER_NICK
-# basic auth credentials, generated on kube master
+# bearer token credentials, generated on kube master
+  --token=$token \
 # use either username|password or token, not both
   --username=$username \
   --password=$password \
-# use either username|password or token, not both
-  --token=$token \
   --client-certificate=/path/to/crt_file \
   --client-key=/path/to/key_file \
   --embed-certs=true
@@ -65,42 +65,42 @@ kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USE
 ```
 Notes:
 * The `--embed-certs` flag is needed to generate a standalone
-  `.kubeconfig`, that will work as-is on another host.
+  `kubeconfig` that will work as-is on another host.
 * `--kubeconfig` is both the preferred file to load config from and the file to
   save config to. In the above commands the `--kubeconfig` file could be
   omitted if you first run
 ```bash
-export KUBECONFIG=/path/to/standalone/.kubeconfig
+export KUBECONFIG=/path/to/standalone/.kube/config
 ```
 * The ca_file, key_file, and cert_file referenced above are generated on the
   kube master at cluster turnup. They can be found on the master under
-  `/srv/kubernetes`. Basic auth/token are also generated on the kube master.
+  `/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.
 
-For more details on `.kubeconfig` see [kubeconfig-file.md](kubeconfig-file.md),
+For more details on `kubeconfig` see [kubeconfig-file.md](kubeconfig-file.md),
 and/or run `kubectl config -h`.
 
-## Merging `.kubeconfig` Example
+## Merging `kubeconfig` Example
 
 `kubectl` loads and merges config from the following locations (in order):
 
-1. `--kubeconfig=path/to/kubeconfig` commandline flag
-2. `KUBECONFIG=path/to/kubeconfig` env variable
+1. `--kubeconfig=path/to/.kube/config` commandline flag
+2. `KUBECONFIG=path/to/.kube/config` env variable
 3. `$PWD/.kubeconfig`
-4. `$HOME/.kube/.kubeconfig`
+4. `$HOME/.kube/config`
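A shell sketch of that precedence (illustrative only: this mirrors the ordering above for a single file, it is not kubectl's actual lookup code):

```shell
# Illustrative sketch of the precedence above; not kubectl's real lookup code.
# The highest-priority source that is present wins.
pick_kubeconfig() {
  flag="$1"                                       # value of --kubeconfig, if given
  if [ -n "$flag" ]; then echo "$flag"
  elif [ -n "$KUBECONFIG" ]; then echo "$KUBECONFIG"
  elif [ -f "$PWD/.kubeconfig" ]; then echo "$PWD/.kubeconfig"
  else echo "$HOME/.kube/config"
  fi
}

KUBECONFIG=/tmp/other.config
pick_kubeconfig                    # prints /tmp/other.config (env var wins)
pick_kubeconfig /tmp/flag.config   # prints /tmp/flag.config (flag wins)
```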
 
 If you create clusters A, B on host1, and clusters C, D on host2, you can
 make all four clusters available on both hosts by running
 
 ```bash
 # on host2, copy host1's default kubeconfig, and merge it from env
-scp host1:/path/to/home1/.kube/.kubeconfig path/to/other/.kubeconfig
+scp host1:/path/to/home1/.kube/config path/to/other/.kube/config
 
-export $KUBECONFIG=path/to/other/.kubeconfig
+export KUBECONFIG=path/to/other/.kube/config
 
 # on host1, copy host2's default kubeconfig and merge it from env
-scp host2:/path/to/home2/.kube/.kubeconfig path/to/other/.kubeconfig
+scp host2:/path/to/home2/.kube/config path/to/other/.kube/config
 
-export $KUBECONFIG=path/to/other/.kubeconfig
+export KUBECONFIG=path/to/other/.kube/config
 ```
-Detailed examples and explanation of `.kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md).
+Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md).
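The two-host flow above can also be rehearsed locally. A sketch with `cp` standing in for `scp` and scratch directories standing in for the two hosts' home directories (all names and file contents invented):

```shell
# Local dry run of the sharing flow above: cp stands in for scp, and scratch
# directories stand in for the two hosts' home directories.
work=$(mktemp -d)
mkdir -p "$work/home1/.kube" "$work/other/.kube"
echo "clusters-A-B" > "$work/home1/.kube/config"   # host1's default kubeconfig

# "on host2": fetch host1's config and point KUBECONFIG at the copy
cp "$work/home1/.kube/config" "$work/other/.kube/config"
export KUBECONFIG="$work/other/.kube/config"
cat "$KUBECONFIG"                                  # prints: clusters-A-B
```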