Convert gcutil to gcloud compute
@@ -40,11 +40,12 @@ We set up this bridge on each node with SaltStack, in [container_bridge.py](clus

We make these addresses routable in GCE:

-gcutil addroute ${MINION_NAMES[$i]} ${MINION_IP_RANGES[$i]} \
-  --norespect_terminal_width \
-  --project ${PROJECT} \
-  --network ${NETWORK} \
-  --next_hop_instance ${ZONE}/instances/${MINION_NAMES[$i]} &
+gcloud compute routes add "${MINION_NAMES[$i]}" \
+  --project "${PROJECT}" \
+  --destination-range "${MINION_IP_RANGES[$i]}" \
+  --network "${NETWORK}" \
+  --next-hop-instance "${MINION_NAMES[$i]}" \
+  --next-hop-instance-zone "${ZONE}" &

The minion IP ranges are /24s in the 10-dot space.
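
Once a route has been added, it can be read back to confirm the destination range and next hop landed as intended. A minimal sketch, assuming `PROJECT` and `MINION_NAMES` are set as in the snippet above:

```sh
# Sketch: read back the route created for the first minion and check its
# destination range and next hop (assumes PROJECT and MINION_NAMES are set).
gcloud compute routes describe "${MINION_NAMES[0]}" --project "${PROJECT}"
```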
@@ -9,14 +9,14 @@ The example below creates a Kubernetes cluster with 4 worker node Virtual Machin

2. Make sure you can start up a GCE VM. At least make sure you can do the [Create an instance](https://developers.google.com/compute/docs/quickstart#addvm) part of the GCE Quickstart.
3. Make sure you can ssh into the VM without interactive prompts.
   * Your GCE SSH key must either have no passphrase or you need to be using `ssh-agent`.
-   * Ensure the GCE firewall isn't blocking port 22 to your VMs. By default, this should work, but if you have edited firewall rules or created a new non-default network, you'll need to expose it: `gcutil addfirewall --network=<network-name> --description "SSH allowed from anywhere" --allowed=tcp:22 default-ssh`
+   * Ensure the GCE firewall isn't blocking port 22 to your VMs. By default, this should work, but if you have edited firewall rules or created a new non-default network, you'll need to expose it: `gcloud compute firewall-rules create --network=<network-name> --description "SSH allowed from anywhere" --allow tcp:22 default-ssh`
4. You need to have the Google Cloud Storage API, and the Google Cloud Storage JSON API enabled. This can be done in the Google Cloud Console.
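
If it is unclear whether port 22 is already open, the existing rules can be inspected before creating a new one. A minimal sketch, assuming the `default-ssh` rule name used in step 3 above:

```sh
# Sketch: list current firewall rules, then inspect the SSH rule by name
# (default-ssh is the name used in the prerequisite step above).
gcloud compute firewall-rules list
gcloud compute firewall-rules describe default-ssh
```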
### Prerequisites for your workstation

1. Be running Linux or Mac OS X.
-2. You must have the [Google Cloud SDK](https://developers.google.com/cloud/sdk/) installed. This will get you `gcloud`, `gcutil` and `gsutil`.
+2. You must have the [Google Cloud SDK](https://developers.google.com/cloud/sdk/) installed. This will get you `gcloud` and `gsutil`.
3. Ensure that your `gcloud` components are up-to-date by running `gcloud components update`.
4. If you want to build your own release, you need to have [Docker
   installed](https://docs.docker.com/installation/). On Mac OS X you can use
@@ -59,10 +59,15 @@ Before you can use a GCE PD with a pod, you need to create it and format it.
__We are actively working on making this more streamlined.__

```sh
-gcutil adddisk --size_gb=<size> --zone=<zone> <name>
-gcutil attachdisk --disk <name> kubernetes-master
-gcutil ssh kubernetes-master sudo /usr/share/google/safe_format_and_mount /dev/disk/by-id/google-test2 /mnt/tmp
-gcutil detachdisk --device_name google-<name> kubernetes-master
+DISK_NAME=my-data-disk
+DISK_SIZE=500GB
+ZONE=us-central1-a
+
+gcloud compute disks create --size=$DISK_SIZE --zone=$ZONE $DISK_NAME
+gcloud compute instances attach-disk --zone=$ZONE --disk=$DISK_NAME --device-name temp-data kubernetes-master
+gcloud compute ssh --zone=$ZONE kubernetes-master \
+  --command "sudo /usr/share/google/safe_format_and_mount /dev/disk/by-id/google-temp-data /mnt/tmp"
+gcloud compute instances detach-disk --zone=$ZONE --disk $DISK_NAME kubernetes-master
```
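
Before pointing a pod at the disk, it can help to confirm that the disk exists and is no longer attached to the master. A minimal sketch, reusing `DISK_NAME` and `ZONE` from the block above:

```sh
# Sketch: confirm the disk exists in the zone, then show the master instance;
# the detached data disk should no longer appear among its attached disks.
gcloud compute disks describe $DISK_NAME --zone=$ZONE
gcloud compute instances describe kubernetes-master --zone=$ZONE
```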
#### GCE PD Example configuration: