Removes "name" labels, improves document flow, adds highlighting.

Elson Rodriguez 2015-10-07 19:26:58 -07:00
parent 93908b9b14
commit 0bb17fe6df
6 changed files with 56 additions and 39 deletions


@ -43,12 +43,12 @@ Your cluster must have 4 CPU and 6 GB of RAM to complete the example up to the s
### Deploy Selenium Grid Hub:
We will be using Selenium Grid Hub to make our Selenium install scalable via a master/worker model. The Selenium Hub is the master, and the Selenium Nodes are the workers (not to be confused with Kubernetes nodes). We only need one hub, but we're using a replication controller to ensure that the hub is always running:
```console
kubectl create --filename=selenium-hub-rc.yaml
```
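If you'd like to confirm the hub pod came up before wiring in its service, one quick check (assuming the `app=selenium-hub` label set in `selenium-hub-rc.yaml`) is to list pods by that label:
```console
kubectl get pods --selector="app=selenium-hub"
```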
The Selenium Nodes will need to know how to get to the Hub, so let's create a service for the nodes to connect to.
```console
kubectl create --filename=selenium-hub-svc.yaml
```
@ -56,9 +56,9 @@ kubectl create --filename=selenium-hub-svc.yaml
Let's verify our deployment of the Selenium Hub by connecting to the web console.
#### Kubernetes Nodes Reachable
If your Kubernetes nodes are reachable from your network, you can verify the hub by hitting it on the nodeport. You can retrieve the nodeport by typing `kubectl describe svc selenium-hub`; however, the snippet below automates that by using kubectl's template functionality:
```console
export NODEPORT=`kubectl get svc --selector='app=selenium-hub' --output=template --template="{{ with index .items 0}}{{with index .spec.ports 0 }}{{.nodePort}}{{end}}{{end}}"`
export NODE=`kubectl get nodes --output=template --template="{{with index .items 0 }}{{.metadata.name}}{{end}}"`
curl http://$NODE:$NODEPORT
```
@ -66,39 +66,40 @@ curl http://$NODE:$NODEPORT
#### Kubernetes Nodes Unreachable
If you cannot reach your Kubernetes nodes from your network, you can proxy via kubectl.
```console
export PODNAME=`kubectl get pods --selector="app=selenium-hub" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
kubectl port-forward --pod=$PODNAME 4444:4444
```
In a separate terminal, you can now check the status.
```console
curl http://localhost:4444
```
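If you'd rather have a machine-readable check than the HTML console, the stock Selenium Grid 2 hub image also serves a small JSON status endpoint; assuming that image, you can hit it through the same port-forward:
```console
curl http://localhost:4444/grid/api/hub
```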
#### Using Google Container Engine
If you are using Google Container Engine, you can expose your hub to the internet. This is a bad idea for many reasons, but you can do it as follows:
```console
kubectl expose rc selenium-hub --name=selenium-hub-external --labels="app=selenium-hub,external=true" --create-external-load-balancer=true
```
Then wait a few minutes; eventually your new `selenium-hub-external` service will be assigned a load-balanced IP from gcloud. Once `kubectl get svc selenium-hub-external` shows two IPs, run this snippet.
```console
export INTERNET_IP=`kubectl get svc --selector="app=selenium-hub,external=true" --output=template --template="{{with index .items 0}}{{with index .status.loadBalancer.ingress 0}}{{.ip}}{{end}}{{end}}"`
curl http://$INTERNET_IP:4444/
```
You should now be able to hit `$INTERNET_IP` via your web browser, and so can everyone else on the Internet!
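Since leaving the hub open to the whole internet is risky, you may want to tear the external service back down once you're done experimenting:
```console
kubectl delete svc selenium-hub-external
```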
### Deploy Firefox and Chrome Nodes:
Now that the Hub is up, we can deploy workers.
This will deploy 2 Chrome nodes.
```console
kubectl create -f selenium-node-chrome-rc.yaml
```
And 2 Firefox nodes to match.
```console
kubectl create -f selenium-node-firefox-rc.yaml
```
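To confirm the worker pods started and can register with the hub, you can list them by the labels their replication controllers apply (assuming the `app=selenium-node-chrome` and `app=selenium-node-firefox` labels from the yaml files):
```console
kubectl get pods --selector="app=selenium-node-chrome"
kubectl get pods --selector="app=selenium-node-firefox"
```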
@ -109,24 +110,24 @@ Let's run a quick Selenium job to validate our setup.
#### Setup Python Environment
First, we need to start a Python container that we can attach to.
```console
kubectl run selenium-python --image=google/python-hello
```
Next, we need to get inside this container.
```console
export PODNAME=`kubectl get pods --selector="run=selenium-python" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
kubectl exec --stdin=true --tty=true $PODNAME bash
```
Once inside, we need to install the Selenium library.
```console
pip install selenium
```
#### Run Selenium Job with Python
We're all set up; start the Python interpreter.
```console
python
```
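Between the hub service and the browser nodes, a remote WebDriver session is all that's left. As a rough sketch of what you can type at this interpreter (assuming cluster DNS resolves the `selenium-hub` service name; substitute the service's cluster IP otherwise):
```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Point a remote WebDriver at the hub; the hub hands the session
# to one of the registered Chrome or Firefox nodes.
driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME)

driver.get("http://www.google.com")
print(driver.title)  # title fetched by the remote browser
driver.quit()        # release the node back to the grid
```
Swap in `DesiredCapabilities.FIREFOX` to target the Firefox nodes instead.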
@ -162,7 +163,7 @@ Congratulations, your Selenium Hub is up, with Firefox and Chrome nodes!
### Scale your Firefox and Chrome nodes.
If you need more Firefox or Chrome nodes, your hardware is the limit:
```console
kubectl scale rc selenium-node-firefox --replicas=10
kubectl scale rc selenium-node-chrome --replicas=10
```
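You can watch the extra node pods come up as the replication controllers scale (same `app` labels as before):
```console
kubectl get pods --selector="app=selenium-node-chrome" --watch
```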
@ -170,13 +171,13 @@ kubectl scale rc selenium-node-chrome --replicas=10
You now have 10 Firefox and 10 Chrome nodes. Happy Seleniuming!
### Debugging
Sometimes it is necessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.
```console
kubectl port-forward --pod=POD_NAME 5900:5900
```
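If you don't have a pod name handy, the same template trick from earlier will grab one; for example, to pick the first Firefox node pod (assuming the `app=selenium-node-firefox` label):
```console
export POD_NAME=`kubectl get pods --selector="app=selenium-node-firefox" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
kubectl port-forward --pod=$POD_NAME 5900:5900
```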
Then connect to localhost:5900 with your VNC client using the password "secret".
Enjoy your scalable Selenium Grid!
@ -186,7 +187,7 @@ Adapted from: https://github.com/SeleniumHQ/docker-selenium
To remove all created resources, run the following:
```console
kubectl delete rc selenium-hub
kubectl delete rc selenium-node-chrome
kubectl delete rc selenium-node-firefox
```

selenium-hub-rc.yaml

@ -3,15 +3,15 @@ kind: ReplicationController
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  replicas: 1
  selector:
    app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
      - name: selenium-hub

selenium-hub-svc.yaml

@ -1,15 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: port0
  selector:
    app: selenium-hub
  type: NodePort
  sessionAffinity: None

selenium-node-chrome-rc.yaml

@ -3,15 +3,15 @@ kind: ReplicationController
metadata:
  name: selenium-node-chrome
  labels:
    app: selenium-node-chrome
spec:
  replicas: 2
  selector:
    app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      containers:
      - name: selenium-node-chrome

selenium-node-firefox-rc.yaml

@ -3,15 +3,15 @@ kind: ReplicationController
metadata:
  name: selenium-node-firefox
  labels:
    app: selenium-node-firefox
spec:
  replicas: 2
  selector:
    app: selenium-node-firefox
  template:
    metadata:
      labels:
        app: selenium-node-firefox
    spec:
      containers:
      - name: selenium-node-firefox


@ -1,3 +1,19 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities