diff --git a/examples/selenium/README.md b/examples/selenium/README.md
index a8b2615ad4c..66a6a3363df 100644
--- a/examples/selenium/README.md
+++ b/examples/selenium/README.md
@@ -43,12 +43,12 @@ Your cluster must have 4 CPU and 6 GB of RAM to complete the example up to the s
 
 ### Deploy Selenium Grid Hub:
 We will be using Selenium Grid Hub to make our Selenium install scalable via a master/worker model. The Selenium Hub is the master, and the Selenium Nodes are the workers(not to be confused with Kubernetes nodes). We only need one hub, but we're using a replication controller to ensure that the hub is always running:
-```
+```console
 kubectl create --filename=selenium-hub-rc.yaml
 ```
 
 The Selenium Nodes will need to know how to get to the Hub, let's create a service for the nodes to connect to.
-```
+```console
 kubectl create --filename=selenium-hub-svc.yaml
 ```
 
@@ -56,9 +56,9 @@ kubectl create --filename=selenium-hub-svc.yaml
 Let's verify our deployment of Selenium hub by connecting to the web console.
 
 #### Kubernetes Nodes Reachable
-If your Kubernetes nodes are reachable from your network, you can verify the hub by hitting it on the nodeport. You can retrieve the nodeport by typing `kubectl describe svc selenium-hub`, however the snippet below automates that:
-```
-export NODEPORT=`kubectl get svc --selector='name=selenium-hub' --output=template --template="{{ with index .items 0}}{{with index .spec.ports 0 }}{{.nodePort}}{{end}}{{end}}"`
+If your Kubernetes nodes are reachable from your network, you can verify the hub by hitting it on the nodeport. You can retrieve the nodeport by typing `kubectl describe svc selenium-hub`; however, the snippet below automates that using kubectl's template functionality:
+```console
+export NODEPORT=`kubectl get svc --selector='app=selenium-hub' --output=template --template="{{ with index .items 0}}{{with index .spec.ports 0 }}{{.nodePort}}{{end}}{{end}}"`
 export NODE=`kubectl get nodes --output=template --template="{{with index .items 0 }}{{.metadata.name}}{{end}}"`
 
 curl http://$NODE:$NODEPORT
@@ -66,39 +66,40 @@ curl http://$NODE:$NODEPORT
 
 #### Kubernetes Nodes Unreachable
 If you cannot reach your Kubernetes nodes from your network, you can proxy via kubectl.
-```
-export PODNAME=`kubectl get pods --selector="name=selenium-hub" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
+```console
+export PODNAME=`kubectl get pods --selector="app=selenium-hub" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
 kubectl port-forward --pod=$PODNAME 4444:4444
 ```
 
 In a seperate terminal, you can now check the status.
-```
+```console
 curl http://localhost:4444
 ```
 
 #### Using Google Container Engine
 If you are using Google Container Engine, you can expose your hub via the internet. This is a bad idea for many reasons, but you can do it as follows:
-```
-kubectl expose rc selenium-hub --name=selenium-hub-external --labels="name=selenium-hub-external,external=true" --create-external-load-balancer=true
+```console
+kubectl expose rc selenium-hub --name=selenium-hub-external --labels="app=selenium-hub,external=true" --create-external-load-balancer=true
 ```
 
-Then wait a few minutes, eventually your new `selenium-hub-external` service will be assigned an load balancing IP from gcloud.
-```
-export INTERNET_IP=`kubectl get svc --selector="name=selenium-hub-external" --output=template --template="{{with index .items 0}}{{with index .status.loadBalancer.ingress 0}}{{.ip}}{{end}}{{end}}"`
+Then wait a few minutes; eventually your new `selenium-hub-external` service will be assigned a load-balanced IP from gcloud. Once `kubectl get svc selenium-hub-external` shows two IPs, run this snippet:
+```console
+export INTERNET_IP=`kubectl get svc --selector="app=selenium-hub,external=true" --output=template --template="{{with index .items 0}}{{with index .status.loadBalancer.ingress 0}}{{.ip}}{{end}}{{end}}"`
 
 curl http://$INTERNET_IP:4444/
 ```
+You should now be able to hit `$INTERNET_IP` via your web browser, and so can everyone else on the Internet!
 
 ### Deploy Firefox and Chrome Nodes:
 Now that the Hub is up, we can deploy workers.
 
-This will deploy 3 Chrome nodes.
-```
+This will deploy 2 Chrome nodes.
+```console
 kubectl create -f selenium-node-chrome-rc.yaml
 ```
 
-And 3 Firefox nodes to match.
-```
+And 2 Firefox nodes to match.
+```console
 kubectl create -f selenium-node-firefox-rc.yaml
 ```
 
@@ -109,24 +110,24 @@ Let's run a quick Selenium job to validate our setup.
 
 #### Setup Python Environment
 First, we need to start a python container that we can attach to.
-```
+```console
 kubectl run selenium-python --image=google/python-hello
 ```
 
 Next, we need to get inside this container.
-```
+```console
 export PODNAME=`kubectl get pods --selector="run=selenium-python" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
 kubectl exec --stdin=true --tty=true $PODNAME bash
 ```
 
 Once inside, we need to install the Selenium library
-```
+```console
 pip install selenium
 ```
 
 #### Run Selenium Job with Python
 We're all set up, start the python interpreter.
-```
+```console
 python
 ```
 
@@ -162,7 +163,7 @@ Congratulations, your Selenium Hub is up, with Firefox and Chrome nodes!
 
 ### Scale your Firefox and Chrome nodes.
 If you need more Firefox or Chrome nodes, your hardware is the limit:
-```
+```console
 kubectl scale rc selenium-node-firefox --replicas=10
 kubectl scale rc selenium-node-chrome --replicas=10
 ```
@@ -170,13 +171,13 @@ kubectl scale rc selenium-node-chrome --replicas=10
 You now have 10 Firefox and 10 Chrome nodes, happy Seleniuming!
 
 ### Debugging
-Sometimes it is neccessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's reccomended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak password. Replace POD_NAME with the name of the pod you want to connect to.
+Sometimes it is necessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.
 
-```
-kubectl port-forward --pod=POD_NAME 9000:5900
+```console
+kubectl port-forward --pod=POD_NAME 5900:5900
 ```
 
-Then connect to localhost:9000 with your VNC client using the password "secret"
+Then connect to localhost:5900 with your VNC client using the password "secret".
 
 Enjoy your scalable Selenium Grid!
 
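For context on the "Run Selenium Job with Python" section above: the quick job amounts to pointing a remote WebDriver at the hub service and loading a page. A minimal sketch of what you might type at that interpreter is below; the in-cluster address `selenium-hub:4444` comes from the hub service in this example, while the target page and the title check are illustrative assumptions rather than part of this change.

```python
# Minimal sketch: drive a browser through the Selenium Hub from inside the
# cluster (e.g. from the selenium-python pod started above).
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# The hub service created earlier is reachable in-cluster as selenium-hub:4444.
driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.FIREFOX)

driver.get('http://www.google.com')  # illustrative target page
print(driver.title)                  # prints the loaded page's title
driver.quit()                        # releases the Firefox node for the next test
```

Because every session goes through the hub, the same snippet exercises whichever node type you request via `DesiredCapabilities` (for example `DesiredCapabilities.CHROME`).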
@@ -186,7 +187,7 @@ Adapted from: https://github.com/SeleniumHQ/docker-selenium To remove all created resources, run the following: -``` +```console kubectl delete rc selenium-hub kubectl delete rc selenium-node-chrome kubectl delete rc selenium-node-firefox diff --git a/examples/selenium/selenium-hub-rc.yaml b/examples/selenium/selenium-hub-rc.yaml index b1d53361cbe..6f9b50c1d30 100644 --- a/examples/selenium/selenium-hub-rc.yaml +++ b/examples/selenium/selenium-hub-rc.yaml @@ -3,15 +3,15 @@ kind: ReplicationController metadata: name: selenium-hub labels: - name: selenium-hub + app: selenium-hub spec: replicas: 1 selector: - name: selenium-hub + app: selenium-hub template: metadata: labels: - name: selenium-hub + app: selenium-hub spec: containers: - name: selenium-hub diff --git a/examples/selenium/selenium-hub-svc.yaml b/examples/selenium/selenium-hub-svc.yaml index 4feed0387ae..0b252ede375 100644 --- a/examples/selenium/selenium-hub-svc.yaml +++ b/examples/selenium/selenium-hub-svc.yaml @@ -1,15 +1,15 @@ apiVersion: v1 kind: Service metadata: - name: selenium-hub + name: selenium-hub labels: - name: selenium-hub + app: selenium-hub spec: ports: - port: 4444 targetPort: 4444 name: port0 selector: - name: selenium-hub + app: selenium-hub type: NodePort sessionAffinity: None diff --git a/examples/selenium/selenium-node-chrome-rc.yaml b/examples/selenium/selenium-node-chrome-rc.yaml index 6f78bdbec8f..324c5507380 100644 --- a/examples/selenium/selenium-node-chrome-rc.yaml +++ b/examples/selenium/selenium-node-chrome-rc.yaml @@ -3,15 +3,15 @@ kind: ReplicationController metadata: name: selenium-node-chrome labels: - name: selenium-node-chrome + app: selenium-node-chrome spec: replicas: 2 selector: - name: selenium-node-chrome + app: selenium-node-chrome template: metadata: labels: - name: selenium-node-chrome + app: selenium-node-chrome spec: containers: - name: selenium-node-chrome diff --git a/examples/selenium/selenium-node-firefox-rc.yaml b/examples/selenium/selenium-node-firefox-rc.yaml index c3496182850..399ddd249d2 100644 --- a/examples/selenium/selenium-node-firefox-rc.yaml +++ b/examples/selenium/selenium-node-firefox-rc.yaml @@ -3,15 +3,15 @@ kind: ReplicationController metadata: name: selenium-node-firefox labels: - name: selenium-node-firefox + app: selenium-node-firefox spec: replicas: 2 selector: - name: selenium-node-firefox + app: selenium-node-firefox template: metadata: labels: - name: selenium-node-firefox + app: selenium-node-firefox spec: containers: - name: selenium-node-firefox diff --git a/examples/selenium/selenium-test.py b/examples/selenium/selenium-test.py index bfaeeea7b5b..d7ab8035893 100644 --- a/examples/selenium/selenium-test.py +++ b/examples/selenium/selenium-test.py @@ -1,3 +1,19 @@ +#!/usr/bin/env python + +# Copyright 2015 The Kubernetes Authors All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
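The `selenium-test.py` hunk above only adds a shebang and the license header; the body of the script is unchanged and not shown here. For orientation, a test against this grid typically looks something like the sketch below, assuming the in-cluster hub address `selenium-hub:4444`; the `check_browser` helper, target page, and assertion are illustrative, not the file's actual contents.

```python
#!/usr/bin/env python
# Illustrative sketch of a grid test; names and URLs are assumptions, not the
# real contents of selenium-test.py.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


def check_browser(browser):
    # Ask the hub for the given browser type ("FIREFOX" or "CHROME"); the hub
    # schedules the session onto a matching node.
    driver = webdriver.Remote(
        command_executor='http://selenium-hub:4444/wd/hub',
        desired_capabilities=getattr(DesiredCapabilities, browser))
    try:
        driver.get('http://www.google.com')  # illustrative target page
        assert 'Google' in driver.title      # illustrative sanity check
        print('Browser %s checks out!' % browser)
    finally:
        driver.quit()                        # always release the node


if __name__ == '__main__':
    check_browser('FIREFOX')
    check_browser('CHROME')
```

Running something like this from the `selenium-python` pod verifies that both the Chrome and Firefox replication controllers can serve sessions through the hub.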