Adding the Juju charms to the cluster directory.

Matt Bruzek 2015-04-17 17:21:37 -05:00
parent 4ff2e53fee
commit af8f31a0cd
58 changed files with 2650 additions and 0 deletions

View File

@ -0,0 +1 @@
.git

View File

@ -0,0 +1,5 @@
*~
.bzr
.venv
unit_tests/__pycache__
*.pyc

View File

@ -0,0 +1,5 @@
omit:
- .git
- .gitignore
- .gitmodules
- revision

View File

@ -0,0 +1,29 @@
build: virtualenv lint test
virtualenv:
virtualenv .venv
.venv/bin/pip install -q -r requirements.txt
lint: virtualenv
@.venv/bin/flake8 hooks unit_tests --exclude=charmhelpers
@.venv/bin/charm proof
test: virtualenv
@CHARM_DIR=. PYTHONPATH=./hooks .venv/bin/py.test -v unit_tests/*
functional-test:
@bundletester
release: check-path virtualenv
@.venv/bin/pip install git-vendor
@.venv/bin/git-vendor sync -d ${KUBERNETES_MASTER_BZR}
check-path:
ifndef KUBERNETES_MASTER_BZR
$(error KUBERNETES_MASTER_BZR is undefined)
endif
clean:
rm -rf .venv
find . -name "*.pyc" -delete

View File

@ -0,0 +1,101 @@
# Kubernetes Master Charm
[Kubernetes](https://github.com/googlecloudplatform/kubernetes) is an open
source system for managing containerized applications across multiple hosts.
Kubernetes uses [Docker](http://www.docker.io/) to package, instantiate and run
containerized applications.
The Kubernetes Juju charms enable you to run Kubernetes on all the cloud
platforms that Juju supports.
A Kubernetes deployment consists of several independent charms that can be
scaled to meet your needs.
### Etcd
Etcd is a key value store for Kubernetes. All persistent master state
is stored in `etcd`.
### Flannel-docker
Flannel is a
[software defined networking](http://en.wikipedia.org/wiki/Software-defined_networking)
component that provides individual subnets for each machine in the cluster.
### Docker
Docker is an open platform for building, shipping, and running distributed applications.
### Kubernetes master
The controlling unit in a Kubernetes cluster is called the master. It is the
main management contact point providing many management services for the worker
nodes.
### Kubernetes minion
The servers that perform the work are known as minions. Minions must be able to
communicate with the master and run the workloads that are assigned to them.
## Usage
#### Deploying the Development Focus
To deploy a Kubernetes environment in Juju:
juju deploy cs:~kubernetes/trusty/etcd
juju deploy cs:trusty/flannel-docker
juju deploy cs:trusty/docker
juju deploy local:trusty/kubernetes-master
juju deploy local:trusty/kubernetes
juju add-relation etcd flannel-docker
juju add-relation flannel-docker:network docker:network
juju add-relation flannel-docker:docker-host docker
juju add-relation etcd kubernetes
juju add-relation etcd kubernetes-master
juju add-relation kubernetes kubernetes-master
#### Deploying the recommended configuration
A bundle can be used to deploy Kubernetes onto any cloud. It can also be
orchestrated directly in the Juju Graphical User Interface when using
`juju quickstart`:
juju quickstart https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
For more information on the recommended bundle deployment, see the
[Kubernetes bundle documentation](https://github.com/whitmo/bundle-kubernetes).
#### Post Deployment
To interact with the kubernetes environment, either build or
[download](https://github.com/GoogleCloudPlatform/kubernetes/releases) the
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
binary (available in the releases binary tarball) and point it to the master with:
$ juju status kubernetes-master | grep public
public-address: 104.131.108.99
$ export KUBERNETES_MASTER="104.131.108.99"
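With the variable exported, a quick smoke test might look like the following (the subcommand is illustrative for these early releases):
$ kubectl get minions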
# Configuration
For your convenience this charm supports changing the version of the kubernetes binaries.
This can be done through the Juju GUI or on the command line:
juju set kubernetes-master version="v0.10.0"
If the charm does not already contain the tar file with the desired architecture
and version it will attempt to download the kubernetes binaries using the gsutil
command.
Congratulations, you have now deployed a Kubernetes environment! Use
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
to interact with the environment.
# Kubernetes information
- [Kubernetes github project](https://github.com/GoogleCloudPlatform/kubernetes)
- [Kubernetes issue tracker](https://github.com/GoogleCloudPlatform/kubernetes/issues)
- [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs)
- [Kubernetes releases](https://github.com/GoogleCloudPlatform/kubernetes/releases)

View File

@ -0,0 +1,9 @@
options:
version:
type: string
default: "v0.8.1"
description: |
The kubernetes release to use in this charm. The binary files are
compiled from the source identified by this tag in github. Using the
value of "source" will use the master kubernetes branch when compiling
the binaries.

View File

@ -0,0 +1,13 @@
Copyright 2015 Canonical LTD
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,81 @@
# Getting Started
## Environment Considerations
Kubernetes has cloud-provider-specific integration. As of this writing, the supported list includes the officially supported Juju providers:
- [Amazon AWS](https://jujucharms.com/docs/config-aws)
- [Azure](https://jujucharms.com/docs/config-azure)
- [Vagrant](https://jujucharms.com/docs/config-vagrant)
Other providers that can be used as a *juju manual environment* are listed in the [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides).
## Deployment
The Kubernetes Charms are currently under heavy development. We encourage you to fork these charms and contribute back to the development effort! See our [contributing](contributing.md) doc for more information on this.
#### Deploying the Preview Release charms
juju deploy cs:~hazmat/trusty/etcd
juju deploy cs:~hazmat/trusty/flannel
juju deploy local:trusty/kubernetes-master
juju deploy local:trusty/kubernetes
juju add-relation etcd flannel
juju add-relation etcd kubernetes
juju add-relation etcd kubernetes-master
juju add-relation kubernetes kubernetes-master
#### Deploying the Development Release Charms
> These charms are known to be unstable as they are tracking the current efforts of the community at enabling different features against Kubernetes. This includes the specifics for integration per cloud environment, and upgrading to the latest development version.
mkdir -p ~/charms/trusty
git clone https://github.com/whitmo/kubernetes-master.git ~/charms/trusty/kubernetes-master
git clone https://github.com/whitmo/kubernetes.git ~/charms/trusty/kubernetes
##### Skipping the manual deployment after git clone
> **Note:** This path requires juju-deployer as a prerequisite. You can obtain juju-deployer via `apt-get install juju-deployer`
wget -O kubernetes-devel.yaml https://github.com/whitmo/bundle-kubernetes/blob/master/develop.yaml
juju-deployer kubernetes-devel.yaml
## Verifying Deployment with the Kubernetes Agent
You'll need the kubernetes command line client to use the created cluster. It can be fetched from the [Releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page of the Kubernetes project. Make sure you're fetching a client that matches what the charm is deploying.
Grab the tarball; from the extracted release you can directly use the CLI binary at `./kubernetes/platforms/linux/amd64/kubecfg`.
You'll need the address of the kubernetes master as an environment variable:
juju status kubernetes-master/0
Grab the public-address there and export it as the KUBERNETES_MASTER environment variable:
export KUBERNETES_MASTER=$(juju status --format=oneline kubernetes-master | cut -d' ' -f3):8080
Now you can run through the kubernetes examples as normal:
kubecfg list minions
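If you want to sanity-check the endpoint without the client, a plain HTTP request to the minions resource should answer as well (the API version in the path is illustrative for these early releases; adjust it to match your deployed version):
curl $KUBERNETES_MASTER/api/v1beta1/minions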
## Scale Up
If the default capacity of the bundle doesn't provide enough capacity for your workload(s), you can scale horizontally by adding units to the flannel and kubernetes services respectively.
juju add-unit flannel
juju add-unit kubernetes --to # (machine id of new flannel unit)
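For example, if `juju status` shows the new flannel unit on machine 7 (the machine id here is illustrative), the second command becomes:
juju add-unit kubernetes --to 7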
## Known Issues / Limitations
Kubernetes currently has platform-specific functionality. For example, load balancers and persistent volumes only work with the Google Compute Engine provider at the moment.
The Juju integration uses the kubernetes null provider. This means external load balancers and storage can't be directly driven through kubernetes config files.
## Where to get help
If you run into any issues, file a bug at our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues), email the Juju Mailing List at <juju@lists.ubuntu.com>, or feel free to join us in #juju on irc.freenode.net.

View File

@ -0,0 +1,52 @@
#### Contributions are welcome, in any form. Whether that be Bugs, BugFixes, Documentation, or Features.
### Submitting a bug
1. Go to our [issue tracker](http://github.com/whitmo/kubernetes-master-charm/issues) on GitHub
2. Search for existing issues using the search field at the top of the page
3. File a new issue including the info listed below
4. Thanks a ton for helping make the Kubernetes-Master Charm higher quality!
##### When filing a new bug, please include:
- **Descriptive title** - use keywords so others can find your bug (avoiding duplicates)
- **Steps to trigger the problem** - that are specific, and repeatable
- **What happens** - when you follow the steps, and what you expected to happen instead.
- Include the exact text of any error messages if applicable (or upload screenshots).
- Kubernetes-Master Charm version (or, if you're pulling directly from Git, your current commit SHA - use `git rev-parse HEAD`) and the Juju version output from `juju --version`.
- Did this work in a previous charm version? If so, also provide the version that it worked in.
- Any errors logged in the `juju debug-log` console view
### Can I help fix a bug?
Yes please! But first...
- Make sure no one else is already working on it -- if the bug has a milestone assigned or is tagged 'fix in progress', then it's already under way. Otherwise, post a comment on the bug to let others know you're starting to work on it.
We use the Fork & Pull model for distributed development. For a more in-depth overview, consult the github documentation on [Collaborative Development Models](https://help.github.com/articles/using-pull-requests/#before-you-begin).
> ##### Fork & pull
>
> The fork & pull model lets anyone fork an existing repository and push changes to their personal fork without requiring access be granted to the source repository. The changes must then be pulled into the source repository by the project maintainer. This model reduces the amount of friction for new contributors and is popular with open source projects because it allows people to work independently without upfront coordination.
### Submitting a Bug Fix
The following checklist will help developers who are not familiar with the fork and pull process of development. We appreciate your enthusiasm to make the Kubernetes-Master Charm a high quality experience! To get started rapidly, follow the steps below.
1. [Fork the repository](https://help.github.com/articles/fork-a-repo/)
2. Clone your fork `git clone git@github.com:myusername/kubernetes-master-charm.git`
3. Checkout your topic branch with `git checkout -b my-awesome-bugfix`
4. Hack away at your feature/bugfix
5. Validate your bugfix if possible in the amulet test(s) so we don't reintroduce it later.
6. Validate your code meets guidelines by passing lint tests `make lint`
7. Commit code `git commit -a -m 'i did all this work to fix #32'`
8. Push your branch to your fork's remote branch `git push origin my-awesome-bugfix`
9. Create the [Pull Request](https://help.github.com/articles/using-pull-requests/#initiating-the-pull-request)
10. Await Code Review
11. Rejoice when your Pull Request is accepted
### Submitting a Feature
The steps are the same as [Submitting a Bug Fix](#submitting-a-bug-fix). If you want extra credit, make sure you [file an issue](http://github.com/whitmo/kubernetes-master-charm/issues) that covers the feature you are working on, as a courtesy heads up. Then assign the issue to yourself so we know you are working on it.

View File

@ -0,0 +1,20 @@
description "Kubernetes Controller"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 30 # wait 30s between SIGTERM and SIGKILL.
exec /usr/local/bin/apiserver \
--address=%(api_bind_address)s \
--etcd_servers=%(etcd_servers)s \
--logtostderr=true \
--portal_net=10.244.240.0/20

View File

@ -0,0 +1,20 @@
description "Kubernetes Controller"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 30 # wait 30s between SIGTERM and SIGKILL.
exec /usr/local/bin/controller-manager \
--address=%(bind_address)s \
--logtostderr=true \
--master=%(api_server_address)s

View File

@ -0,0 +1,59 @@
#!/bin/bash
set -ex
# This script downloads a Kubernetes release and creates a tar file with only
# the files that are needed for this charm.
# Usage: create_kubernetes_tar.sh VERSION ARCHITECTURE
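# Example invocation (version and architecture are illustrative):
#   ./create_kubernetes_tar.sh v0.10.0 amd64
# This produces kubernetes-master-v0.10.0-amd64.tar.gz in the current directory.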
usage() {
echo "Build a tar file with only the files needed for the kubernetes charm."
echo "The script accepts two arguments version and desired architecture."
echo "$0 version architecture"
}
download_kubernetes() {
local VERSION=$1
URL_PREFIX="https://github.com/GoogleCloudPlatform/kubernetes"
KUBERNETES_URL="${URL_PREFIX}/releases/download/${VERSION}/kubernetes.tar.gz"
# Remove the previous temporary files to remain idempotent.
if [ -f /tmp/kubernetes.tar.gz ]; then
rm /tmp/kubernetes.tar.gz
fi
# Download the kubernetes release from the Internet.
wget --no-verbose --tries 2 -O /tmp/kubernetes.tar.gz $KUBERNETES_URL
}
extract_kubernetes() {
local ARCH=$1
# Untar the kubernetes release file.
tar -xvzf /tmp/kubernetes.tar.gz -C /tmp
# Untar the server linux ${ARCH} package.
tar -xvzf /tmp/kubernetes/server/kubernetes-server-linux-$ARCH.tar.gz -C /tmp
}
create_charm_tar() {
local OUTPUT_FILE=${1:-"$PWD/kubernetes.tar.gz"}
local OUTPUT_DIR=`dirname $OUTPUT_FILE`
if [ ! -d $OUTPUT_DIR ]; then
mkdir -p $OUTPUT_DIR
fi
# Change to the directory the binaries are.
cd /tmp/kubernetes/server/bin/
# Create a tar file with the binaries that are needed for kubernetes master.
tar -cvzf $OUTPUT_FILE kube-apiserver kube-controller-manager kubectl kube-scheduler
}
if [ $# -gt 2 ]; then
usage
exit 1
fi
VERSION=${1:-"v0.8.1"}
ARCH=${2:-"amd64"}
download_kubernetes $VERSION
extract_kubernetes $ARCH
TAR_FILE="$PWD/kubernetes-master-$VERSION-$ARCH.tar.gz"
create_charm_tar $TAR_FILE

View File

@ -0,0 +1,6 @@
server {
listen %(api_bind_address)s:80;
location %(web_uri)s {
alias /opt/kubernetes/_output/local/bin/linux/amd64/;
}
}

View File

@ -0,0 +1,39 @@
# HTTP/HTTPS server
#
server {
listen 80;
server_name localhost;
root html;
index index.html index.htm;
# ssl on;
# ssl_certificate /usr/share/nginx/server.cert;
# ssl_certificate_key /usr/share/nginx/server.key;
# ssl_session_timeout 5m;
# ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
# ssl_prefer_server_ciphers on;
location / {
# auth_basic "Restricted";
# auth_basic_user_file /usr/share/nginx/htpasswd;
# Proxy settings
# disable buffering so that watch works
proxy_buffering off;
proxy_pass %(api_server_address)s;
proxy_connect_timeout 159s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
# Disable retry
proxy_next_upstream off;
# Support web sockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}

View File

@ -0,0 +1,20 @@
description "Kubernetes Scheduler"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 30 # wait 30s between SIGTERM and SIGKILL.
exec /usr/local/bin/scheduler \
--address=%(bind_address)s \
--logtostderr=true \
--master=%(api_server_address)s

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,211 @@
#!/usr/bin/python
"""
The main hook file is called by Juju.
"""
import contextlib
import os
import socket
import subprocess
import sys
from charmhelpers.core import hookenv, host
from kubernetes_installer import KubernetesInstaller
from path import path
hooks = hookenv.Hooks()
@contextlib.contextmanager
def check_sentinel(filepath):
"""
A context manager method to write a file while the code block is doing
something and remove the file when done.
"""
fail = False
try:
yield filepath.exists()
except:
fail = True
filepath.touch()
raise
finally:
if fail is False and filepath.exists():
filepath.remove()
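# Illustrative usage of check_sentinel (the names below are just an example):
#
#     with check_sentinel(charm_dir / '.broken_build') as last_run_failed:
#         build()  # the sentinel file exists on disk while this block runs
#
# If build() raises, the sentinel is left behind, so the next invocation
# sees last_run_failed == True and can recover accordingly.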
@hooks.hook('config-changed')
def config_changed():
"""
On the execution of the juju event 'config-changed' this function
determines the appropriate architecture and the configured version, and
builds the kubernetes binary files.
"""
hookenv.log('Starting config-changed')
charm_dir = path(hookenv.charm_dir())
config = hookenv.config()
# Get the version of kubernetes to install.
version = config['version']
# Get the package architecture, rather than the architecture from the kernel (uname -m).
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
kubernetes_dir = path('/opt/kubernetes')
if not kubernetes_dir.exists():
print('The source directory {0} does not exist'.format(kubernetes_dir))
print('Was the kubernetes code cloned during install?')
exit(1)
if version in ['source', 'head', 'master']:
branch = 'master'
else:
# Create a branch to a tag.
branch = 'tags/{0}'.format(version)
# Construct the path to the binaries using the arch.
output_path = kubernetes_dir / '_output/local/bin/linux' / arch
installer = KubernetesInstaller(arch, version, output_path)
# Change to the kubernetes directory (git repository).
with kubernetes_dir:
# Create a command to get the current branch.
git_branch = 'git branch | grep "\*" | cut -d" " -f2'
current_branch = subprocess.check_output(git_branch, shell=True).strip()
print('Current branch: ', current_branch)
# Create the path to a file to indicate if the build was broken.
broken_build = charm_dir / '.broken_build'
# write out the .broken_build file while this block is executing.
with check_sentinel(broken_build) as last_build_failed:
print('Last build failed: ', last_build_failed)
# Rebuild if the current version is different or last build failed.
if current_branch != version or last_build_failed:
installer.build(branch)
if not output_path.exists():
broken_build.touch()
else:
print('Notifying minions of version ' + version)
# Notify the minions of a version change.
for r in hookenv.relation_ids('minions-api'):
hookenv.relation_set(r, version=version)
print('Done notifying minions of version ' + version)
# Create the symbolic links to the right directories.
installer.install()
relation_changed()
hookenv.log('The config-changed hook completed successfully.')
@hooks.hook('etcd-relation-changed', 'minions-api-relation-changed')
def relation_changed():
template_data = get_template_data()
# Check required keys
for k in ('etcd_servers',):
if not template_data.get(k):
print "Missing data for", k, template_data
return
print "Running with\n", template_data
# Render and restart as needed
for n in ('apiserver', 'controller-manager', 'scheduler'):
if render_file(n, template_data) or not host.service_running(n):
host.service_restart(n)
# Render the file that makes the kubernetes binaries available to minions.
if render_file(
'distribution', template_data,
'conf.tmpl', '/etc/nginx/sites-enabled/distribution') or \
not host.service_running('nginx'):
host.service_reload('nginx')
# Render the default nginx template.
if render_file(
'nginx', template_data,
'conf.tmpl', '/etc/nginx/sites-enabled/default') or \
not host.service_running('nginx'):
host.service_reload('nginx')
# Send api endpoint to minions
notify_minions()
def notify_minions():
print("Notify minions.")
config = hookenv.config()
for r in hookenv.relation_ids('minions-api'):
hookenv.relation_set(
r,
hostname=hookenv.unit_private_ip(),
port=8080,
version=config['version'])
def get_template_data():
rels = hookenv.relations()
config = hookenv.config()
template_data = {}
template_data['etcd_servers'] = ",".join([
"http://%s:%s" % (s[0], s[1]) for s in sorted(
get_rel_hosts('etcd', rels, ('hostname', 'port')))])
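# e.g. 'http://10.0.3.5:4001,http://10.0.3.6:4001' (addresses illustrative).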
template_data['minions'] = ",".join(get_rel_hosts('minions-api', rels))
template_data['api_bind_address'] = _bind_addr(hookenv.unit_private_ip())
template_data['bind_address'] = "127.0.0.1"
template_data['api_server_address'] = "http://%s:%s" % (
hookenv.unit_private_ip(), 8080)
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
template_data['web_uri'] = "/kubernetes/%s/local/bin/linux/%s/" % (
config['version'], arch)
_encode(template_data)
return template_data
def _bind_addr(addr):
if addr.replace('.', '').isdigit():
return addr
try:
return socket.gethostbyname(addr)
except socket.error:
raise ValueError("Could not resolve private address")
def _encode(d):
for k, v in d.items():
if isinstance(v, unicode):
d[k] = v.encode('utf8')
def get_rel_hosts(rel_name, rels, keys=('private-address',)):
hosts = []
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_id == hookenv.local_unit():
continue
values = [unit_data.get(k) for k in keys]
if not all(values):
continue
hosts.append(len(values) == 1 and values[0] or values)
return hosts
def render_file(name, data, src_suffix="upstart.tmpl", tgt_path=None):
tmpl_path = os.path.join(
os.environ.get('CHARM_DIR'), 'files', '%s.%s' % (name, src_suffix))
with open(tmpl_path) as fh:
tmpl = fh.read()
rendered = tmpl % data
if tgt_path is None:
tgt_path = '/etc/init/%s.conf' % name
if os.path.exists(tgt_path):
with open(tgt_path) as fh:
contents = fh.read()
if contents == rendered:
return False
with open(tgt_path, 'w') as fh:
fh.write(rendered)
return True
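# Illustrative example of the '%' templating above: a template line such as
#     --etcd_servers=%(etcd_servers)s
# rendered with {'etcd_servers': 'http://10.0.3.5:4001'} (address illustrative)
# becomes --etcd_servers=http://10.0.3.5:4001 in the generated target file,
# e.g. /etc/init/apiserver.conf.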
if __name__ == '__main__':
hooks.execute(sys.argv)

View File

@ -0,0 +1 @@
install.py

View File

@ -0,0 +1,90 @@
#!/usr/bin/python
import setup
setup.pre_install()
import subprocess
from charmhelpers.core import hookenv
from charmhelpers import fetch
from charmhelpers.fetch import archiveurl
from path import path
def install():
install_packages()
hookenv.log('Installing go')
download_go()
hookenv.log('Adding kubernetes and go to the path')
strings = [
'export GOROOT=/usr/local/go\n',
'export PATH=$PATH:$GOROOT/bin\n',
'export KUBE_MASTER_IP=0.0.0.0\n',
'export KUBERNETES_MASTER=http://$KUBE_MASTER_IP\n',
]
update_rc_files(strings)
hookenv.log('Downloading kubernetes code')
clone_repository()
hookenv.open_port(8080)
hookenv.log('Install complete')
def download_go():
"""
Kubernetes charm strives to support upstream. Part of this is installing a
fairly recent edition of GO. This fetches the golang archive and installs
it in /usr/local
"""
go_url = 'https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz'
go_sha1 = '5020af94b52b65cc9b6f11d50a67e4bae07b0aff'
handler = archiveurl.ArchiveUrlFetchHandler()
handler.install(go_url, '/usr/local', go_sha1, 'sha1')
def clone_repository():
"""
Clone the upstream repository into /opt/kubernetes for deployment compilation
of kubernetes. Subsequently used during upgrades.
"""
repository = 'https://github.com/GoogleCloudPlatform/kubernetes.git'
kubernetes_directory = '/opt/kubernetes'
command = ['git', 'clone', repository, kubernetes_directory]
print(command)
output = subprocess.check_output(command)
print(output)
def install_packages():
"""
Install required packages to build the k8s source, and syndicate between
minion nodes. In addition, fetch pip to handle python dependencies
"""
hookenv.log('Installing Debian packages')
# Create the list of packages to install.
apt_packages = ['build-essential', 'git', 'make', 'nginx', 'python-pip']
fetch.apt_install(fetch.filter_installed_packages(apt_packages))
def update_rc_files(strings):
"""
Preseed the bash environment for ubuntu and root with K8s env vars to
make interfacing with the api easier. (see: kubectl docs)
"""
rc_files = [path('/home/ubuntu/.bashrc'), path('/root/.bashrc')]
for rc_file in rc_files:
lines = rc_file.lines()
for string in strings:
if string not in lines:
lines.append(string)
rc_file.write_lines(lines)
if __name__ == "__main__":
install()

View File

@ -0,0 +1,91 @@
import os
import shlex
import subprocess
from path import path
def run(command, shell=False):
""" A convience method for executing all the commands. """
print(command)
if shell is False:
command = shlex.split(command)
output = subprocess.check_output(command, shell=shell)
print(output)
return output
class KubernetesInstaller():
"""
This class contains the logic needed to install kubernetes binary files.
"""
def __init__(self, arch, version, output_dir):
""" Gather the required variables for the install. """
# The kubernetes-master charm needs certain commands to be aliased.
self.aliases = {'kube-apiserver': 'apiserver',
'kube-controller-manager': 'controller-manager',
'kube-proxy': 'kube-proxy',
'kube-scheduler': 'scheduler',
'kubectl': 'kubectl',
'kubelet': 'kubelet'}
self.arch = arch
self.version = version
self.output_dir = path(output_dir)
def build(self, branch):
""" Build kubernetes from a github repository using the Makefile. """
# Remove any old build artifacts.
make_clean = 'make clean'
run(make_clean)
# Always checkout the master to get the latest repository information.
git_checkout_cmd = 'git checkout master'
run(git_checkout_cmd)
# When checking out a tag, delete the old branch (not master).
if branch != 'master':
git_drop_branch = 'git branch -D {0}'.format(self.version)
print(git_drop_branch)
rc = subprocess.call(git_drop_branch.split())
if rc != 0:
print('returned: %d' % rc)
# Make sure the git repository is up-to-date.
git_fetch = 'git fetch origin {0}'.format(branch)
run(git_fetch)
if branch == 'master':
git_reset = 'git reset --hard origin/master'
run(git_reset)
else:
# Checkout a branch of kubernetes so the repo is correct.
checkout = 'git checkout -b {0} {1}'.format(self.version, branch)
run(checkout)
# Create an environment with the path to the GO binaries included.
go_path = ('/usr/local/go/bin', os.environ.get('PATH', ''))
go_env = os.environ.copy()
go_env['PATH'] = ':'.join(go_path)
print(go_env['PATH'])
# Compile the binaries with the make command using the WHAT variable.
make_what = "make all WHAT='cmd/kube-apiserver cmd/kubectl "\
"cmd/kube-controller-manager plugin/cmd/kube-scheduler "\
"cmd/kubelet cmd/kube-proxy'"
print(make_what)
rc = subprocess.call(shlex.split(make_what), env=go_env)
def install(self, install_dir=path('/usr/local/bin')):
""" Install kubernetes binary files from the output directory. """
if not install_dir.isdir():
install_dir.makedirs_p()
# Create the symbolic links to the real kubernetes binaries.
for key, value in self.aliases.iteritems():
target = self.output_dir / key
if target.exists():
link = install_dir / value
if link.exists():
link.remove()
target.symlink(link)
else:
print('Error target file {0} does not exist.'.format(target))
exit(1)

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,30 @@
def pre_install():
"""
Do any setup required before the install hook.
"""
install_charmhelpers()
install_path()
def install_charmhelpers():
"""
Install the charmhelpers library, if not present.
"""
try:
import charmhelpers # noqa
except ImportError:
import subprocess
subprocess.check_call(['apt-get', 'install', '-y', 'python-pip'])
subprocess.check_call(['pip', 'install', 'charmhelpers'])
def install_path():
"""
Install the path.py library, when not present.
"""
try:
import path # noqa
except ImportError:
import subprocess
subprocess.check_call(['apt-get', 'install', '-y', 'python-pip'])
subprocess.check_call(['pip', 'install', 'path.py'])

File diff suppressed because one or more lines are too long

Binary image file added (76 KiB).

View File

@ -0,0 +1,19 @@
name: kubernetes-master
summary: Container Cluster Management Master
description: |
Provides a kubernetes api endpoint and a scheduler for managing containers.
maintainers:
- Matt Bruzek <matt.bruzek@canonical.com>
- Whit Morriss <whit.morriss@canonical.com>
- Charles Butler <charles.butler@canonical.com>
tags:
- ops
- network
provides:
client-api:
interface: kubernetes-client
minions-api:
interface: kubernetes-api
requires:
etcd:
interface: etcd

View File

@ -0,0 +1,75 @@
kubernetes-master
-----------------
notes on src
------------
current provider responsibilities
- instances
- load balancers
- zones (not useful as it's only for apiserver).
provider functionality currently hardcoded to gce across codebase
- persistent storage
ideas
-----
- juju provider impl
- file provider for machines/minions
- openvpn as overlay per extant salt config.
cloud
-----
todo
----
- token auth file
- format csv -> token, user, uid
- config privileged
- config log-level
- config / check logs collection endpoint
- config / version and binary location via url
Q/A
----
https://botbot.me/freenode/google-containers/2014-10-17/?msg=23696683&page=6
Q. The new volumes/storage provider api appears to be hardcoded to
gce.. Is there a plan to abstract that anytime soon?
A. effectively it is abstract enough for the moment, no plans to
change, but willing subject to suitable abstraction.
Q.The zone provider api appears to return the address only of the api
server afaics. How is that useful? afaics the better semantic would be
an attribute on the minions to instantiate multiple templates across
zones?
A. apparently not considered, current solution for ha is multiple k8s
per zone with external lb. pointed out this was inane.
Q. Several previous platforms supported have been moved to the icebox,
just curious what was subject to bitrot. the salt/shell script for
those platforms or something more api intrinsic?
A. apparently the change to ship binaries instead of build from src
broke them.. somehow.
Q. i'm mostly interested in flannel due to its portability. Does the
inter pod networking setup need to include the other components of the
system, ie does api talk directly to containers, or only via kubelet.
A. api server only talks to kubelet
Q. Status of HA?
A. not done yet, election package merged, nothing using it.
Afaics design discussion doesn't take place on the list.
Q. Is minion registration supported, ie. bypassing cloud provider
filter all instances via regex match?
A. not done yet, pull request in review for minions in etcd (not
found, perhaps merged)
-------------
cadvisor usage helper
https://github.com/GoogleCloudPlatform/heapster

View File

@ -0,0 +1,5 @@
flake8
pytest
bundletester
path.py
charmhelpers

View File

@ -0,0 +1,48 @@
#!/bin/bash
# This script sets up the guestbook example application in Kubernetes.
# The KUBERNETES_MASTER variable must be set to the URL for kubectl to work.
# The first argument is optional and can be used for debugging.
set -o errexit # (set -e)
DEBUG=false
if [[ "$1" == "-d" ]] || [[ "$1" == "--debug" ]]; then
DEBUG=true
set -o xtrace # (set -x)
fi
cd /opt/kubernetes/
# Step One: Turn up the redis master
kubectl create -f examples/guestbook/redis-master.json
if [[ "${DEBUG}" == true ]]; then
kubectl get pods
fi
# Step Two: Turn up the master service
kubectl create -f examples/guestbook/redis-master-service.json
if [[ "${DEBUG}" == true ]]; then
kubectl get services
fi
# Step Three: Turn up the replicated slave pods
kubectl create -f examples/guestbook/redis-slave-controller.json
if [[ "${DEBUG}" == true ]]; then
kubectl get replicationcontrollers
kubectl get pods
fi
# Step Four: Create the redis slave service
kubectl create -f examples/guestbook/redis-slave-service.json
if [[ "${DEBUG}" == true ]]; then
kubectl get services
fi
# Step Five: Create the frontend pod
kubectl create -f examples/guestbook/frontend-controller.json
if [[ "${DEBUG}" == true ]]; then
kubectl get replicationcontrollers
kubectl get pods
fi
set +x
echo "# Now run the following commands on your juju client"
echo "juju run --service kubernetes 'open-port 8000'"
echo "juju expose kubernetes"
echo "# Go to the kubernetes public address on port 8000 to see the guestbook application"

View File

@ -0,0 +1,105 @@
from mock import patch
from path import path
from path import Path
import pytest
import subprocess
import sys
# Add the hooks directory to the python path.
hooks_dir = Path('__file__').parent.abspath() / 'hooks'
sys.path.insert(0, hooks_dir.abspath())
# Import the module to be tested.
import kubernetes_installer
def test_run():
""" Test the run method both with valid commands and invalid commands. """
ls = 'ls -l {0}/kubernetes_installer.py'.format(hooks_dir)
output = kubernetes_installer.run(ls, False)
assert output
assert 'kubernetes_installer.py' in output
output = kubernetes_installer.run(ls, True)
assert output
assert 'kubernetes_installer.py' in output
invalid_directory = path('/not/a/real/directory')
assert not invalid_directory.exists()
invalid_command = 'ls {0}'.format(invalid_directory)
with pytest.raises(subprocess.CalledProcessError) as error:
kubernetes_installer.run(invalid_command)
print(error)
with pytest.raises(subprocess.CalledProcessError) as error:
kubernetes_installer.run(invalid_command, shell=True)
print(error)
class TestKubernetesInstaller():
def makeone(self, *args, **kw):
""" Create the KubernetesInstaller object and return it. """
from kubernetes_installer import KubernetesInstaller
return KubernetesInstaller(*args, **kw)
def test_init(self):
""" Test that the init method correctly assigns the variables. """
ki = self.makeone('i386', '3.0.1', '/tmp/does_not_exist')
assert ki.aliases
assert 'kube-apiserver' in ki.aliases
assert 'kube-controller-manager' in ki.aliases
assert 'kube-scheduler' in ki.aliases
assert 'kubectl' in ki.aliases
assert 'kubelet' in ki.aliases
assert ki.arch == 'i386'
assert ki.version == '3.0.1'
assert ki.output_dir == path('/tmp/does_not_exist')
@patch('kubernetes_installer.run')
@patch('kubernetes_installer.subprocess.call')
def test_build(self, cmock, rmock):
""" Test the build method with master and non-master branches. """
directory = path('/tmp/kubernetes_installer_test/build')
ki = self.makeone('amd64', 'v99.00.11', directory)
assert not directory.exists(), 'The %s directory exists!' % directory
# Call the build method with "master" branch.
ki.build("master")
# TODO: run is called many times but mock only remembers last one.
rmock.assert_called_with('git reset --hard origin/master')
# TODO: call is complex and hard to verify with mock, fix that.
cmock.assert_called_once()
# Call the build method with something other than "master" branch.
ki.build("branch")
# TODO: run is called many times, but mock only remembers last one.
rmock.assert_called_with('git checkout -b v99.00.11 branch')
# TODO: call is complex and hard to verify with mock, fix that.
cmock.assert_called_once()
directory.rmtree_p()
def test_install(self):
""" Test the install method that it creates the correct links. """
directory = path('/tmp/kubernetes_installer_test/install')
ki = self.makeone('ppc64le', '1.2.3', directory)
assert not directory.exists(), 'The %s directory exists!' % directory
directory.makedirs_p()
# Create the files for the install method to link to.
(directory / 'kube-apiserver').touch()
(directory / 'kube-controller-manager').touch()
(directory / 'kube-proxy').touch()
(directory / 'kube-scheduler').touch()
(directory / 'kubectl').touch()
(directory / 'kubelet').touch()
results = directory / 'install/results/go/here'
assert not results.exists()
ki.install(results)
assert results.isdir()
# Check that all the files were correctly aliased and are links.
assert (results / 'apiserver').islink()
assert (results / 'controller-manager').islink()
assert (results / 'kube-proxy').islink()
assert (results / 'scheduler').islink()
assert (results / 'kubectl').islink()
assert (results / 'kubelet').islink()
directory.rmtree_p()

View File

@ -0,0 +1,92 @@
from mock import patch
from path import Path
import pytest
import sys
# Munge the python path so we can find our hook code
d = Path('__file__').parent.abspath() / 'hooks'
sys.path.insert(0, d.abspath())
# Import the modules from the hook
import install
class TestInstallHook():
@patch('install.path')
def test_update_rc_files(self, pmock):
"""
Test happy path on updating env files. Assuming everything
exists and is in place.
"""
pmock.return_value.lines.return_value = ['line1', 'line2']
install.update_rc_files(['test1', 'test2'])
pmock.return_value.write_lines.assert_called_with(['line1', 'line2',
'test1', 'test2'])
def test_update_rc_files_with_nonexistant_path(self):
"""
Test an unhappy path if the bashrc/users do not exist.
"""
with pytest.raises(OSError) as exinfo:
install.update_rc_files(['test1', 'test2'])
@patch('install.fetch')
@patch('install.hookenv')
def test_package_installation(self, hemock, ftmock):
"""
Verify we are calling the known essentials to build and syndicate
kubes.
"""
pkgs = ['build-essential', 'git',
'make', 'nginx', 'python-pip']
install.install_packages()
hemock.log.assert_called_with('Installing Debian packages')
ftmock.filter_installed_packages.assert_called_with(pkgs)
@patch('install.archiveurl.ArchiveUrlFetchHandler')
def test_go_download(self, aumock):
"""
Test that we are actually handing off to charm-helpers to
download a specific archive of Go. This is non-configurable so
its reasonably safe to assume we're going to always do this,
and when it changes we shall curse the brittleness of this test.
"""
ins_mock = aumock.return_value.install
install.download_go()
url = 'https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz'
sha1 = '5020af94b52b65cc9b6f11d50a67e4bae07b0aff'
ins_mock.assert_called_with(url, '/usr/local', sha1, 'sha1')
@patch('install.subprocess')
def test_clone_repository(self, spmock):
"""
We're not using a unit-tested git library - so ensure our subprocess
call is consistent. If we change this, we want to know we've broken it.
"""
install.clone_repository()
repo = 'https://github.com/GoogleCloudPlatform/kubernetes.git'
direct = '/opt/kubernetes'
spmock.check_output.assert_called_with(['git', 'clone', repo, direct])
@patch('install.install_packages')
@patch('install.download_go')
@patch('install.clone_repository')
@patch('install.update_rc_files')
@patch('install.hookenv')
def test_install_main(self, hemock, urmock, crmock, dgmock, ipmock):
"""
Ensure the driver/main method is calling all the supporting methods.
"""
strings = [
'export GOROOT=/usr/local/go\n',
'export PATH=$PATH:$GOROOT/bin\n',
'export KUBE_MASTER_IP=0.0.0.0\n',
'export KUBERNETES_MASTER=http://$KUBE_MASTER_IP\n',
]
install.install()
crmock.assert_called_once()
dgmock.assert_called_once()
ipmock.assert_called_once()
urmock.assert_called_with(strings)
hemock.open_port.assert_called_with(8080)

View File

@ -0,0 +1 @@
.git

View File

@ -0,0 +1,6 @@
.bzr
*.pyc
*~
*\#*
/files/.kubernetes-*
.venv

View File

@ -0,0 +1,5 @@
omit:
- .git
- .gitignore
- .gitmodules
- revision

View File

@ -0,0 +1,29 @@
build: virtualenv lint test
virtualenv:
virtualenv .venv
.venv/bin/pip install -q -r requirements.txt
lint: virtualenv
@.venv/bin/flake8 hooks unit_tests --exclude=charmhelpers
@.venv/bin/charm proof
test: virtualenv
@CHARM_DIR=. PYTHONPATH=./hooks .venv/bin/py.test unit_tests/*
functional-test:
@bundletester
release: check-path virtualenv
@.venv/bin/pip install git-vendor
@.venv/bin/git-vendor sync -d ${KUBERNETES_BZR}
check-path:
ifndef KUBERNETES_BZR
$(error KUBERNETES_BZR is undefined)
endif
clean:
rm -rf .venv
find . -name "*.pyc" -delete

View File

@ -0,0 +1,100 @@
# Kubernetes Minion Charm
[Kubernetes](https://github.com/googlecloudplatform/kubernetes) is an open
source system for managing containerized applications across multiple hosts.
Kubernetes uses [Docker](http://www.docker.io/) to package, instantiate and run
containerized applications.
The Kubernetes Juju charms enable you to run Kubernetes on all the cloud
platforms that Juju supports.
A Kubernetes deployment consists of several independent charms that can be
scaled to meet your needs.
### Etcd
Etcd is a key value store for Kubernetes. All persistent master state
is stored in `etcd`.
### Flannel-docker
Flannel is a
[software defined networking](http://en.wikipedia.org/wiki/Software-defined_networking)
component that provides individual subnets for each machine in the cluster.
### Docker
Docker is an open platform for building, shipping, and running distributed applications.
### Kubernetes master
The controlling unit in a Kubernetes cluster is called the master. It is the
main management contact point providing many management services for the worker
nodes.
### Kubernetes minion
The servers that perform the work are known as minions. Minions must be able to
communicate with the master and run the workloads that are assigned to them.
## Usage
#### Deploying the Development Focus
To deploy a Kubernetes environment in Juju:
juju deploy cs:~kubernetes/trusty/etcd
juju deploy cs:trusty/flannel-docker
juju deploy cs:trusty/docker
juju deploy local:trusty/kubernetes-master
juju deploy local:trusty/kubernetes
juju add-relation etcd flannel-docker
juju add-relation flannel-docker:network docker:network
juju add-relation flannel-docker:docker-host docker
juju add-relation etcd kubernetes
juju add-relation etcd kubernetes-master
juju add-relation kubernetes kubernetes-master
#### Deploying the recommended configuration
A bundle can be used to deploy Kubernetes onto any cloud. It can also be
orchestrated directly in the Juju Graphical User Interface when using
`juju quickstart`:
juju quickstart https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
For more information on the recommended bundle deployment, see the
[Kubernetes bundle documentation](https://github.com/whitmo/bundle-kubernetes).
#### Post Deployment
To interact with the kubernetes environment, either build or
[download](https://github.com/GoogleCloudPlatform/kubernetes/releases) the
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
binary (available in the releases binary tarball) and point it to the master with:
$ juju status kubernetes-master | grep public
public-address: 104.131.108.99
$ export KUBERNETES_MASTER="104.131.108.99"
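With the variable exported, a quick smoke test might look like the following (the subcommand is illustrative for these early releases):
$ kubectl get minions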
# Configuration
For your convenience this charm supports changing the version of the kubernetes binaries.
This can be done through the Juju GUI or on the command line:
juju set kubernetes version="v0.10.0"
If the charm does not already contain the tar file with the desired architecture
and version it will attempt to download the kubernetes binaries using the gsutil
command.
Congratulations, you have now deployed a Kubernetes environment! Use
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
to interact with the environment.
# Kubernetes information
- [Kubernetes github project](https://github.com/GoogleCloudPlatform/kubernetes)
- [Kubernetes issue tracker](https://github.com/GoogleCloudPlatform/kubernetes/issues)
- [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs)
- [Kubernetes releases](https://github.com/GoogleCloudPlatform/kubernetes/releases)

View File

@ -0,0 +1,13 @@
Copyright 2015 Canonical LTD
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,81 @@
# Getting Started
## Environment Considerations
Kubernetes has cloud-provider-specific integration. As of this writing, the supported list includes the officially supported Juju providers:
- [Amazon AWS](https://jujucharms.com/docs/config-aws)
- [Azure](https://jujucharms.com/docs/config-azure)
- [Vagrant](https://jujucharms.com/docs/config-vagrant)
Other providers that can be used as a *juju manual environment* are listed in the [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides).
## Deployment
The Kubernetes Charms are currently under heavy development. We encourage you to fork these charms and contribute back to the development effort! See our [contributing](contributing.md) doc for more information on this.
#### Deploying the Preview Release charms
juju deploy cs:~hazmat/trusty/etcd
juju deploy cs:~hazmat/trusty/flannel
juju deploy local:trusty/kubernetes-master
juju deploy local:trusty/kubernetes
juju add-relation etcd flannel
juju add-relation etcd kubernetes
juju add-relation etcd kubernetes-master
juju add-relation kubernetes kubernetes-master
#### Deploying the Development Release Charms
> These charms are known to be unstable as they are tracking the current efforts of the community at enabling different features against Kubernetes. This includes the specifics for integration per cloud environment, and upgrading to the latest development version.
mkdir -p ~/charms/trusty
git clone https://github.com/whitmo/kubernetes-master.git ~/charms/trusty/kubernetes-master
git clone https://github.com/whitmo/kubernetes.git ~/charms/trusty/kubernetes
##### Skipping the manual deployment after git clone
> **Note:** This path requires juju-deployer as a prerequisite. You can obtain juju-deployer via `apt-get install juju-deployer`
wget -O kubernetes-devel.yaml https://github.com/whitmo/bundle-kubernetes/blob/master/develop.yaml
juju-deployer kubernetes-devel.yaml
## Verifying Deployment with the Kubernetes Agent
You'll need the kubernetes command line client to use the created cluster. It can be fetched from the [Releases](https://github.com/GoogleCloudPlatform/kubernetes/releases) page of the Kubernetes project. Make sure you're fetching a client that matches what the charm is deploying.
Grab the tarball; from the extracted release you can directly use the CLI binary at `./kubernetes/platforms/linux/amd64/kubecfg`.
You'll need the address of the kubernetes master as an environment variable:
juju status kubernetes-master/0
Grab the public-address there and export it as the KUBERNETES_MASTER environment variable:
export KUBERNETES_MASTER=$(juju status --format=oneline kubernetes-master | cut -d' ' -f3):8080
Now you can run through the kubernetes examples as normal:
kubecfg list minions
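If you want to sanity-check the endpoint without the client, a plain HTTP request to the minions resource should answer as well (the API version in the path is illustrative for these early releases; adjust it to match your deployed version):
curl $KUBERNETES_MASTER/api/v1beta1/minions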
## Scale Up
If the default capacity of the bundle doesn't provide enough capacity for your workload(s), you can scale horizontally by adding units to the flannel and kubernetes services respectively.
juju add-unit flannel
juju add-unit kubernetes --to # (machine id of new flannel unit)
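For example, if `juju status` shows the new flannel unit on machine 7 (the machine id here is illustrative), the second command becomes:
juju add-unit kubernetes --to 7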
## Known Issues / Limitations
Kubernetes currently has platform-specific functionality. For example, load balancers and persistent volumes only work with the Google Compute Engine provider at the moment.
The Juju integration uses the kubernetes null provider. This means external load balancers and storage can't be directly driven through kubernetes config files.
## Where to get help
If you run into any issues, file a bug at our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues), email the Juju Mailing List at <juju@lists.ubuntu.com>, or feel free to join us in #juju on irc.freenode.net.

View File

@ -0,0 +1,52 @@
#### Contributions are welcome, in any form. Whether that be Bugs, BugFixes, Documentation, or Features.
### Submitting a bug
1. Go to our [issue tracker](http://github.com/whitmo/kubernetes-charm/issues) on GitHub
2. Search for existing issues using the search field at the top of the page
3. File a new issue including the info listed below
4. Thanks a ton for helping make the Kubernetes Charm higher quality!
##### When filing a new bug, please include:
- **Descriptive title** - use keywords so others can find your bug (avoiding duplicates)
- **Steps to trigger the problem** - that are specific, and repeatable
- **What happens** - when you follow the steps, and what you expected to happen instead.
- Include the exact text of any error messages if applicable (or upload screenshots).
- Kubernetes Charm version (or, if you're pulling directly from Git, your current commit SHA - use `git rev-parse HEAD`) and the Juju version output from `juju --version`.
- Did this work in a previous charm version? If so, also provide the version that it worked in.
- Any errors logged in the `juju debug-log` console view
### Can I help fix a bug?
Yes please! But first...
- Make sure no one else is already working on it -- if the bug has a milestone assigned or is tagged 'fix in progress', then it's already under way. Otherwise, post a comment on the bug to let others know you're starting to work on it.
We use the Fork &amp; Pull model for distributed development. For a more in-depth overview: consult with the github documentation on [Collaborative Development Models](https://help.github.com/articles/using-pull-requests/#before-you-begin).
> ##### Fork & pull
>
> The fork & pull model lets anyone fork an existing repository and push changes to their personal fork without requiring access be granted to the source repository. The changes must then be pulled into the source repository by the project maintainer. This model reduces the amount of friction for new contributors and is popular with open source projects because it allows people to work independently without upfront coordination.
### Submitting a Bug Fix
The following checklist will help developers who are not familiar with the fork and pull process of development. We appreciate your enthusiasm to make the Kubernetes Charm a high quality experience! To get started rapidly, follow the steps below.
1. [Fork the repository](https://help.github.com/articles/fork-a-repo/)
2. Clone your fork `git clone git@github.com:myusername/kubernetes-charm.git`
3. Checkout your topic branch with `git checkout -b my-awesome-bugfix`
4. Hack away at your feature/bugfix
5. Validate your bugfix if possible in the amulet test(s) so we don't reintroduce it later.
6. Validate your code meets guidelines by passing lint tests `make lint`
7. Commit code `git commit -a -m 'i did all this work to fix #32'`
8. Push your branch to your fork's remote branch `git push origin my-awesome-bugfix`
9. Create the [Pull Request](https://help.github.com/articles/using-pull-requests/#initiating-the-pull-request)
10. Await Code Review
11. Rejoice when your Pull Request is accepted
### Submitting a Feature
The steps are the same as [Submitting a Bug Fix](#submitting-a-bug-fix). If you want extra credit, make sure you [file an issue](http://github.com/whitmo/kubernetes-charm/issues) that covers the feature you are working on, as a courtesy heads up. Then assign the issue to yourself so we know you are working on it.

View File

@ -0,0 +1,16 @@
description "cadvisor container metrics"
start on started docker
stop on stopping docker
limit nofile 20000 20000
kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
exec docker run \
--volume=/var/run:/var/run:rw \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--publish=127.0.0.1:4194:8080 \
--name=cadvisor \
google/cadvisor:latest

View File

@ -0,0 +1,59 @@
#!/bin/bash
set -ex
# This script downloads a Kubernetes release and creates a tar file with only
# the files that are needed for this charm.
# Usage: create_kubernetes_tar.sh VERSION ARCHITECTURE
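# Example invocation (version and architecture are illustrative):
#   ./create_kubernetes_tar.sh v0.10.0 amd64
# This produces kubernetes-v0.10.0-amd64.tar.gz in the current directory.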
usage() {
echo "Build a tar file with only the files needed for the kubernetes charm."
echo "The script accepts two arguments version and desired architecture."
echo "$0 version architecture"
}
download_kubernetes() {
local VERSION=$1
URL_PREFIX="https://github.com/GoogleCloudPlatform/kubernetes"
KUBERNETES_URL="${URL_PREFIX}/releases/download/${VERSION}/kubernetes.tar.gz"
# Remove the previous temporary files to remain idempotent.
if [ -f /tmp/kubernetes.tar.gz ]; then
rm /tmp/kubernetes.tar.gz
fi
# Download the kubernetes release from the Internet.
wget --no-verbose --tries 2 -O /tmp/kubernetes.tar.gz $KUBERNETES_URL
}
extract_kubernetes() {
local ARCH=$1
# Untar the kubernetes release file.
tar -xvzf /tmp/kubernetes.tar.gz -C /tmp
# Untar the server linux ${ARCH} package.
tar -xvzf /tmp/kubernetes/server/kubernetes-server-linux-$ARCH.tar.gz -C /tmp
}
create_charm_tar() {
local OUTPUT_FILE=${1:-"$PWD/kubernetes.tar.gz"}
local OUTPUT_DIR=`dirname $OUTPUT_FILE`
if [ ! -d $OUTPUT_DIR ]; then
mkdir -p $OUTPUT_DIR
fi
# Change to the directory the binaries are.
cd /tmp/kubernetes/server/bin/
# Create a tar file with the binaries that are needed for kubernetes minion.
tar -cvzf $OUTPUT_FILE kubelet kube-proxy
}
if [ $# -gt 2 ]; then
usage
exit 1
fi
VERSION=${1:-"v0.8.1"}
ARCH=${2:-"amd64"}
download_kubernetes $VERSION
extract_kubernetes $ARCH
TAR_FILE="$PWD/kubernetes-$VERSION-$ARCH.tar.gz"
create_charm_tar $TAR_FILE

View File

@ -0,0 +1,14 @@
description "kubernetes kubelet"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
exec /usr/local/bin/kubelet \
--address=%(kubelet_bind_addr)s \
--api_servers=%(kubeapi_server)s \
--hostname_override=%(kubelet_bind_addr)s \
--logtostderr=true

View File

@ -0,0 +1,12 @@
description "kubernetes proxy"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
exec /usr/local/bin/proxy \
--master=%(kubeapi_server)s \
--logtostderr=true

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,225 @@
#!/usr/bin/python
"""
The main hook file that is called by Juju.
"""
import json
import httplib
import os
import time
import socket
import subprocess
import sys
import urlparse
from charmhelpers.core import hookenv, host
from kubernetes_installer import KubernetesInstaller
from path import path
from lib.registrator import Registrator
hooks = hookenv.Hooks()
@hooks.hook('api-relation-changed')
def api_relation_changed():
"""
On the relation to the api server, this function determines the appropriate
architecture and the configured version, copies the kubernetes binary files
from the kubernetes-master charm, and installs them locally on this machine.
"""
hookenv.log('Starting api-relation-changed')
charm_dir = path(hookenv.charm_dir())
# Get the package architecture, rather than the architecture from the kernel (uname -m).
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
kubernetes_bin_dir = path('/opt/kubernetes/bin')
# Get the version of kubernetes to install.
version = subprocess.check_output(['relation-get', 'version']).strip()
print('Relation version: ', version)
if not version:
print('No version present in the relation.')
exit(0)
version_file = charm_dir / '.version'
if version_file.exists():
previous_version = version_file.text()
print('Previous version: ', previous_version)
if version == previous_version:
exit(0)
# Cannot download binaries while the service is running, so stop it.
# TODO: Figure out a better way to handle upgraded kubernetes binaries.
for service in ('kubelet', 'proxy'):
if host.service_running(service):
host.service_stop(service)
command = ['relation-get', 'private-address']
# Get the kubernetes-master address.
server = subprocess.check_output(command).strip()
print('Kubernetes master private address: ', server)
installer = KubernetesInstaller(arch, version, server, kubernetes_bin_dir)
installer.download()
installer.install()
# Write the most recently installed version number to the file.
version_file.write_text(version)
relation_changed()
@hooks.hook('etcd-relation-changed',
'network-relation-changed')
def relation_changed():
"""Connect the parts and go :-)
"""
template_data = get_template_data()
# Check required keys
for k in ('etcd_servers', 'kubeapi_server'):
if not template_data.get(k):
print("Missing data for %s %s" % (k, template_data))
return
print("Running with\n%s" % template_data)
# Setup kubernetes supplemental group
setup_kubernetes_group()
# Register services
for n in ("cadvisor", "kubelet", "proxy"):
if render_upstart(n, template_data) or not host.service_running(n):
print("Starting %s" % n)
host.service_restart(n)
# Register machine via api
print("Registering machine")
register_machine(template_data['kubeapi_server'])
# Save the marker (for restarts to detect prev install)
template_data.save()
def get_template_data():
rels = hookenv.relations()
template_data = hookenv.Config()
template_data.CONFIG_FILE_NAME = ".unit-state"
overlay_type = get_scoped_rel_attr('network', rels, 'overlay_type')
etcd_servers = get_rel_hosts('etcd', rels, ('hostname', 'port'))
api_servers = get_rel_hosts('api', rels, ('hostname', 'port'))
# The kubernetes master is not highly available yet, so use a single api server.
if api_servers:
api_info = api_servers.pop()
api_servers = "http://%s:%s" % (api_info[0], api_info[1])
template_data['overlay_type'] = overlay_type
template_data['kubelet_bind_addr'] = _bind_addr(
hookenv.unit_private_ip())
template_data['proxy_bind_addr'] = _bind_addr(
hookenv.unit_get('public-address'))
template_data['kubeapi_server'] = api_servers
template_data['etcd_servers'] = ",".join([
'http://%s:%s' % (s[0], s[1]) for s in sorted(etcd_servers)])
template_data['identifier'] = os.environ['JUJU_UNIT_NAME'].replace(
'/', '-')
return _encode(template_data)
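# For illustration, a populated template_data looks roughly like this
# (all addresses and ports are made up):
#   {'overlay_type': 'flannel', 'kubelet_bind_addr': '10.0.0.2',
#    'proxy_bind_addr': '10.0.0.2', 'kubeapi_server': 'http://10.0.0.1:8080',
#    'etcd_servers': 'http://10.0.0.3:4001', 'identifier': 'kubernetes-0'}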
def _bind_addr(addr):
if addr.replace('.', '').isdigit():
return addr
try:
return socket.gethostbyname(addr)
except socket.error:
raise ValueError("Could not resolve address %s" % addr)
def _encode(d):
for k, v in d.items():
if isinstance(v, unicode):
d[k] = v.encode('utf8')
return d
def get_scoped_rel_attr(rel_name, rels, attr):
private_ip = hookenv.unit_private_ip()
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_data.get('private-address') != private_ip:
continue
if unit_data.get(attr):
return unit_data.get(attr)
def get_rel_hosts(rel_name, rels, keys=('private-address',)):
hosts = []
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_id == hookenv.local_unit():
continue
values = [unit_data.get(k) for k in keys]
if not all(values):
continue
hosts.append(values[0] if len(values) == 1 else values)
return hosts
def render_upstart(name, data):
tmpl_path = os.path.join(
os.environ.get('CHARM_DIR'), 'files', '%s.upstart.tmpl' % name)
with open(tmpl_path) as fh:
tmpl = fh.read()
rendered = tmpl % data
tgt_path = '/etc/init/%s.conf' % name
if os.path.exists(tgt_path):
with open(tgt_path) as fh:
contents = fh.read()
if contents == rendered:
return False
with open(tgt_path, 'w') as fh:
fh.write(rendered)
return True
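# For example, render_upstart('kubelet', data) reads
# files/kubelet.upstart.tmpl, substitutes the %(...)s keys from data, and
# rewrites /etc/init/kubelet.conf (returning True) unless the rendered
# contents already match what is on disk.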
def register_machine(apiserver, retry=False):
parsed = urlparse.urlparse(apiserver)
# identity = hookenv.local_unit().replace('/', '-')
private_address = hookenv.unit_private_ip()
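# The first line of /proc/meminfo is 'MemTotal: <value> kB'; the value
# is parsed out below to report the node's memory capacity.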
with open('/proc/meminfo') as fh:
info = fh.readline()
mem = info.strip().split(":")[1].strip().split()[0]
cpus = os.sysconf("SC_NPROCESSORS_ONLN")
registration_request = Registrator()
registration_request.data['Kind'] = 'Minion'
registration_request.data['id'] = private_address
registration_request.data['name'] = private_address
registration_request.data['metadata']['name'] = private_address
registration_request.data['spec']['capacity']['mem'] = mem + ' K'
registration_request.data['spec']['capacity']['cpu'] = cpus
registration_request.data['spec']['externalID'] = private_address
registration_request.data['status']['hostIP'] = private_address
response, result = registration_request.register(parsed.hostname,
parsed.port,
"/api/v1beta3/nodes")
print(response)
try:
registration_request.command_succeeded(response, result)
except ValueError:
# This happens when the machine is already registered;
# for now that is OK.
pass
def setup_kubernetes_group():
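# Ensure the kubernetes user is a member of the docker group so the
# kubernetes services can talk to the docker daemon's socket.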
output = subprocess.check_output(['groups', 'kubernetes'])
# TODO: check group exists
if 'docker' not in output:
subprocess.check_output(
['usermod', '-a', '-G', 'docker', 'kubernetes'])
if __name__ == '__main__':
hooks.execute(sys.argv)

View File

@ -0,0 +1,32 @@
#!/bin/bash
set -ex
# Install is guaranteed to run once per rootfs
echo "Installing kubernetes-node on $JUJU_UNIT_NAME"
apt-get update -qq
apt-get install -q -y \
bridge-utils \
python-dev \
python-pip \
wget
pip install path.py
# Create the necessary kubernetes group.
groupadd kubernetes
useradd -d /var/lib/kubernetes \
-g kubernetes \
-s /sbin/nologin \
--system \
kubernetes
install -d -m 0744 -o kubernetes -g kubernetes /var/lib/kubernetes
install -d -m 0744 -o kubernetes -g kubernetes /etc/kubernetes/manifests
# Stopping docker depends on where it was installed from.
# From the distro archive:
#sudo service docker.io stop
# Or from the upstream archive:
#sudo service docker stop

View File

@ -0,0 +1,52 @@
import subprocess
from path import path
class KubernetesInstaller():
"""
This class contains the logic needed to install kubernetes binary files.
"""
def __init__(self, arch, version, master, output_dir):
""" Gather the required variables for the install. """
# The kubernetes charm needs certain commands to be aliased.
self.aliases = {'kube-proxy': 'proxy',
'kubelet': 'kubelet'}
self.arch = arch
self.version = version
self.master = master
self.output_dir = output_dir
def download(self):
""" Download the kuberentes binaries from the kubernetes master. """
url = 'http://{0}/kubernetes/{1}/local/bin/linux/{2}'.format(
self.master, self.version, self.arch)
if not self.output_dir.isdir():
self.output_dir.makedirs_p()
for key in self.aliases:
uri = '{0}/{1}'.format(url, key)
destination = self.output_dir / key
wget = 'wget -nv {0} -O {1}'.format(uri, destination)
print(wget)
output = subprocess.check_output(wget.split())
print(output)
destination.chmod(0o755)
def install(self, install_dir=path('/usr/local/bin')):
""" Create links to the binary files to the install directory. """
if not install_dir.isdir():
install_dir.makedirs_p()
# Create the symbolic links to the real kubernetes binaries.
for key, value in self.aliases.iteritems():
target = self.output_dir / key
if target.exists():
link = install_dir / value
if link.exists():
link.remove()
target.symlink(link)
else:
print('Error: target file {0} does not exist.'.format(target))
exit(1)
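# A minimal usage sketch, mirroring how hooks.py drives this class
# (the master address and version here are illustrative):
#   installer = KubernetesInstaller('amd64', 'v0.8.1', '10.0.0.1',
#                                   path('/opt/kubernetes/bin'))
#   installer.download()  # fetch kubelet and kube-proxy from the master
#   installer.install()   # symlink them into /usr/local/bin as kubelet/proxy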

View File

@ -0,0 +1,84 @@
import httplib
import json
import time
class Registrator:
def __init__(self):
self.ds = {
"creationTimestamp": "",
"kind": "Minion",
"name": "", # private_address
"metadata": {
"name": "", #private_address,
},
"spec": {
"externalID": "", #private_address
"capacity": {
"mem": "", # mem + ' K',
"cpu": "", # cpus
}
},
"status": {
"conditions": [],
"hostIP": "", #private_address
}
}
@property
def data(self):
''' Returns a data-structure for population to make a request. '''
return self.ds
def register(self, hostname, port, api_path):
''' Contact the API Server for a new registration '''
headers = {"Content-type": "application/json",
"Accept": "application/json"}
connection = httplib.HTTPConnection(hostname, port)
print('CONN {0}'.format(connection))
connection.request("POST", api_path, json.dumps(self.data), headers)
response = connection.getresponse()
body = response.read()
print(body)
result = json.loads(body)
print("Response status:%s reason:%s body:%s" % (
response.status, response.reason, result))
return response, result
def update(self):
''' Contact the API Server to update a registration '''
# do a get on the API for the node
# repost to the API with any modified data
pass
def save(self):
''' Marshall the registration data '''
# TODO
pass
def command_succeeded(self, response, result):
''' Evaluate response data to determine if the command was successful '''
if response.status in [200, 201]:
print("Registered")
return True
elif response.status in [409,]:
print("Status Conflict")
# Suggestion: retry with a PUT instead of a POST on this response
# code; that predicates use of the update method.
# TODO
elif response.status in (500,) and result.get(
'message', '').startswith('The requested resource does not exist'):
# There's something fishy in the kube api here (0.4 dev): the first time
# we go to register a new minion, we always seem to get this error.
# https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
time.sleep(1)
print("Retrying registration...")
raise ValueError("Registration returned 500, retry")
# return register_machine(apiserver, retry=True)
else:
print("Registration error")
# TODO - get request data
raise RuntimeError("Unable to register machine with the api server")
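# A minimal usage sketch, following register_machine in hooks/hooks.py
# (host, port and address are illustrative):
#   r = Registrator()
#   r.data['id'] = '10.0.0.5'
#   response, result = r.register('10.0.0.1', 8080, '/api/v1beta3/nodes')
#   if r.command_succeeded(response, result):
#       print('Registered')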

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,15 @@
#!/bin/bash
set -ex
# Start is guaranteed to be called once after the unit is installed
# *AND* once every time the machine is rebooted.
if [ ! -f "$CHARM_DIR/.unit-state" ]
then
exit 0;
fi
service docker restart
service proxy restart
service kubelet restart

File diff suppressed because one or more lines are too long

[Binary image file, 76 KiB, not shown]

View File

@ -0,0 +1,23 @@
name: kubernetes
summary: Container Cluster Management Node
maintainers:
- Matt Bruzek <matthew.bruzek@canonical.com>
- Whit Morriss <whit.morriss@canonical.com>
- Charles Butler <charles.butler@canonical.com>
description: |
Provides a kubernetes node for running containers
See http://goo.gl/CSggxE
tags:
- ops
- network
subordinate: true
requires:
etcd:
interface: etcd
api:
interface: kubernetes-api
network:
interface: overlay-network
docker-host:
interface: juju-info
scope: container

View File

@ -0,0 +1,4 @@
flake8
pytest
bundletester
path.py

View File

@ -0,0 +1,45 @@
import json
from mock import MagicMock, patch
from path import Path
import pytest
import sys
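# Note: Path('__file__') takes a string literal, so it resolves against the
# current working directory; the tests therefore expect to be run from the
# charm root, which puts ./hooks on sys.path.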
d = Path('__file__').parent.abspath() / 'hooks'
sys.path.insert(0, d.abspath())
from lib.registrator import Registrator
class TestRegistrator():
def setup_method(self, method):
self.r = Registrator()
def test_data_type(self):
if type(self.r.data) is not dict:
pytest.fail("Invalid type")
@patch('json.loads')
@patch('httplib.HTTPConnection')
def test_register(self, httplibmock, jsonmock):
result = self.r.register('foo', 80, '/v1beta1/test')
httplibmock.assert_called_with('foo', 80)
requestmock = httplibmock().request
requestmock.assert_called_with(
"POST", "/v1beta1/test",
json.dumps(self.r.data),
{"Content-type": "application/json",
"Accept": "application/json"})
def test_command_succeeded(self):
response = MagicMock()
result = json.loads('{"status": "Failure", "kind": "Status", "code": 409, "apiVersion": "v1beta2", "reason": "AlreadyExists", "details": {"kind": "minion", "id": "10.200.147.200"}, "message": "minion \\"10.200.147.200\\" already exists", "creationTimestamp": null}')
response.status = 200
self.r.command_succeeded(response, result)
response.status = 500
with pytest.raises(RuntimeError):
self.r.command_succeeded(response, result)
response.status = 409
with pytest.raises(ValueError):
self.r.command_succeeded(response, result)

View File

@ -0,0 +1,8 @@
# import pytest
class TestHooks():
# TODO: Actually write tests.
def test_fake(self):
pass