Merge pull request #8296 from jlowdermilk/gen-analytics

Add ga-beacon analytics to gendocs scripts
Victor Marmol
2015-05-18 08:40:02 -07:00
241 changed files with 780 additions and 53 deletions

View File

@@ -14,3 +14,6 @@
* An overview of the [Design of Kubernetes](design)
* There are example files and walkthroughs in the [examples](../examples) folder.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/README.md?pixel)]()

View File

@@ -201,4 +201,7 @@ kube-dns-oh43e 10.244.1.10 etcd
monitoring-heapster-controller-fplln 10.244.0.4 heapster kubernetes/heapster:v0.8 kubernetes-minion-2il2.c.kubernetes-user2.internal/130.211.155.16 kubernetes.io/cluster-service=true,name=heapster,uses=monitoring-influxdb Running 5 hours
monitoring-influx-grafana-controller-0133o 10.244.3.4 influxdb kubernetes/heapster_influxdb:v0.3 kubernetes-minion-kmin.c.kubernetes-user2.internal/130.211.173.22 kubernetes.io/cluster-service=true,name=influxGrafana Running 5 hours
grafana kubernetes/heapster_grafana:v0.4
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing-the-cluster.md?pixel)]()

View File

@@ -73,3 +73,6 @@ variety of use cases:
Localhost will no longer be needed, and will not be the default.
However, the localhost port may continue to be an option for
installations that want to do their own auth proxy.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing_the_api.md?pixel)]()

View File

@@ -22,3 +22,6 @@ Possible information that could be recorded in annotations:
* phone/pager number(s) of person(s) responsible, or directory entry where that info could be found, such as a team website
Yes, this information could be stored in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, etc.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/annotations.md?pixel)]()

View File

@@ -578,3 +578,6 @@ Events
TODO: Document events (refer to another doc for details)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-conventions.md?pixel)]()

View File

@@ -51,3 +51,6 @@ Some important differences between v1beta1/2 and v1beta3:
* Pull policies changed from `PullAlways`, `PullNever`, and `PullIfNotPresent` to `Always`, `Never`, and `IfNotPresent`.
* The volume `source` is inlined into `volume` rather than nested.
* Host volumes have been changed from `hostDir` to `hostPath` to better reflect that they can be files or directories.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api.md?pixel)]()
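As an illustration of the renamed fields (not part of this diff), here is a minimal sketch of a v1beta3 pod with a `hostPath` volume, written as a shell heredoc; the pod name, image, and host path are invented for the example.

```bash
# Hypothetical v1beta3 pod: the volume source is inlined into the volume,
# and the old hostDir field is now hostPath (a file or a directory).
cat <<'EOF' > hostpath-pod.yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: hostpath-example          # invented name
spec:
  containers:
    - name: shell
      image: busybox              # placeholder image
      command: ["sleep", "3600"]
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /var/data           # invented host path
EOF
kubectl create -f hostpath-pod.yaml
```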

View File

@@ -38,3 +38,6 @@ after the user has been (re)authenticated by a *bedrock* authentication
provider external to Kubernetes. We plan to make it easy to develop modules
that interface between kubernetes and a bedrock authentication provider (e.g.
github.com, google.com, enterprise directory, kerberos, etc.)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/authentication.md?pixel)]()

View File

@@ -101,3 +101,6 @@ to a remote authorization service. Authorization modules can implement
their own caching to reduce the cost of repeated authorization calls with the
same or similar arguments. Developers should then consider the interaction between
caching and revocation of permissions.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/authorization.md?pixel)]()

View File

@@ -128,3 +128,6 @@ calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in th
When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer), so that
failures of a single cluster are not visible to end users.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/availability.md?pixel)]()

View File

@@ -76,3 +76,6 @@ Server-side support:
1. Field selection [#1362](https://github.com/GoogleCloudPlatform/kubernetes/issues/1362)
1. Field filtering [#1459](https://github.com/GoogleCloudPlatform/kubernetes/issues/1459)
1. Operate on uids
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cli-roadmap.md?pixel)]()

View File

@@ -12,3 +12,6 @@
* [PHP](https://github.com/devstub/kubernetes-api-php-client)
* [Node.js](https://github.com/tenxcloud/node-kubernetes-client)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/client-libraries.md?pixel)]()

View File

@@ -70,3 +70,6 @@ project.](salt.md).
* **Authorization** [authorization](authorization.md)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster-admin-guide.md?pixel)]()

View File

@@ -57,3 +57,6 @@ If you want more control over the upgrading process, you may use the following w
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster_management.md?pixel)]()

View File

@@ -85,3 +85,6 @@ Hook handlers are the way that hooks are surfaced to containers.  Containers ca
* HTTP - Executes an HTTP request against a specific endpoint on the container.  HTTP error codes (5xx) and non-response/failure to connect are treated as container failures. Parameters are passed to the http endpoint as query args (e.g. http://some.server.com/some/path?reason=HEALTH)
[1]: http://man7.org/linux/man-pages/man2/gethostname.2.html
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/container-environment.md?pixel)]()
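As a rough illustration of the HTTP hook style described above, an invocation of a hook handler can be approximated with curl; the host, path, and `reason` value mirror the example URL in the text and are not a real endpoint.

```bash
# Simulate an HTTP hook call: parameters travel as query args, and a 5xx
# response (or no response at all) would be treated as a container failure.
curl -sf "http://some.server.com/some/path?reason=HEALTH" \
  || echo "hook endpoint reported failure"
```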

View File

@@ -87,3 +87,6 @@ The relationship between Docker's capabilities and [Linux capabilities](http://m
| SETFCAP | CAP_SETFCAP |
| WAKE_ALARM | CAP_WAKE_ALARM |
| BLOCK_SUSPEND | CAP_BLOCK_SUSPEND |
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/containers.md?pixel)]()

View File

@@ -15,3 +15,6 @@ A single Kubernetes cluster is not intended to span multiple availability zones.
Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner.
For more about the Kubernetes architecture, see [architecture](architecture.md).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/README.md?pixel)]()

View File

@@ -246,3 +246,6 @@ Initial implementation:
Improvements:
- API server does logging instead.
- Policies to drop logging for high rate trusted API calls, or by users performing audit or other sensitive functions.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/access.md?pixel)]()

View File

@@ -77,3 +77,6 @@ will ensure the following:
6. Object is persisted
If there is an error at any step, the request is canceled.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control.md?pixel)]()

View File

@@ -130,3 +130,6 @@ In the current proposal, the **LimitRangeItem** matches purely on **LimitRangeIt
It is expected we will want to define limits for particular pods or containers by name/uid and label/field selector.
To make a **LimitRangeItem** more restrictive, we intend to add these additional restrictions at a future point in time.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_limit_range.md?pixel)]()

View File

@@ -151,3 +151,6 @@ replicationcontrollers 5 20
resourcequotas 1 1
services 3 5
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_resource_quota.md?pixel)]()

View File

@@ -42,3 +42,6 @@ The scheduler binds unscheduled pods to nodes via the `/binding` API. The schedu
All other cluster-level functions are currently performed by the Controller Manager. For instance, `Endpoints` objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller. These could eventually be split into separate components to make them independently pluggable.
The [`replicationController`](../replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/architecture.md?pixel)]()

View File

@@ -58,3 +58,6 @@ This diagram shows dynamic clustering using the bootstrap API endpoint. That API endp
This flow has the admin manually approving the kubelet signing requests. This is the `queue` policy defined above. This manual intervention could be replaced by code that can verify the signing requests via other means.
![Dynamic Sequence Diagram](clustering/dynamic.png)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering.md?pixel)]()

View File

@@ -23,4 +23,6 @@ If you are using boot2docker and get warnings about clock skew (or if things are
## Automatically rebuild on file changes
If you have the fswatch utility installed, you can have it monitor the file system and automatically rebuild when files have changed. Just do a `make watch`.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering/README.md?pixel)]()
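For the curious, a minimal sketch of what `make watch` does under the hood, assuming fswatch is installed and the default make target regenerates the output:

```bash
# Rebuild whenever anything under the current directory changes.
# fswatch -o prints one line per batch of filesystem events.
fswatch -o . | while read -r _; do
  make
done
```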

View File

@@ -141,4 +141,6 @@ functionality. We need to make sure that users are not allowed to execute
remote commands or do port forwarding to containers they aren't allowed to
access.
Additional work is required to ensure that multiple command execution or port forwarding connections from different clients are not able to see each other's data. This can most likely be achieved via SELinux labeling and unique process contexts.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/command_execution_port_forwarding.md?pixel)]()

View File

@@ -76,3 +76,6 @@ This demonstrates what would have been 20 separate entries (indicating schedulin
* PR [#4206](https://github.com/GoogleCloudPlatform/kubernetes/issues/4206): Modify Event struct to allow compressing multiple recurring events in to a single event
* PR [#4306](https://github.com/GoogleCloudPlatform/kubernetes/issues/4306): Compress recurring events in to a single event to optimize etcd storage
* PR [#4444](https://github.com/GoogleCloudPlatform/kubernetes/pull/4444): Switch events history to use LRU cache instead of map
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/event_compression.md?pixel)]()

View File

@@ -88,3 +88,6 @@ objectives.
1. Each container is started up with enough metadata to distinguish the pod from whence it came.
2. Each attempt to run a container is assigned a UID (a string) that is unique across time.
1. This may correspond to Docker's container ID.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/identifiers.md?pixel)]()

View File

@@ -332,4 +332,6 @@ has a deletion timestamp and that its list of finalizers is empty. As a result,
content associated with that namespace has been purged. It performs a final DELETE action
to remove that Namespace from the storage.
At this point, all content associated with that Namespace, and the Namespace itself are gone.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/namespaces.md?pixel)]()

View File

@@ -106,3 +106,6 @@ Another approach could be to create a new host interface alias for each pod, if
### IPv6
IPv6 would be a nice option, also, but we can't depend on it yet. Docker support is in progress: [Docker issue #2974](https://github.com/dotcloud/docker/issues/2974), [Docker issue #6923](https://github.com/dotcloud/docker/issues/6923), [Docker issue #6975](https://github.com/dotcloud/docker/issues/6975). Additionally, direct ipv6 assignment to instances doesn't appear to be supported by major cloud providers (e.g., AWS EC2, GCE) yet. We'd happily take pull requests from people running Kubernetes on bare metal, though. :-)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/networking.md?pixel)]()

View File

@@ -212,3 +212,6 @@ cluster/kubectl.sh delete pvc myclaim-1
The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.
Admins can script the recycling of released volumes. Future dynamic provisioners will understand how a volume should be recycled.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/persistent-storage.md?pixel)]()

View File

@@ -53,3 +53,6 @@ TODO
## General principles
* [Eric Raymond's 17 UNIX rules](https://en.wikipedia.org/wiki/Unix_philosophy#Eric_Raymond.E2.80.99s_17_Unix_Rules)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/principles.md?pixel)]()

View File

@@ -558,3 +558,6 @@ source. Both containers will have the following files present on their filesyst
/etc/secret-volume/username
/etc/secret-volume/password
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/secrets.md?pixel)]()
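A container consuming that secret volume can read the files like any other part of its filesystem; a minimal sketch, assuming the mount path shown above:

```bash
# Read the secret data that the kubelet materialized into the volume.
username=$(cat /etc/secret-volume/username)
password=$(cat /etc/secret-volume/password)
echo "connecting as ${username}"   # avoid echoing the password itself
```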

View File

@@ -115,3 +115,6 @@ Both the Kubelet and Kube Proxy need information related to their specific roles
The controller manager for Replication Controllers and other future controllers acts on behalf of a user via delegation to perform automated maintenance on Kubernetes resources. Their ability to access or modify resource state should be strictly limited to their intended duties, and they should be prevented from accessing information not pertinent to their role. For example, a replication controller needs only to create a copy of a known pod configuration, to determine the running state of an existing pod, or to delete an existing pod that it created - it does not need to know the contents or current state of a pod, nor have access to any data in the pod's attached volumes.
The Kubernetes pod scheduler is responsible for reading data from the pod to fit it onto a minion in the cluster. At a minimum, it needs access to view the ID of a pod (to craft the binding), its current state, any resource information necessary to identify placement, and other data relevant to concerns like anti-affinity, zone or region preference, or custom logic. It does not need the ability to modify pods or see other resources, only to create bindings. It should not need the ability to delete bindings unless the scheduler takes control of relocating components on failed hosts (which could be implemented by a separate component that can delete bindings but not create them). The scheduler may need read access to user or project-container information to determine preferential location (underspecified at this time).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security.md?pixel)]()

View File

@@ -155,3 +155,6 @@ has defined capabilities or privileged. Contexts that attempt to define a UID o
will be denied by default. In the future the admission plugin will base this decision upon
configurable policies that reside within the [service account](https://github.com/GoogleCloudPlatform/kubernetes/pull/2297).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/security_context.md?pixel)]()

View File

@@ -162,3 +162,6 @@ to services in the same namespace and read-write access to events in that namesp
Finally, it may provide an interface to automate creation of new serviceAccounts. In that case, the user may want
to GET serviceAccounts to see what has been created.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/service_accounts.md?pixel)]()

View File

@@ -89,3 +89,6 @@ then ```foo-next``` is synthesized using the pattern ```<controller-name>-<hash-
* Otherwise, ```foo-next``` and ```foo``` both exist
* Set ```desired-replicas``` annotation on ```foo``` to match the annotation on ```foo-next```
* Goto Rollout with ```foo``` and ```foo-next``` trading places.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/simple-rolling-update.md?pixel)]()

View File

@@ -19,3 +19,6 @@ Docs in this directory relate to developing Kubernetes.
and how the version information gets embedded into the built binaries.
* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/README.md?pixel)]()

View File

@@ -332,3 +332,6 @@ the change gets in. If you are unsure, ask. Also make sure that the change gets
## Adding new REST objects
TODO(smarterclayton): write this.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/api_changes.md?pixel)]()

View File

@@ -5,3 +5,6 @@ Coding style advice for contributors
- https://github.com/golang/go/wiki/CodeReviewComments
- https://gist.github.com/lavalamp/4bd23295a9f32706a48f
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/coding-conventions.md?pixel)]()

View File

@@ -38,3 +38,6 @@ PRs that are incorrectly judged to be merge-able, may be reverted and subject to
## Holds
Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/collab.md?pixel)]()

View File

@@ -333,3 +333,6 @@ export KUBERNETES_MINION_MEMORY=2048
#### I ran vagrant suspend and nothing works!
```vagrant suspend``` seems to mess up the network. It's not supported at this time.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/developer-guides/vagrant.md?pixel)]()

View File

@@ -267,3 +267,6 @@ git remote set-url --push upstream no_push
```
hack/run-gendocs.sh
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/development.md?pixel)]()

View File

@@ -175,3 +175,6 @@ take the place of common sense and good taste. Use your best judgment, but put
a bit of thought into how your work can be made easier to review. If you do
these things your PRs will flow much more easily.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/faster_reviews.md?pixel)]()

View File

@@ -64,3 +64,6 @@ Eventually you will have sufficient runs for your purposes. At that point you ca
If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller.
Happy flake hunting!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/flaky-tests.md?pixel)]()
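One rough way to tally the flakes after such a run, assuming failed test containers exit with a non-zero status; the exact STATUS strings depend on your Docker version:

```bash
# Count containers that exited non-zero (candidate flakes), skipping the
# -1 exits produced when the replication controller is stopped.
docker ps -a --no-trunc \
  | grep "Exited (" \
  | grep -v "Exited (0)" \
  | grep -v "Exited (-1)" \
  | wc -l
```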

View File

@@ -17,3 +17,6 @@ Definitions
* design - priority/design is for issues that are used to track design discussions
* support - priority/support is used for issues tracking user support requests
* untriaged - anything without a priority/X label will be considered untriaged
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/issues.md?pixel)]()

View File

@@ -24,3 +24,6 @@ The following are conventions for the glog levels to use. [glog](http://godoc.org/g
* Logging in particularly thorny parts of code where you may want to come back later and check it
As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4). If you wish to change the log level, you can pass in `-v=X` where X is the desired maximum level to log.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/logging.md?pixel)]()
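For example, a development or QE environment might run a component above the practical default of V(2); the component shown here is a placeholder, and a real invocation needs its usual configuration flags as well.

```bash
# Run the kubelet with glog verbosity 3; -v=X sets the maximum level logged.
kubelet -v=3 --logtostderr=true
```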

View File

@@ -32,3 +32,6 @@ to get 30 sec. CPU profile.
## Contention profiling
To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/profiling.md?pixel)]()
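With profiling enabled, the standard pprof tooling can be pointed at those endpoints; the host and port below are assumptions about where the master is listening.

```bash
# 30-second CPU profile (the default window for /debug/pprof/profile).
go tool pprof http://localhost:8080/debug/pprof/profile

# Contention profile, available once rt.SetBlockProfileRate(1) is in place.
go tool pprof http://localhost:8080/debug/pprof/block
```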

View File

@@ -14,3 +14,6 @@ We want to limit the total number of PRs in flight to:
* Maintain a clean project
* Remove old PRs that would be difficult to rebase as the underlying code has changed over time
* Encourage code velocity
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/pull-requests.md?pixel)]()

View File

@@ -163,3 +163,6 @@ After this summary, preamble, all the relevant PRs/issues that got in that
version should be listed and linked, together with a small summary understandable
by plain mortals (in a perfect world the PR/issue title would be enough, but it is
often too cryptic/geeky/domain-specific for that).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/releasing.md?pixel)]()

View File

@@ -97,3 +97,6 @@ These guidelines say *what* to do. See the Rationale section for *why*.
if you use another Configuration Management tool -- you just have to do some manual steps
during testing and deployment.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/writing-a-getting-started-guide.md?pixel)]()

View File

@@ -33,3 +33,6 @@ Guide](cluster-admin-guide.md).
## Contributing to the Kubernetes Project
See this [README](../docs/devel/README.md).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/developer-guide.md?pixel)]()

View File

@@ -36,3 +36,6 @@ time.
## For more information
See [the docs for the cluster addon](../cluster/addons/dns/README.md).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/dns.md?pixel)]()

View File

@@ -45,3 +45,6 @@ spec:
fieldPath: metadata.namespace
restartPolicy: Never
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/downward_api.md?pixel)]()

View File

@@ -57,3 +57,6 @@ Definition of columns:
- **Community**: Actively supported by community contributions. May not work with more recent releases of kubernetes.
- **Inactive**: No active maintainer. Not recommended for first-time K8s users, and may be deleted soon.
- **Notes** is relevant information such as version k8s used.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/README.md?pixel)]()

View File

@@ -212,3 +212,6 @@ Visit the public IP address in your browser to view the running pod.
```bash
kubectl delete pods hello
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws-coreos.md?pixel)]()

View File

@@ -79,3 +79,6 @@ cluster/kube-down.sh
## Further reading
Please see the [Kubernetes docs](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs) for more details on administering
and using a Kubernetes cluster.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws.md?pixel)]()

View File

@@ -19,3 +19,6 @@ mv kubectl /usr/local/bin/
```bash
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/aws/kubectl.md?pixel)]()

View File

@@ -126,3 +126,6 @@ Look in `api/examples/` for more examples
```
cluster/kube-down.sh
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/azure.md?pixel)]()

View File

@@ -21,3 +21,6 @@ make release
```
For more details on the release process see the [`build/` directory](../../build)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/binary_release.md?pixel)]()

View File

@@ -162,3 +162,6 @@ centos-minion <none> Ready
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/centos/centos_manual_config.md?pixel)]()

View File

@@ -88,3 +88,6 @@ SSH to it using the key that was created and using the _core_ user and you can l
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/cloudstack.md?pixel)]()

View File

@@ -10,3 +10,6 @@ There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com)
* [Yet another multi-node cluster using cloud-config and Vagrant](https://github.com/AntonioMeireles/kubernetes-vagrant-coreos-cluster/blob/master/README.md) (similar to the one above but with an increased, more *aggressive* focus on features and flexibility)
* [Multi-node cluster with Vagrant and fleet units using a small OS X App](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
* [Resizable multi-node cluster on Azure with Weave](coreos/azure/README.md)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos.md?pixel)]()

View File

@@ -185,3 +185,6 @@ If you don't care about the Azure bill, you can tear down the cluster. It's
> Note: make sure to use the _latest state file_, as after resizing there is a new one.
By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/azure/README.md?pixel)]()

View File

@@ -674,3 +674,6 @@ List Kubernetes
Kill all pods:
for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]()

View File

@@ -132,3 +132,6 @@ hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o node.i
#### Provision worker nodes
Boot one or more nodes from the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using `node.iso` as a config drive.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_multinode_cluster.md?pixel)]()

View File

@@ -56,3 +56,6 @@ hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o standa
```
Boot the [vmware image](https://coreos.com/docs/running-coreos/platforms/vmware) using the `standalone.iso` as a config drive.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/coreos_single_node_cluster.md?pixel)]()

View File

@@ -43,3 +43,6 @@ See [here](docker-multinode/worker.md) for detailed instructions.
Once your cluster has been created you can [test it out](docker-multinode/testing.md)
For more complete applications, please look in the [examples directory](../../examples)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode.md?pixel)]()

View File

@@ -141,3 +141,6 @@ If all else fails, ask questions on IRC at #google-containers.
### Next steps
Move on to [adding one or more workers](worker.md)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/master.md?pixel)]()

View File

@@ -55,4 +55,6 @@ And list the pods
kubectl get pods
```
You should see pods landing on the newly added machine.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/testing.md?pixel)]()

View File

@@ -130,3 +130,6 @@ Make the API call to add the node, you should do this on the master node that yo
### Next steps
Move on to [testing your cluster](testing.md) or [add another node](#adding-a-kubernetes-worker-node-via-docker)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker-multinode/worker.md?pixel)]()

View File

@@ -79,3 +79,6 @@ Many of these containers run under the management of the ```kubelet``` binary, w
the cluster, you need to first kill the kubelet container, and then any other containers.
You may use ```docker ps -a | awk '{print $1}' | xargs docker kill```; note that this kills _all_ containers running under Docker, so use it with caution.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/docker.md?pixel)]()

View File

@@ -229,3 +229,6 @@ curl http://localhost
```
That's it!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_ansible_config.md?pixel)]()

View File

@@ -189,3 +189,6 @@ $ kubectl delete -f node.json
**The cluster should be running! Launch a test pod.**
You should have a functional cluster, check out [101](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md)!
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/fedora_manual_config.md?pixel)]()

View File

@@ -157,3 +157,6 @@ PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
```
* The Kubernetes multi-node cluster is now set up with overlay networking provided by flannel.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/fedora/flannel_multi_node_cluster.md?pixel)]()

View File

@@ -104,3 +104,6 @@ field values:
* Source Ranges: `10.0.0.0/8`
* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/gce.md?pixel)]()
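The same rule can be created from the command line with gcloud; the rule name and network below are invented, and the field values mirror the list above.

```bash
# Create a firewall rule matching the field values listed above.
gcloud compute firewall-rules create default-allow-internal \
  --network default \
  --source-ranges 10.0.0.0/8 \
  --allow tcp:1-65535,udp:1-65535,icmp
```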

View File

@@ -208,3 +208,6 @@ Azure | TBD
Digital Ocean | TBD
MAAS (bare metal) | TBD
GCE | TBD
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/juju.md?pixel)]()

View File

@@ -252,3 +252,6 @@ usermod -a -G libvirtd $USER
#### error: Out of memory initializing network (virsh net-create...)
Ensure libvirtd has been restarted since ebtables was installed.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/libvirt-coreos.md?pixel)]()

View File

@@ -110,3 +110,6 @@ One or more of the kubernetes daemons might've crashed. Tail the logs of each in
#### The pods fail to connect to the services by host names
The local-up-cluster.sh script doesn't start a DNS service. A similar situation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/issues/6667). You can start one manually. Related documentation can be found [here](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns#how-do-i-configure-it)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/locally.md?pixel)]()

View File

@@ -26,3 +26,6 @@ Elasticsearch service (more information to follow shortly in the contrib directo
To enable logging of Docker containers in a cluster using Google Cloud
Platform, set the config flags ``ENABLE_NODE_LOGGING`` to ``true`` and
``LOGGING_DESTINATION`` to ``gcp``.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/logging.md?pixel)]()
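Concretely, that configuration would be applied before bringing the cluster up; a sketch assuming the standard `kube-up.sh` workflow:

```bash
# Enable node logging to Google Cloud Logging, then bring up the cluster.
export ENABLE_NODE_LOGGING=true
export LOGGING_DESTINATION=gcp
cluster/kube-up.sh
```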

View File

@@ -302,3 +302,6 @@ Now, you can visit the guestbook in your browser!
[10]: mesos/k8s-guestbook.png
[11]: http://mesos.apache.org/
[12]: https://google.mesosphere.com/clusters
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/mesos.md?pixel)]()

View File

@@ -42,3 +42,6 @@ The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your kubernetes cluster.
[![Screencast](http://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](http://www.youtube.com/watch?v=JyyST4ZKne8)
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ovirt.md?pixel)]()
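A sketch of wiring that file in, assuming the controller manager's standard cloud-provider flags and an invented path for the config file:

```bash
# Point kube-controller-manager at the oVirt cloud provider configuration.
kube-controller-manager \
  --cloud-provider=ovirt \
  --cloud-config=/etc/kubernetes/ovirt-cloud.conf
```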

View File

@@ -45,3 +45,6 @@ The current cluster design is inspired by:
- eth0 - Public Interface used for servers/containers to reach the internet
- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
- eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rackspace.md?pixel)]()

View File

@@ -170,3 +170,6 @@ Please try:
`$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh` again to start over.
4. You can also customize your own settings in `/etc/default/{component_name}` after the setup has succeeded.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/ubuntu.md?pixel)]()

View File

@@ -300,3 +300,6 @@ export KUBERNETES_MINION_MEMORY=2048
#### I ran vagrant suspend and nothing works!
```vagrant suspend``` seems to mess up the network. It's not supported at this time.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vagrant.md?pixel)]()

View File

@@ -78,3 +78,6 @@ The output of `kube-up.sh` displays the IP addresses of the VMs it deploys. You
can log into any VM as the `kube` user to poke around and figure out what is
going on (find yourself authorized with your SSH key, or use the password
`kube` otherwise).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vsphere.md?pixel)]()

View File

@@ -53,3 +53,6 @@ occurrences of same-Name objects. See [identifiers](identifiers.md).
: A directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes
Volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the Volume
directory and/or device. See [volumes](volumes.md).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/glossary.md?pixel)]()

View File

@@ -8,3 +8,6 @@ Names are generally client-provided. Only one object of a given kind can have a
## UIDs
UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/identifiers.md?pixel)]()

View File

@@ -30,3 +30,6 @@ Pull Policy is per-container, but any user of the cluster will have access to al
## Updating Images
The default pull policy is `PullIfNotPresent`, which causes the Kubelet to not pull an image if it already exists. If you would like to always force a pull, you must set a pull image policy of `PullAlways` or specify a `:latest` tag on your image.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/images.md?pixel)]()
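To force a pull on every start, the pull policy can be set explicitly in the container spec; a minimal sketch with invented pod and image names (field names assume the v1beta3 API):

```bash
# Pod whose image is re-pulled on every start, regardless of the local cache.
cat <<'EOF' | kubectl create -f -
apiVersion: v1beta3
kind: Pod
metadata:
  name: always-pull-example        # invented name
spec:
  containers:
    - name: app
      image: example.com/app:1.2   # invented image
      imagePullPolicy: Always
EOF
```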

View File

@@ -149,3 +149,6 @@ $ kubectl config set-context queen-anne-context --cluster=pig-cluster --user=blac
$ kubectl config set-context federal-context --cluster=horse-cluster --user=green-user --namespace=chisel-ns
$ kubectl config use-context federal-context
```
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubeconfig-file.md?pixel)]()

View File

@@ -65,4 +65,6 @@ kubectl
* [kubectl update](kubectl_update.md) - Update a resource by filename or stdin.
* [kubectl version](kubectl_version.md) - Print the client and server version information.
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.494626806 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.556347262 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl.md?pixel)]()

View File

@@ -49,4 +49,6 @@ kubectl api-versions
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.494346454 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.555704962 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_api-versions.md?pixel)]()

View File

@@ -49,4 +49,6 @@ kubectl cluster-info
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.494226337 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.555514789 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_cluster-info.md?pixel)]()

View File

@@ -62,4 +62,6 @@ kubectl config SUBCOMMAND
* [kubectl config use-context](kubectl_config_use-context.md) - Sets the current-context in a kubeconfig file
* [kubectl config view](kubectl_config_view.md) - displays Merged kubeconfig settings or a specified kubeconfig file.
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.494113712 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.555327159 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config.md?pixel)]()

View File

@@ -64,4 +64,6 @@ $ kubectl config set-cluster e2e --insecure-skip-tls-verify=true
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.493372429 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.553839852 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set-cluster.md?pixel)]()

View File

@@ -57,4 +57,6 @@ $ kubectl config set-context gce --user=cluster-admin
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.493620985 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.554224777 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set-context.md?pixel)]()

View File

@@ -77,4 +77,6 @@ $ kubectl set-credentials cluster-admin --client-certificate=~/.kube/admin.crt -
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.493498685 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.55402965 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set-credentials.md?pixel)]()

View File

@@ -51,4 +51,6 @@ kubectl config set PROPERTY_NAME PROPERTY_VALUE
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.49374188 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.554534222 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_set.md?pixel)]()

View File

@@ -50,4 +50,6 @@ kubectl config unset PROPERTY_NAME
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.493867298 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.554933161 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_unset.md?pixel)]()

View File

@@ -49,4 +49,6 @@ kubectl config use-context CONTEXT_NAME
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.493987321 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.555123528 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_use-context.md?pixel)]()

View File

@@ -72,4 +72,6 @@ $ kubectl config view -o template --template='{{range .users}}{{ if eq .name "e2
### SEE ALSO
* [kubectl config](kubectl_config.md) - config modifies kubeconfig files
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.493241636 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.553648867 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_config_view.md?pixel)]()

View File

@@ -62,4 +62,6 @@ $ cat pod.json | kubectl create -f -
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
###### Auto generated by spf13/cobra at 2015-05-08 20:26:40.491140012 +0000 UTC
###### Auto generated by spf13/cobra at 2015-05-15 00:05:04.550199549 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_create.md?pixel)]()

Some files were not shown because too many files have changed in this diff.