The "vs" and the dot are separated

Signed-off-by: YuPengZTE <yu.peng36@zte.com.cn>
YuPengZTE 2016-09-26 17:05:53 +08:00
parent 3aa8abd687
commit d0f69ee0f9
11 changed files with 18 additions and 18 deletions


@@ -254,7 +254,7 @@ In the Enterprise Profile:
 In the Simple Profile:
 - There is a single `namespace` used by the single user.
-Namespaces versus userAccount vs Labels:
+Namespaces versus userAccount vs. Labels:
 - `userAccount`s are intended for audit logging (both name and UID should be
   logged), and to define who has access to `namespace`s.
 - `labels` (see [docs/user-guide/labels.md](../../docs/user-guide/labels.md))
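
To make the division of labor in the hunk above concrete, here is a minimal Go sketch; the types and field names are simplified stand-ins, not the actual Kubernetes API:

```go
package main

import "fmt"

// Hypothetical, simplified shapes: the namespace scopes the object and
// gates access, labels organize and select it, and the userAccount
// appears in audit logs rather than on the object itself.
type ObjectMeta struct {
	Namespace string            // access-control and scoping boundary
	Labels    map[string]string // free-form organization and selection
}

type AuditEntry struct {
	UserName string // userAccount name, logged for auditing
	UserUID  string // userAccount UID, logged for auditing
	Action   string
}

func main() {
	meta := ObjectMeta{
		Namespace: "team-a",
		Labels:    map[string]string{"env": "prod", "tier": "frontend"},
	}
	entry := AuditEntry{UserName: "alice@example.com", UserUID: "7f3a", Action: "create"}
	fmt.Printf("%+v\n%+v\n", meta, entry)
}
```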


@@ -121,7 +121,7 @@ If a third-party wants to track additional resources, it must follow the
 resource naming conventions prescribed by Kubernetes. This means the resource
 must have a fully-qualified name (i.e. mycompany.org/shinynewresource)
-## Resource Requirements: Requests vs Limits
+## Resource Requirements: Requests vs. Limits
 If a resource supports the ability to distinguish between a request and a limit
 for a resource, the quota tracking system will only cost the request value
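
A minimal Go sketch of the costing rule described in the hunk above; the `quotaCost` helper is hypothetical (the real quota system tracks full resource lists), so this only illustrates the request-over-limit preference:

```go
package main

import "fmt"

// When a resource distinguishes a request from a limit, quota charges
// the request value; otherwise it falls back to the limit.
func quotaCost(request, limit int64, hasRequest bool) int64 {
	if hasRequest {
		return request // quota tracks the requested amount
	}
	return limit // no request set: cost the limit instead
}

func main() {
	// A container requesting 100m CPU with a 500m limit is charged 100m.
	fmt.Println(quotaCost(100, 500, true)) // prints 100
}
```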


@@ -113,9 +113,9 @@ system external to Kubernetes.
 Kubernetes does not dictate how to divide up the space of user identifier
 strings. User names can be simple Unix-style short usernames, (e.g. `alice`), or
-may be qualified to allow for federated identity (`alice@example.com` vs
+may be qualified to allow for federated identity (`alice@example.com` vs.
 `alice@example.org`.) Naming convention may distinguish service accounts from
-user accounts (e.g. `alice@example.com` vs
+user accounts (e.g. `alice@example.com` vs.
 `build-service-account-a3b7f0@foo-namespace.service-accounts.example.com`), but
 Kubernetes does not require this.


@@ -63,7 +63,7 @@ resources](../user-guide/working-with-resources.md).*
 - [List Operations](#list-operations)
 - [Map Operations](#map-operations)
 - [Idempotency](#idempotency)
-- [Optional vs Required](#optional-vs-required)
+- [Optional vs. Required](#optional-vs-required)
 - [Defaulting](#defaulting)
 - [Late Initialization](#late-initialization)
 - [Concurrency Control and Consistency](#concurrency-control-and-consistency)
@@ -658,7 +658,7 @@ exists - instead, it will either return 201 Created or 504 with Reason
 allotted, and the client should retry (optionally after the time indicated in
 the Retry-After header).
-## Optional vs Required
+## Optional vs. Required
 Fields must be either optional or required.
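
This distinction typically surfaces directly in the Go structs of Kubernetes API types; a short sketch (the field names here are illustrative):

```go
package api

// Required fields are plain values; optional fields are pointers with
// `omitempty` so that "unset" is distinguishable from the zero value.
type ExampleSpec struct {
	// Required: clients must always set this.
	Replicas int32 `json:"replicas"`

	// Optional: nil means "unset", allowing the server to apply a
	// default or leave the field empty.
	MinReadySeconds *int32 `json:"minReadySeconds,omitempty"`
}
```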


@@ -109,7 +109,7 @@ fast-moving codebase - lock in your changes ASAP, and make merges be someone
 else's problem.
 Obviously, we want every PR to be useful on its own, so you'll have to use
-common sense in deciding what can be a PR vs what should be a commit in a larger
+common sense in deciding what can be a PR vs. what should be a commit in a larger
 PR. Rule of thumb - if this commit or set of commits is directly related to
 Feature-X and nothing else, it should probably be part of the Feature-X PR. If
 you can plausibly imagine someone finding value in this commit outside of


@@ -444,7 +444,7 @@ including discussion of:
 1. admission control
 1. initial placement of instances of a new
-   service vs scheduling new instances of an existing service in response
+   service vs. scheduling new instances of an existing service in response
    to auto-scaling
 1. rescheduling pods due to failure (response might be
    different depending on if it's failure of a node, rack, or whole AZ)


@@ -227,7 +227,7 @@ These addons should also be converted to multiple platforms:
 ### Conflicts
-What should we do if there's a conflict between keeping e.g. `linux/ppc64le` builds vs merging a release blocker?
+What should we do if there's a conflict between keeping e.g. `linux/ppc64le` builds vs. merging a release blocker?
 In fact, we faced this problem while this proposal was being written; in [#25243](https://github.com/kubernetes/kubernetes/pull/25243). It is quite obvious that the release blocker is of higher priority.


@@ -117,7 +117,7 @@ resembles:
   reduce the amount of memory garbage created during serialization and
   deserialization.
 * More efficient formats like Msgpack were considered, but they only offer
-  2x speed up vs the 10x observed for Protobuf
+  2x speed up vs. the 10x observed for Protobuf
 * gRPC was considered, but is a larger change that requires more core
   refactoring. This approach does not eliminate the possibility of switching
   to gRPC in the future.
@@ -356,7 +356,7 @@ deserialization of the remaining bytes into the `runtime.Unknown` type.
 ## Streaming wire format
 While the majority of Kubernetes APIs return single objects that can vary
-in type (Pod vs Status, PodList vs Status), the watch APIs return a stream
+in type (Pod vs. Status, PodList vs. Status), the watch APIs return a stream
 of identical objects (Events). At the time of this writing, this is the only
 current or anticipated streaming RESTful protocol (logging, port-forwarding,
 and exec protocols use a binary protocol over Websockets or SPDY).
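
A minimal Go sketch of one way such a stream of identical objects can be framed, assuming a 4-byte big-endian length prefix per serialized event; the exact framing chosen by the proposal may differ, so treat this as illustrative:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeFrame emits one frame: a 4-byte big-endian length header
// followed by the serialized event bytes.
func writeFrame(w io.Writer, obj []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(obj)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(obj)
	return err
}

// readFrame reads the length header, then exactly that many bytes.
func readFrame(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	obj := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	_, err := io.ReadFull(r, obj)
	return obj, err
}

func main() {
	var buf bytes.Buffer
	writeFrame(&buf, []byte("event-1"))
	writeFrame(&buf, []byte("event-2"))
	for {
		obj, err := readFrame(&buf)
		if err != nil {
			break // io.EOF ends the stream
		}
		fmt.Println(string(obj))
	}
}
```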


@@ -79,10 +79,10 @@ max number of active best-effort pods. In addition, the cluster-admin
 requires the ability to scope a quota that limits compute resources to
 exclude best-effort pods.
-### Ability to quota long-running vs bounded-duration compute resources
+### Ability to quota long-running vs. bounded-duration compute resources
 The cluster-admin may want to quota end-users separately
-based on long-running vs bounded-duration compute resources.
+based on long-running vs. bounded-duration compute resources.
 For example, a cluster-admin may offer more compute resources
 for long running pods that are expected to have a more permanent residence
@@ -94,7 +94,7 @@ request if there is no active traffic. An operator that wants to control
 density will offer lower quota limits for batch workloads than web applications.
 A classic example is a PaaS deployment where the cluster-admin may
-allow a separate budget for pods that run their web application vs pods that
+allow a separate budget for pods that run their web application vs. pods that
 build web applications.
 Another example is providing more quota to a database pod than a
@@ -105,8 +105,8 @@ pod that performs a database migration.
 * As a cluster-admin, I want the ability to quota
   * compute resource requests
   * compute resource limits
-  * compute resources for terminating vs non-terminating workloads
-  * compute resources for best-effort vs non-best-effort pods
+  * compute resources for terminating vs. non-terminating workloads
+  * compute resources for best-effort vs. non-best-effort pods
 ## Proposed Change
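
A hedged Go sketch of the scoping logic implied by the user stories above, with hypothetical `Pod` and `Scope` types; the proposal itself defines scopes on the ResourceQuota API object, so this only models the matching idea:

```go
package main

import "fmt"

// A quota applies only to pods matching its scope, so bounded-duration
// (terminating) and best-effort pods can be budgeted separately.
type Pod struct {
	Terminating bool // bounded duration, e.g. an active deadline is set
	BestEffort  bool // makes no compute resource request
}

type Scope string

const (
	ScopeTerminating    Scope = "Terminating"
	ScopeNotTerminating Scope = "NotTerminating"
	ScopeBestEffort     Scope = "BestEffort"
	ScopeNotBestEffort  Scope = "NotBestEffort"
)

// matches reports whether a pod falls under a quota with the given scope.
func matches(p Pod, s Scope) bool {
	switch s {
	case ScopeTerminating:
		return p.Terminating
	case ScopeNotTerminating:
		return !p.Terminating
	case ScopeBestEffort:
		return p.BestEffort
	case ScopeNotBestEffort:
		return !p.BestEffort
	}
	return false
}

func main() {
	batch := Pod{Terminating: true, BestEffort: true}
	web := Pod{Terminating: false, BestEffort: false}
	fmt.Println(matches(batch, ScopeTerminating), matches(web, ScopeNotTerminating)) // true true
}
```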


@@ -82,7 +82,7 @@ feature's owner(s). The following are suggested conventions:
   in each component to toggle on/off.
 - Alpha features should be disabled by default. Beta features may
   be enabled by default. Refer to docs/devel/api_changes.md#alpha-beta-and-stable-versions
-  for more detailed guidance on alpha vs beta.
+  for more detailed guidance on alpha vs. beta.
 ## Upgrade support
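
A minimal Go sketch of a feature-gate lookup consistent with the convention above, assuming hypothetical feature names and a flag-style override map (not the actual component wiring):

```go
package main

import "fmt"

type prerelease string

const (
	alpha prerelease = "alpha"
	beta  prerelease = "beta"
)

type featureSpec struct {
	enabledByDefault bool
	stage            prerelease
}

// Alpha features default off; beta features may default on.
var featureGates = map[string]featureSpec{
	"AlphaFeatureX": {enabledByDefault: false, stage: alpha},
	"BetaFeatureY":  {enabledByDefault: true, stage: beta},
}

// enabled consults per-component overrides first (e.g. values parsed
// from a command-line toggle), then falls back to the default.
func enabled(name string, overrides map[string]bool) bool {
	if v, ok := overrides[name]; ok {
		return v
	}
	return featureGates[name].enabledByDefault
}

func main() {
	overrides := map[string]bool{"AlphaFeatureX": true}
	fmt.Println(enabled("AlphaFeatureX", overrides), enabled("BetaFeatureY", nil)) // true true
}
```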


@@ -590,7 +590,7 @@ renaming parameters seems less likely than changing field paths.
 Openshift defines templates as a first class resource so they can be created/retrieved/etc via standard tools. This allows client tools to list available templates (available in the openshift cluster), allows existing resource security controls to be applied to templates, and generally provides a more integrated feel to templates. However there is no explicit requirement that for k8s to adopt templates, it must also adopt storing them in the cluster.
-### Processing templates (server vs client)
+### Processing templates (server vs. client)
 Openshift handles template processing via a server endpoint which consumes a template object from the client and returns the list of objects
 produced by processing the template. It is also possible to handle the entire template processing flow via the client, but this was deemed