The VS and dot are separated

Signed-off-by: YuPengZTE <yu.peng36@zte.com.cn>
YuPengZTE
2016-09-26 17:05:53 +08:00
parent 3aa8abd687
commit d0f69ee0f9
11 changed files with 18 additions and 18 deletions

@@ -444,7 +444,7 @@ including discussion of:
 1. admission control
 1. initial placement of instances of a new
-   service vs scheduling new instances of an existing service in response
+   service vs. scheduling new instances of an existing service in response
    to auto-scaling
 1. rescheduling pods due to failure (response might be
    different depending on if it's failure of a node, rack, or whole AZ)

@@ -227,7 +227,7 @@ These addons should also be converted to multiple platforms:
 ### Conflicts
-What should we do if there's a conflict between keeping e.g. `linux/ppc64le` builds vs merging a release blocker?
+What should we do if there's a conflict between keeping e.g. `linux/ppc64le` builds vs. merging a release blocker?
 In fact, we faced this problem while this proposal was being written; in [#25243](https://github.com/kubernetes/kubernetes/pull/25243). It is quite obvious that the release blocker is of higher priority.

@@ -117,7 +117,7 @@ resembles:
   reduce the amount of memory garbage created during serialization and
   deserialization.
 * More efficient formats like Msgpack were considered, but they only offer
-  2x speed up vs the 10x observed for Protobuf
+  2x speed up vs. the 10x observed for Protobuf
 * gRPC was considered, but is a larger change that requires more core
   refactoring. This approach does not eliminate the possibility of switching
   to gRPC in the future.
@@ -356,7 +356,7 @@ deserialization of the remaining bytes into the `runtime.Unknown` type.
 ## Streaming wire format
 While the majority of Kubernetes APIs return single objects that can vary
-in type (Pod vs Status, PodList vs Status), the watch APIs return a stream
+in type (Pod vs. Status, PodList vs. Status), the watch APIs return a stream
 of identical objects (Events). At the time of this writing, this is the only
 current or anticipated streaming RESTful protocol (logging, port-forwarding,
 and exec protocols use a binary protocol over Websockets or SPDY).

@@ -79,10 +79,10 @@ max number of active best-effort pods. In addition, the cluster-admin
 requires the ability to scope a quota that limits compute resources to
 exclude best-effort pods.
-### Ability to quota long-running vs bounded-duration compute resources
+### Ability to quota long-running vs. bounded-duration compute resources
 The cluster-admin may want to quota end-users separately
-based on long-running vs bounded-duration compute resources.
+based on long-running vs. bounded-duration compute resources.
 For example, a cluster-admin may offer more compute resources
 for long running pods that are expected to have a more permanent residence
@@ -94,7 +94,7 @@ request if there is no active traffic. An operator that wants to control
 density will offer lower quota limits for batch workloads than web applications.
 A classic example is a PaaS deployment where the cluster-admin may
-allow a separate budget for pods that run their web application vs pods that
+allow a separate budget for pods that run their web application vs. pods that
 build web applications.
 Another example is providing more quota to a database pod than a
@@ -105,8 +105,8 @@ pod that performs a database migration.
 * As a cluster-admin, I want the ability to quota
   * compute resource requests
   * compute resource limits
-  * compute resources for terminating vs non-terminating workloads
-  * compute resources for best-effort vs non-best-effort pods
+  * compute resources for terminating vs. non-terminating workloads
+  * compute resources for best-effort vs. non-best-effort pods
 ## Proposed Change

@@ -82,7 +82,7 @@ feature's owner(s). The following are suggested conventions:
   in each component to toggle on/off.
 - Alpha features should be disabled by default. Beta features may
   be enabled by default. Refer to docs/devel/api_changes.md#alpha-beta-and-stable-versions
-  for more detailed guidance on alpha vs beta.
+  for more detailed guidance on alpha vs. beta.
 ## Upgrade support

@@ -590,7 +590,7 @@ renaming parameters seems less likely than changing field paths.
 Openshift defines templates as a first class resource so they can be created/retrieved/etc via standard tools. This allows client tools to list available templates (available in the openshift cluster), allows existing resource security controls to be applied to templates, and generally provides a more integrated feel to templates. However there is no explicit requirement that for k8s to adopt templates, it must also adopt storing them in the cluster.
-### Processing templates (server vs client)
+### Processing templates (server vs. client)
 Openshift handles template processing via a server endpoint which consumes a template object from the client and returns the list of objects
 produced by processing the template. It is also possible to handle the entire template processing flow via the client, but this was deemed