Refactor storePodsNamespacer.List() and
storeReplicationControllersNamespacer.List(). They are the same
function, just with different signatures.
This fixes a bug where, when we fell back on a brute force approach, we
were still returning an error.
Also switch to explicit returns instead of named return values.
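As a rough sketch of the shared helper this refactor points at (the names and the small interface below are hypothetical, not the actual listers.go code), both List() methods can delegate the filtering loop and keep only their typed append:

```go
package cache

import "k8s.io/kubernetes/pkg/labels"

// namespacedObject is a hypothetical stand-in for the metadata access both
// pods and replication controllers provide.
type namespacedObject interface {
	GetNamespace() string
	GetLabels() map[string]string
}

// listInNamespace holds the logic the two List() methods had duplicated:
// filter by namespace (empty string means "all namespaces"), match the
// selector, and hand each hit to appendFn so callers build their own typed slice.
func listInNamespace(items []namespacedObject, namespace string, selector labels.Selector, appendFn func(namespacedObject)) {
	for _, obj := range items {
		if namespace != "" && obj.GetNamespace() != namespace {
			continue
		}
		if selector.Matches(labels.Set(obj.GetLabels())) {
			appendFn(obj)
		}
	}
}
```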
A recent change tried to separate resync from relist. The motivation
was to avoid triggering a relist when only a resync is required.
However, the change is not effective, since it stops the watcher. As hongchao
mentioned in the original comment, today's storage interface does not deliver
any progress notification to the watch channel. So any watcher that has not received
events for the last few seconds will not be able to catch up from the previous
index after a hard close, because the index of the last received event falls outside
the cache window inside etcd2.
This pull request fixes this issue by not stopping the watcher when a resync is
required.
Automatic merge from submit-queue
add namespace index for cache
@wojtek-t
Implementing it this way keeps the change to lister.go small, but we would have to replace every `NewInformer()` with `NewIndexInformer()`, even where filtering by namespace is not needed (e.g. gc_controller and the scheduler). Any suggestions?
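For reference, a namespace index on the cache looks roughly like this (written against present-day client-go import paths; the equivalents lived in pkg/client/cache at the time):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

func main() {
	// An indexer keyed by namespace/name, with an additional index on the
	// namespace so namespaced List() calls don't scan the whole store.
	indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		cache.NamespaceIndex: cache.MetaNamespaceIndexFunc,
	})

	// Objects added by an informer/reflector land in the indexer; ByIndex
	// then returns only the objects in the requested namespace.
	objs, err := indexer.ByIndex(cache.NamespaceIndex, "kube-system")
	fmt.Println(len(objs), err)
}
```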
Reflectors started from goroutines are broken because Go doesn't allow
runtime.Callers to see the spawning goroutine. Do a best-effort parse of
the call stack for now.
Add logging so that we can easily see which reflectors each process launches,
and measure in the logs the frequency of sync intervals.
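The best-effort lookup is along these lines (an illustrative sketch using the standard runtime package, not the exact reflector code):

```go
package main

import (
	"fmt"
	"runtime"
)

// callerName returns the function name `skip` frames above the caller, as a
// best-effort identifier for whoever constructed the reflector. Because
// runtime.Callers only sees the current goroutine's stack, a reflector
// started with `go ...` cannot recover its spawning goroutine this way,
// which is why the stack parse stays best effort.
func callerName(skip int) string {
	pc := make([]uintptr, 1)
	if runtime.Callers(skip+2, pc) == 0 {
		return "unknown"
	}
	frame, _ := runtime.CallersFrames(pc).Next()
	return frame.Function
}

func main() {
	fmt.Println(callerName(0)) // prints "main.main"
}
```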
The message as it is framed right now does not make sense to the
end users of our system. It might even lead to confusion. So this is an
attempt to make the error message less confusing.
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
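For users of the Deployment API, the selector change looks roughly like this (the package qualifier depends on where the types live; see the move described below):

```go
// newStyleSelector illustrates the richer selector: equality matches plus a
// set-based requirement, versus the old bare map[string]string{"app": "web"}.
func newStyleSelector() *unversioned.LabelSelector {
	return &unversioned.LabelSelector{
		MatchLabels: map[string]string{"app": "web"},
		MatchExpressions: []unversioned.LabelSelectorRequirement{
			{Key: "tier", Operator: unversioned.LabelSelectorOpIn, Values: []string{"frontend", "canary"}},
		},
	}
}
```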
Move type LabelSelector and type LabelSelectorRequirement from pkg/apis/extensions to pkg/api/unversioned.
This avoids an import loop when Job (and later DaemonSet, Deployment, ReplicaSet)
are moved out of extensions to new api groups.
Also move the LabelSelectorAsSelector utility from pkg/apis/extensions/ to pkg/api/unversioned/,
along with its test and the LabelSelectorOp* constants.
The pkg/apis/extensions/validation functions ValidateLabelSelectorRequirement and
ValidateLabelSelector also move to pkg/api/unversioned.
The related type in pkg/apis/extensions/v1beta1/ is staying there. I might move
it in another PR if necessary.
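After the move, converting the API selector into something that can be matched against object labels looks roughly like this (a sketch; import paths reflect the post-move location):

```go
import (
	"k8s.io/kubernetes/pkg/api/unversioned"
	"k8s.io/kubernetes/pkg/labels"
)

// selectorMatches converts the API-level LabelSelector into a labels.Selector
// via the moved LabelSelectorAsSelector helper and evaluates it against a set
// of object labels.
func selectorMatches(sel *unversioned.LabelSelector, objLabels map[string]string) (bool, error) {
	s, err := unversioned.LabelSelectorAsSelector(sel)
	if err != nil {
		return false, err
	}
	return s.Matches(labels.Set(objLabels)), nil
}
```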
Combine the fields that will be used for content transformation
(content-type, codec, and group version) into a single struct in client,
and then pass that struct into the rest client and request. Set the
content-type when sending requests to the server, and accept the content
type as primary.
This will form the foundation for content negotiation via the client.
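The combined struct is along these lines (an illustrative shape matching the description above; the name and exact field set in the client may differ):

```go
// ContentConfig groups the pieces that drive content transformation on the
// rest client; requests send ContentType as both the Content-Type header and
// the primary Accept type.
type ContentConfig struct {
	ContentType  string                    // wire format, e.g. "application/json"
	GroupVersion *unversioned.GroupVersion // API group/version to encode objects as
	Codec        runtime.Codec             // encodes request bodies, decodes responses
}
```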
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and
to support decoupled operation (e.g. bare metal), we tag the volume with
our placement labels. On AWS this is done automatically by an admission
controller whenever a PersistentVolume backed by an EBS volume is
created.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
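To make the mechanism concrete, the zone check amounts to something like the following (a simplified stand-in, not the scheduler's actual predicate; the label key is the well-known failure-domain zone label):

```go
// zoneLabel is the placement label the admission controller stamps onto
// EBS-backed PersistentVolumes.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// volumeZoneMatches reports whether a node can host a pod that uses the
// given volume: either the volume carries no zone constraint, or the node's
// zone matches it.
func volumeZoneMatches(pvLabels, nodeLabels map[string]string) bool {
	pvZone, ok := pvLabels[zoneLabel]
	if !ok {
		return true
	}
	return nodeLabels[zoneLabel] == pvZone
}
```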
Set the out of disk node condition to unknown in the node controller if
the kubelet has not reported its node condition for a long time. Update
node controller unit tests.
Implement a node condition predicate function that checks whether a given
node satisfies the conditions defined by the predicate and, if it
does, uses that node for scheduling pods. The predicate function takes
both NodeReady and NodeOutOfDisk into consideration to determine if a
node is fit for scheduling pods.
The predicate is then passed to the node lister in the scheduler factory
so that the node lister can run the predicate function on the nodes when
scheduling pods, thereby omitting nodes that do not satisfy the
predicate.
Also update listers test.
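A simplified sketch of such a predicate (the real one lives with the node lister wiring in the scheduler factory):

```go
import "k8s.io/kubernetes/pkg/api"

// nodeIsFit is a node condition predicate-style check: a node is considered
// schedulable only if it reports Ready and is not reporting OutOfDisk.
func nodeIsFit(node *api.Node) bool {
	ready := false
	for _, cond := range node.Status.Conditions {
		switch cond.Type {
		case api.NodeReady:
			ready = cond.Status == api.ConditionTrue
		case api.NodeOutOfDisk:
			if cond.Status == api.ConditionTrue {
				return false
			}
		}
	}
	return ready
}
```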
Define a new out of disk node condition and use it to report when a node
goes out of disk.
Make a copy of the loop range clause variable in node listers so that it
is available outside the for loop.
Also update/implement unit tests.
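The range-variable fix is the usual Go pattern (a generic illustration; api.Node here just stands in for whatever element type the lister iterates over):

```go
import "k8s.io/kubernetes/pkg/api"

// Without the per-iteration copy, every stored pointer would refer to the
// same loop variable, which holds only the last node once the loop ends.
func nodePointers(nodes []api.Node) []*api.Node {
	var out []*api.Node
	for _, node := range nodes {
		node := node // copy the range clause variable so &node is unique per iteration
		out = append(out, &node)
	}
	return out
}
```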