Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
should use time.Since instead of time.Now().Sub
**What this PR does / why we need it**:
Use `time.Since(t)` instead of the equivalent but longer `time.Now().Sub(t)`.
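For context, the standard library defines `time.Since(t)` as shorthand for `time.Now().Sub(t)`, so the two forms are equivalent; a minimal example of the rewrite:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	// ... some work happens here ...
	elapsed := time.Now().Sub(start) // before: explicit subtraction
	elapsed = time.Since(start)      // after: equivalent and more readable
	fmt.Println(elapsed)
}
```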
**Special notes for your reviewer**:
Automatic merge from submit-queue (batch tested with PRs 61962, 58972, 62509, 62606). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix job's backoff limit for restart policy OnFailure
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #54870
**Release note**:
```release-note
NONE
```
/assign janetkuo
Automatic merge from submit-queue (batch tested with PRs 60054, 60202, 60219, 58090, 60275). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Improves backoff policy in JobController
**What this PR does / why we need it**:
This PR fixes issue #56853. It improves the Job backoff policy when a Job is configured with parallelism and some of its pods fail while others succeed.
Now the controller checks whether the number of succeeded pods has increased since the last check; if so, the backoff delay is cleared.
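A minimal sketch of that check, with hypothetical bookkeeping (`maybeResetBackoff` and the `lastSucceeded` map are illustrative names, not actual fields of the controller):

```go
// Sketch: lastSucceeded maps a job key to the succeeded-pod count seen
// on the previous sync; jm.queue is the controller's rate-limited queue.
func (jm *JobController) maybeResetBackoff(key string, job *batch.Job, lastSucceeded map[string]int32) {
	if job.Status.Succeeded > lastSucceeded[key] {
		lastSucceeded[key] = job.Status.Succeeded
		jm.queue.Forget(key) // progress was made: clear the backoff delay
	}
}
```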
**Which issue(s) this PR fixes**:
Fixes #56853
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
issues: https://github.com/kubernetes/kubernetes/issues/56853
Add a check of whether the number of succeeded pods has increased since the
last sync; if so, the backoff delay is cleared. This logic improves the Job
backoff policy when parallelism > 1 and some of a Job's pods fail while
others succeed.
Centralize the key "forget" and "requeue" handling in a single method.
Change the signature of the syncJob method so that it returns whether the
backoff delay should be forgotten for a given key.
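A sketch of the reworked worker loop under that signature (following the usual client-go pattern; details approximate):

```go
// processNextWorkItem is now the single place deciding between
// "forget" (reset backoff) and "requeue" (retry with backoff).
func (jm *JobController) processNextWorkItem() bool {
	key, quit := jm.queue.Get()
	if quit {
		return false
	}
	defer jm.queue.Done(key)

	forget, err := jm.syncJob(key.(string)) // new signature: (bool, error)
	if err == nil {
		if forget {
			jm.queue.Forget(key) // clear the rate limiter's memory of this key
		}
		return true
	}
	jm.queue.AddRateLimited(key) // requeue; delay grows with each retry
	return true
}
```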
Job failure policy integration in JobController. From
JobSpec.BackoffLimit the JobController derives the backoff
duration between Job retries.
It uses the `workqueue.RateLimitingInterface` to store the number of
retries as requeues, and the default initial Job backoff duration is set
during the initialization of the `workqueue.RateLimiter`.
Since the number of retries for each job is stored in a local structure
(`JobController.queue`), a JobController restart loses the retry counts,
and the backoff duration is reset to 0.
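A sketch of that initialization, using client-go's workqueue package (the constant values here are illustrative):

```go
const (
	// illustrative initial and maximum per-key backoff durations
	DefaultJobBackOff = 10 * time.Second
	MaxJobBackOff     = 360 * time.Second
)

// Each AddRateLimited for the same key doubles the delay, starting at
// DefaultJobBackOff and capped at MaxJobBackOff; Forget resets it. The
// counts live only in process memory, which is why a controller restart
// resets every job's backoff to zero.
var queue = workqueue.NewNamedRateLimitingQueue(
	workqueue.NewItemExponentialFailureRateLimiter(DefaultJobBackOff, MaxJobBackOff),
	"job",
)
```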
Add e2e test for Job backoff failure policy
Automatic merge from submit-queue (batch tested with PRs 50602, 51561, 51703, 51748, 49142)
Slow-start batch pod creation of rs, rc, ds, jobs
Prevent overly large replica counts from generating enormous numbers
of events by creating only a few pods at a time, then increasing the
batch size as pod creations succeed. Stop creating batches of pods as
soon as any pod creation error is encountered.
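A sketch of the slow-start loop (the helper name and the doubling schedule follow the description above; the real code may differ):

```go
import "sync"

// slowStartBatch invokes fn count times, in batches that double in size
// (initialBatchSize, 2x, 4x, ...) as long as every call in the previous
// batch succeeded. The first batch containing an error stops the whole
// run, so a hopeless workload emits a few failure events, not thousands.
func slowStartBatch(count, initialBatchSize int, fn func() error) (int, error) {
	remaining := count
	successes := 0
	for batchSize := min(remaining, initialBatchSize); batchSize > 0; batchSize = min(2*batchSize, remaining) {
		errCh := make(chan error, batchSize)
		var wg sync.WaitGroup
		wg.Add(batchSize)
		for i := 0; i < batchSize; i++ {
			go func() {
				defer wg.Done()
				if err := fn(); err != nil {
					errCh <- err
				}
			}()
		}
		wg.Wait()
		successes += batchSize - len(errCh)
		if len(errCh) > 0 {
			return successes, <-errCh // stop batching on any failure
		}
		remaining -= batchSize
	}
	return successes, nil
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}
```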
Todo:
- [x] Add automated tests
- [x] Test ds
Fixes https://github.com/kubernetes/kubernetes/issues/49145
**Release note**:
```release-note
controllers back off better in the face of quota denial
```
Automatic merge from submit-queue (batch tested with PRs 51666, 49829, 51058, 51004, 50938)
Add test items for job utils
**What this PR does / why we need it**:
Add test items for job utils
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44719, 48454)
check job ActiveDeadlineSeconds
**What this PR does / why we need it**:
Enqueue a delayed sync task so the JobController re-checks the job once its ActiveDeadlineSeconds expires.
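A sketch of the enqueue, assuming the controller's delaying workqueue (`jm.queue`) and the usual job fields:

```go
// Sketch: when a deadline is set, schedule a re-sync for the moment it
// expires, so the controller notices the timeout promptly instead of
// waiting for an unrelated pod event to trigger a sync.
if job.Spec.ActiveDeadlineSeconds != nil && job.Status.StartTime != nil {
	total := time.Duration(*job.Spec.ActiveDeadlineSeconds) * time.Second
	passed := time.Since(job.Status.StartTime.Time)
	if remaining := total - passed; remaining > 0 {
		jm.queue.AddAfter(key, remaining)
	}
}
```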
**Which issue this PR fixes**:
fixes #32149
**Special notes for your reviewer**:
**Release note**:
```release-note
enqueue a sync task to wake up the JobController to check a job's ActiveDeadlineSeconds in time
```
Automatic merge from submit-queue (batch tested with PRs 48976, 49474, 40050, 49426, 49430)
Remove duplicated import and wrong alias name of api package
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #48975
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Curating Owners: pkg/controller
cc @jsafrane @mikedanese @bprashanth @derekwaynecarr @thockin @saad-ali
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone **lgtms** and then someone
experienced in the project **approves**), we are adding new reviewers to
existing owners files.
## If You Care About the Process:
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
perfectly: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
## TLDR:
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable; please edit the OWNERS file to add
the names of people who should be reviewing code in the future in the **reviewers** section. You probably do NOT need to modify the **approvers** section.
3. Notify me if you want some OWNERS file to be removed. Being an approver or reviewer
of a parent directory makes you a reviewer/approver of the subdirectories too, so not all
OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example).
I can't revert with GitHub, which says: "Sorry, this pull request couldn’t be
reverted automatically. It may have already been reverted, or the content may
have changed since it was merged."
Reverts commit: 0c191e787b
Move type LabelSelector and type LabelSelectorRequirement from pkg/apis/extensions to pkg/api/unversioned.
This avoids an import loop when Job (and later DaemonSet, Deployment, ReplicaSet)
are moved out of extensions to new api groups.
Also move the LabelSelectorAsSelector utility from pkg/apis/extensions/
to pkg/api/unversioned/, along with its test and the LabelSelectorOp*
constants. The pkg/apis/extensions/validation functions
ValidateLabelSelectorRequirement and ValidateLabelSelector move to
pkg/api/unversioned as well.
The related type in pkg/apis/extensions/v1beta1/ is staying there. I might move
it in another PR if necessary.
Combine the fields used for content transformation
(content type, codec, and group version) into a single struct in the client,
and then pass that struct into the rest client and request. Set the
content type when sending requests to the server, and accept that content
type as primary.
This will form the foundation for content negotiation via the client.
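A sketch of such a combined struct (field names approximate, not necessarily the final API):

```go
// ContentConfig gathers the settings that drive content transformation
// for every request and response (sketch; names approximate).
type ContentConfig struct {
	// ContentType is sent as the Content-Type header and listed first
	// in Accept, so the server replies in the same encoding.
	ContentType string
	// GroupVersion is the API group/version objects are encoded against.
	GroupVersion *unversioned.GroupVersion
	// Codec serializes request bodies and deserializes responses.
	Codec runtime.Codec
}
```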
When job.spec.completions is nil, only
one task needs to succeed for the job to succeed,
and parallelism can be scaled freely during runtime.
Added tests.
Release Note:
This causes two minor changes to the API.
First, an unset parallelism previously defaulted to equal completions;
now it always defaults to 1 if unset.
Second, parallelism=N with completions unset previously defaulted to
1 completion and N parallelism (not something we expect people to do,
though). Now no defaulting occurs in that case, and the job's behavior
is different: any completion causes success.
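A sketch of the resulting defaulting rule (the function shape follows the usual generated-defaults pattern; names approximate):

```go
// Sketch: parallelism defaults to 1 when unset; completions is left nil,
// which now means "the job succeeds as soon as any pod completes".
func SetDefaults_Job(obj *batch.Job) {
	if obj.Spec.Parallelism == nil {
		one := int32(1)
		obj.Spec.Parallelism = &one
	}
	// obj.Spec.Completions deliberately stays nil: no defaulting from
	// parallelism, and parallelism may be scaled freely at runtime.
}
```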
When a job is complete, the controller would keep updating its conditions
with a Complete condition indefinitely. This change makes the controller exit
reconciliation as soon as the job is found to already be marked complete.
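A sketch of the early exit, assuming a small helper over the job's conditions (names approximate):

```go
// Sketch: IsJobFinished reports whether the job already carries a
// terminal condition. syncJob checks it first and returns immediately,
// so a finished job's conditions are never rewritten.
func IsJobFinished(j *batch.Job) bool {
	for _, c := range j.Status.Conditions {
		if (c.Type == batch.JobComplete || c.Type == batch.JobFailed) && c.Status == api.ConditionTrue {
			return true
		}
	}
	return false
}
```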
Sometimes the test would generate two identical pod names using the
Now().UnixNano() function; apparently the nanoseconds are rounded.
Passed 2000 unit-test runs.
Added status REST storage.
Added validation for status updates.
Changed the job controller to update status via the status endpoint rather
than updating the job itself (which ignores status changes).
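A sketch of the controller-side call, assuming a typed client with a status subresource (`jobClient` and `newCondition` are illustrative):

```go
// Sketch: writing through the status subresource makes the server apply
// Status and ignore Spec, the mirror image of a plain Update, so the
// controller's condition changes are no longer silently dropped.
job.Status.Conditions = append(job.Status.Conditions, newCondition)
if _, err := jobClient.Jobs(job.Namespace).UpdateStatus(&job); err != nil {
	return err
}
```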