<odoo>
<record model="ir.cron" id="merge_cron">
<field name="name">Check for progress of (and merge) stagings</field>
<field name="model_id" ref="model_runbot_merge_project"/>
<field name="state">code</field>
<field name="code">model._check_stagings(True)</field>
<field name="interval_number">6</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<!--
The mergebot / forwardport crons need to run in a specific order so that
they flow into one another correctly. Because the default ordering is
unspecified, the normal cron runner could not be used (instead of the
external driver running the crons in sequence, one at a time). This is
fixed by setting priorities on the crons (the `priority` fields below),
which the cron runner (`_process_jobs`) uses to acquire and run them.

`_process_jobs` is also overridden for the tests: the built-in runner
fetches a static list of ready crons, then runs that. This is fine in
normal operation, where the runner loops anyway, but it is a problem for
the tests, which expect that when cron A triggers cron B, cron B runs
right away even if it had not been triggered before cron A ran. The
override replaces `_process_job` with a cut-down version (most of the
error handling and resilience is unnecessary: there are no concurrent
workers, no modules being installed, versions must match, ...). This
lets, for example, the cron propagating commit statuses trigger the
staging cron, with both running within the same `run_crons` session.
A minimal sketch of this ordering behaviour follows after this record.

Note that `_process_jobs` internally creates completely new environments,
so there is no way to pass context into the cron jobs (whereas
`method_direct_trigger` supports it); the context values have to be
shunted elsewhere for that purpose, which is ugly, but changing this
would alter cron execution semantics too much, so it was left alone.
The spammy `py.warnings` output that cannot be helped is silenced as
well.
-->
<field name="priority">30</field>
</record>
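<!--
A minimal, framework-free Python sketch of the ordering behaviour described
in the comment above: crons are processed in ascending priority order, and a
cron triggered during a pass (e.g. the status cron waking the staging cron)
is still picked up within that same pass. Every name below (Cron, Runner,
trigger, run_crons) is illustrative only, not an actual Odoo or runbot_merge
API.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Cron:
    name: str
    priority: int
    action: Callable[["Runner"], None]
    ready: bool = False

@dataclass
class Runner:
    crons: List[Cron] = field(default_factory=list)

    def trigger(self, name: str) -> None:
        # mark a cron as ready to run, standing in for ir.cron's trigger mechanism
        next(c for c in self.crons if c.name == name).ready = True

    def run_crons(self) -> None:
        # keep looping while a pass woke up further crons
        while any(c.ready for c in self.crons):
            for cron in sorted(self.crons, key=lambda c: c.priority):
                if cron.ready:
                    cron.ready = False
                    cron.action(self)

runner = Runner()
runner.crons = [
    Cron("staging_cron", 40, lambda r: print("create stagings")),
    Cron("process_updated_commits", 20,
         # statuses changed: wake the staging cron within the same pass
         lambda r: (print("propagate statuses"), r.trigger("staging_cron"))[0]),
]
runner.trigger("process_updated_commits")
runner.run_crons()
# prints "propagate statuses" then "create stagings"
-->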
<record model="ir.cron" id="staging_cron">
<field name="name">Check for progress of PRs and create Stagings</field>
<field name="model_id" ref="model_runbot_merge_project"/>
<field name="state">code</field>
<field name="code">model._create_stagings(True)</field>
<field name="interval_number">6</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="priority">40</field>
</record>
<record model="ir.cron" id="feedback_cron">
<field name="name">Send feedback to PR</field>
<field name="model_id" ref="model_runbot_merge_pull_requests_feedback"/>
<field name="state">code</field>
<field name="code">model._send()</field>
<field name="interval_number">6</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="priority">60</field>
</record>
<record model="ir.cron" id="labels_cron">
<field name="name">Update labels on PR</field>
<field name="model_id" ref="model_runbot_merge_pull_requests_tagging"/>
<field name="state">code</field>
<field name="code">model._send()</field>
<field name="interval_number">10</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="priority">70</field>
</record>
<record model="ir.cron" id="fetch_prs_cron">
<field name="name">Check for PRs to fetch</field>
<field name="model_id" ref="model_runbot_merge_fetch_job"/>
<field name="state">code</field>
<field name="code">model._check(True)</field>
<field name="interval_number">6</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="priority">10</field>
</record>
<record model="ir.cron" id="check_linked_prs_status">
<field name="name">Warn on linked PRs where only one is ready</field>
<field name="model_id" ref="model_runbot_merge_pull_requests"/>
<field name="state">code</field>
<field name="code">model._check_linked_prs_statuses(True)</field>
<field name="interval_number">1</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="priority">50</field>
</record>
<record model="ir.cron" id="process_updated_commits">
<field name="name">Impact commit statuses on PRs and stagings</field>
<field name="model_id" ref="model_runbot_merge_commit"/>
<field name="state">code</field>
<field name="code">model._notify()</field>
<field name="interval_number">6</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="priority">20</field>
</record>
</odoo>