""" The mergebot does not work on a dependency basis, rather all
repositories of a project are co - equal and get ( on target and
source branches ) .
When preparing a staging , we simply want to ensure branch - matched PRs
are staged concurrently in all repos
"""
[CHG] *: rewrite commands set, rework status management
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to a hard
error (ignoring the entire command line) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now been split along all the axes, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
and `alone`,
- `default` means PRs are picked by an unspecified order when
creating a staging, if nothing better is available
- `priority` means PRs are picked first when staging, however if
`priority` PRs don't fill the staging the rest will be filled with
`default`, this mode did not previously exist
- `alone` means the PRs are picked first, before splits, and only
`alone` PRs can be part of the staging (which usually matches the
mode name)
- `skipchecks` overrides both statuses and approval checks, for the
batch, something previously implied in `p=0`, but now
independent. Setting `skipchecks` basically makes the entire batch
`ready`.
For consistency this also sets the reviewer implicitly: since
skipchecks overrides both statuses *and approval*, whoever enables
this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
ready again, previously this was also implied (in a more restricted
form) by setting `p=0`
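Purely as an illustration of the staging-prioritisation semantics above
(this is not the mergebot's actual staging code), assuming each candidate
batch exposes a `priority` attribute holding one of the three values:

    def select_batches(candidates, limit):
        # `alone` batches are staged before splits and exclude everything else
        alone = [b for b in candidates if b.priority == 'alone']
        if alone:
            return alone[:limit]
        # `priority` batches are picked first, the rest of the staging is
        # then filled with `default` ones
        prioritised = [b for b in candidates if b.priority == 'priority']
        default = [b for b in candidates if b.priority == 'default']
        return (prioritised + default)[:limit]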
FWBot removal
=============
While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot, the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply change bots
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
the forward-port policy,
- `fw=default` is the old `ci` and resets the policy to default,
that is wait for the PR to be merged to create forward ports, and
for the required statuses on each forward port to be received
before creating the next
- `fw=skipci` is the old `skipci`, it waits for the merge of the
base PR but then creates all the forward ports immediately (unless
it gets a conflict)
- `fw=skipmerge` immediately creates all the forward ports, without
even waiting for the PR to be merged
This is a completely new mode, and may be rather broken as until
now the 'bot has always assumed the source PR had been merged.
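As a rough sketch of the three policies (names and structure are
illustrative only, not the module's API), the difference is when a
forward port may be created:

    import enum

    class FWPolicy(enum.Enum):
        DEFAULT = 'default'      # wait for merge, then port one level at a time
        SKIPCI = 'skipci'        # wait for merge, then create all ports at once
        SKIPMERGE = 'skipmerge'  # create all ports immediately, merged or not

    def may_create_next_port(policy, source_merged, previous_port_validated):
        if policy is FWPolicy.SKIPMERGE:
            return True
        if not source_merged:
            return False
        return policy is FWPolicy.SKIPCI or previous_port_validated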
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parents with different access rights".
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
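The cascade can be summarised by the following sketch, where `pr`,
`approver` and the helper attributes are hypothetical stand-ins rather
than the mergebot's real model:

    def ancestors_to_approve(pr, approver):
        if not pr.source:                       # not a forward port: unchanged behaviour
            return [pr]
        chain = pr.ancestry()                   # oldest first, ending with `pr` itself
        if approver.can_review(pr.source):      # review rights on the source
            return chain
        if approver == pr.source.author:        # author of the original
            return [p for p in chain if p.parent and p.parent.merged]
        # otherwise: up to and including the oldest ancestor with review rights
        for i, ancestor in enumerate(chain):
            if approver.can_review(ancestor):
                return chain[i:]
        return []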
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the
reference data gets updated and flows naturally into the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
*that* becomes the validation state
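A simplified view of the resulting computation (not the actual compute
method) could look like:

    def compute_state(pr):
        if pr.closed:
            return 'closed'
        if pr.error:
            return 'error'
        if pr.merge_date:
            return 'merged'
        approved = bool(pr.reviewed_by) or pr.skipchecks
        validated = pr.status == 'success' or pr.skipchecks
        if approved and validated:
            return 'ready'
        if approved:
            return 'approved'
        if validated:
            return 'validated'
        return 'opened'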
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
One notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
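The general shape of such an override might look like the following
sketch; the model and field names are assumptions based on the
description above, and the real implementation differs:

    from odoo import models

    class PullRequests(models.Model):
        _inherit = 'runbot_merge.pull_requests'

        def modified(self, fnames, *args, **kwargs):
            # when the overrides change on a PR, also flag its descendants so
            # their computed status is re-evaluated, emulating the recursive
            # dependency the ORM cannot express directly
            if 'overrides' in fnames:
                descendants = self.search([('id', 'child_of', self.ids)]) - self
                if descendants:
                    super(PullRequests, descendants).modified(['overrides'], *args, **kwargs)
            return super().modified(fnames, *args, **kwargs)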
unconditional followup scheduling
---------------------------------
Previously scheduling forward-port followup was contingent on the FW
policy, but it's not actually correct if the new PR is *immediately*
validated (which can happen now that the field is computed, if there
are no required statuses *or* all of the required statuses are
overridden by an ancestor) as nothing will trigger the state change
and thus scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports, although it is a bit more
expensive and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
so on retry not just when setting initially.
This was not handled at all previously, so a PR in P0 going into
error due to e.g. a non-deterministic bug would be retried and still
p=0, but a current staging would not get cancelled. Same when a PR
in p=0 goes into error because something failed, then is updated
with a fix.
- Add tracking to a bunch of relevant PR fields.
Post-mortem analysis currently generally requires going through the
text logs to see what happened, which is annoying.
There is a nondeterminism / inconsistency in the tracking which
sometimes leads the admin user to trigger tracking before the bot
does, leading to the staging tracking being attributed to them
during tests; this is shoved under the carpet by ignoring the user to whom
that tracking is attributed.
When multiple users update tracked fields in the same transaction
all the changes are attributed to the first one having triggered
tracking (?), I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which was never run (because
I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
fixes #673, fixes #309, fixes #792, fixes #846 (probably)
import functools
import operator
import time
import xmlrpc.client

from itertools import repeat

import pytest
import requests
from lxml.etree import XPath

from utils import seen, get_partner, pr_page, to_pr, Commit


@pytest.fixture
[ADD] *: per-repository webhook secret
Currently webhook secrets are configured per *project* which is an
issue both because different repositories may have different
administrators and thus creates safety concerns, and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary to have multiple projects
share the same secret as well, which then mandates the same secret for
more repositories, as per the first concern above.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories, it's cleaner and easier to manage
and set up progressively.
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad-hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as `forwardport` where it's all ad-hoc, events sources setup was just
appended as is. This should probably be cleaned up at one point, with
the various requirements collected and organised into a small set of
fixtures doing the job more uniformly.
Fixes #887
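For context, GitHub webhook payloads are authenticated with an HMAC of
the request body keyed by the configured secret; under this change the
secret is simply looked up on the repository rather than on the project.
A hypothetical validation helper (the names are not the module's API):

    import hashlib
    import hmac

    def validate_webhook(repo_secret: str, payload: bytes, signature_header: str) -> bool:
        # signature_header is GitHub's X-Hub-Signature-256: "sha256=<hexdigest>"
        digest = hmac.new(repo_secret.encode(), payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(f'sha256={digest}', signature_header)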
def repo_a(env, project, make_repo, setreviewers):
    repo = make_repo('a')
    r = env['runbot_merge.repository'].create({
        'project_id': project.id,
        'name': repo.name,
        'required_statuses': 'default',
        'group_id': False,
    })
    setreviewers(r)
    env['runbot_merge.events_sources'].create({'repository': r.name})
    return repo


@pytest.fixture
def repo_b(env, project, make_repo, setreviewers):
    repo = make_repo('b')
    r = env['runbot_merge.repository'].create({
        'project_id': project.id,
        'name': repo.name,
        'required_statuses': 'default',
        'group_id': False,
    })
    setreviewers(r)
    env['runbot_merge.events_sources'].create({'repository': r.name})
    return repo


@pytest.fixture
def repo_c(env, project, make_repo, setreviewers):
    repo = make_repo('c')
    r = env['runbot_merge.repository'].create({
        'project_id': project.id,
        'name': repo.name,
        'required_statuses': 'default',
        'group_id': False,
    })
    setreviewers(r)
    env['runbot_merge.events_sources'].create({'repository': r.name})
    return repo


def make_pr(repo, prefix, trees, *, target='master', user,
            statuses=(('default', 'success'),),
            reviewer):
    """
    :type repo: fake_github.Repo
    :type prefix: str
    :type trees: list[dict]
    :type target: str
    :type user: str
    :type statuses: list[(str, str)]
    :type reviewer: str | None
    :rtype: fake_github.PR
    """
    *_, c = repo.make_commits(
        'heads/{}'.format(target),
        *(
            repo.Commit('commit_{}_{:02}'.format(prefix, i), tree=tree)
            for i, tree in enumerate(trees)
        ),
        ref='heads/{}'.format(prefix)
    )
    pr = repo.make_pr(title='title {}'.format(prefix), body='body {}'.format(prefix),
                      target=target, head=prefix, token=user)
    for context, result in statuses:
        repo.post_status(c, result, context)
    if reviewer:
        pr.post_comment('hansen r+', reviewer)
    return pr


@pytest.mark.parametrize('uniquifier', [False, True])
def test_stage_one(env, project, repo_a, repo_b, config, uniquifier):
    """ First PR is non-matched from A => should not select PR from B
    """
    project.uniquifier = uniquifier
    project.batch_limit = 1
    with repo_a:
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        pr_a = make_pr(
            repo_a, 'A', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'])

    with repo_b:
        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')
        pr_b = make_pr(
            repo_b, 'B', [{'a': 'b_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    env.run_crons()

    pra_id = to_pr(env, pr_a)
    assert pra_id.state == 'ready'
    assert pra_id.staging_id
    assert repo_a.commit('staging.master').message.startswith('commit_A_00')

    if uniquifier:
        assert repo_b.commit('staging.master').message.startswith('force rebuild')
    else:
        assert repo_b.commit('staging.master').message == 'initial'

    prb_id = to_pr(env, pr_b)
    assert prb_id.state == 'ready'
    assert not prb_id.staging_id


get_related_pr_labels = XPath('.//*[normalize-space(text()) = "Linked pull requests"]//a/text()')
def test_stage_match(env, project, repo_a, repo_b, config, page):
    """ First PR is matched from A, => should select matched PR from B
    """
    project.batch_limit = 1
    with repo_a:
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        prx_a = make_pr(
            repo_a, 'do-a-thing', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    with repo_b:
        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')
        prx_b = make_pr(repo_b, 'do-a-thing', [{'a': 'b_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )

    pr_a = to_pr(env, prx_a)
    pr_b = to_pr(env, prx_b)
    # check that related PRs link to one another
    assert get_related_pr_labels(pr_page(page, prx_a)) == pr_b.mapped('display_name')
    assert get_related_pr_labels(pr_page(page, prx_b)) == pr_a.mapped('display_name')

    env.run_crons()

    assert pr_a.state == 'ready'
    assert pr_a.staging_id
    assert pr_b.state == 'ready'
    assert pr_b.staging_id
    # should be part of the same staging
    assert pr_a.staging_id == pr_b.staging_id, \
        "branch-matched PRs should be part of the same staging"

    # check that related PRs *still* link to one another during staging
    assert get_related_pr_labels(pr_page(page, prx_a)) == [pr_b.display_name]
    assert get_related_pr_labels(pr_page(page, prx_b)) == [pr_a.display_name]

    with repo_a:
        repo_a.post_status('staging.master', 'failure')
    env.run_crons()
    assert pr_a.state == 'error'
    assert pr_b.state == 'ready'

    with repo_a:
        prx_a.post_comment('hansen retry', config['role_reviewer']['token'])
    env.run_crons()
    assert pr_a.state == pr_b.state == 'ready'
    assert pr_a.staging_id and pr_b.staging_id

    for repo in [repo_a, repo_b]:
        with repo:
            repo.post_status('staging.master', 'success')
    env.run_crons()
    assert pr_a.state == 'merged'
    assert pr_b.state == 'merged'

    assert 'Related: {}'.format(pr_b.display_name) in repo_a.commit('master').message
    assert 'Related: {}'.format(pr_a.display_name) in repo_b.commit('master').message

    # check that related PRs *still* link to one another after merge
    assert get_related_pr_labels(pr_page(page, prx_a)) == [pr_b.display_name]
    assert get_related_pr_labels(pr_page(page, prx_b)) == [pr_a.display_name]


def test_different_targets(env, project, repo_a, repo_b, config):
    """ PRs with different targets should not be matched together
    """
    project.write({
        'batch_limit': 1,
        'branch_ids': [(0, 0, {'name': 'other'})]
    })
    with repo_a:
        repo_a.make_commits(None, Commit('initial', tree={'master': 'a_0'}), ref='heads/master')
        repo_a.make_commits(None, Commit('initial', tree={'other': 'a_0'}), ref='heads/other')
        pr_a = make_pr(
            repo_a, 'do-a-thing', [{'mater': 'a_1'}],
            target='master',
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    with repo_b:
        repo_b.make_commits(None, Commit('initial', tree={'master': 'b_0'}), ref='heads/master')
        repo_b.make_commits(None, Commit('initial', tree={'other': 'b_0'}), ref='heads/other')
        pr_b = make_pr(
            repo_b, 'do-a-thing', [{'other': 'b_1'}],
            target='other',
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
            statuses=[],
        )
    time.sleep(5)
    env.run_crons()

    pr_a = to_pr(env, pr_a)
    pr_b = to_pr(env, pr_b)
    assert pr_a.state == 'ready'
    assert not pr_a.blocked
    assert pr_a.staging_id

    assert pr_b.blocked
    assert pr_b.state == 'approved'
    assert not pr_b.staging_id

    for r in [repo_a, repo_b]:
        with r:
            r.post_status('staging.master', 'success')
    env.run_crons()
    assert pr_a.state == 'merged'


def test_stage_different_statuses(env, project, repo_a, repo_b, config):
    project.batch_limit = 1

    env['runbot_merge.repository'].search([
        ('name', '=', repo_b.name)
    ]).write({
        'required_statuses': 'foo/bar',
    })

    with repo_a:
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        pr_a = make_pr(
            repo_a, 'do-a-thing', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
        repo_a.post_status(pr_a.head, 'success', 'foo/bar')
    with repo_b:
        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')
        [c] = repo_b.make_commits(
            'heads/master',
            repo_b.Commit(f'some_commit\n\nSee also {repo_a.name}#{pr_a.number:d}', tree={'a': 'b_1'}),
            ref='heads/do-a-thing'
        )
        pr_b = repo_b.make_pr(
            title="title", body="body", target='master', head='do-a-thing',
            token=config['role_user']['token'])
        repo_b.post_status(c, 'success')
        pr_b.post_comment('hansen r+', config['role_reviewer']['token'])
    env.run_crons()

    # since the labels are the same but the statuses on pr_b are not the
    # expected ones, pr_a should be blocked on pr_b, which should be approved
    # but not validated / ready
    pr_a_id = to_pr(env, pr_a)
    pr_b_id = to_pr(env, pr_b)
    assert pr_a_id.state == 'ready'
    assert not pr_a_id.staging_id
    assert pr_a_id.blocked

    assert pr_b_id.state == 'approved'
    assert not pr_b_id.staging_id

    with repo_b:
        repo_b.post_status(pr_b.head, 'success', 'foo/bar')
    env.run_crons()

    assert pr_a_id.state == pr_b_id.state == 'ready'
    assert pr_a_id.staging_id == pr_b_id.staging_id

    # do the actual merge to check for the Related header
    for repo in [repo_a, repo_b]:
        with repo:
            repo.post_status('staging.master', 'success')
            repo.post_status('staging.master', 'success', 'foo/bar')
    env.run_crons()

    pr_a_ref = to_pr(env, pr_a).display_name
    pr_b_ref = to_pr(env, pr_b).display_name
    master_a = repo_a.commit('master')
    master_b = repo_b.commit('master')

    assert 'Related: {}'.format(pr_b_ref) in master_a.message, \
        "related should be in PR A's message"
    assert 'Related: {}'.format(pr_a_ref) not in master_b.message, \
        "related should not be in PR B's message since the ref' was added explicitly"
    assert pr_a_ref in master_b.message, "the ref' should still be there though"


def test_unmatch_patch(env, project, repo_a, repo_b, config):
    """ When editing files via the UI for a project you don't have write
    access to, a branch called patch-XXX is automatically created in your
    profile to hold the change.

    This means it's possible to create a:patch-1 and b:patch-1 without
    intending them to be related in any way, and more likely than the opposite
    since there is no user control over the branch names (save by actually
    creating/renaming branches afterwards before creating the PR).

    -> PRs with a branch name of patch-* should not be label-matched
    """
    project.batch_limit = 1
    with repo_a:
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        pr_a = make_pr(
            repo_a, 'patch-1', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    with repo_b:
        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref=f'heads/master')
        pr_b = make_pr(
            repo_b, 'patch-1', [{'a': 'b_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    env.run_crons()

    pr_a = to_pr(env, pr_a)
    pr_b = to_pr(env, pr_b)
    assert pr_a.state == 'ready'
    assert pr_a.staging_id
    assert pr_b.state == 'ready'
    assert not pr_b.staging_id, 'patch-* PRs should not be branch-matched'


def test_sub_match(env, project, repo_a, repo_b, repo_c, config):
    """ Branch-matching should work on a subset of repositories
    """
    project.batch_limit = 1
    with repo_a:  # no pr here
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
    with repo_b:
        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')
        pr_b = make_pr(
            repo_b, 'do-a-thing', [{'a': 'b_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    with repo_c:
        repo_c.make_commits(None, Commit('initial', tree={'a': 'c_0'}), ref='heads/master')
        pr_c = make_pr(
            repo_c, 'do-a-thing', [{'a': 'c_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    env.run_crons()

    pr_b = to_pr(env, pr_b)
    pr_c = to_pr(env, pr_c)
    assert pr_b.state == 'ready'
    assert pr_b.staging_id
    assert pr_c.state == 'ready'
    assert pr_c.staging_id
    # should be part of the same staging
    assert pr_c.staging_id == pr_b.staging_id, \
        "branch-matched PRs should be part of the same staging"

    st = pr_b.staging_id
    a_staging = repo_a.commit('staging.master')
    b_staging = repo_b.commit('staging.master')
    c_staging = repo_c.commit('staging.master')
    assert sorted(st.head_ids.mapped('sha')) == sorted([
        a_staging.id,
        b_staging.id,
        c_staging.id,
    ])
    s = env['runbot_merge.stagings'].for_heads(
        a_staging.id,
        b_staging.id,
        c_staging.id,
    )
    assert s == list(st.ids)

    assert sorted(st.commit_ids.mapped('sha')) == sorted([
        a_staging.parents[0],
        b_staging.id,
        c_staging.id,
    ])
    s = env['runbot_merge.stagings'].for_commits(
        a_staging.parents[0],
        b_staging.id,
        c_staging.id,
    )
    assert s == list(st.ids)


def test_merge_fail(env, project, repo_a, repo_b, users, config):
    """ In a matched-branch scenario, if merging in one of the linked repos
    fails it should revert the corresponding merges
    """
    project.batch_limit = 1

    with repo_a, repo_b:
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')

        # first set of matched PRs
        pr1a = make_pr(
            repo_a, 'do-a-thing', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
        pr1b = make_pr(
            repo_b, 'do-a-thing', [{'a': 'b_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )

        # add a conflicting commit to B so the staging fails
        repo_b.make_commit('heads/master', 'cn', None, tree={'a': 'cn'})

        # and a second set of PRs which should get staged while the first set
        # fails
        pr2a = make_pr(
            repo_a, 'do-b-thing', [{'b': 'ok'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
        pr2b = make_pr(
            repo_b, 'do-b-thing', [{'b': 'ok'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    env.run_crons()

    s2 = to_pr(env, pr2a) | to_pr(env, pr2b)

    st = env['runbot_merge.stagings'].search([])
    assert set(st.batch_ids.prs.ids) == set(s2.ids)

    failed = to_pr(env, pr1b)
    assert failed.state == 'error'
    assert pr1b.comments == [
        (users['reviewer'], 'hansen r+'),
        seen(env, pr1b, users),
        (users['user'], '@%(user)s @%(reviewer)s unable to stage: merge conflict' % users),
    ]
    other = to_pr(env, pr1a)
    reviewer = get_partner(env, users["reviewer"]).formatted_email
    assert not other.staging_id

    assert [
        c['commit']['message']
        for c in repo_a.log('heads/staging.master')
    ] == [
        """commit_do-b-thing_00

closes %s

Related: %s
Signed-off-by: %s""" % (s2[0].display_name, s2[1].display_name, reviewer),
        'initial'
    ], "dummy commit + squash-merged PR commit + root commit"


def test_ff_fail(env, project, repo_a, repo_b, config):
    """ In a matched-branch scenario, if fast-forwarding one of the repos fails
    the entire thing should be rolled back
    """
    project.batch_limit = 1
    with repo_a, repo_b:
        [root_a] = repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        make_pr(
            repo_a, 'do-a-thing', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )

        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref=f'heads/master')
        make_pr(
            repo_b, 'do-a-thing', [{'a': 'b_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )
    env.run_crons()

    # add second commit blocking FF
    with repo_b:
        cn = repo_b.make_commit('heads/master', 'second', None, tree={'a': 'b_0', 'b': 'other'})
    assert repo_b.commit('heads/master').id == cn

    with repo_a, repo_b:
        repo_a.post_status('heads/staging.master', 'success')
        repo_b.post_status('heads/staging.master', 'success')
    env.run_crons(None)

    assert repo_b.commit('heads/master').id == cn, \
        "B should still be at the conflicting commit"
    assert repo_a.commit('heads/master').id == root_a, \
        "FF A should have been rolled back when B failed"

    # should be re-staged
    st = env['runbot_merge.stagings'].search([])
    assert len(st) == 1
    assert len(st.batch_ids.prs) == 2


class TestCompanionsNotReady:
    def test_one_pair(self, env, project, repo_a, repo_b, config, users):
        """ If the companion of a ready branch-matched PR is not ready,
        they should not get staged
        """
        project.batch_limit = 1
        with repo_a, repo_b:
            repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
            # pr_a is born ready
            p_a = make_pr(
                repo_a, 'do-a-thing', [{'a': 'a_1'}],
                user=config['role_user']['token'],
                reviewer=config['role_reviewer']['token'],
            )

            repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')
            p_b = make_pr(
                repo_b, 'do-a-thing', [{'a': 'b_1'}],
                user=config['role_user']['token'],
                reviewer=None,
            )

        pr_a = to_pr(env, p_a)
        pr_b = to_pr(env, p_b)
        assert pr_a.label == pr_b.label == '{}:do-a-thing'.format(config['github']['owner'])

        env.run_crons()

        assert pr_a.state == 'ready'
        assert pr_b.state == 'validated'
        assert not pr_b.staging_id
        assert not pr_a.staging_id, \
            "pr_a should not have been staged as companion is not ready"

        assert p_a.comments == [
            (users['reviewer'], 'hansen r+'),
            seen(env, p_a, users),
            (users['user'], "@%s @%s linked pull request(s) %s not ready. Linked PRs are not staged until all of them are ready." % (
                users['user'],
                users['reviewer'],
                pr_b.display_name,
            )),
        ]
        # ensure the message is only sent once per PR
        env.run_crons('runbot_merge.check_linked_prs_status')
        assert p_a.comments == [
            (users['reviewer'], 'hansen r+'),
            seen(env, p_a, users),
            (users['user'], "@%s @%s linked pull request(s) %s not ready. Linked PRs are not staged until all of them are ready." % (
                users['user'],
                users['reviewer'],
                pr_b.display_name,
            )),
        ]
        assert p_b.comments == [seen(env, p_b, users)]

    def test_two_of_three_unready(self, env, project, repo_a, repo_b, repo_c, users, config):
        """ In a 3-batch, if two of the PRs are not ready both should be
        linked by the first one
        """
        project.batch_limit = 1
        with repo_a, repo_b, repo_c:
            repo_a.make_commits(None, Commit('initial', tree={'f': 'a0'}), ref='heads/master')
            pr_a = make_pr(
                repo_a, 'a-thing', [{'f': 'a1'}],
                user=config['role_user']['token'],
                reviewer=None,
            )

            repo_b.make_commits(None, Commit('initial', tree={'f': 'b0'}), ref='heads/master')
            pr_b = make_pr(
                repo_b, 'a-thing', [{'f': 'b1'}],
                user=config['role_user']['token'],
                reviewer=config['role_reviewer']['token'],
            )

            repo_c.make_commits(None, Commit('initial', tree={'f': 'c0'}), ref='heads/master')
            pr_c = make_pr(
                repo_c, 'a-thing', [{'f': 'c1'}],
                user=config['role_user']['token'],
                reviewer=None,
            )
        env.run_crons()

        assert pr_a.comments == [seen(env, pr_a, users)]
        assert pr_b.comments == [
            (users['reviewer'], 'hansen r+'),
            seen(env, pr_b, users),
            (users['user'], "@%s @%s linked pull request(s) %s#%d, %s#%d not ready. Linked PRs are not staged until all of them are ready." % (
                users['user'], users['reviewer'],
                repo_a.name, pr_a.number,
                repo_c.name, pr_c.number
            ))
        ]
        assert pr_c.comments == [seen(env, pr_c, users)]

    def test_one_of_three_unready(self, env, project, repo_a, repo_b, repo_c, users, config):
        """ In a 3-batch, if one PR is not ready it should be linked on the
        other two
        """
        project.batch_limit = 1
        with repo_a, repo_b, repo_c:
            repo_a.make_commits(None, Commit('initial', tree={'f': 'a0'}), ref='heads/master')
            pr_a = make_pr(
                repo_a, 'a-thing', [{'f': 'a1'}],
                user=config['role_user']['token'],
                reviewer=None,
            )

            repo_b.make_commits(None, Commit('initial', tree={'f': 'b0'}), ref='heads/master')
            pr_b = make_pr(
                repo_b, 'a-thing', [{'f': 'b1'}],
                user=config['role_user']['token'],
                reviewer=config['role_reviewer']['token'],
            )

            repo_c.make_commits(None, Commit('initial', tree={'f': 'c0'}), ref='heads/master')
            pr_c = make_pr(
                repo_c, 'a-thing', [{'f': 'c1'}],
                user=config['role_user']['token'],
                reviewer=config['role_reviewer']['token'],
            )
        env.run_crons()

        assert pr_a.comments == [seen(env, pr_a, users)]
        assert pr_b.comments == [
            (users['reviewer'], 'hansen r+'),
            seen(env, pr_b, users),
            (users['user'], f"@{users['user']} @{users['reviewer']} linked pull request(s) {repo_a.name}#{pr_a.number} not ready. Linked PRs are not staged until all of them are ready.")
        ]
        assert pr_c.comments == [
            (users['reviewer'], 'hansen r+'),
            seen(env, pr_c, users),
            (users['user'],
             f"@{users['user']} @{users['reviewer']} linked pull request(s) {repo_a.name}#{pr_a.number} not ready. Linked PRs are not staged until all of them are ready.")
        ]


def test_other_failed(env, project, repo_a, repo_b, users, config):
    """ In a non-matched-branch scenario, if the companion staging (copy of
    targets) fails when built with the PR, it should provide a non-useless
    message
    """
    with repo_a, repo_b:
        repo_a.make_commits(None, Commit('initial', tree={'a': 'a_0'}), ref='heads/master')
        # pr_a is born ready
        pr_a = make_pr(
            repo_a, 'do-a-thing', [{'a': 'a_1'}],
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token'],
        )

        repo_b.make_commits(None, Commit('initial', tree={'a': 'b_0'}), ref='heads/master')
    env.run_crons()

    pr = to_pr(env, pr_a)
    assert pr.staging_id

    with repo_a, repo_b:
        repo_a.post_status('heads/staging.master', 'success', target_url="http://example.org/a")
        repo_b.post_status('heads/staging.master', 'failure', target_url="http://example.org/b")
    env.run_crons()

    sth = repo_b.commit('heads/staging.master').id
    assert not pr.staging_id
    assert pr.state == 'error'
    assert pr_a.comments == [
        (users['reviewer'], 'hansen r+'),
        seen(env, pr_a, users),
        (users['user'], '@%s @%s staging failed: default on %s (view more at http://example.org/b)' % (
            users['user'], users['reviewer'],
            sth
        ))
    ]


class TestMultiBatches:
    def test_batching(self, env, project, repo_a, repo_b, config):
        """ If multiple batches (label groups) are ready they should get batched
        together (within the limits of the project's batch limit)
        """
        project.batch_limit = 3
        with repo_a, repo_b:
            repo_a.make_commits(None, Commit('initial', tree={'a': 'a0'}), ref='heads/master')
            repo_b.make_commits(None, Commit('initial', tree={'b': 'b0'}), ref='heads/master')

            prs = [(
                a and make_pr(repo_a, 'batch{}'.format(i), [{'a{}'.format(i): 'a{}'.format(i)}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token']),
                b and make_pr(repo_b, 'batch{}'.format(i), [{'b{}'.format(i): 'b{}'.format(i)}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token']),
            )
                for i, (a, b) in enumerate([(1, 1), (0, 1), (1, 1), (1, 1), (1, 0)])
            ]
        env.run_crons()

        prs = [
            (a and to_pr(env, a), b and to_pr(env, b))
            for (a, b) in prs
        ]
        st = env['runbot_merge.stagings'].search([])
        assert st
        assert len(st.batch_ids) == 3, \
            "Should have batched the first <batch_limit> batches"
        assert st.mapped('batch_ids.prs') == (
            prs[0][0] | prs[0][1]
            | prs[1][1]
            | prs[2][0] | prs[2][1]
        )
        assert not prs[3][0].staging_id
        assert not prs[3][1].staging_id
        assert not prs[4][0].staging_id
def test_batching_split ( self , env , repo_a , repo_b , config ) :
2018-10-15 17:52:36 +07:00
""" If a staging fails, it should get split properly across repos
"""
2019-10-10 14:22:12 +07:00
with repo_a , repo_b :
2024-02-09 16:49:32 +07:00
repo_a . make_commits ( None , Commit ( ' initial ' , tree = { ' a ' : ' a0 ' } ) , ref = ' heads/master ' )
repo_b . make_commits ( None , Commit ( ' initial ' , tree = { ' b ' : ' b0 ' } ) , ref = ' heads/master ' )
2019-10-10 14:22:12 +07:00
prs = [ (
2022-07-04 20:12:50 +07:00
a and make_pr ( repo_a , ' batch {} ' . format ( i ) , [ { ' a {} ' . format ( i ) : ' a {} ' . format ( i ) } ] , user = config [ ' role_user ' ] [ ' token ' ] , reviewer = config [ ' role_reviewer ' ] [ ' token ' ] ) ,
b and make_pr ( repo_b , ' batch {} ' . format ( i ) , [ { ' b {} ' . format ( i ) : ' b {} ' . format ( i ) } ] , user = config [ ' role_user ' ] [ ' token ' ] , reviewer = config [ ' role_reviewer ' ] [ ' token ' ] ) ,
2019-10-10 14:22:12 +07:00
)
for i , ( a , b ) in enumerate ( [ ( 1 , 1 ) , ( 0 , 1 ) , ( 1 , 1 ) , ( 1 , 1 ) , ( 1 , 0 ) ] )
]
env . run_crons ( )
2022-07-04 20:12:50 +07:00
prs = [
( a and to_pr ( env , a ) , b and to_pr ( env , b ) )
for ( a , b ) in prs
]
2018-10-15 17:52:36 +07:00
st0 = env [ ' runbot_merge.stagings ' ] . search ( [ ] )
assert len ( st0 . batch_ids ) == 5
assert len ( st0 . mapped ( ' batch_ids.prs ' ) ) == 8
# mark b.staging as failed -> should create two splits with (0, 1)
# and (2, 3, 4) and stage the first one
2019-10-10 14:22:12 +07:00
with repo_b :
2024-02-09 16:49:32 +07:00
repo_b . post_status ( ' heads/staging.master ' , ' failure ' )
2019-10-10 14:22:12 +07:00
env . run_crons ( )

        assert not st0.active
        # at this point we have a re-staged split and an unstaged split
        st = env['runbot_merge.stagings'].search([])
        sp = env['runbot_merge.split'].search([])
        assert st
        assert sp
        assert len(st.batch_ids) == 2
        assert st.mapped('batch_ids.prs') == \
            prs[0][0] | prs[0][1] | prs[1][1]
        assert len(sp.batch_ids) == 3
        assert sp.mapped('batch_ids.prs') == \
            prs[2][0] | prs[2][1] | prs[3][0] | prs[3][1] | prs[4][0]
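
# A minimal illustrative sketch (an assumption, not the mergebot's actual
# split logic): when a staging fails, its batches are cut into two halves,
# the first half is restaged immediately and the second half is kept as a
# pending split, which is what produces the (0, 1) / (2, 3, 4) grouping
# asserted in test_batching_split above. `split_failed_staging` is a
# hypothetical helper for illustration only.
def split_failed_staging(batches):
    half = len(batches) // 2
    restage_now = batches[:half]
    pending_split = batches[half:]
    return restage_now, pending_split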

@pytest.mark.usefixtures("reviewer_admin")
def test_urgent(env, repo_a, repo_b, config):
""" Either PR of a co-dependent pair being prioritised leads to the entire
pair being prioritized
2018-03-27 18:33:04 +07:00
"""
2019-10-10 14:22:12 +07:00
    with repo_a, repo_b:
        repo_a.make_commits(None, Commit('initial', tree={'a0': 'a'}), ref='heads/master')
        repo_b.make_commits(None, Commit('initial', tree={'b0': 'b'}), ref='heads/master')

        pr_a = make_pr(repo_a, 'batch', [{'a1': 'a'}, {'a2': 'a'}], user=config['role_user']['token'], reviewer=None, statuses=[])
        pr_b = make_pr(repo_b, 'batch', [{'b1': 'b'}, {'b2': 'b'}], user=config['role_user']['token'], reviewer=None, statuses=[])
        pr_c = make_pr(repo_a, 'C', [{'c1': 'c', 'c2': 'c'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])

        pr_a.post_comment('hansen rebase-merge', config['role_reviewer']['token'])
        pr_b.post_comment('hansen rebase-merge alone skipchecks', config['role_reviewer']['token'])
    env.run_crons()

    p_a, p_b, p_c = to_pr(env, pr_a), to_pr(env, pr_b), to_pr(env, pr_c)
    assert not p_a.blocked
    assert not p_b.blocked
    assert p_a.staging_id and p_b.staging_id and p_a.staging_id == p_b.staging_id, \
        "a and b should be staged despite neither being reviewed or approved"
    assert p_a.batch_id and p_b.batch_id and p_a.batch_id == p_b.batch_id, \
        "a and b should have been recognised as co-dependent"
    assert not p_c.staging_id

    with repo_a:
        pr_a.post_comment('hansen r-', config['role_reviewer']['token'])
    env.run_crons()
    assert not p_b.staging_id.active, "should be unstaged"
    assert p_b.priority == 'alone', "priority should not be affected anymore"
    assert not p_b.skipchecks, "r- of linked pr should have un-skipcheck-ed this one"
    assert p_a.blocked
    assert p_b.blocked
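
# A minimal illustrative sketch (an assumption, not the mergebot's actual
# batching code): PRs in different repositories sharing the same label and
# target branch are grouped into a single co-dependent batch, which is why a
# flag such as `alone` or `skipchecks` set on either PR of the pair affects
# the whole batch, as test_urgent checks above. `group_into_batches` is a
# hypothetical helper and assumes PR records expose `target` and `label`.
def group_into_batches(prs):
    batches = {}
    for pr in prs:
        batches.setdefault((pr.target, pr.label), []).append(pr)
    return list(batches.values())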


class TestBlocked:
    def test_merge_method(self, env, repo_a, config):
        with repo_a:
            repo_a.make_commits(None, Commit('initial', tree={'a0': 'a'}), ref='heads/master')

            pr = make_pr(repo_a, 'A', [{'a1': 'a'}, {'a2': 'a'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'],)
        env.run_crons()

        p = to_pr(env, pr)
        assert p.state == 'ready'
        assert not p.blocked
        assert not p.staging_id

        with repo_a: pr.post_comment('hansen rebase-merge', config['role_reviewer']['token'])
        env.run_crons()
        assert p.staging_id

    def test_linked_closed(self, env, repo_a, repo_b, config):
        with repo_a, repo_b:
            repo_a.make_commits(None, Commit('initial', tree={'a0': 'a'}), ref='heads/master')
            repo_b.make_commits(None, Commit('initial', tree={'b0': 'b'}), ref='heads/master')

            pr1_a = make_pr(repo_a, 'xxx', [{'a1': 'a'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'],)
            pr1_b = make_pr(repo_b, 'xxx', [{'b1': 'b'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'], statuses=[])
        env.run_crons()

        head_a = repo_a.commit('master').id
        head_b = repo_b.commit('master').id
        pr1_a_id = to_pr(env, pr1_a)
        pr1_b_id = to_pr(env, pr1_b)
        assert pr1_a_id.blocked
        with repo_b: pr1_b.close()
        assert not pr1_a_id.blocked
        assert len(pr1_a_id.batch_id.all_prs) == 2
        assert pr1_a_id.state == 'ready'
        assert pr1_b_id.state == 'closed'

        env.run_crons()
        assert pr1_a_id.staging_id

        with repo_a, repo_b:
            repo_a.post_status('staging.master', 'success')
            repo_b.post_status('staging.master', 'success')
        env.run_crons()
        assert pr1_a_id.state == 'merged'
        assert pr1_a_id.batch_id.merge_date
        assert repo_a.commit('master').id != head_a, \
            "the master of repo A should be updated"
        assert repo_b.commit('master').id == head_b, \
            "the master of repo B should not be updated"

        with repo_a:
            pr2_a = make_pr(repo_a, "xxx", [{'x': 'x'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
        env.run_crons()
        pr2_a_id = to_pr(env, pr2_a)
        assert pr2_a_id.batch_id != pr1_a_id.batch_id
        assert pr2_a_id.label == pr1_a_id.label
        assert len(pr2_a_id.batch_id.all_prs) == 1

    def test_linked_merged(self, env, repo_a, repo_b, config):
        with repo_a, repo_b:
            repo_a.make_commits(None, Commit('initial', tree={'a0': 'a'}), ref='heads/master')
            repo_b.make_commits(None, Commit('initial', tree={'b0': 'b'}), ref='heads/master')

            b = make_pr(repo_b, 'xxx', [{'b1': 'b'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'],)
        env.run_crons()  # stage b

        with repo_a, repo_b:
            repo_a.post_status('heads/staging.master', 'success')
            repo_b.post_status('heads/staging.master', 'success')
        env.run_crons()  # merge b

        assert to_pr(env, b).state == 'merged'

        with repo_a:
            pr = make_pr(repo_a, 'xxx', [{'a1': 'a'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'],)
        env.run_crons()  # stage the new PR

        p = to_pr(env, pr)
        assert not p.blocked

    @pytest.mark.usefixtures("reviewer_admin")
    def test_linked_unready(self, env, repo_a, repo_b, config):
        """ Create a PR A linked to a non-ready PR B,
        * A is blocked by default
[CHG] *: rewrite commands set, rework status management
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to hard
error (and ignoring the entire thing) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now being split along all the axis, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
and `alone`,
- `default` means PRs are picked by an unspecified order when
creating a staging, if nothing better is available
- `priority` means PRs are picked first when staging, however if
`priority` PRs don't fill the staging the rest will be filled with
`default`, this mode did not previously exist
- `alone` means the PRs are picked first, before splits, and only
`alone` PRs can be part of the staging (which usually matches the
modename)
- `skipchecks` overrides both statuses and approval checks, for the
batch, something previously implied in `p=0`, but now
independent. Setting `skipchecks` basically makes the entire batch
`ready`.
For consistency this also sets the reviewer implicitly: since
skipchecks overrides both statuses *and approval*, whoever enables
this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
ready again, previously this was also implied (in a more restricted
form) by setting `p=0`
FWBot removal
=============
While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot, the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply changes bot
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
the forward-port policy,
- `fw=default` is the old `ci` and resets the policy to default,
that is wait for the PR to be merged to create forward ports, and
for the required statuses on each forward port to be received
before creating the next
- `fw=skipci` is the old `skipci`, it waits for the merge of the
base PR but then creates all the forward ports immediately (unless
it gets a conflict)
- `fw=skipmerge` immediately creates all the forward ports, without
even waiting for the PR to be merged
This is a completely new mode, and may be rather broken as until
now the 'bot has always assumed the source PR had been merged.
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parent with different access rights".
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
provides too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because reference
datum get updated naturally flowing to the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
*that* becomes the validation state
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
On notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
unconditional followup scheduling
---------------------------------
Previously scheduling forward-port followup was contigent on the FW
policy, but it's not actually correct if the new PR is *immediately*
validated (which can happen now that the field is computed, if there
are no required statuses *or* all of the required statuses are
overridden by an ancestor) as nothing will trigger the state change
and thus scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result on incorrect ports. Although it's a bit more
expensive, and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
so on retry not just when setting initially.
This was not handled at all previously, so a PR in P0 going into
error due to e.g. a non-deterministic bug would be retried and still
p=0, but a current staging would not get cancelled. Same when a PR
in p=0 goes into error because something was failed, then is updated
with a fix.
- Add tracking to a bunch of relevant PR fields.
Post-mortem analysis currently generally requires going through the
text logs to see what happened, which is annoying.
There is a nondeterminism / inconsistency in the tracking which
sometimes leads the admin user to trigger tracking before the bot
does, leading to the staging tracking being attributed to them
during tests; this is shoved under the carpet by ignoring the user to
whom that tracking is attributed.
When multiple users update tracked fields in the same transaction,
all the changes are attributed to the first one having triggered
tracking (?); I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields (a
short sketch of the idea follows this list)
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which never ran (because
I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
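A short sketch of the idea (illustrative only, not necessarily the
exact mechanism used here): derive the selection values from a Python
enum so the enum remains the single source of truth:

    import enum

    from odoo import fields, models

    class StagingPriority(enum.IntEnum):
        DEFAULT = 0
        PRIORITY = 1
        ALONE = 2

    class PullRequests(models.Model):
        _inherit = 'runbot_merge.pull_requests'

        priority = fields.Selection(
            [(p.name.lower(), p.name.capitalize()) for p in StagingPriority],
            default=StagingPriority.DEFAULT.name.lower(),
        )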
fixes #673, fixes #309, fixes #792, fixes #846 (probably)
2023-10-31 13:42:07 +07:00
    * A is not blocked if A.skipci
    * A is not blocked if B.skipci
2019-04-05 13:22:05 +07:00
    """
2019-10-10 14:22:12 +07:00
    with repo_a, repo_b:
2024-02-09 16:49:32 +07:00
        repo_a.make_commits(None, Commit('initial', tree={'a0': 'a'}), ref='heads/master')
        repo_b.make_commits(None, Commit('initial', tree={'b0': 'b'}), ref='heads/master')
2019-04-05 13:22:05 +07:00
2019-10-10 14:22:12 +07:00
        a = make_pr(repo_a, 'xxx', [{'a1': 'a'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
        b = make_pr(repo_b, 'xxx', [{'b1': 'b'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'], statuses=[])
    env.run_crons()
2019-04-05 13:22:05 +07:00
    pr_a = to_pr(env, a)
    assert pr_a.blocked
2023-10-31 13:42:07 +07:00
    with repo_a: a.post_comment('hansen skipchecks', config['role_reviewer']['token'])
2019-04-05 13:22:05 +07:00
    assert not pr_a.blocked
2023-10-31 13:42:07 +07:00
    pr_a.skipchecks = False
2019-04-05 13:22:05 +07:00
2023-10-31 13:42:07 +07:00
    with repo_b: b.post_comment('hansen skipchecks', config['role_reviewer']['token'])
2019-04-05 13:22:05 +07:00
    assert not pr_a.blocked
2020-01-22 13:52:10 +07:00
def test_different_branches(env, project, repo_a, repo_b, config):
    project.write({
        'branch_ids': [(0, 0, {'name': 'dev'})]
    })
    # repo_b only works with master
    env['runbot_merge.repository'].search([('name', '=', repo_b.name)]) \
        .branch_filter = '[("name", "=", "master")]'
    with repo_a, repo_b:
2024-02-09 16:49:32 +07:00
        repo_a.make_commits(None, Commit('initial', tree={'a': '0'}), ref='heads/dev')
        repo_a.make_commits(None, Commit('initial', tree={'b': '0'}), ref='heads/master')
        repo_b.make_commits(None, Commit('initial', tree={'b': '0'}), ref='heads/master')
2020-01-22 13:52:10 +07:00
        pr_a = make_pr(
            repo_a, 'xxx', [{'a': '1'}],
            target='dev',
            user=config['role_user']['token'],
            reviewer=config['role_reviewer']['token']
        )
    env.run_crons()

    with repo_a:
        pr_a.post_comment('hansen r+', config['role_reviewer']['token'])
2024-02-09 16:49:32 +07:00
        repo_a.post_status('heads/staging.dev', 'success')
2020-01-22 13:52:10 +07:00
    env.run_crons()

    assert to_pr(env, pr_a).state == 'merged'
2020-02-10 21:05:08 +07:00
def test_remove_acl(env, partners, repo_a, repo_b, repo_c):
    """ Check that our way of deprovisioning works correctly
    """
    r = partners['self_reviewer']
2020-02-26 22:21:16 +07:00
    assert r.mapped('review_rights.repository_id.name') == [repo_a.name, repo_b.name, repo_c.name]
2020-02-10 21:05:08 +07:00
    r.write({'review_rights': [(5, 0, 0)]})
    assert r.mapped('review_rights.repository_id') == env['runbot_merge.repository']
2020-02-11 20:20:32 +07:00
class TestSubstitutions:
    def test_substitution_patterns(self, env, port):
        p = env['runbot_merge.project'].create({
            'name': 'proj',
            'github_token': 'wheeee',
2024-11-29 15:04:06 +07:00
            'github_name': "yyy",
            'github_email': "zzz@example.org",
2020-02-11 20:20:32 +07:00
            'repo_ids': [(0, 0, {'name': 'xxx/xxx'})],
            'branch_ids': [(0, 0, {'name': 'master'})]
        })
[ADD] *: per-repository webhook secret
Currently webhook secrets are configured per *project*, which is an
issue both because different repositories may have different
administrators (which creates safety concerns), and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary for multiple projects to
share the same secret, which in turn forces that secret onto even
more repositories, compounding the first issue.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories: it's cleaner and easier to manage,
and can be set up progressively.
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad hoc were also moved to the
fixture. But for most of the ad hoc setup in `runbot_merge`, as well
as `forwardport` where it's all ad hoc, events sources setup was just
appended as is. This should probably be cleaned up at some point, with
the various requirements collected and organised into a small set of
fixtures doing the job more uniformly.
Fixes #887
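In practice this means the hooks endpoint has to look up the secret of
the repository named in the payload and verify GitHub's signature
against that specific secret; a minimal sketch of the check itself
(the helper name and its wiring into the controller are assumptions):

    import hashlib
    import hmac

    def signature_valid(payload: bytes, signature_header: str, repo_secret: str) -> bool:
        """Compare GitHub's X-Hub-Signature-256 header against the repository's own secret."""
        expected = 'sha256=' + hmac.new(repo_secret.encode(), payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_header or '')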
2024-06-06 16:07:57 +07:00
        env['runbot_merge.events_sources'].create({'repository': 'xxx/xxx'})
2020-02-11 20:20:32 +07:00
        r = p.repo_ids
        # replacement pattern, pr label, stored label
        cases = [
            ('/^foo:/foo-dev:/', 'foo:bar', 'foo-dev:bar'),
            ('/^foo:/foo-dev:/', 'foox:bar', 'foox:bar'),
            ('/^foo:/foo-dev:/i', 'FOO:bar', 'foo-dev:bar'),
            ('/o/x/g', 'foo:bar', 'fxx:bar'),
            ('@foo:@bar:@', 'foo:bar', 'bar:bar'),
            ('/foo:/bar:/\n/bar:/baz:/', 'foo:bar', 'baz:bar'),
        ]
        for pr_number, (pattern, original, target) in enumerate(cases, start=1):
            r.substitutions = pattern
            requests.post(
                'http://localhost:{}/runbot_merge/hooks'.format(port),
                headers={'X-Github-Event': 'pull_request'},
                json={
                    'action': 'opened',
                    'repository': {
                        'full_name': r.name,
                    },
                    'pull_request': {
2021-07-30 14:20:57 +07:00
                        'state': 'open',
2021-08-11 16:36:35 +07:00
                        'draft': False,
2020-02-11 20:20:32 +07:00
                        'user': {'login': 'bob'},
                        'base': {
                            'repo': {'full_name': r.name},
                            'ref': p.branch_ids.name,
                        },
                        'number': pr_number,
                        'title': "a pr",
                        'body': None,
                        'commits': 1,
                        'head': {
                            'label': original,
                            'sha': format(pr_number, 'x') * 40,
                        }
2022-07-04 14:44:11 +07:00
                    },
                    'sender': {'login': 'pytest'}
2020-02-11 20:20:32 +07:00
                }
            )
            pr = env['runbot_merge.pull_requests'].search([
                ('repository', '=', r.id),
                ('number', '=', pr_number)
            ])
            assert pr.label == target

    def test_substitutions_staging(self, env, repo_a, repo_b, config):
        """ Different repos from the same project may have different policies for
        sourcing PRs. So allow for remapping labels on input in order to match.
        """
        repo_b_id = env['runbot_merge.repository'].search([
            ('name', '=', repo_b.name)
        ])
        # in repo b, replace owner part by repo_a's owner
        repo_b_id.substitutions = r"/.+:/%s:/" % repo_a.owner

        with repo_a:
2024-02-09 16:49:32 +07:00
            repo_a.make_commits(None, Commit('initial', tree={'a': '0'}), ref='heads/master')
2020-02-11 20:20:32 +07:00
        with repo_b:
2024-02-09 16:49:32 +07:00
            repo_b.make_commits(None, Commit('initial', tree={'b': '0'}), ref='heads/master')
2020-02-11 20:20:32 +07:00
        # policy is that repo_a PRs are created in the same repo while repo_b PRs
        # are created in personal forks
        with repo_a:
            repo_a.make_commits('master', repo_a.Commit('bop', tree={'a': '1'}), ref='heads/abranch')
            pra = repo_a.make_pr(target='master', head='abranch')
2024-06-06 16:07:57 +07:00
        with repo_b, repo_b.fork() as b_fork:
2020-02-11 20:20:32 +07:00
            b_fork.make_commits('master', b_fork.Commit('pob', tree={'b': '1'}), ref='heads/abranch')
            prb = repo_b.make_pr(
                title="a pr",
                target='master', head='%s:abranch' % b_fork.owner
            )
2024-02-09 16:49:32 +07:00
        pra_id = to_pr(env, pra)
        prb_id = to_pr(env, prb)
2020-02-11 20:20:32 +07:00
        assert pra_id.label.endswith(':abranch')
        assert prb_id.label.endswith(':abranch')

        with repo_a, repo_b:
2024-02-09 16:49:32 +07:00
            repo_a.post_status(pra.head, 'success')
2020-02-11 20:20:32 +07:00
            pra.post_comment('hansen r+', config['role_reviewer']['token'])
2024-02-09 16:49:32 +07:00
            repo_b.post_status(prb.head, 'success')
2020-02-11 20:20:32 +07:00
            prb.post_comment('hansen r+', config['role_reviewer']['token'])
        env.run_crons()

        assert pra_id.staging_id, 'PR A should be staged'
        assert prb_id.staging_id, "PR B should be staged"
        assert pra_id.staging_id == prb_id.staging_id, "both prs should be staged together"
        assert pra_id.batch_id == prb_id.batch_id, "both prs should be part of the same batch"
2021-07-29 20:15:17 +07:00
def test_multi_project(env, make_repo, setreviewers, users, config,
                       tunnel):
    """ There should be no linking of PRs across projects, even if there is some
    structural overlap between the two.

    Here we have two projects on different forks, then a user creates a PR from
    a third fork (or one of the forks, should not matter) to *both*.

    The two PRs should be independent.
    """
    Projects = env['runbot_merge.project']
    gh_token = config['github']['token']

    r1 = make_repo("repo_a")
    with r1:
        r1.make_commits(
            None, Commit('root', tree={'a': 'a'}),
            ref='heads/default')
        r1_dev = r1.fork()
    p1 = Projects.create({
        'name': 'Project 1',
        'github_token': gh_token,
        'github_prefix': 'hansen',
2024-11-29 15:04:06 +07:00
        'github_name': config['github']['name'],
        'github_email': "foo@example.org",
2021-07-29 20:15:17 +07:00
        'repo_ids': [(0, 0, {
            'name': r1.name,
            'group_id': False,
            'required_statuses': 'a',
        })],
        'branch_ids': [(0, 0, {'name': 'default'})],
    })
    setreviewers(*p1.repo_ids)
2024-06-06 16:07:57 +07:00
    env['runbot_merge.events_sources'].create([{'repository': r1.name}])
2021-07-29 20:15:17 +07:00
    r2 = make_repo('repo_b')
    with r2:
        r2.make_commits(
            None, Commit('root', tree={'b': 'a'}),
            ref='heads/default'
        )
        r2_dev = r2.fork()
    p2 = Projects.create({
        'name': "Project 2",
        'github_token': gh_token,
        'github_prefix': 'hansen',
2024-11-29 15:04:06 +07:00
        'github_name': config['github']['name'],
        'github_email': "foo@example.org",
2021-07-29 20:15:17 +07:00
        'repo_ids': [(0, 0, {
            'name': r2.name,
            'group_id': False,
            'required_statuses': 'a',
        })],
        'branch_ids': [(0, 0, {'name': 'default'})],
    })
    setreviewers(*p2.repo_ids)
2024-06-06 16:07:57 +07:00
    env['runbot_merge.events_sources'].create([{'repository': r2.name}])
2021-07-29 20:15:17 +07:00
    assert r1_dev.owner == r2_dev.owner

    with r1, r1_dev:
        r1_dev.make_commits('default', Commit('new', tree={'a': 'b'}), ref='heads/other')
        # create, validate, and approve pr1
        pr1 = r1.make_pr(title='pr 1', target='default', head=r1_dev.owner + ':other')
        r1.post_status(pr1.head, 'success', 'a')
        pr1.post_comment('hansen r+', config['role_reviewer']['token'])

    with r2, r2_dev:
        r2_dev.make_commits('default', Commit('new', tree={'b': 'b'}), ref='heads/other')
        # create second PR with the same label *in a different project*, don't
        # approve it
        pr2 = r2.make_pr(title='pr 2', target='default', head=r2_dev.owner + ':other')
        r2.post_status(pr2.head, 'success', 'a')
    env.run_crons()

    pr1_id = to_pr(env, pr1)
    pr2_id = to_pr(env, pr2)
    assert pr1_id.state == 'ready' and not pr1_id.blocked
    assert pr2_id.state == 'validated'
    assert pr1_id.staging_id
    assert not pr2_id.staging_id

    assert pr1.comments == [
        (users['reviewer'], 'hansen r+'),
2024-03-05 18:59:58 +07:00
        seen(env, pr1, users),
2021-07-29 20:15:17 +07:00
    ]
2024-03-05 18:59:58 +07:00
    assert pr2.comments == [seen(env, pr2, users)]
2021-11-12 22:04:34 +07:00
def test_freeze_complete(env, project, repo_a, repo_b, repo_c, users, config):
    """ Tests the freeze wizard feature (aside from the UI):

    * have a project with 3 repos, and two branches (1.0 and master) each
    * have 2 PRs required for the freeze
    * prep 3 freeze PRs
2022-07-11 13:17:04 +07:00
    * prep 1 bump PR
2021-11-12 22:04:34 +07:00
    * trigger the freeze wizard
    * trigger it again (check that the same object is returned, there should
      only be one freeze per project at a time)
    * configure the freeze
    * check that it doesn't go through
    * merge required PRs
    * check that freeze goes through
    * check that reminder is shown
    * check that new branches are created w/ correct parent & commit info
2023-10-31 13:42:07 +07:00
    * check that the PRs (freeze and bump) are part of synthetic stagings so
      they're correctly accounted for in the change history
2021-11-12 22:04:34 +07:00
"""
project . freeze_reminder = " Don ' t forget to like and subscribe "
# have a project with 3 repos, and two branches (1.0 and master)
project . branch_ids = [
( 1 , project . branch_ids . id , { ' sequence ' : 1 } ) ,
( 0 , 0 , { ' name ' : ' 1.0 ' , ' sequence ' : 2 } ) ,
]
2022-07-11 13:17:04 +07:00
[
( master_head_a , master_head_b , master_head_c ) ,
( pr_required_a , _ , pr_required_c ) ,
( pr_rel_a , pr_rel_b , pr_rel_c ) ,
pr_bump_a ,
pr_other
] = setup_mess ( repo_a , repo_b , repo_c )
2021-11-12 22:04:34 +07:00
env . run_crons ( ) # process the PRs
release_prs = {
repo_a . name : to_pr ( env , pr_rel_a ) ,
repo_b . name : to_pr ( env , pr_rel_b ) ,
repo_c . name : to_pr ( env , pr_rel_c ) ,
}
2022-07-11 13:17:04 +07:00
pr_bump_id = to_pr ( env , pr_bump_a )
2021-11-12 22:04:34 +07:00
# trigger the ~~tree~~ freeze wizard
w = project . action_prepare_freeze ( )
w2 = project . action_prepare_freeze ( )
assert w == w2 , " each project should only have one freeze wizard active at a time "
2022-02-07 19:23:41 +07:00
assert w [ ' res_model ' ] == ' runbot_merge.project.freeze '
2021-11-12 22:04:34 +07:00
w_id = env [ w [ ' res_model ' ] ] . browse ( [ w [ ' res_id ' ] ] )
assert w_id . branch_name == ' 1.1 ' , " check that the forking incremented the minor by 1 "
assert len ( w_id . release_pr_ids ) == len ( project . repo_ids ) , \
" should ask for a many release PRs as we have repositories "
# configure required PRs
w_id . required_pr_ids = ( to_pr ( env , pr_required_a ) | to_pr ( env , pr_required_c ) ) . ids
# configure releases
for r in w_id . release_pr_ids :
r . pr_id = release_prs [ r . repository_id . name ] . id
2022-02-08 17:06:34 +07:00
w_id . release_pr_ids [ - 1 ] . pr_id = to_pr ( env , pr_other ) . id
2022-07-11 13:17:04 +07:00
# configure bump
assert not w_id . bump_pr_ids , " there is no bump pr by default "
w_id . write ( { ' bump_pr_ids ' : [
( 0 , 0 , { ' repository_id ' : pr_bump_id . repository . id , ' pr_id ' : pr_bump_id . id } )
] } )
2021-11-12 22:04:34 +07:00
r = w_id . action_freeze ( )
assert r == w , " the freeze is not ready so the wizard should redirect to itself "
2022-02-08 17:06:34 +07:00
assert w_id . errors == f """ \
2023-10-27 16:15:05 +07:00
* All release PRs must have the same label , found ' {pr_rel_c.user} :release-1.1, {pr_other.user} :whocares ' .
2022-02-08 17:06:34 +07:00
* 2 required PRs not ready . """
w_id . release_pr_ids [ - 1 ] . pr_id = release_prs [ repo_c . name ] . id
2021-11-12 22:04:34 +07:00
with repo_a :
pr_required_a . post_comment ( ' hansen r+ ' , config [ ' role_reviewer ' ] [ ' token ' ] )
2024-02-09 16:49:32 +07:00
repo_a . post_status ( pr_required_a . head , ' success ' )
2021-11-12 22:04:34 +07:00
with repo_c :
pr_required_c . post_comment ( ' hansen r+ ' , config [ ' role_reviewer ' ] [ ' token ' ] )
2024-02-09 16:49:32 +07:00
repo_c . post_status ( pr_required_c . head , ' success ' )
2021-11-12 22:04:34 +07:00
env . run_crons ( )
for repo in [ repo_a , repo_b , repo_c ] :
with repo :
2024-02-09 16:49:32 +07:00
repo . post_status ( ' staging.master ' , ' success ' )
2021-11-12 22:04:34 +07:00
env . run_crons ( )
assert to_pr ( env , pr_required_a ) . state == ' merged '
assert to_pr ( env , pr_required_c ) . state == ' merged '
assert not w_id . errors
2022-02-07 19:23:41 +07:00
# assume the wizard is closed, re-open it
w = project . action_prepare_freeze ( )
assert w [ ' res_model ' ] == ' runbot_merge.project.freeze '
assert w [ ' res_id ' ] == w_id . id , " check that we ' re still getting the old wizard "
w_id = env [ w [ ' res_model ' ] ] . browse ( [ w [ ' res_id ' ] ] )
assert w_id . exists ( )
# actually perform the freeze
2021-11-12 22:04:34 +07:00
r = w_id . action_freeze ( )
# check that the wizard was deleted
assert not w_id . exists ( )
# check that the wizard pops out a reminder dialog (kinda)
assert r [ ' res_model ' ] == ' runbot_merge.project '
assert r [ ' res_id ' ] == project . id
    release_pr_ids = functools.reduce(operator.add, release_prs.values())

    # stuff that's done directly
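    # (i.e. effects applied synchronously by action_freeze itself, as opposed
    # to the cron-driven updates checked after env.run_crons() below)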
    assert all(pr_id.state == 'merged' for pr_id in release_pr_ids)
    assert pr_bump_id.state == 'merged'
    assert pr_bump_id.commits_map != '{}'

    assert len(release_pr_ids.batch_id) == 1
    assert release_pr_ids.batch_id.merge_date
    assert release_pr_ids.batch_id.staging_ids.target.name == '1.1'
    assert release_pr_ids.batch_id.staging_ids.state == 'success'
    assert pr_bump_id.batch_id.merge_date
    assert pr_bump_id.batch_id.staging_ids.target.name == 'master'
    assert pr_bump_id.batch_id.staging_ids.state == 'success'

    # stuff that's behind a cron
    env.run_crons()
    # check again to be sure
    assert all(pr_id.state == 'merged' for pr_id in release_pr_ids)
    assert pr_bump_id.state == 'merged'

    assert pr_rel_a.state == "closed"
    assert pr_rel_a.base['ref'] == '1.1'
    assert pr_rel_b.state == "closed"
    assert pr_rel_b.base['ref'] == '1.1'
    assert pr_rel_c.state == "closed"
    assert pr_rel_c.base['ref'] == '1.1'
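    # the release PRs have been retargeted to the new 1.1 branch, both on the
    # github side (base ref above) and on the mergebot side (target below)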
    assert all(pr_id.target.name == '1.1' for pr_id in release_pr_ids)

    assert pr_bump_a.state == 'closed'
    assert pr_bump_a.base['ref'] == 'master'
    assert pr_bump_id.target.name == 'master'
    m_a = repo_a.commit('master')
    assert m_a.message.startswith('Bump A')
    assert repo_a.read_tree(m_a) == {
        'f': '1',  # from master
        'g': 'x',  # from required PR (merged into master before forking)
        'version': '1.2-alpha',  # from bump PR
    }

    c_a = repo_a.commit('1.1')
    assert c_a.message.startswith('Release 1.1 (A)')
    assert repo_a.read_tree(c_a) == {
        'f': '1',  # from master
        'g': 'x',  # from required pr
        'version': '1.1',  # from release commit
    }
    c_a_parent = repo_a.commit(c_a.parents[0])
    assert c_a_parent.message.startswith('super important file')
    assert c_a_parent.parents[0] == master_head_a

    c_b = repo_b.commit('1.1')
    assert c_b.message.startswith('Release 1.1 (B)')
    assert repo_b.read_tree(c_b) == {'f': '1', 'version': '1.1'}
    assert c_b.parents[0] == master_head_b

    c_c = repo_c.commit('1.1')
    assert c_c.message.startswith('Release 1.1 (C)')
    assert repo_c.read_tree(c_c) == {'f': '2', 'version': '1.1'}
    assert repo_c.commit(c_c.parents[0]).parents[0] == master_head_c

def setup_mess(repo_a, repo_b, repo_c):
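    """Shared scaffolding for the freeze tests (description inferred from the
    code below): gives each of the three repos a 1.0 and a master branch, then
    opens two "required" PRs (on A and C), three release PRs from forked
    release-1.1 branches, an unrelated PR on C (whocares), and a bump PR on A.

    Returns master_heads, (pr_required_a, None, pr_required_c),
    (pr_rel_a, pr_rel_b, pr_rel_c), pr_bump_a, pr_other; the None slot is
    repo B, which gets no required PR.
    """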
    master_heads = []
    for r in [repo_a, repo_b, repo_c]:
        with r:
            [root, _] = r.make_commits(
                None,
                Commit('base', tree={'version': '', 'f': '0'}),
                Commit('release 1.0', tree={'version': '1.0'}),
                ref='heads/1.0'
            )
            master_heads.extend(r.make_commits(root, Commit('other', tree={'f': '1'}), ref='heads/master'))

    a_fork = repo_a.fork()
    b_fork = repo_b.fork()
    c_fork = repo_c.fork()
    assert a_fork.owner == b_fork.owner == c_fork.owner
    owner = a_fork.owner

    # have 2 PRs required for the freeze
    with repo_a, a_fork:
        a_fork.make_commits(master_heads[0], Commit('super important file', tree={'g': 'x'}), ref='heads/apr')
        pr_required_a = repo_a.make_pr(target='master', head=f'{owner}:apr', title="xxx")
    with repo_c, c_fork:
        c_fork.make_commits(master_heads[2], Commit('update thing', tree={'f': '2'}), ref='heads/cpr')
        pr_required_c = repo_c.make_pr(target='master', head=f'{owner}:cpr', title="yyy")

    # have 3 release PRs, only the first one updates the tree (version file)
    with repo_a, a_fork:
        a_fork.make_commits(
            master_heads[0],
            Commit('Release 1.1 (A)', tree={'version': '1.1'}),
            ref='heads/release-1.1'
        )
        pr_rel_a = repo_a.make_pr(target='master', head=f'{owner}:release-1.1', title="zzz")
    with repo_b, b_fork:
        b_fork.make_commits(
            master_heads[1],
            Commit('Release 1.1 (B)', tree={'version': '1.1'}),
            ref='heads/release-1.1'
        )
        pr_rel_b = repo_b.make_pr(target='master', head=f'{owner}:release-1.1', title="000")
    with repo_c, c_fork:
        c_fork.make_commits(master_heads[2], Commit("Some change", tree={'a': '1'}), ref='heads/whocares')
        pr_other = repo_c.make_pr(target='master', head=f'{owner}:whocares', title="111")
        c_fork.make_commits(
            master_heads[2],
            Commit('Release 1.1 (C)', tree={'version': '1.1'}),
            ref='heads/release-1.1'
        )
        pr_rel_c = repo_c.make_pr(target='master', head=f'{owner}:release-1.1', title="222")

    # have one bump PR on repo A
    with repo_a, a_fork:
        a_fork.make_commits(
            master_heads[0],
            Commit("Bump A", tree={'version': '1.2-alpha'}),
            ref='heads/bump-1.1',
        )
        pr_bump_a = repo_a.make_pr(target='master', head=f'{owner}:bump-1.1', title="333")

    return master_heads, (pr_required_a, None, pr_required_c), (pr_rel_a, pr_rel_b, pr_rel_c), pr_bump_a, pr_other

def test_freeze_subset(env, project, repo_a, repo_b, repo_c, users, config):
    """It should be possible to only freeze a subset of a project when e.g. one
    of the repositories is managed differently from the rest and has
    non-synchronous releases:

    - it should be possible to mark repositories as non-freezed (just opted out
      of the entire thing), in which case no freeze PRs should be asked of them
    - it should be possible to remove repositories from the freeze wizard
    - repositories which are not in the freeze wizard should just not be frozen

    To do things correctly that should probably match with the branch filters
    and stuff, but that's a configuration concern.
    """
    # have a project with 3 repos, and two branches (1.0 and master)
    project.branch_ids = [
        (1, project.branch_ids.id, {'sequence': 1}),
        (0, 0, {'name': '1.0', 'sequence': 2}),
    ]
    masters = []
    for r in [repo_a, repo_b, repo_c]:
        with r:
            [root, _] = r.make_commits(
                None,
                Commit('base', tree={'version': '', 'f': '0'}),
                Commit('release 1.0', tree={'version': '1.0'} if r is repo_a else None),
                ref='heads/1.0'
            )
            masters.extend(r.make_commits(root, Commit('other', tree={'f': '1'}), ref='heads/master'))
    with repo_a:
        repo_a.make_commits(
            masters[0],
            Commit('Release 1.1', tree={'version': '1.1'}),
            ref='heads/release-1.1'
        )
        pr_rel_a = repo_a.make_pr(target='master', head='release-1.1')

    # the third repository we opt out of freezing
    project.repo_ids.filtered(lambda r: r.name == repo_c.name).freeze = False
    env.run_crons()  # process the PRs

    # open the freeze wizard
    w = project.action_prepare_freeze()
    w_id = env[w['res_model']].browse([w['res_id']])
    # check that there are only rels for repos A and B
    assert w_id.mapped('release_pr_ids.repository_id.name') == [repo_a.name, repo_b.name]
    # remove B from the set
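    # ((3, id, 0) is the ORM "unlink" command on the one2many: it detaches the
    # release line for repo B from the wizard, cf. the length check below)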
    b_id = w_id.release_pr_ids.filtered(lambda r: r.repository_id.name == repo_b.name)
    w_id.write({'release_pr_ids': [(3, b_id.id, 0)]})
    assert len(w_id.release_pr_ids) == 1
    # set lone release PR
    w_id.release_pr_ids.pr_id = to_pr(env, pr_rel_a).id
    assert not w_id.errors

    w_id.action_freeze()
    assert not w_id.exists()

    assert repo_a.commit('1.1'), "should have created branch in repo A"
    try:
        repo_b.commit('1.1')
        pytest.fail("should *not* have created branch in repo B")
    except AssertionError:
        ...
    try:
        repo_c.commit('1.1')
        pytest.fail("should *not* have created branch in repo C")
    except AssertionError:
        ...
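    # (the test fixture's repo.commit() raises AssertionError for a missing
    # ref, same mechanism as the get_ref checks in test_freeze_conflict below,
    # so pytest.fail only triggers if a branch was wrongly created)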
    # can't stage because we (wilfully) don't have branches 1.1 in repos B and C

@pytest.mark.skip("env's session is not thread-safe sadface")
def test_race_conditions():
    """need the ability to dup the env in order to send concurrent requests to
    the inner odoo

    - try to run the action_freeze during a cron (merge or staging), should
      error (recover and return a nice message?)
    - somehow get ahead of the action and update master's commit between the
      moment where it is fetched and the moment where the bump pr is
      fast-forwarded, there's actually a bit of time thanks to the rate
      limiting (fetch of base, update of tmp to base, rebase of commits on
      tmp, wait 1s, for each release and bump PR, then the release branches
      are created, and finally the bump prs)
    """
    ...
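
# A minimal sketch of what the concurrency half of test_race_conditions could
# use once a second connection to the inner odoo becomes possible: fire the
# freeze over the external XML-RPC API from a worker while the main test
# thread runs the crons. Only the model and method names come from the tests
# in this file; the connection parameters and wizard id are placeholders the
# caller would have to supply.
def _concurrent_freeze_sketch(url, db, uid, password, wizard_id):
    import xmlrpc.client

    models = xmlrpc.client.ServerProxy(f'{url}/xmlrpc/2/object')
    # external-API equivalent of w_id.action_freeze() on the freeze wizard
    return models.execute_kw(
        db, uid, password,
        'runbot_merge.project.freeze', 'action_freeze',
        [[wizard_id]],
    )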

def test_freeze_conflict(env, project, repo_a, repo_b, repo_c, users, config):
    """If one of the branches we're trying to create already exists, the wizard
    fails.
    """
    project.branch_ids = [
        (1, project.branch_ids.id, {'sequence': 1}),
        (0, 0, {'name': '1.0', 'sequence': 2}),
    ]
    heads, _, (pr_rel_a, pr_rel_b, pr_rel_c), bump, other = \
        setup_mess(repo_a, repo_b, repo_c)
    env.run_crons()

    release_prs = {
        repo_a.name: to_pr(env, pr_rel_a),
        repo_b.name: to_pr(env, pr_rel_b),
        repo_c.name: to_pr(env, pr_rel_c),
    }

    w = project.action_prepare_freeze()
    w_id = env[w['res_model']].browse([w['res_id']])
    for repo, release_pr in release_prs.items():
        w_id.release_pr_ids \
            .filtered(lambda r: r.repository_id.name == repo) \
            .pr_id = release_pr.id

    # create conflicting branch
    with repo_c:
        [c] = repo_c.make_commits(heads[2], Commit("exists", tree={'version': ''}))
        repo_c.make_ref('heads/1.1', c)

    # actually perform the freeze
    with pytest.raises(xmlrpc.client.Fault) as e:
        w_id.action_freeze()
    assert f"Unable to create branch {repo_c.name}:1.1" in e.value.faultString

    # branches a and b should have been deleted
    with pytest.raises(AssertionError) as e:
        repo_a.get_ref('heads/1.1')
    assert e.value.args[0].startswith("Not Found")
    with pytest.raises(AssertionError) as e:
        repo_b.get_ref('heads/1.1')
    assert e.value.args[0].startswith("Not Found")

def test_cancel_staging(env, project, repo_a, repo_b, users, config):
    """If a batch is flagged as staging cancelling (from any PR), the staging
    should get cancelled if and when the batch transitions to unblocked
    """
    with repo_a, repo_b:
        repo_a.make_commits(None, Commit('initial', tree={'a': '1'}), ref='heads/master')
        repo_b.make_commits(None, Commit('initial', tree={'b': '1'}), ref='heads/master')

    pr_a = make_pr(repo_a, 'batch', [{'a': '2'}], user=config['role_user']['token'], statuses=[], reviewer=None)
    pr_b = make_pr(repo_b, 'batch', [{'b': '2'}], user=config['role_user']['token'], statuses=[], reviewer=None)
    pr_lone = make_pr(
        repo_a,
        "C",
        [{'c': '1'}],
        user=config['role_user']['token'],
        reviewer=config['role_reviewer']['token'],
    )
    env.run_crons()

    a_id, b_id, lone_id = map(to_pr, repeat(env), [pr_a, pr_b, pr_lone])
    assert lone_id.staging_id
    st = lone_id.staging_id
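
    # flag the batch as staging-cancelling from pr_a: both PRs of the batch
    # pick up the flag, but the current staging survives until the batch
    # actually becomes unblocked (both PRs ready)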
    with repo_a:
        pr_a.post_comment("hansen cancel=staging", config['role_reviewer']['token'])
    assert a_id.state == 'opened'
    assert a_id.cancel_staging
    assert b_id.cancel_staging
    assert lone_id.staging_id == st

    with repo_a:
        pr_a.post_comment('hansen r+', config['role_reviewer']['token'])
    assert a_id.state == 'approved'
    assert lone_id.staging_id == st

    with repo_a:
        repo_a.post_status(a_id.head, 'success')
    env.run_crons()
    assert a_id.state == 'ready'
    assert lone_id.staging_id == st

    assert b_id.state == 'opened'
    with repo_b:
        pr_b.post_comment('hansen r+', config['role_reviewer']['token'])
    assert b_id.state == 'approved'
    assert lone_id.staging_id == st

    with repo_b:
        repo_b.post_status(b_id.head, 'success')
    assert b_id.state == 'approved'
    assert lone_id.staging_id == st

    env.run_crons()
    assert b_id.state == 'ready'
    # should have cancelled the staging, picked a and b, and re-staged the
    # entire thing
    assert lone_id.staging_id != st
    assert len({
        lone_id.staging_id.id,
        a_id.staging_id.id,
        b_id.staging_id.id,
    }) == 1