[MERGE] bot from 16.0 to 17.0
Broken (can't run odoo at all):
- In Odoo 17.0, the `pre_init_hook` takes an env, not a cursor: update
  `_check_citext` accordingly (see the sketch after this list).
- Odoo 17.0 rejects `@attrs` and doesn't say where they are or how to
  update them, fun. Hunt down `attrs={'invisible': ...}` and try to fix
  them.
- Odoo 17.0 warns on non-multi creates: update them. Most were very
  reasonable, one very much wasn't.
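A minimal sketch of the first and last points, assuming a simplified
`_check_citext` and an illustrative `create` override (names, bodies and
defaults here are assumptions, not the actual mergebot code):

```python
from odoo import api, models

# Odoo 17.0: init hooks receive an Environment rather than a bare cursor,
# so the (assumed) citext check goes through env.cr instead of a cr argument.
def _check_citext(env):
    env.cr.execute("SELECT 1 FROM pg_extension WHERE extname = 'citext'")
    if not env.cr.fetchone():
        raise AssertionError("the citext extension must be installed")  # placeholder error type

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    # Odoo 17.0 warns on single-record create overrides: switch to the
    # batched decorator so a list of value dicts is handled in one call.
    @api.model_create_multi
    def create(self, vals_list):
        for vals in vals_list:
            vals.setdefault('draft', False)  # illustrative tweak only
        return super().create(vals_list)
```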
Test failures:
- Odoo 17.0 deprecates `name_get` and no longer uses it as a *source*:
  replace `name_get` overrides with overrides of `_compute_display_name`
  (see the sketch after this list).
- Multiple tracking changes:
  - `_track_set_author` takes a `Partner`, not an id.
  - `_message_compute_author` still requires overriding in order to
    handle record creation, which in standard does not support author
    overriding.
  - `mail.tracking.value.field_type` has been removed; the field type
    now needs to be retrieved from the `field_id`.
  - Some tracking orderings have changed and require adjusting a few
    tests.
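A minimal sketch of the `name_get` replacement, assuming the display name
stays `repository#number` as exercised by `test_name_search` below
(illustrative only):

```python
from odoo import api, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    # 16.0 overrode name_get(); 17.0 computes display_name directly.
    @api.depends('repository.name', 'number')
    def _compute_display_name(self):
        for pr in self:
            pr.display_name = f"{pr.repository.name}#{pr.number}"
```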
Also added a few flushes before SQL queries which are not (obviously
at least) at the start of a cron or controller; no test failure was
observed, but better safe than sorry (probably). See the sketch below.
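The flushes follow the usual pattern of pushing pending ORM writes to the
database before querying it directly; a hedged example (model, field, and
state values are placeholders):

```python
# make pending ORM writes visible to the raw query below
env['runbot_merge.pull_requests'].flush_model(['state'])
env.cr.execute(
    "SELECT id FROM runbot_merge_pull_requests WHERE state = %s",
    ['ready'],
)
ready_ids = [r[0] for r in env.cr.fetchall()]
```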
2024-08-12 18:13:03 +07:00
from operator import itemgetter

import pytest
import requests

from utils import Commit, to_pr, seen
[FIX] runbot_merge: ensure PR description is correct on merge
Because sometimes github updates are missed (usually because github
never sends them), it's possible for the mergebot's view of a PR
description to be incorrect. In that case, the PR may get merged with
the wrong merge message entirely, through no fault of the user.
Since we already fetch the PR info when staging it, there's very
little overhead to checking that the PR message we store is correct
then, and updating it if it's not. This means the forward-port's
description should also be correct. A sketch of the check is included
below.
While at it, clean up the forward-port PR's creation a bit:
- there should always be a message since the title is required on
  PRs (only the body can be missing), therefore no need to check that
- as we're adding a bunch of pseudo-headers, there always is a body,
  no need for the condition
- inline `pr_data` and `URL`: they were extracted to support draft
  PRs, and since that's been removed they're now unnecessary
Fixes #530
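A hedged sketch of the check described above, run while staging, with
`gh_pr` standing in for the pull request payload fetched from GitHub
(helper name and shape are illustrative):

```python
def _sync_message(pr_id, gh_pr: dict) -> None:
    # the title is always present on GitHub PRs; only the body can be empty
    message = gh_pr['title'].strip()
    if body := (gh_pr['body'] or '').strip():
        message += '\n\n' + body
    # if a webhook was missed the stored message may be stale; fix it up
    # before merging so the merge commit gets the right description
    if pr_id.message != message:
        pr_id.message = message
```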
2021-09-24 13:03:24 +07:00
def test_partner_merge(env):
    p_src = env['res.partner'].create({
        'name': "xxx",
        'github_login': 'xxx'
    })
    # proper login with useful info
    p_dest = env['res.partner'].create({
        'name': 'Partner P. Partnersson',
        'github_login': ''
    })

    env['base.partner.merge.automatic.wizard'].create({
        'state': 'selection',
        'partner_ids': (p_src + p_dest).ids,
        'dst_partner_id': p_dest.id,
    }).action_merge()

    assert not p_src.exists()
    assert p_dest.name == 'Partner P. Partnersson'
    assert p_dest.github_login == 'xxx'

def test_name_search(env):
    """PRs should be findable by:

    * number
    * display_name (`repository#number`)
    * label

    This way we can find parents or sources by this information.
    """
    p = env['runbot_merge.project'].create({
        'name': 'proj',
        'github_token': 'no',
        'github_name': "noo",
        'github_email': "nooo@example.org",
    })
    b = env['runbot_merge.branch'].create({
        'name': 'target',
        'project_id': p.id
    })
    r = env['runbot_merge.repository'].create({
        'name': 'repo',
        'project_id': p.id,
    })

    baseline = {'target': b.id, 'repository': r.id}
    PRs = env['runbot_merge.pull_requests']
    prs = PRs.create({**baseline, 'number': 1964, 'label': 'victor:thump', 'head': 'a', 'message': 'x'}) \
        | PRs.create({**baseline, 'number': 1959, 'label': 'marcus:frankenstein', 'head': 'b', 'message': 'y'}) \
        | PRs.create({**baseline, 'number': 1969, 'label': 'victor:patch-1', 'head': 'c', 'message': 'z'})
    pr0, pr1, pr2 = [[pr.id, pr.display_name] for pr in prs]

    assert PRs.name_search('1964') == [pr0]
    assert PRs.name_search('1969') == [pr2]
    assert PRs.name_search('frank') == [pr1]
    assert PRs.name_search('victor') == [pr2, pr0]
    assert PRs.name_search('thump') == [pr0]
    assert PRs.name_search('repo') == [pr2, pr0, pr1]
    assert PRs.name_search('repo#1959') == [pr1]

def test_unreviewer(env, project, port):
    repo = env['runbot_merge.repository'].create({
        'project_id': project.id,
        'name': 'a_test_repo',
        'status_ids': [(0, 0, {'context': 'status'})]
    })
    p = env['res.partner'].create({
        'name': 'George Pearce',
        'github_login': 'emubitch',
        'review_rights': [(0, 0, {'repository_id': repo.id, 'review': True})]
    })

    r = requests.post(f'http://localhost:{port}/runbot_merge/get_reviewers', json={
        'jsonrpc': '2.0',
        'id': None,
        'method': 'call',
        'params': {},
    })
    r.raise_for_status()
    assert 'error' not in r.json()
    assert r.json()['result'] == ['emubitch']

    r = requests.post(f'http://localhost:{port}/runbot_merge/remove_reviewers', json={
        'jsonrpc': '2.0',
        'id': None,
        'method': 'call',
        'params': {'github_logins': ['emubitch']},
    })
    r.raise_for_status()
    assert 'error' not in r.json()
    assert p.review_rights == env['res.partner.review']

[ADD] *: per-repository webhook secret
Currently webhook secrets are configured per *project*, which is an
issue both because different repositories may have different
administrators (and thus creates safety concerns), and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary to have multiple projects
share the same secret, which in turn imposes that secret on more
repositories and runs into the first issue again.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories: it's cleaner and easier to manage
and set up progressively. A sketch of what per-repository signature
validation can look like follows this message.
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as `forwardport` where it's all ad hoc, event-source setup was just
appended as-is. This should probably be cleaned up at some point, with
the various requirements collected and organised into a small set of
fixtures doing the job more uniformly.
Fixes #887
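A minimal sketch of per-repository signature checking on the receiving
side, assuming a `secret` field on `runbot_merge.repository` and GitHub's
standard `X-Hub-Signature-256` header (names are illustrative, not the
mergebot's actual controller):

```python
import hashlib
import hmac

def validate_webhook(env, repo_name: str, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub delivery against the receiving repository's own secret."""
    repository = env['runbot_merge.repository'].search([('name', '=', repo_name)], limit=1)
    if not repository or not repository.secret:  # assumed per-repository field
        return False
    digest = hmac.new(repository.secret.encode(), payload, hashlib.sha256).hexdigest()
    # GitHub sends the signature as "sha256=<hexdigest>"
    return hmac.compare_digest(signature_header or '', f'sha256={digest}')
```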
2024-06-06 16:07:57 +07:00
def test_staging_post_update(env, repo, users, config):
    """Because statuses come from commits, it's possible to update the commits
    of a staging after that staging has completed (one way or the other), either
    by sending statuses directly (e.g. rebuilding, for non-deterministic errors)
    or just using the staging's head commit in a branch.

    This makes post-mortem analysis quite confusing, so stagings should
    "lock in" their statuses once they complete.
    """
    with repo:
        [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref='heads/master')
        repo.make_commits(m, Commit('thing', tree={'m': 'c'}), ref='heads/other')
        pr = repo.make_pr(target='master', head='other')
        repo.post_status(pr.head, 'success')
        pr.post_comment('hansen r+ rebase-merge', config['role_reviewer']['token'])
    env.run_crons()

    pr_id = to_pr(env, pr)
    staging_id = pr_id.staging_id
    assert staging_id

    staging_head = repo.commit('staging.master')
    with repo:
        repo.post_status(staging_head, 'failure')
    env.run_crons()

    assert pr_id.state == 'error'
    assert staging_id.state == 'failure'
    assert staging_id.statuses == [
        [repo.name, 'default', 'failure', ''],
    ]

    with repo:
        repo.post_status(staging_head, 'success')
    env.run_crons()

    assert staging_id.state == 'failure'
    assert staging_id.statuses == [
        [repo.name, 'default', 'failure', ''],
    ]

def test_merge_empty_commits(env, repo, users, config):
    """The mergebot should allow merging already-empty commits.
    """
    with repo:
        [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref='heads/master')

        repo.make_commits(m, Commit('thing1', tree={}), ref='heads/other1')
        pr1 = repo.make_pr(target='master', head='other1')
        repo.post_status(pr1.head, 'success')
        pr1.post_comment('hansen r+', config['role_reviewer']['token'])

        repo.make_commits(m, Commit('thing2', tree={}), ref='heads/other2')
        pr2 = repo.make_pr(target='master', head='other2')
        repo.post_status(pr2.head, 'success')
        pr2.post_comment('hansen r+ rebase-ff', config['role_reviewer']['token'])
    env.run_crons()

    pr1_id = to_pr(env, pr1)
    pr2_id = to_pr(env, pr2)
    assert pr1_id.staging_id and pr2_id.staging_id

    with repo:
        repo.post_status('staging.master', 'success')
    env.run_crons()

    assert pr1_id.state == pr2_id.state == 'merged'

    # log is most-recent-first (?)
    commits = list(repo.log('master'))
    head = repo.commit(commits[0]['sha'])
    assert repo.read_tree(head) == {'m': 'm'}
    assert commits[0]['commit']['message'].startswith('thing2')
    assert commits[1]['commit']['message'].startswith('thing1')
    assert commits[2]['commit']['message'] == 'initial'

def test_merge_emptying_commits(env, repo, users, config):
    """The mergebot should *not* allow merging non-empty commits which become
    empty as part of the staging (rebasing)
    """
    with repo:
        [m, _] = repo.make_commits(
            None,
            Commit('initial', tree={'m': 'm'}),
            Commit('second', tree={'m': 'c'}),
            ref='heads/master',
        )

        [c1] = repo.make_commits(m, Commit('thing', tree={'m': 'c'}), ref='heads/branch1')
        pr1 = repo.make_pr(target='master', head='branch1')
        repo.post_status(pr1.head, 'success')
        pr1.post_comment('hansen r+ rebase-ff', config['role_reviewer']['token'])

        [_, c2] = repo.make_commits(
            m,
            Commit('thing1', tree={'c': 'c'}),
            Commit('thing2', tree={'m': 'c'}),
            ref='heads/branch2',
        )
        pr2 = repo.make_pr(target='master', head='branch2')
        repo.post_status(pr2.head, 'success')
        pr2.post_comment('hansen r+ rebase-ff', config['role_reviewer']['token'])

        repo.make_commits(
            m,
            Commit('thing1', tree={'m': 'x'}),
            Commit('thing2', tree={'m': 'c'}),
            ref='heads/branch3',
        )
        pr3 = repo.make_pr(target='master', head='branch3')
        repo.post_status(pr3.head, 'success')
        pr3.post_comment('hansen r+ squash', config['role_reviewer']['token'])
    env.run_crons()

    ping = f"@{users['user']} @{users['reviewer']}"

    # check that first / sole commit emptying is caught
    pr1_id = to_pr(env, pr1)
    assert not pr1_id.staging_id
    assert pr1.comments[3:] == [
        (users['user'], f"{ping} unable to stage: commit {c1} results in an empty tree when merged, it is likely a duplicate of a merged commit, rebase and remove.")
    ]
    assert pr1_id.error
    assert pr1_id.state == 'error'

    # check that followup commit emptying is caught
    pr2_id = to_pr(env, pr2)
    assert not pr2_id.staging_id
    assert pr2.comments[3:] == [
        (users['user'], f"{ping} unable to stage: commit {c2} results in an empty tree when merged, it is likely a duplicate of a merged commit, rebase and remove.")
    ]
    assert pr2_id.error
    assert pr2_id.state == 'error'

    # check that emptied squashed pr is caught
    pr3_id = to_pr(env, pr3)
    assert not pr3_id.staging_id
    assert pr3.comments[3:] == [
        (users['user'], f"{ping} unable to stage: results in an empty tree when merged, might be the duplicate of a merged PR.")
    ]
    assert pr3_id.error
    assert pr3_id.state == 'error'

    # ensure the PR does not get re-staged since it's the first of the staging
    # (it's the only one)
    env.run_crons()
    assert pr1.comments[3:] == [
        (users['user'], f"{ping} unable to stage: commit {c1} results in an empty tree when merged, it is likely a duplicate of a merged commit, rebase and remove.")
    ]
    assert len(pr2.comments) == 4
    assert len(pr3.comments) == 4

[CHG] *: rewrite commands set, rework status management
This commit revisits the commands set in order to make it more
regular and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to a hard
error (ignoring the entire line) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now been split along all the axes, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
  and `alone`,
  - `default` means PRs are picked in an unspecified order when
    creating a staging, if nothing better is available
  - `priority` means PRs are picked first when staging, however if
    `priority` PRs don't fill the staging the rest will be filled with
    `default`; this mode did not previously exist
  - `alone` means the PRs are picked first, before splits, and only
    `alone` PRs can be part of the staging (which usually matches the
    mode name)
- `skipchecks` overrides both statuses and approval checks, for the
  batch, something previously implied in `p=0`, but now
  independent. Setting `skipchecks` basically makes the entire batch
  `ready`.
  For consistency this also sets the reviewer implicitly: since
  skipchecks overrides both statuses *and approval*, whoever enables
  this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
  ready again; previously this was also implied (in a more restricted
  form) by setting `p=0`
FWBot removal
=============
While the "forwardport bot" still exists at the API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot; the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply change bots
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
  the forward-port policy,
  - `fw=default` is the old `ci` and resets the policy to default,
    that is, wait for the PR to be merged to create forward ports, and
    for the required statuses on each forward port to be received
    before creating the next
  - `fw=skipci` is the old `skipci`: it waits for the merge of the
    base PR but then creates all the forward ports immediately (unless
    it gets a conflict)
  - `fw=skipmerge` immediately creates all the forward ports, without
    even waiting for the PR to be merged.
    This is a completely new mode, and may be rather broken as until
    now the 'bot has always assumed the source PR had been merged.
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parents with different access rights".
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
  from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
  to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the
reference data gets updated naturally and flows into the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
  transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
  good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
  *that* becomes the validation state
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
One notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs,
e.g. translation exports, where we set the state programmatically to
limit noise. A simplified sketch of the computed state follows.
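A simplified sketch of what the computed state can look like, combining
the flags listed above (field names follow the message; the exact
transition logic is assumed):

```python
from odoo import api, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    @api.depends('closed', 'error', 'merge_date', 'reviewed_by', 'status', 'skipchecks')
    def _compute_state(self):
        for pr in self:
            if pr.closed:
                pr.state = 'closed'
            elif pr.merge_date:
                pr.state = 'merged'
            elif pr.error:
                pr.state = 'error'
            elif pr.reviewed_by and (pr.skipchecks or pr.status == 'success'):
                pr.state = 'ready'
            elif pr.reviewed_by:
                pr.state = 'approved'
            elif pr.status == 'success':
                pr.state = 'validated'
            else:
                pr.state = 'opened'
```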
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
unconditional followup scheduling
---------------------------------
Previously scheduling the forward-port followup was contingent on the
FW policy, but that's not actually correct if the new PR is
*immediately* validated (which can happen now that the field is
computed, if there are no required statuses *or* all of the required
statuses are overridden by an ancestor), as nothing will trigger the
state change and thus the scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports. Although it's a bit more
expensive, and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
  so on retry, not just when set initially.
  This was not handled at all previously, so a PR in P0 going into
  error due to e.g. a non-deterministic bug would be retried and still
  p=0, but a current staging would not get cancelled. Same when a PR
  in p=0 goes into error because something failed, then is updated
  with a fix.
- Add tracking to a bunch of relevant PR fields.
  Post-mortem analysis currently generally requires going through the
  text logs to see what happened, which is annoying.
  There is a nondeterminism / inconsistency in the tracking which
  sometimes leads the admin user to trigger tracking before the bot
  does, leading to the staging tracking being attributed to them
  during tests; shoved under the carpet by ignoring the user to whom
  that tracking is attributed.
  When multiple users update tracked fields in the same transaction
  all the changes are attributed to the first one having triggered
  tracking (?), I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields (a
  sketch follows after this message)
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which was never run
  (because I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
fixes #673, fixes #309, fixes #792, fixes #846 (probably)
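The enum-backed selection support mentioned above could look roughly like
this (helper and enum are assumptions, not the actual implementation):

```python
import enum
from odoo import fields

class StagingPriority(enum.IntEnum):
    DEFAULT = 0
    PRIORITY = 1
    ALONE = 2

def enum_selection(enum_cls, **kwargs):
    # hypothetical helper: derive a Selection field's options from an enum,
    # so the set of allowed values lives in a single place
    return fields.Selection(
        [(member.name.lower(), member.name.capitalize()) for member in enum_cls],
        **kwargs,
    )

# usage sketch on a model definition:
#     priority = enum_selection(StagingPriority, default='default', string="Staging priority")
```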
2023-10-31 13:42:07 +07:00

def test_force_ready(env, repo, config):
    with repo:
        [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref="heads/master")
        repo.make_commits(m, Commit('first', tree={'m': 'c1'}), ref="heads/other")
        pr = repo.make_pr(target='master', head='other')
    env.run_crons()

    pr_id = to_pr(env, pr)
    pr_id.skipchecks = True
[CHG] *: rewrite commands set, rework status management
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to hard
error (and ignoring the entire thing) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now being split along all the axis, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
and `alone`,
- `default` means PRs are picked by an unspecified order when
creating a staging, if nothing better is available
- `priority` means PRs are picked first when staging, however if
`priority` PRs don't fill the staging the rest will be filled with
`default`, this mode did not previously exist
- `alone` means the PRs are picked first, before splits, and only
`alone` PRs can be part of the staging (which usually matches the
modename)
- `skipchecks` overrides both statuses and approval checks, for the
batch, something previously implied in `p=0`, but now
independent. Setting `skipchecks` basically makes the entire batch
`ready`.
For consistency this also sets the reviewer implicitly: since
skipchecks overrides both statuses *and approval*, whoever enables
this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
ready again, previously this was also implied (in a more restricted
form) by setting `p=0`
FWBot removal
=============
While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot, the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply changes bot
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
the forward-port policy,
- `fw=default` is the old `ci` and resets the policy to default,
that is wait for the PR to be merged to create forward ports, and
for the required statuses on each forward port to be received
before creating the next
- `fw=skipci` is the old `skipci`, it waits for the merge of the
base PR but then creates all the forward ports immediately (unless
it gets a conflict)
- `fw=skipmerge` immediately creates all the forward ports, without
even waiting for the PR to be merged
This is a completely new mode, and may be rather broken as until
now the 'bot has always assumed the source PR had been merged.
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parent with different access rights".
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
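Parsing that argument amounts to something along these lines (sketch
only, the helper name is made up):

    def parse_review_numbers(arg):
        # "review=1234,5678" -> [1234, 5678] (github PR numbers)
        return [int(n) for n in arg.split(',') if n.strip()]

    assert parse_review_numbers("1234,5678") == [1234, 5678]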
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the
reference data gets updated naturally and flows into the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
*that* becomes the validation state
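A simplified sketch of what the computation could look like (model,
field, and state names are illustrative, not the actual schema):

    from odoo import api, fields, models

    class PullRequest(models.Model):
        _name = 'x.pull.request'
        _description = "illustrative PR model"

        closed = fields.Boolean()
        error = fields.Boolean()
        merge_date = fields.Datetime()
        reviewed_by = fields.Many2one('res.partner')
        status = fields.Selection(
            [('pending', 'pending'), ('success', 'success'), ('failure', 'failure')],
            default='pending')
        state = fields.Selection(
            [('opened', 'opened'), ('approved', 'approved'), ('ready', 'ready'),
             ('merged', 'merged'), ('error', 'error'), ('closed', 'closed')],
            compute='_compute_state', store=True)

        @api.depends('closed', 'error', 'merge_date', 'reviewed_by', 'status')
        def _compute_state(self):
            for pr in self:
                if pr.closed:
                    pr.state = 'closed'
                elif pr.merge_date:
                    pr.state = 'merged'
                elif pr.error:
                    pr.state = 'error'
                elif pr.reviewed_by and pr.status == 'success':
                    pr.state = 'ready'
                elif pr.reviewed_by:
                    pr.state = 'approved'
                else:
                    pr.state = 'opened'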
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
One notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
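Roughly, the override looks like the following (a sketch only; the
field names `overrides` and `statuses_full` are placeholders for
whatever the real model uses):

    def modified(self, fnames, create=False, before=False):
        super().modified(fnames, create=create, before=before)
        if 'overrides' not in fnames:
            return
        # the ORM can't declare "my computed field depends on the same field
        # of my ancestors", so cascade the recomputation by hand
        descendants = self.search([('id', 'child_of', self.ids)]) - self
        if descendants:
            self.env.add_to_compute(
                descendants._fields['statuses_full'], descendants)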
unconditional followup scheduling
---------------------------------
Previously scheduling forward-port followup was contingent on the FW
policy, but it's not actually correct if the new PR is *immediately*
validated (which can happen now that the field is computed, if there
are no required statuses *or* all of the required statuses are
overridden by an ancestor) as nothing will trigger the state change
and thus scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports. Although it's a bit more
expensive, and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
so on retry, not just when initially set.
This was not handled at all previously, so a PR in p=0 going into
error due to e.g. a non-deterministic bug would be retried while still
at p=0, but a current staging would not get cancelled. Same when a PR
in p=0 goes into error because something failed, then is updated
with a fix.
- Add tracking to a bunch of relevant PR fields.
Post-mortem analysis currently generally requires going through the
text logs to see what happened, which is annoying.
There is a nondeterminism / inconsistency in the tracking which
sometimes leads the admin user to trigger tracking before the bot
does, leading to the staging tracking being attributed to them
during tests; this is shoved under the carpet by ignoring the user
to whom that tracking is attributed.
When multiple users update tracked fields in the same transaction
all the changes are attributed to the first one having triggered
tracking (?); I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which never ran (because
I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
fixes #673, fixes #309, fixes #792, fixes #846 (probably)
2023-10-31 13:42:07 +07:00
assert pr_id.state == 'ready'
2024-09-16 17:49:23 +07:00
assert pr_id.status == 'success'
[CHG] *: rewrite commands set, rework status management
2023-10-31 13:42:07 +07:00
reviewer = env['res.users'].browse([env._uid]).partner_id
assert pr_id.reviewed_by == reviewer
2024-06-25 03:16:43 +07:00
def test_help(env, repo, config, users, partners):
    with repo:
        [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref="heads/master")
        repo.make_commits(m, Commit('first', tree={'m': 'c1'}), ref="heads/other")
        pr = repo.make_pr(target='master', head='other')
    env.run_crons()

    # every role asks for help, then the reviewer mixes help with a command
    for role in ['reviewer', 'self_reviewer', 'user', 'other']:
        v = config[f'role_{role}']
        with repo:
            pr.post_comment("hansen help", v['token'])
    with repo:
        pr.post_comment("hansen r+ help", config['role_reviewer']['token'])

    # promote the reviewer to an internal admin user, their help should now
    # include the admin-only entries
    assert not partners['reviewer'].user_ids, "the reviewer should not be an internal user"
    group_internal = env.ref("base.group_user")
    group_admin = env.ref("runbot_merge.group_admin")
    env['res.users'].create({
        'partner_id': partners['reviewer'].id,
        'login': 'reviewer',
        'groups_id': [(4, group_internal.id, 0), (4, group_admin.id, 0)],
    })

    with repo:
        pr.post_comment("hansen help", config['role_reviewer']['token'])
    env.run_crons()

    assert pr.comments == [
        seen(env, pr, users),
        (users['reviewer'], "hansen help"),
        (users['self_reviewer'], "hansen help"),
        (users['user'], "hansen help"),
        (users['other'], "hansen help"),
        (users['reviewer'], "hansen r+ help"),
        (users['reviewer'], "hansen help"),
        (users['user'], REVIEWER.format(user=users['reviewer'], skip="")),
        (users['user'], RANDO.format(user=users['self_reviewer'])),
        (users['user'], AUTHOR.format(user=users['user'])),
        (users['user'], RANDO.format(user=users['other'])),
        (users['user'],
         REVIEWER.format(user=users['reviewer'], skip='')
         + "\n\nWarning: in invoking help, every other command has been ignored."),
        (users['user'], REVIEWER.format(
            user=users['reviewer'],
            skip='|`skipchecks`|bypasses both statuses and review|\n',
        )),
    ]
REVIEWER = """\
Currently available commands for @{user}:
|command||
|-|-|
|`help`|displays this help|
|`r(eview)+`|approves the PR, if it's a forwardport also approves all non-detached parents|
|`r(eview)=<number>`|only approves the specified parents|
|`fw=no`|does not forward-port this PR|
|`fw=default`|forward-ports this PR normally|
|`fw=skipci`|does not wait for a forward-port's statuses to succeed before creating the next one|
|`up to <branch>`|only ports this PR forward to the specified branch (included)|
|`merge`|integrate the PR with a simple merge commit, using the PR description as message|
|`rebase-merge`|rebases the PR on top of the target branch, then integrates with a merge commit, using the PR description as message|
|`rebase-ff`|rebases the PR on top of the target branch, then fast-forwards|
|`squash`|squashes the PR as a single commit on the target branch, using the PR description as message|
|`delegate+`|grants approval rights to the PR author|
|`delegate=<...>`|grants approval rights on this PR to the specified github users|
|`default`|stages the PR normally|
|`priority`|tries to stage this PR first, then adds `default` PRs if the staging has room|
|`alone`|stages this PR only with other PRs of the same priority|
{skip}\
|`cancel=staging`|automatically cancels the current staging when this PR becomes ready|
|`check`|fetches or refreshes PR metadata, resets mergebot state|
Note: this help text is dynamic and will change with the state of the PR.\
"""
AUTHOR = """\
Currently available commands for @{user}:
|command||
|-|-|
|`help`|displays this help|
|`fw=no`|does not forward-port this PR|
2024-09-17 15:29:01 +07:00
|`fw=default`|forward-ports this PR normally|
2024-06-25 03:16:43 +07:00
|`up to <branch>`|only ports this PR forward to the specified branch (included)|
|`check`|fetches or refreshes PR metadata, resets mergebot state|
Note: this help text is dynamic and will change with the state of the PR.\
"""
RANDO = """\
Currently available commands for @{user}:
|command||
|-|-|
|`help`|displays this help|
Note: this help text is dynamic and will change with the state of the PR.\
"""
2024-12-06 19:44:08 +07:00
@pytest.mark.parametrize("target", ["master", "other"])
def test_close_linked_issues(env, project, repo, config, users, partners, target):
    """Github's linked issues thingie only triggers when:

    - the commit with the reference reaches the default branch
    - the PR linked to the issue (via the UI or the PR description) is targeted
      at and merged into the default branch

    The former does eventually happen with odoo, after a while, usually:
    forward-ports will generally go through the default branch eventually and
    the master becomes the default branch on the next major release.

    *However* the latter case basically doesn't happen, if a PR is merged into
    master it never "reaches the default branch", and while the description is
    ported forwards (with any link it contains) that's not the case of manual
    links (it's not even possible since there is no API to manipulate those).

    Thus support for linked issues needs to be added to the mergebot. Since it
    necessarily has write access to PRs (to close them on merge) it should have
    the same on issues.
    """
    project.write({'branch_ids': [(0, 0, {'name': 'other'})]})
    with repo:
        i1 = repo.make_issue("Issue 1")
        i2 = repo.make_issue("Issue 2")
        [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref="heads/master")
        # non-default branch
        repo.make_ref("heads/other", m)

    # ensure the default branch is master so we have consistent testing state
    r = repo._session.patch(f'https://api.github.com/repos/{repo.name}', json={'default_branch': 'master'})
    assert r.ok, r.text

    with repo:
        # there are only two locations relevant to us:
        #
        # - commit message
        # - pr description
        #
        # the other two are manually linked issues (there's no API for that so
        # we can't test it) and the merge message (which for us is the PR
        # message)
        repo.make_commits(m, Commit(f'This is my commit\n\nfixes #{i1.number}', tree={'m': 'c1'}), ref="heads/pr")
        pr = repo.make_pr(target=target, head='pr', title="a pr", body=f"fixes #{i2.number}")
        pr.post_comment('hansen r+', config['role_reviewer']['token'])
        repo.post_status(pr.head, 'success')
    env.run_crons(None)

    pr_id = to_pr(env, pr)
    assert pr_id.state == 'ready'
    assert pr_id.staging_id
    assert i1.state == 'open'
    assert i2.state == 'open'

    with repo:
        repo.post_status(f'staging.{target}', 'success')
    env.run_crons(None)

    assert pr_id.state == 'merged'
    assert i1.state == 'closed'
    assert i2.state == 'closed'