mirror of
https://github.com/odoo/runbot.git
synced 2025-03-15 15:35:46 +07:00
[CHG] *: rewrite commands set, rework status management
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.

Hard Errors
===========

The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to hard
error (and ignoring the entire thing) if any command is unknown or
invalid.

This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line, some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.

Priority Restructuring
----------------------

The numerical priority system was pretty messy, in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns. This has now been split along all
the axes, with separate command subsets for:

- staging prioritisation, now separated between `default`, `priority`,
  and `alone`:

  - `default` means PRs are picked in an unspecified order when
    creating a staging, if nothing better is available
  - `priority` means PRs are picked first when staging; however if
    `priority` PRs don't fill the staging, the rest is filled with
    `default` PRs; this mode did not previously exist
  - `alone` means the PRs are picked first, before splits, and only
    `alone` PRs can be part of the staging (which usually matches the
    mode name)

- `skipchecks` overrides both statuses and approval checks for the
  batch, something previously implied by `p=0` but now independent.
  Setting `skipchecks` basically makes the entire batch `ready`.

  For consistency this also sets the reviewer implicitly: since
  `skipchecks` overrides both statuses *and approval*, whoever enables
  this mode is essentially the reviewer.
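The three staging priorities can be sketched as a small selection function. This is a hypothetical plain-Python illustration of the semantics described above, not the mergebot's actual staging code; the `PR` class and `pick_for_staging` helper are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PR:
    name: str
    priority: str  # 'default', 'priority', or 'alone'

def pick_for_staging(ready_prs, batch_size):
    """Pick ready PRs for a staging according to the three priority modes."""
    alone = [p for p in ready_prs if p.priority == 'alone']
    if alone:
        # 'alone' PRs are staged before splits and exclude everything else
        return alone[:batch_size]
    # 'priority' PRs are picked first...
    picked = [p for p in ready_prs if p.priority == 'priority']
    # ...then the staging is topped up with 'default' PRs
    picked += [p for p in ready_prs if p.priority == 'default']
    return picked[:batch_size]
```

Note how `priority` only affects ordering, while `alone` is exclusive: its presence empties the rest of the candidate pool.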
- `cancel` cancels any ongoing staging when the marked PR becomes
  ready again; previously this was also implied (in a more restricted
  form) by setting `p=0`

FWBot removal
=============

While the "forwardport bot" still exists at the API level (to
segregate access rights between tokens), it has been removed as an
interaction point, as part of the modules merge plan.

As a result, fwbot stops responding
-----------------------------------

Feedback messages are now always sent by the mergebot; the
forward-porting bot should not send any message or notification
anymore.

Commands moved to the merge bot
-------------------------------

- `ignore` / `up to` simply changes bot
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
  the forward-port policy:

  - `fw=default` is the old `ci`, and resets the policy to default:
    wait for the PR to be merged to create forward ports, and for the
    required statuses on each forward port to be received before
    creating the next
  - `fw=skipci` is the old `skipci`: it waits for the merge of the
    base PR, but then creates all the forward ports immediately
    (unless it gets a conflict)
  - `fw=skipmerge` immediately creates all the forward ports, without
    even waiting for the PR to be merged

    This is a completely new mode, and may be rather broken, as until
    now the 'bot has always assumed the source PR had been merged.

Approval rework
---------------

Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parents" (with different access rights).
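The gating differences between the three `fw` policies can be summarised in a small decision function. This is a simplified sketch of the semantics just described, assuming a boolean view of the relevant state; `may_create_next_fw` is an invented helper, not part of the actual cron code.

```python
def may_create_next_fw(policy, source_merged, statuses_passed):
    """Whether the next forward-port PR may be created under a given policy.

    policy: 'default' (the old `ci`), 'skipci', or 'skipmerge'.
    source_merged: whether the base PR has been merged.
    statuses_passed: whether the previous forward port got its required statuses.
    """
    if policy == 'skipmerge':
        # don't even wait for the source PR to be merged (new, possibly broken)
        return True
    if not source_merged:
        # 'default' and 'skipci' both wait for the merge of the base PR
        return False
    if policy == 'skipci':
        # once merged, create all forward ports immediately
        return True
    # 'default': also wait for required statuses on each forward port
    return statuses_passed
```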
As a result, the two have been merged under a single `mergebot r+`,
with heuristics attempting to provide the best experience:

- if approving a non-forward-port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which
  descend from a merged PR
- else, approves all ancestors up to and including the oldest
  ancestor to which we have review rights

Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.

For the very specialized need of approving a forward-port *and none
of its ancestors*, `review=` can now take a comma (`,`) separated
list of pull request numbers (github numbers, not mergebot ids).

Computed State
==============

The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because reference
data get updated naturally, flowing to the state).

For now however it makes things more complicated, as some of the
states have to be separately signaled or updated:

- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
  transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be
  a good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
  *that* becomes the validation state

Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the
system, so it's a PR-level flag and a messy pile.
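The approval heuristics above can be sketched as straight-line Python. This is a simplification under invented data structures (the `PR` dataclass, `ancestors`, and `approve` are all hypothetical), meant only to make the four branches concrete; "descends from a merged PR" is interpreted transitively here, which is an assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PR:
    name: str
    parent: Optional['PR'] = None   # attached forward-port parent
    source: Optional['PR'] = None   # original PR, None if not a forward port
    merged: bool = False

def ancestors(pr):
    """Yield pr first, then its parents up the chain (newest -> oldest)."""
    while pr:
        yield pr
        pr = pr.parent

def approve(pr, author_of_source, review_rights):
    """Names of the PRs approved by a single r+ on `pr`.

    review_rights: set of PR names the user may review.
    """
    if pr.source is None:
        return {pr.name}                        # plain PR: behavior unchanged
    chain = list(ancestors(pr))
    if pr.source.name in review_rights:
        return {p.name for p in chain}          # rights on source: all ancestors
    if author_of_source:
        return {p.name for p in chain           # ancestors descending from a
                if any(a.merged for a in ancestors(p.parent))}  # merged PR
    # otherwise: up to and including the oldest reviewable ancestor
    reviewable = [i for i, p in enumerate(chain) if p.name in review_rights]
    if not reviewable:
        return set()
    return {p.name for p in chain[:reviewable[-1] + 1]}
```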
One notable change is that *forcing* the state to `ready` now does
that, but also sets the reviewer, `skipchecks`, and overrides, to
ensure the API-mediated readying does not get rolled back by e.g. the
runbot sending a status. This is useful for a few types of automated
/ programmatic PRs, e.g. translation exports, where we set the state
programmatically to limit noise.

Recursive dependency hack
-------------------------

Given a sequence of PRs with an override of the source, if one of the
PRs is updated, its descendants should not have the override anymore.
However if the updated PR gets overridden, its descendants should
have *that* override.

This requires some unholy manipulations via an override of
`modified`, as the ORM supports recursive fields but not recursive
dependencies (on a different field).

Unconditional followup scheduling
---------------------------------

Previously, scheduling the forward-port followup was contingent on
the FW policy, but that is not actually correct if the new PR is
*immediately* validated (which can happen now that the field is
computed, if there are no required statuses *or* all of the required
statuses are overridden by an ancestor), as nothing will trigger the
state change and thus the scheduling of the fp followup.

The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports. Although it's a bit
more expensive, and will lead to more spam.

Previously this would not happen, because on creation of a PR the
validation task (commit -> PR) would still have to execute.

Misc Changes
============

- If a PR is marked as overriding / cancelling stagings, it now does
  so on retry, not just when set initially. This was not handled at
  all previously, so a PR in P0 going into error due to e.g. a
  non-deterministic bug would be retried, still with p=0, but a
  current staging would not get cancelled. Same when a PR in p=0 goes
  into error because something failed, then is updated with a fix.
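The override-inheritance behaviour the hack above supports (a forward port inherits its ancestors' status overrides, and a child's own overrides win over inherited ones) can be sketched as a recursive dict merge. This is a hypothetical plain-Python model, not the ORM code; the `PR` dataclass and `effective_overrides` helper are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class PR:
    overrides: Dict[str, str] = field(default_factory=dict)
    parent: Optional['PR'] = None

def effective_overrides(pr):
    """Merge overrides up the forward-port chain; own overrides win."""
    inherited = effective_overrides(pr.parent) if pr.parent else {}
    merged = dict(inherited)     # copy so we never mutate the parent's dict
    merged.update(pr.overrides)  # the child's own overrides take precedence
    return merged
```

Detaching a child (clearing `parent`) naturally drops the inherited overrides, while overriding the child directly shadows the ancestor's value, which matches the behaviour described above.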
- Add tracking to a bunch of relevant PR fields. Post-mortem analysis
  currently generally requires going through the text logs to see
  what happened, which is annoying.

  There is a nondeterminism / inconsistency in the tracking which
  sometimes leads the admin user to trigger tracking before the bot
  does, leading to the staging tracking being attributed to them
  during tests; this is shoved under the carpet by ignoring the user
  to whom that tracking is attributed. When multiple users update
  tracked fields in the same transaction, all the changes are
  attributed to the first one having triggered tracking (?); I
  couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which never ran (because
  I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation

fixes #673, fixes #309, fixes #792, fixes #846 (probably)
This commit is contained in:
parent
955a61a1e8
commit
d4fa1fd353
@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
{
    'name': 'forward port bot',
    'version': '1.3',
    'version': '1.4',
    'summary': "A port which forward ports successful PRs.",
    'depends': ['runbot_merge'],
    'data': [
@ -176,7 +176,6 @@
</group>
<group>
<field string="Bot Name" name="fp_github_name" readonly="0"/>
<field string="Bot Email" name="fp_github_email" readonly="0"/>
</group>
</group>
</xpath>
@ -200,37 +199,4 @@
</field>
</record>

<record model="ir.ui.view" id="pr">
<field name="name">Show forwardport PR fields</field>
<field name="inherit_id" ref="runbot_merge.runbot_merge_form_prs"/>
<field name="model">runbot_merge.pull_requests</field>
<field name="arch" type="xml">
<xpath expr="//field[@name='state']" position="after">
<field name="merge_date" attrs="{'invisible': [('state', '!=', 'merged')]}"/>
</xpath>
<xpath expr="//sheet/group[2]" position="after">
<separator string="Forward Port" attrs="{'invisible': [('source_id', '=', False)]}"/>
<group attrs="{'invisible': [('source_id', '!=', False)]}">
<group>
<field string="Policy" name="fw_policy"/>
</group>
</group>
<group attrs="{'invisible': [('source_id', '=', False)]}">
<group>
<field string="Original PR" name="source_id"/>
</group>
<group attrs="{'invisible': [('parent_id', '=', False)]}">
<field name="parent_id"/>
</group>
<group colspan="4" attrs="{'invisible': [('parent_id', '!=', False)]}">
<field string="Detached because" name="detach_reason" readonly="1"/>
</group>
<group>
<field string="Forward ported up to" name="limit_id"/>
</group>
</group>
</xpath>
</field>
</record>

</odoo>
7
forwardport/migrations/15.0.1.4/pre-migration.py
Normal file
@ -0,0 +1,7 @@
def migrate(cr, version):
    cr.execute("ALTER TABLE runbot_merge_project DROP COLUMN IF EXISTS fp_github_email")
    cr.execute("""
        ALTER TABLE runbot_merge_branch
        DROP COLUMN IF EXISTS fp_sequence,
        DROP COLUMN IF EXISTS fp_target
    """)
@ -24,7 +24,6 @@ import re
import subprocess
import tempfile
import typing
from functools import reduce
from operator import itemgetter
from pathlib import Path
@ -194,62 +193,7 @@ class PullRequests(models.Model):
    head: str
    state: str

    statuses = fields.Text(recursive=True)

    limit_id = fields.Many2one('runbot_merge.branch', help="Up to which branch should this PR be forward-ported")

    parent_id = fields.Many2one(
        'runbot_merge.pull_requests', index=True,
        help="a PR with a parent is an automatic forward port"
    )
    root_id = fields.Many2one('runbot_merge.pull_requests', compute='_compute_root', recursive=True)
    source_id = fields.Many2one('runbot_merge.pull_requests', index=True, help="the original source of this FP even if parents were detached along the way")
    forwardport_ids = fields.One2many('runbot_merge.pull_requests', 'source_id')
    reminder_backoff_factor = fields.Integer(default=-4, group_operator=None)
    merge_date = fields.Datetime()

    detach_reason = fields.Char()

    fw_policy = fields.Selection([
        ('ci', "Normal"),
        ('skipci', "Skip CI"),
        # ('skipmerge', "Skip merge"),
    ], required=True, default="ci")

    _sql_constraints = [(
        'fw_constraint',
        'check(source_id is null or num_nonnulls(parent_id, detach_reason) = 1)',
        "fw PRs must either be attached or have a reason for being detached",
    )]

    refname = fields.Char(compute='_compute_refname')
    @api.depends('label')
    def _compute_refname(self):
        for pr in self:
            pr.refname = pr.label.split(':', 1)[-1]

    ping = fields.Char(recursive=True)

    @api.depends('source_id.author.github_login', 'source_id.reviewed_by.github_login')
    def _compute_ping(self):
        """For forward-port PRs (PRs with a source) the author is the PR bot, so
        we want to ignore that and use the author & reviewer of the original PR
        """
        source = self.source_id
        if not source:
            return super()._compute_ping()

        for pr in self:
            s = ' '.join(
                f'@{p.github_login}'
                for p in source.author | source.reviewed_by | self.reviewed_by
            )
            pr.ping = s and (s + ' ')

    @api.depends('parent_id.root_id')
    def _compute_root(self):
        for p in self:
            p.root_id = reduce(lambda _, p: p, self._iter_ancestors())

    @api.model_create_single
    def create(self, vals):
@ -269,8 +213,6 @@ class PullRequests(models.Model):
        )[-1].id
        if vals.get('parent_id') and 'source_id' not in vals:
            vals['source_id'] = self.browse(vals['parent_id']).root_id.id
        if vals.get('state') == 'merged':
            vals['merge_date'] = fields.Datetime.now()
        return super().create(vals)

    def write(self, vals):
@ -302,8 +244,6 @@ class PullRequests(models.Model):
        if vals.get('parent_id') and 'source_id' not in vals:
            parent = self.browse(vals['parent_id'])
            vals['source_id'] = (parent.source_id or parent).id
        if vals.get('state') == 'merged':
            vals['merge_date'] = fields.Datetime.now()
        r = super().write(vals)
        if self.env.context.get('forwardport_detach_warn', True):
            for p, parent in with_parents.items():
@ -322,14 +262,7 @@ class PullRequests(models.Model):
                    token_field='fp_github_token',
                    format_args={'pr': parent, 'child': p},
                )
        for p in closed_fp.filtered(lambda p: p.state != 'closed'):
            self.env.ref('runbot_merge.forwardport.reopen.detached')._send(
                repository=p.repository,
                pull_request=p.number,
                token_field='fp_github_token',
                format_args={'pr': p},
            )
        if vals.get('state') == 'merged':
        if vals.get('merge_date'):
            self.env['forwardport.branch_remover'].create([
                {'pr_id': p.id}
                for p in self
@ -570,25 +503,6 @@ class PullRequests(models.Model):
        }
        return sorted(commits, key=lambda c: idx[c['sha']])

    def _iter_ancestors(self):
        while self:
            yield self
            self = self.parent_id

    def _iter_descendants(self):
        pr = self
        while pr := self.search([('parent_id', '=', pr.id)]):
            yield pr

    @api.depends('parent_id.statuses')
    def _compute_statuses(self):
        super()._compute_statuses()

    def _get_overrides(self):
        # NB: assumes _get_overrides always returns an "owned" dict which we can modify
        p = self.parent_id._get_overrides() if self.parent_id else {}
        p.update(super()._get_overrides())
        return p

    def _port_forward(self):
        if not self:
@ -713,7 +627,6 @@ class PullRequests(models.Model):
            new_batch |= new_pr

            # allows PR author to close or skipci
            source.delegates |= source.author
            new_pr.write({
                'merge_method': pr.merge_method,
                'source_id': source.id,
@ -722,9 +635,6 @@ class PullRequests(models.Model):
                'detach_reason': "conflicts: {}".format(
                    f'\n{conflicts[pr]}\n{conflicts[pr]}'.strip()
                ) if has_conflicts else None,
                # Copy author & delegates of source as well as delegates of
                # previous so they can r+ the new forward ports.
                'delegates': [(6, False, (source.delegates | pr.delegates).ids)]
            })
            if has_conflicts and pr.parent_id and pr.state not in ('merged', 'closed'):
                self.env.ref('runbot_merge.forwardport.failure.conflict')._send(
@ -809,9 +719,8 @@ class PullRequests(models.Model):
            'prs': [(6, 0, new_batch.ids)],
            'active': not has_conflicts,
        })
        # if we're not waiting for CI, schedule followup immediately
        if any(p.source_id.fw_policy == 'skipci' for p in b.prs):
            b.prs[0]._schedule_fp_followup()
        # try to schedule followup
        new_batch[0]._schedule_fp_followup()
        return b

    def _create_fp_branch(self, target_branch, fp_branch_name, cleanup):
@ -869,7 +778,7 @@ class PullRequests(models.Model):
        # add target remote
        working_copy.remote(
            'add', 'target',
            'https://{p.fp_github_name}:{p.fp_github_token}@github.com/{r.fp_remote_target}'.format(
            'https://{p.fp_github_token}@github.com/{r.fp_remote_target}'.format(
                r=self.repository,
                p=project_id
            )
@ -93,10 +93,6 @@ In the former case, you may want to edit this PR message as well\.

More info at https://github\.com/odoo/odoo/wiki/Mergebot#forward-port
''', re.DOTALL))
    ]
    with prod:
        prc.post_comment(f'@{project.fp_github_name} r+', config['role_reviewer']['token'])
    env.run_crons()
    assert prc_id.state == 'opened', "approving via fw should not work on a conflict"

    prb = prod.get_pr(prb_id.number)
    assert prb.comments == [
@ -108,13 +104,12 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
'''),
        (users['user'], """@%s @%s the next pull request (%s) is in conflict. \
You can merge the chain up to here by saying
> @%s r+
> @hansen r+

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
""" % (
            users['user'], users['reviewer'],
            prc_id.display_name,
            project.fp_github_name
        ))
    ]
@ -11,7 +11,6 @@ from utils import seen, Commit, make_basic, to_pr
])
def test_configure_fp_limit(env, config, make_repo, source, limit, count):
    prod, other = make_basic(env, config, make_repo)
    bot_name = env['runbot_merge.project'].search([]).fp_github_name
    with prod:
        [c] = prod.make_commits(
            source, Commit('c', tree={'f': 'g'}),
@ -20,7 +19,7 @@ def test_configure_fp_limit(env, config, make_repo, source, limit, count):
        pr = prod.make_pr(target=source, head='branch')
        prod.post_status(c, 'success', 'legal/cla')
        prod.post_status(c, 'success', 'ci/runbot')
        pr.post_comment(f'hansen r+\n{bot_name} up to {limit}', config['role_reviewer']['token'])
        pr.post_comment(f'hansen r+ up to {limit}', config['role_reviewer']['token'])
    env.run_crons()
    with prod:
        prod.post_status(f'staging.{source}', 'success', 'legal/cla')
@ -38,14 +37,13 @@ def test_ignore(env, config, make_repo):
    to target
    """
    prod, other = make_basic(env, config, make_repo)
    bot_name = env['runbot_merge.project'].search([]).fp_github_name
    branch_a = env['runbot_merge.branch'].search([('name', '=', 'a')])
    with prod:
        [c] = prod.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/mybranch')
        pr = prod.make_pr(target='a', head='mybranch')
        prod.post_status(c, 'success', 'legal/cla')
        prod.post_status(c, 'success', 'ci/runbot')
        pr.post_comment('hansen r+\n%s ignore' % bot_name, config['role_reviewer']['token'])
        pr.post_comment('hansen r+ ignore', config['role_reviewer']['token'])
    env.run_crons()
    pr_id = env['runbot_merge.pull_requests'].search([('number', '=', pr.number)])
    assert pr_id.limit_id == branch_a
@ -67,13 +65,12 @@ def test_disable(env, config, make_repo, users):
    """
    prod, other = make_basic(env, config, make_repo)
    project = env['runbot_merge.project'].search([])
    bot_name = project.fp_github_name
    with prod:
        [c] = prod.make_commits('a', Commit('c 0', tree={'0': '0'}), ref='heads/branch0')
        pr = prod.make_pr(target='a', head='branch0')
        prod.post_status(c, 'success', 'legal/cla')
        prod.post_status(c, 'success', 'ci/runbot')
        pr.post_comment('hansen r+\n%s up to b' % bot_name, config['role_reviewer']['token'])
        pr.post_comment('hansen r+ up to b', config['role_reviewer']['token'])

        [c] = prod.make_commits('a', Commit('c 1', tree={'1': '1'}), ref='heads/branch1')
        pr = prod.make_pr(target='a', head='branch1')
@ -94,30 +91,28 @@ def test_disable(env, config, make_repo, users):
        assert p.parent_id == _1
        assert p.target.name == 'c'

    project.fp_github_token = config['role_other']['token']
    bot_name = project.fp_github_name
    with prod:
        [c] = prod.make_commits('a', Commit('c 2', tree={'2': '2'}), ref='heads/branch2')
        pr = prod.make_pr(target='a', head='branch2')
        prod.post_status(c, 'success', 'legal/cla')
        prod.post_status(c, 'success', 'ci/runbot')
        pr.post_comment(f'hansen r+\n{bot_name} up to', config['role_reviewer']['token'])
        pr.post_comment(f'{bot_name} up to b', config['role_reviewer']['token'])
        pr.post_comment(f'{bot_name} up to foo', config['role_reviewer']['token'])
        pr.post_comment(f'{bot_name} up to c', config['role_reviewer']['token'])
        pr.post_comment('hansen r+ up to', config['role_reviewer']['token'])
        pr.post_comment('hansen up to b', config['role_reviewer']['token'])
        pr.post_comment('hansen up to foo', config['role_reviewer']['token'])
        pr.post_comment('hansen up to c', config['role_reviewer']['token'])
    env.run_crons()

    # use a set because git webhooks delays might lead to mis-ordered
    # responses and we don't care that much
    assert set(pr.comments) == {
        (users['reviewer'], f"hansen r+\n{bot_name} up to"),
        (users['other'], "@{reviewer} please provide a branch to forward-port to.".format_map(users)),
        (users['reviewer'], f"{bot_name} up to b"),
        (users['other'], "@{reviewer} branch 'b' is disabled, it can't be used as a forward port target.".format_map(users)),
        (users['reviewer'], f"{bot_name} up to foo"),
        (users['other'], "@{reviewer} there is no branch 'foo', it can't be used as a forward port target.".format_map(users)),
        (users['reviewer'], f"{bot_name} up to c"),
        (users['other'], "Forward-porting to 'c'."),
        (users['reviewer'], "hansen r+ up to"),
        (users['user'], "@{reviewer} please provide a branch to forward-port to.".format_map(users)),
        (users['reviewer'], "hansen up to b"),
        (users['user'], "@{reviewer} branch 'b' is disabled, it can't be used as a forward port target.".format_map(users)),
        (users['reviewer'], "hansen up to foo"),
        (users['user'], "@{reviewer} there is no branch 'foo', it can't be used as a forward port target.".format_map(users)),
        (users['reviewer'], "hansen up to c"),
        (users['user'], "Forward-porting to 'c'."),
        seen(env, pr, users),
    }
@ -127,7 +122,6 @@ def test_limit_after_merge(env, config, make_repo, users):
    reviewer = config['role_reviewer']['token']
    branch_b = env['runbot_merge.branch'].search([('name', '=', 'b')])
    branch_c = env['runbot_merge.branch'].search([('name', '=', 'c')])
    bot_name = env['runbot_merge.project'].search([]).fp_github_name
    with prod:
        [c] = prod.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/abranch')
        pr1 = prod.make_pr(target='a', head='abranch')
@ -145,15 +139,15 @@ def test_limit_after_merge(env, config, make_repo, users):
    assert p1.limit_id == p2.limit_id == branch_c, "check that limit is correctly set"
    pr2 = prod.get_pr(p2.number)
    with prod:
        pr1.post_comment(bot_name + ' up to b', reviewer)
        pr2.post_comment(bot_name + ' up to b', reviewer)
        pr1.post_comment('hansen up to b', reviewer)
        pr2.post_comment('hansen up to b', reviewer)
    env.run_crons()

    assert p1.limit_id == p2.limit_id == branch_b
    assert pr1.comments == [
        (users['reviewer'], "hansen r+"),
        seen(env, pr1, users),
        (users['reviewer'], f'{bot_name} up to b'),
        (users['reviewer'], 'hansen up to b'),
        (users['user'], "Forward-porting to 'b'."),
        (users['user'], f"Forward-porting to 'b' (from {p2.display_name})."),
    ]
@ -164,7 +158,7 @@ This PR targets b and is part of the forward-port chain. Further PRs will be cre

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
"""),
        (users['reviewer'], f'{bot_name} up to b'),
        (users['reviewer'], 'hansen up to b'),
        (users['user'], f"Forward-porting {p1.display_name} to 'b'."),
    ]
@ -181,16 +175,12 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    assert p2.source_id == p1

    with prod:
        pr2.post_comment(f'{bot_name} up to c', reviewer)
        pr2.post_comment('hansen up to c', reviewer)
    env.run_crons()

    assert pr2.comments[4:] == [
        (users['user'], "@%s @%s this PR was modified / updated and has become a normal PR. "
                        "It should be merged the normal way (via @%s)" % (
            users['user'], users['reviewer'],
            p2.repository.project_id.github_prefix
        )),
        (users['reviewer'], f'{bot_name} up to c'),
        (users['user'], f"@{users['user']} @{users['reviewer']} this PR was modified / updated and has become a normal PR. It must be merged directly."),
        (users['reviewer'], 'hansen up to c'),
        (users['user'], "Forward-porting to 'c'."),
    ]
    with prod:
@ -207,7 +197,7 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    assert p3
    pr3 = prod.get_pr(p3.number)
    with prod:
        pr3.post_comment(f"{bot_name} up to c", reviewer)
        pr3.post_comment("hansen up to c", reviewer)
    env.run_crons()
    assert pr3.comments == [
        seen(env, pr3, users),
@ -215,11 +205,11 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
@{users['user']} @{users['reviewer']} this PR targets c and is the last of the forward-port chain.

To merge the full chain, use
> @{p1.repository.project_id.fp_github_name} r+
> @hansen r+

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
"""),
        (users['reviewer'], f"{bot_name} up to c"),
        (users['reviewer'], "hansen up to c"),
        (users['user'], f"Forward-porting {p2.display_name} to 'c'."),
    ]
    # 7 of previous check, plus r+
@ -268,7 +258,7 @@ def test_post_merge(
    from_id = PRs.search(update_from(source.id))
    from_ = prod.get_pr(from_id.number)
    with prod:
        from_.post_comment(f'{project.fp_github_name} up to {limit}', reviewer)
        from_.post_comment(f'hansen up to {limit}', reviewer)
    env.run_crons()

    # there should always be a comment on the source and root indicating how
@ -314,7 +304,7 @@ def test_resume_fw(env, post_merge, users, config, branches, mode):
    # fetch source PR
    [source] = PRs.search([('source_id', '=', False)])
    with prod:
        prod.get_pr(source.number).post_comment(f'{project.fp_github_name} up to 5', reviewer)
        prod.get_pr(source.number).post_comment('hansen up to 5', reviewer)
    # validate the forward ports for "child", "root", and "parent" so "current"
    # exists and we have one more target
    for branch in map(str, range(2, 5+1)):
@ -336,12 +326,11 @@ def test_resume_fw(env, post_merge, users, config, branches, mode):
    numbers = range(5 if mode == 'mergetip' else 2, 5 + 1)
    with prod:
        for number in numbers:
            prod.get_pr(number).post_comment(f'{project.github_prefix} r+', reviewer)
            prod.get_pr(number).post_comment('hansen r+', reviewer)
    env.run_crons()
    with prod:
        for target in numbers:
            pr = PRs.search([('target.name', '=', str(target))])
            print(pr.display_name, pr.state, pr.staging_id)
            prod.post_status(f'staging.{target}', 'success')
    env.run_crons()
    for number in numbers:
@ -349,7 +338,7 @@ def test_resume_fw(env, post_merge, users, config, branches, mode):

    from_ = prod.get_pr(source.number)
    with prod:
        from_.post_comment(f'{project.fp_github_name} up to 6', reviewer)
        from_.post_comment('hansen up to 6', reviewer)
    env.run_crons()

    if mode == 'failbump':
@ -419,7 +408,6 @@ def post_merge(env, config, users, make_repo, branches):
        'github_prefix': 'hansen',
        'fp_github_token': config['github']['token'],
        'fp_github_name': 'herbert',
        'fp_github_email': 'hb@example.com',
        'branch_ids': [
            (0, 0, {'name': str(i), 'sequence': 1000 - (i * 10)})
            for i in branches
@ -439,7 +427,6 @@ def post_merge(env, config, users, make_repo, branches):
        'review_rights': [(0, 0, {'repository_id': proj.repo_ids.id, 'review': True})]
    })

    mbot = proj.github_prefix
    reviewer = config['role_reviewer']['token']
    # merge the source PR
    source_target = str(branches[0])
@ -448,7 +435,7 @@ def post_merge(env, config, users, make_repo, branches):
        pr1 = prod.make_pr(target=source_target, head=c, title="a title")

        prod.post_status(c, 'success')
        pr1.post_comment(f'{mbot} r+', reviewer)
        pr1.post_comment('hansen r+', reviewer)
    env.run_crons()
    with prod:
        prod.post_status(f'staging.{source_target}', 'success')
@@ -12,39 +12,45 @@ def test_override_inherited(env, config, make_repo, users):
    """
    repo, other = make_basic(env, config, make_repo)
    project = env['runbot_merge.project'].search([])
+   project.repo_ids.status_ids = [(5, 0, 0), (0, 0, {'context': 'default'})]
    env['res.partner'].search([('github_login', '=', users['reviewer'])])\
        .write({'override_rights': [(0, 0, {
            'repository_id': project.repo_ids.id,
-           'context': 'ci/runbot',
+           'context': 'default',
        })]})

    with repo:
-       repo.make_commits('a', Commit('C', tree={'a': '0'}), ref='heads/change')
+       repo.make_commits('a', Commit('pr 1', tree={'a': '0'}), ref='heads/change')
        pr = repo.make_pr(target='a', head='change')
-       repo.post_status('change', 'success', 'legal/cla')
-       pr.post_comment('hansen r+ override=ci/runbot', config['role_reviewer']['token'])
+       pr.post_comment('hansen r+ override=default', config['role_reviewer']['token'])
    env.run_crons()

    original = env['runbot_merge.pull_requests'].search([('repository.name', '=', repo.name), ('number', '=', pr.number)])
    assert original.state == 'ready'
    assert original.limit_id.name == 'c'

    with repo:
-       repo.post_status('staging.a', 'success', 'legal/cla')
-       repo.post_status('staging.a', 'success', 'ci/runbot')
+       repo.post_status('staging.a', 'success')
    env.run_crons()

-   pr0_id, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
+   pr0_id, pr1_id, pr2_id = env['runbot_merge.pull_requests'].search([], order='number')
    assert pr0_id == original
    assert pr1_id.parent_id, pr0_id
    assert pr0_id.target.name == 'a'

    with repo:
        repo.post_status(pr1_id.head, 'success', 'legal/cla')
    env.run_crons()
    assert pr1_id.parent_id == pr0_id
    assert pr1_id.number == 2
    assert pr1_id.target.name == 'b'
    assert pr1_id.state == 'validated'
-   assert statuses(pr1_id) == {'ci/runbot': 'success', 'legal/cla': 'success'}
+   assert statuses(pr1_id) == {'default': 'success'}

+   assert pr2_id.parent_id == pr1_id
+   assert pr2_id.target.name == 'c'
+   assert pr2_id.state == 'validated'
+   assert statuses(pr2_id) == {'default': 'success'}

    # now we edit the child PR
-   pr_repo, pr_ref = repo.get_pr(pr1_id.number).branch
+   pr1 = repo.get_pr(pr1_id.number)
+   pr_repo, pr_ref = pr1.branch
    with pr_repo:
        pr_repo.make_commits(
            pr1_id.target.name,
@@ -56,6 +62,12 @@ def test_override_inherited(env, config, make_repo, users):
    assert pr1_id.state == 'opened'
    assert not pr1_id.parent_id
    assert statuses(pr1_id) == {}, "should not have any status left"
+   assert statuses(pr2_id) == {}
+
+   with repo:
+       pr1.post_comment('hansen override=default', config['role_reviewer']['token'])
+   assert statuses(pr1_id) == {'default': 'success'}
+   assert statuses(pr2_id) == {'default': 'success'}

def test_override_combination(env, config, make_repo, users):
    """ A forwardport should inherit its parents' overrides, until it's edited.
@@ -161,20 +161,19 @@ def test_straightforward_flow(env, config, make_repo, users):
* %s

To merge the full chain, use
-> @%s r+
+> @hansen r+

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
""" % (
    users['other'], users['reviewer'],
    pr1.display_name,
-   project.fp_github_name
)),
]
with prod:
    prod.post_status(pr2.head, 'success', 'ci/runbot')
    prod.post_status(pr2.head, 'success', 'legal/cla')

-   pr2_remote.post_comment('%s r+' % project.fp_github_name, config['role_reviewer']['token'])
+   pr2_remote.post_comment('hansen r+', config['role_reviewer']['token'])

env.run_crons()
@@ -317,7 +316,6 @@ def test_empty(env, config, make_repo, users):
    project = env['runbot_merge.project'].search([])
    project.write({
        'fp_github_name': False,
-       'fp_github_email': False,
        'fp_github_token': config['role_other']['token'],
    })
    assert project.fp_github_name == users['other']
@@ -493,7 +491,7 @@ def test_access_rights(env, config, make_repo, users, author, reviewer, delegate
    prod.post_status(pr2.head, 'success', 'ci/runbot')
    prod.post_status(pr2.head, 'success', 'legal/cla')
    prod.get_pr(pr2.number).post_comment(
-       '%s r+' % project.fp_github_name,
+       'hansen r+',
        token=config['role_' + reviewer]['token']
    )
    env.run_crons()
@@ -587,10 +585,10 @@ def test_delegate_fw(env, config, make_repo, users):
    (users['user'], '''@{self_reviewer} @{reviewer} this PR targets c and is the last of the forward-port chain.

To merge the full chain, use
-> @{bot} r+
+> @hansen r+

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
-'''.format(bot=pr1_id.repository.project_id.fp_github_name, **users)),
+'''.format_map(users)),
    (users['other'], 'hansen r+')
]
@@ -630,7 +628,7 @@ def test_redundant_approval(env, config, make_repo, users):
    with prod:
        pr1.post_comment('hansen r+', config['role_reviewer']['token'])
    with prod:
-       pr2.post_comment(f'{project.fp_github_name} r+', config['role_reviewer']['token'])
+       pr2.post_comment('hansen r+', config['role_reviewer']['token'])
    env.run_crons()

    assert pr1.comments == [
@@ -742,7 +740,7 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    # ok main1 PRs
    with main1:
        validate_all([main1], [pr1c.head])
-       main1.get_pr(pr1c.number).post_comment('%s r+' % project.fp_github_name, config['role_reviewer']['token'])
+       main1.get_pr(pr1c.number).post_comment('hansen r+', config['role_reviewer']['token'])
    env.run_crons()

    # check that the main1 PRs are ready but blocked on the main2 PRs
@@ -754,7 +752,7 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    # ok main2 PRs
    with main2:
        validate_all([main2], [pr2c.head])
-       main2.get_pr(pr2c.number).post_comment('%s r+' % project.fp_github_name, config['role_reviewer']['token'])
+       main2.get_pr(pr2c.number).post_comment('hansen r+', config['role_reviewer']['token'])
    env.run_crons()

    env['runbot_merge.stagings'].search([]).mapped('target.display_name')
@@ -862,27 +860,8 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    with prod:
        pr1.open()
    assert pr1_id.state == 'validated'
    env.run_crons()
-   assert pr1.comments[-1] == (
-       users['user'],
-       "@{} @{} this PR was closed then reopened. "
-       "It should be merged the normal way (via @{})".format(
-           users['user'],
-           users['reviewer'],
-           project.github_prefix,
-       )
-   )
-
-   with prod:
-       pr1.post_comment(f'{project.fp_github_name} r+', config['role_reviewer']['token'])
-   env.run_crons()
-   assert pr1.comments[-1] == (
-       users['user'],
-       "@{} I can only do this on unmodified forward-port PRs, ask {}.".format(
-           users['reviewer'],
-           project.github_prefix,
-       ),
-   )
    assert not pr1_id.parent_id
    assert not pr2_id.parent_id

def test_close_disabled(self, env, make_repo, users, config):
    """ If an fwport's target is disabled and its branch is closed, it
@@ -937,7 +916,7 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
* {pr2_id.display_name}

To merge the full chain, use
-> @herbert r+
+> @hansen r+

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
""".format(pr2_id=pr2_id, **users)),
@@ -1063,13 +1042,13 @@ class TestRecognizeCommands:
            ('number', '=', pr.number),
        ])

+   # FIXME: remove / merge into mergebot tests
    def test_botname_casing(self, env, config, make_repo):
        """ Test that the botname is case-insensitive as people might write
        bot names capitalised or titlecased or uppercased or whatever
        """
        repo, pr, pr_id = self.make_pr(env, config, make_repo)
        assert pr_id.state == 'opened'
-       botname = env['runbot_merge.project'].search([]).fp_github_name
        [a] = env['runbot_merge.branch'].search([
            ('name', '=', 'a')
        ])
@@ -1078,27 +1057,27 @@ class TestRecognizeCommands:
        ])

        names = [
-           botname,
-           botname.upper(),
-           botname.capitalize(),
-           sPeNgBaB(botname),
+           "hansen",
+           "HANSEN",
+           "Hansen",
+           sPeNgBaB("hansen"),
        ]

        for n in names:
            assert pr_id.limit_id == c
            with repo:
-               pr.post_comment('@%s up to a' % n, config['role_reviewer']['token'])
+               pr.post_comment(f'@{n} up to a', config['role_reviewer']['token'])
            assert pr_id.limit_id == a
            # reset state
            pr_id.write({'limit_id': c.id})

+   # FIXME: remove / merge into mergebot tests
    @pytest.mark.parametrize('indent', ['', '\N{SPACE}', '\N{SPACE}'*4, '\N{TAB}'])
    def test_botname_indented(self, env, config, make_repo, indent):
        """ matching botname should ignore leading whitespaces
        """
        repo, pr, pr_id = self.make_pr(env, config, make_repo)
        assert pr_id.state == 'opened'
-       botname = env['runbot_merge.project'].search([]).fp_github_name
        [a] = env['runbot_merge.branch'].search([
            ('name', '=', 'a')
        ])
@@ -1108,5 +1087,5 @@ class TestRecognizeCommands:

        assert pr_id.limit_id == c
        with repo:
-           pr.post_comment('%s@%s up to a' % (indent, botname), config['role_reviewer']['token'])
+           pr.post_comment(f'{indent}@hansen up to a', config['role_reviewer']['token'])
        assert pr_id.limit_id == a
@@ -25,7 +25,7 @@ def test_update_pr(env, config, make_repo, users, merge_parent) -> None:
    })
    with prod:
        prod.make_commits('c', Commit('1111', tree={'i': 'a'}), ref='heads/d')

    with prod:
        [p_1] = prod.make_commits(
            'a',
@@ -108,15 +108,6 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    assert pr1_id.head == new_c != pr1_head, "the FP PR should be updated"
    assert not pr1_id.parent_id, "the FP PR should be detached from the original"
-   assert pr1_remote.comments == [
-       seen(env, pr1_remote, users),
-       fp_intermediate, ci_warning, ci_warning,
-       (users['user'], "@%s @%s this PR was modified / updated and has become a normal PR. "
-                       "It should be merged the normal way (via @%s)" % (
-           users['user'], users['reviewer'],
-           pr1_id.repository.project_id.github_prefix
-       )),
-   ], "users should be warned that the PR has become non-FP"
    # NOTE: should the followup PR wait for pr1 CI or not?
    assert pr2_id.head != pr2_head
    assert pr2_id.parent_id == pr1_id, "the followup PR should still be linked"
@@ -132,7 +123,7 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
        'h': 'a',
        'x': '5'
    }, "the followup FP should also have the update"

    with prod:
        prod.post_status(pr2_id.head, 'success', 'ci/runbot')
        prod.post_status(pr2_id.head, 'success', 'legal/cla')
@@ -155,7 +146,7 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
    pr3_id.write({'parent_id': False, 'detach_reason': "testing"})
    # pump feedback messages
    env.run_crons()

    pr3 = prod.get_pr(pr3_id.number)
    assert pr3.comments == [
        seen(env, pr3, users),
@@ -164,14 +155,13 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
* {pr2_id.display_name}

To merge the full chain, use
-> @{pr3_id.repository.project_id.fp_github_name} r+
+> @hansen r+

More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
"""),
    (users['user'], f"@{users['user']} @{users['reviewer']} this PR was "
                    f"modified / updated and has become a normal PR. It "
-                   f"should be merged the normal way "
-                   f"(via @{pr3_id.repository.project_id.github_prefix})"
+                   f"must be merged directly."
    )
]
@@ -197,7 +187,6 @@ More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
        f"won't cross."),
    ]

def test_update_merged(env, make_repo, config, users):
    """ Strange things happen when an FP gets closed / merged but then its
    parent is modified and the forwardport tries to update the (now merged)
@@ -322,7 +311,6 @@ def test_duplicate_fw(env, make_repo, setreviewers, config, users):
    'github_prefix': 'hansen',
    'fp_github_token': config['github']['token'],
    'fp_github_name': 'herbert',
-   'fp_github_email': 'hb@example.com',
    'branch_ids': [
        (0, 0, {'name': 'master', 'sequence': 0}),
        (0, 0, {'name': 'v3', 'sequence': 1}),
@@ -377,7 +365,7 @@ def test_duplicate_fw(env, make_repo, setreviewers, config, users):
    with repo:
        repo.make_commits('v2', Commit('c0', tree={'z': 'b'}), ref=prv2.ref, make=False)
    env.run_crons()
-   assert pr_ids.mapped('state') == ['merged', 'opened', 'validated', 'validated']
+   assert pr_ids.mapped('state') == ['merged', 'opened', 'opened', 'opened']
    assert repo.read_tree(repo.commit(prv2_id.head)) == {'f': 'c', 'h': 'a', 'z': 'b'}
    assert repo.read_tree(repo.commit(prv3_id.head)) == {'f': 'd', 'i': 'a', 'z': 'b'}
    assert repo.read_tree(repo.commit(prmaster_id.head)) == {'f': 'e', 'z': 'b'}
@@ -26,7 +26,6 @@ def make_basic(env, config, make_repo, *, fp_token, fp_remote):
    'github_prefix': 'hansen',
    'fp_github_token': fp_token and config['github']['token'],
    'fp_github_name': 'herbert',
-   'fp_github_email': 'hb@example.com',
    'branch_ids': [
        (0, 0, {'name': 'a', 'sequence': 2}),
        (0, 0, {'name': 'b', 'sequence': 1}),
@@ -181,7 +180,7 @@ def test_failed_staging(env, config, make_repo):
    with prod:
        prod.post_status(pr3_id.head, 'success', 'legal/cla')
        prod.post_status(pr3_id.head, 'success', 'ci/runbot')
-       pr3.post_comment('%s r+' % proj.fp_github_name, reviewer)
+       pr3.post_comment('hansen r+', reviewer)
    env.run_crons()

    prod.commit('staging.c')
@@ -265,7 +264,6 @@ class TestNotAllBranches:
    'github_prefix': 'hansen',
    'fp_github_token': config['github']['token'],
    'fp_github_name': 'herbert',
-   'fp_github_email': 'hb@example.com',
    'branch_ids': [
        (0, 0, {'name': 'a', 'sequence': 2}),
        (0, 0, {'name': 'b', 'sequence': 1}),
@@ -318,7 +316,7 @@ class TestNotAllBranches:
    with a:
        a.post_status(pr2.head, 'success', 'ci/runbot')
        a.get_pr(pr2.number).post_comment(
-           '%s r+' % project.fp_github_name,
+           'hansen r+',
            config['role_reviewer']['token'])
    env.run_crons()
    assert pr1.staging_id
@@ -357,7 +355,7 @@ class TestNotAllBranches:
    with b:
        b.post_status(pr1.head, 'success', 'ci/runbot')
        b.get_pr(pr1.number).post_comment(
-           '%s r+' % project.fp_github_name,
+           'hansen r+',
            config['role_reviewer']['token'])
    env.run_crons()
    with a, b:
@@ -580,7 +578,7 @@ def test_new_intermediate_branch(env, config, make_repo):
    with prod, prod2:
        for pr in fps.filtered(lambda p: p.target.name == 'c'):
            get_repo(pr).get_pr(pr.number).post_comment(
-               '%s r+' % project.fp_github_name,
+               'hansen r+',
                config['role_reviewer']['token'])
    assert all(p.state == 'merged' for p in PRs.browse(sources)),\
        "all sources should be merged"
@@ -627,7 +625,7 @@ def test_author_can_close_via_fwbot(env, config, make_repo):
    pr.open(other_token)
    prod.post_status(c, 'success', 'legal/cla')
    prod.post_status(c, 'success', 'ci/runbot')
-   pr.post_comment('%s close' % project.fp_github_name, other_token)
+   pr.post_comment('hansen close', other_token)
    pr.post_comment('hansen r+', config['role_reviewer']['token'])
    env.run_crons()
    assert pr.state == 'open'
@@ -647,7 +645,7 @@ def test_author_can_close_via_fwbot(env, config, make_repo):
    pr1.close(other_token)
    # use can close via fwbot
    with prod:
-       pr1.post_comment('%s close' % project.fp_github_name, other_token)
+       pr1.post_comment('hansen close', other_token)
    env.run_crons()
    assert pr1.state == 'closed'
    assert pr1_id.state == 'closed'
@@ -660,7 +658,7 @@ def test_skip_ci_all(env, config, make_repo):
    pr = prod.make_pr(target='a', head='change')
    prod.post_status(pr.head, 'success', 'legal/cla')
    prod.post_status(pr.head, 'success', 'ci/runbot')
-   pr.post_comment('%s skipci' % project.fp_github_name, config['role_reviewer']['token'])
+   pr.post_comment('hansen fw=skipci', config['role_reviewer']['token'])
    pr.post_comment('hansen r+', config['role_reviewer']['token'])
    env.run_crons()
    assert env['runbot_merge.pull_requests'].search([
@@ -703,8 +701,8 @@ def test_skip_ci_next(env, config, make_repo):
    pr0_id, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
    with prod:
        prod.get_pr(pr1_id.number).post_comment(
-           '%s skipci' % project.fp_github_name,
-           config['role_user']['token']
+           'hansen fw=skipci',
+           config['role_reviewer']['token']
        )
    assert pr0_id.fw_policy == 'skipci'
    env.run_crons()
@@ -75,7 +75,6 @@ def make_basic(env, config, make_repo, *, reponame='proj', project_name='myproje
    'github_prefix': 'hansen',
    'fp_github_token': config['github']['token'],
    'fp_github_name': 'herbert',
-   'fp_github_email': 'hb@example.com',
    'branch_ids': [
        (0, 0, {'name': 'a', 'sequence': 100}),
        (0, 0, {'name': 'b', 'sequence': 80}),
@@ -145,3 +144,7 @@ def part_of(label, pr_id, *, separator='\n\n'):
    """ Adds the "part-of" pseudo-header in the footer.
    """
    return f'{label}{separator}Part-of: {pr_id.display_name}'
+
+def ensure_one(records):
+    assert len(records) == 1
+    return records
@@ -1,7 +1,7 @@
{
    'name': 'merge bot',
-   'version': '1.9',
-   'depends': ['contacts', 'website'],
+   'version': '1.11',
+   'depends': ['contacts', 'mail', 'website'],
    'data': [
        'security/security.xml',
        'security/ir.model.access.csv',
57  runbot_merge/changelog/2023-12/commands.md  Normal file
@@ -0,0 +1,57 @@
CHG: complete rework of the commands system

# fun is dead: strict commands parsing

Historically the bots would apply whatever looked like a command and ignore the
rest. This led to people sending novels to the bot, then being surprised the bot
found a command in the mess.

The bots now ignore all lines which contain any non-command. Example:

> @robodoo r+ when green darling

Previously, the bot would apply the `r+` and ignore the rest. Now the bot will
ignore everything and reply with

> unknown command "when"

# fwbot is dead

The mergebot (@robodoo) is now responsible for the old fwbot commands:

- close, ignore, up to, ... work as they ever did, just with robodoo
- `robodoo r+` now approves the parents if the current PR is a forward port
- a specific PR can be approved even in forward ports by providing its number
  e.g. `robodoo r=45328` will approve just PR 45328, if that is the PR the
  comment is being posted on or one of its parents
- the approval of forward ports won't skip over un-approvable PRs anymore
- the rights of the original author have been restricted slightly: they can
  only approve the direct descendants of merged PRs, so if one of the parents
  has been modified and is not merged yet, the original author can't approve,
  nor can they approve the modified PR, or a conflicting PR which has to get
  fixed (?)

# no more p=<number>

The old priorities command was a tangle of multiple concerns, not all of which
were always desired or applicable. These tangles have been split along their
various axes.

# listing

The new commands are:

- `default`, sets the staging priority back to the default
- `priority`, sets the staging priority to elevated, on staging these PRs are
  staged first, then the `normal` PRs are added
- `alone`, sets the staging priority to high, these PRs are staged before
  considering splits, and only `alone` PRs are staged together even if the batch
  is not full
- `fw=default`, processes forward ports normally
- `fw=skipci`, once the current PR has been merged creates all the forward ports
  without waiting for each to have valid statuses
- `fw=skipmerge`, immediately create all forward ports even if the base pull
  request has not even been merged yet
- `skipchecks`, makes the entire batch (target PR and any linked PR) immediately
  ready, bypassing statuses and reviews
- `cancel`, cancels the staging on the target branch, if any
@@ -288,7 +288,8 @@ def handle_pr(env, event):
    )

    pr_obj.write({
        'state': 'opened',
+       'reviewed_by': False,
        'error': False,
        'head': pr['head']['sha'],
        'squash': pr['commits'] == 1,
    })
@@ -327,11 +328,10 @@ def handle_pr(env, event):
        close=True,
        message=env.ref('runbot_merge.handle.pr.merged')._format(event=event),
    )

-   if pr_obj.state == 'closed':
+   elif pr_obj.closed:
        _logger.info('%s reopening %s', event['sender']['login'], pr_obj.display_name)
        pr_obj.write({
-           'state': 'opened',
+           'closed': False,
            # updating the head triggers a revalidation
            'head': pr['head']['sha'],
            'squash': pr['commits'] == 1,
@@ -40,7 +40,7 @@ runbot_merge.command.approve.failure,@{user} you may want to rebuild or fix this

user: github login of comment sender
pr: pr object to which the command was sent"
-runbot_merge.command.unapprove.p0,"PR priority reset to 1, as pull requests with priority 0 ignore review state.","Responds to r- of pr in p=0.
+runbot_merge.command.unapprove.p0,"Skipchecks removed due to r-.","Responds to r- of pr in skipchecks.

user: github login of comment sender
pr: pr object to which the command was sent"
@@ -107,16 +107,13 @@ pr: PR where update followup conflict happened
previous: parent PR which triggered the followup
stdout: markdown-formatted stdout of git, if any
stderr: markdown-formatted stderr of git, if any"
-runbot_merge.forwardport.update.detached,{pr.ping}this PR was modified / updated and has become a normal PR. It should be merged the normal way (via @{pr.repository.project_id.github_prefix}),"Comment when a forwardport PR gets updated, documents that the PR now needs to be merged the “normal” way.
+runbot_merge.forwardport.update.detached,{pr.ping}this PR was modified / updated and has become a normal PR. It must be merged directly.,"Comment when a forwardport PR gets updated, documents that the PR now needs to be merged the “normal” way.

pr: the pr in question "
runbot_merge.forwardport.update.parent,{pr.ping}child PR {child.display_name} was modified / updated and has become a normal PR. This PR (and any of its parents) will need to be merged independently as approvals won't cross.,"Sent to an open PR when its direct child has been detached.

pr: the pr
child: its detached child"
-runbot_merge.forwardport.reopen.detached,{pr.ping}this PR was closed then reopened. It should be merged the normal way (via @{pr.repository.project_id.github_prefix}),"Comment when a forwardport PR gets closed then reopened, documents that the PR is now in a detached state.
-
-pr: the pr in question"
runbot_merge.forwardport.ci.failed,{pr.ping}{ci} failed on this forward-port PR,"Comment when CI fails on a forward-port PR (which thus won't port any further, for now).

pr: the pr in question
@@ -128,7 +125,7 @@ linked: the linked PR with a different next target
next: next target for the current pr
other: next target for the other pr"
runbot_merge.forwardport.failure.conflict,"{pr.ping}the next pull request ({new.display_name}) is in conflict. You can merge the chain up to here by saying
-> @{pr.repository.project_id.fp_github_name} r+
+> @{pr.repository.project_id.github_prefix} r+
{footer}","Comment when a forward port was created but is in conflict, warns of that & gives instructions for current PR.

pr: the pr which was just forward ported
@@ -163,7 +160,7 @@ footer: some footer text"
runbot_merge.forwardport.final,"{pr.ping}this PR targets {pr.target.name} and is the last of the forward-port chain{containing}
{ancestors}
To merge the full chain, use
-> @{pr.repository.project_id.fp_github_name} r+
+> @{pr.repository.project_id.github_prefix} r+
{footer}","Comment when a forward port was created and is the last of a sequence (target the limit branch).

pr: the new forward port
11  runbot_merge/migrations/15.0.1.10/pre-migration.py  Normal file
@@ -0,0 +1,11 @@
""" Migration for the unified commands parser, fp_github fields moved from
forwardport to mergebot (one of them is removed but we might not care)
"""
def migrate(cr, version):
    cr.execute("""
        UPDATE ir_model_data
        SET module = 'runbot_merge'
        WHERE module = 'forwardport'
        AND model = 'ir.model.fields'
        AND name in ('fp_github_token', 'fp_github_name')
    """)
124  runbot_merge/migrations/15.0.1.11/pre-migration.py  Normal file
@@ -0,0 +1,124 @@
def move_fields(cr, *names):
    cr.execute("""
        UPDATE ir_model_data
        SET module = 'runbot_merge'
        WHERE module = 'forwardport'
        AND model = 'runbot_merge_pull_requests'
        AND name IN %s
    """, [names])

def migrate(cr, version):
    # cleanup some old crap
    cr.execute("""
        ALTER TABLE runbot_merge_project_freeze
        DROP COLUMN IF EXISTS release_label,
        DROP COLUMN IF EXISTS bump_label
    """)

    # fw constraint moved to mergebot, alongside all the fields it constrains
    cr.execute("""
        UPDATE ir_model_data
        SET module = 'runbot_merge'
        WHERE module = 'forwardport'
        AND model = 'ir.model.constraint'
        AND name = 'constraint_runbot_merge_pull_requests_fw_constraint'
    """)
    move_fields(
        cr, 'merge_date', 'refname',
        'limit_id', 'source_id', 'parent_id', 'root_id', 'forwardport_ids',
        'detach_reason', 'fw_policy')

    # view depends on pr.state, which prevents changing the state column's type
    # we can just drop the view and it'll be recreated by the db update
    cr.execute("DROP VIEW runbot_merge_freeze_labels")
    # convert a few data types
    cr.execute("""
        CREATE TYPE runbot_merge_pull_requests_priority_type
        AS ENUM ('default', 'priority', 'alone');

        CREATE TYPE runbot_merge_pull_requests_state_type
        AS ENUM ('opened', 'closed', 'validated', 'approved', 'ready', 'merged', 'error');

        CREATE TYPE runbot_merge_pull_requests_merge_method_type
        AS ENUM ('merge', 'rebase-merge', 'rebase-ff', 'squash');

        CREATE TYPE runbot_merge_pull_requests_status_type
        AS ENUM ('pending', 'failure', 'success');

        ALTER TABLE runbot_merge_pull_requests
            ALTER COLUMN priority
                TYPE runbot_merge_pull_requests_priority_type
                USING CASE WHEN priority = 0
                    THEN 'alone'
                    ELSE 'default'
                END::runbot_merge_pull_requests_priority_type,
            ALTER COLUMN state
                TYPE runbot_merge_pull_requests_state_type
                USING state::runbot_merge_pull_requests_state_type,
            ALTER COLUMN merge_method
                TYPE runbot_merge_pull_requests_merge_method_type
                USING merge_method::runbot_merge_pull_requests_merge_method_type;
    """)

    cr.execute("""
        ALTER TABLE runbot_merge_pull_requests
        ADD COLUMN closed boolean not null default 'false',
        ADD COLUMN error boolean not null default 'false',
        ADD COLUMN skipchecks boolean not null default 'false',
        ADD COLUMN cancel_staging boolean not null default 'false',

        ADD COLUMN statuses text not null default '{}',
        ADD COLUMN statuses_full text not null default '{}',
        ADD COLUMN status runbot_merge_pull_requests_status_type not null default 'pending'
    """)
    # first pass: update all the new unconditional (or simple) fields
    cr.execute("""
        UPDATE runbot_merge_pull_requests p
        SET closed = state = 'closed',
            error = state = 'error',
            skipchecks = priority = 'alone',
            cancel_staging = priority = 'alone',
            fw_policy = CASE fw_policy WHEN 'ci' THEN 'default' ELSE fw_policy END,
            reviewed_by = CASE state
                -- old version did not reset reviewer on PR update
                WHEN 'opened' THEN NULL
                WHEN 'validated' THEN NULL
                -- if a PR predates the reviewed_by field, assign odoobot as reviewer
                WHEN 'merged' THEN coalesce(reviewed_by, 2)
                ELSE reviewed_by
            END,
            status = CASE state
                WHEN 'validated' THEN 'success'
                WHEN 'ready' THEN 'success'
                WHEN 'merged' THEN 'success'
                ELSE 'pending'
            END::runbot_merge_pull_requests_status_type
    """)

    # the rest only gets updated if we have a matching commit which is not
    # always the case
    cr.execute("""
        CREATE TEMPORARY TABLE parents ( id INTEGER not null, overrides jsonb not null );
        WITH RECURSIVE parent_chain AS (
            SELECT id, overrides::jsonb
            FROM runbot_merge_pull_requests
            WHERE parent_id IS NULL
            UNION ALL
            SELECT p.id, coalesce(pc.overrides || p.overrides::jsonb, pc.overrides, p.overrides::jsonb) as overrides
            FROM runbot_merge_pull_requests p
            JOIN parent_chain pc ON p.parent_id = pc.id
        )
        INSERT INTO parents SELECT * FROM parent_chain;
        CREATE INDEX ON parents (id);

        UPDATE runbot_merge_pull_requests p
        SET statuses = jsonb_pretty(c.statuses::jsonb)::text,
            statuses_full = jsonb_pretty(
                c.statuses::jsonb
                || coalesce((select overrides from parents where id = p.parent_id), '{}')
                || overrides::jsonb
            )::text
        FROM runbot_merge_commit c
        WHERE p.head = c.sha
    """)
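
The recursive CTE above computes, for every PR, its ancestors' overrides merged root-first so that a child's own overrides win on conflicts. A hypothetical Python equivalent of that merge (the dict-based model and function names here are illustrative):

```python
# Hypothetical model: overrides_by_pr maps a PR id to its own overrides,
# parent_by_pr maps a PR id to its parent id (absent/None for roots).
def effective_overrides(pr, overrides_by_pr, parent_by_pr):
    # walk up the parent chain, child first
    chain = []
    node = pr
    while node is not None:
        chain.append(node)
        node = parent_by_pr.get(node)
    merged = {}
    # apply root first so descendants override their ancestors,
    # mirroring `pc.overrides || p.overrides` in the SQL above
    for node in reversed(chain):
        merged.update(overrides_by_pr.get(node, {}))
    return merged
```
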
@@ -1,7 +1,9 @@
 import enum
-from collections import abc
+from collections.abc import Iterator
 from dataclasses import dataclass, field
-from typing import Iterator, List, Optional, Union
+from functools import partial
+from operator import contains
+from typing import Callable, List, Optional, Union


def tokenize(line: str) -> Iterator[str]:
@@ -21,6 +23,7 @@ def tokenize(line: str) -> Iterator[str]:
    if cur:
        yield cur


def normalize(it: Iterator[str]) -> Iterator[str]:
    """Converts shorthand tokens to expanded version
    """
@@ -31,14 +34,12 @@ def normalize(it: Iterator[str]) -> Iterator[str]:
        case 'r-':
            yield 'review'
            yield '-'
-       case 'p':
-           yield 'priority'
        case _:
            yield t


@dataclass
-class Peekable(abc.Iterator[str]):
+class Peekable(Iterator[str]):
    it: Iterator[str]
    memo: Optional[str] = None
@@ -57,22 +58,23 @@ class Peekable(abc.Iterator[str]):
        return self.memo


-def assert_next(it: Iterator[str], val: str):
-    if (actual := next(it, None)) != val:
-        raise CommandError(f"expected {val!r}, got {actual!r}")
-
-
class CommandError(Exception):
    pass


class Approve:
-   def __str__(self):
+   def __init__(self, ids: Optional[List[int]] = None) -> None:
+       self.ids = ids
+
+   def __str__(self) -> str:
+       if self.ids is not None:
+           ids = ','.join(map(str, self.ids))
+           return f"r={ids}"
        return 'review+'


class Reject:
-   def __str__(self):
+   def __str__(self) -> str:
        return 'review-'
@@ -82,17 +84,17 @@ class MergeMethod(enum.Enum):
    REBASE_MERGE = 'rebase-merge'
    MERGE = 'merge'

-   def __str__(self):
+   def __str__(self) -> str:
        return self.value


class Retry:
-   def __str__(self):
+   def __str__(self) -> str:
        return 'retry'


class Check:
-   def __str__(self):
+   def __str__(self) -> str:
        return 'check'
@ -100,7 +102,7 @@ class Check:
|
||||
class Override:
|
||||
statuses: List[str] = field(default_factory=list)
|
||||
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
return f"override={','.join(self.statuses)}"
|
||||
|
||||
|
||||
@ -108,124 +110,199 @@ class Override:
|
||||
class Delegate:
|
||||
users: List[str] = field(default_factory=list)
|
||||
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
if not self.users:
|
||||
return 'delegate+'
|
||||
return f"delegate={','.join(self.users)}"
|
||||
|
||||
|
||||
class Priority(enum.IntEnum):
|
||||
NUKE = 0
|
||||
PRIORITIZE = 1
|
||||
DEFAULT = 2
|
||||
class Priority(enum.Enum):
|
||||
DEFAULT = enum.auto()
|
||||
PRIORITY = enum.auto()
|
||||
ALONE = enum.auto()
|
||||
|
||||
def __str__(self) -> str:
|
||||
return self.name.lower()
|
||||
|
||||
|
||||
Command = Union[Approve, Reject, MergeMethod, Retry, Check, Override, Delegate, Priority]
|
||||
class CancelStaging:
|
||||
def __str__(self) -> str:
|
||||
return "cancel=staging"
|
||||
|
||||
|
||||
def parse_mergebot(line: str) -> Iterator[Command]:
|
||||
it = Peekable(normalize(tokenize(line)))
|
||||
for token in it:
|
||||
match token:
|
||||
case 'review':
|
||||
match next(it):
|
||||
case '+':
|
||||
yield Approve()
|
||||
case '-':
|
||||
yield Reject()
|
||||
case t:
|
||||
raise CommandError(f"unknown review {t!r}")
|
||||
case 'squash':
|
||||
yield MergeMethod.SQUASH
|
||||
case 'rebase-ff':
|
||||
yield MergeMethod.REBASE_FF
|
||||
case 'rebase-merge':
|
||||
yield MergeMethod.REBASE_MERGE
|
||||
case 'merge':
|
||||
yield MergeMethod.MERGE
|
||||
case 'retry':
|
||||
yield Retry()
|
||||
case 'check':
|
||||
yield Check()
|
||||
case 'override':
|
||||
assert_next(it, '=')
|
||||
ci = [next(it)]
|
||||
while it.peek() == ',':
|
||||
next(it)
|
||||
ci.append(next(it))
|
||||
yield Override(ci)
|
||||
case 'delegate':
|
||||
match next(it):
|
||||
case '+':
|
||||
yield Delegate()
|
||||
case '=':
|
||||
delegates = [next(it).lstrip('#@')]
|
||||
while it.peek() == ',':
|
||||
next(it)
|
||||
delegates.append(next(it).lstrip('#@'))
|
||||
yield Delegate(delegates)
|
||||
case d:
|
||||
raise CommandError(f"unknown delegation {d!r}")
|
||||
case 'priority':
|
||||
assert_next(it, '=')
|
||||
yield Priority(int(next(it)))
|
||||
case c:
|
||||
raise CommandError(f"unknown command {c!r}")
|
||||
class SkipChecks:
|
||||
def __str__(self) -> str:
|
||||
return 'skipchecks'
|
||||
|
||||
|
||||
class FWApprove:
|
||||
def __str__(self):
|
||||
return 'review+'
|
||||
class FW(enum.Enum):
|
||||
DEFAULT = enum.auto()
|
||||
SKIPCI = enum.auto()
|
||||
SKIPMERGE = enum.auto()
|
||||
|
||||
|
||||
@dataclass
|
||||
class CI:
|
||||
run: bool
|
||||
def __str__(self):
|
||||
return 'ci' if self.run else 'skipci'
|
||||
def __str__(self) -> str:
|
||||
return f'fw={self.name.lower()}'
|
||||
|
||||
|
||||
@dataclass
|
||||
class Limit:
|
||||
branch: Optional[str]
|
||||
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
if self.branch is None:
|
||||
return 'ignore'
|
||||
return f'up to {self.branch}'
|
||||
|
||||
|
||||
class Close:
|
||||
def __str__(self):
|
||||
def __str__(self) -> str:
|
||||
return 'close'
|
||||
|
||||
|
||||
FWCommand = Union[FWApprove, CI, Limit, Close]
|
||||
Command = Union[
|
||||
Approve,
|
||||
CancelStaging,
|
||||
Close,
|
||||
Check,
|
||||
Delegate,
|
||||
FW,
|
||||
Limit,
|
||||
MergeMethod,
|
||||
Override,
|
||||
Priority,
|
||||
Reject,
|
||||
Retry,
|
||||
SkipChecks,
|
||||
]
|
||||
|
||||
|
||||
def parse_forwardbot(line: str) -> Iterator[FWCommand]:
|
||||
it = Peekable(normalize(tokenize(line)))
|
||||
for token in it:
|
||||
match token:
|
||||
case 'review':
|
||||
match next(it):
|
||||
case '+':
|
||||
yield FWApprove()
|
||||
case t:
|
||||
raise CommandError(f"unknown review {t!r}", True)
|
||||
case 'ci':
|
||||
yield CI(True)
|
||||
case 'skipci':
|
||||
yield CI(False)
|
||||
case 'ignore':
|
||||
yield Limit(None)
|
||||
case 'up':
|
||||
assert_next(it, 'to')
|
||||
if limit := next(it, None):
|
||||
yield Limit(limit)
|
||||
class Parser:
|
||||
def __init__(self, line: str) -> None:
|
||||
self.it = Peekable(normalize(tokenize(line)))
|
||||
|
||||
def __iter__(self) -> Iterator[Command]:
|
||||
for token in self.it:
|
||||
if token.startswith("NOW"):
|
||||
# any number of ! is allowed
|
||||
if token.startswith("NOW!"):
|
||||
yield Priority.ALONE
|
||||
elif token == "NOW":
|
||||
yield Priority.PRIORITY
|
||||
else:
|
||||
raise CommandError("please provide a branch to forward-port to.", True)
|
||||
case 'close':
|
||||
yield Close()
|
||||
case c:
|
||||
raise CommandError(f"unknown command {c!r}", True)
|
||||
raise CommandError(f"unknown command {token!r}")
|
||||
yield SkipChecks()
|
||||
yield CancelStaging()
|
||||
continue
|
||||
|
||||
handler = getattr(type(self), f'parse_{token.replace("-", "_")}', None)
|
||||
if handler:
|
||||
yield handler(self)
|
||||
elif '!' in token:
|
||||
raise CommandError("skill issue, noob")
|
||||
else:
|
||||
raise CommandError(f"unknown command {token!r}")
|
||||
|
||||
def assert_next(self, val: str) -> None:
|
||||
if (actual := next(self.it, None)) != val:
|
||||
raise CommandError(f"expected {val!r}, got {actual!r}")
|
||||
|
||||
def check_next(self, val: str) -> bool:
|
||||
if self.it.peek() == val:
|
||||
self.it.memo = None # consume peeked value
|
||||
return True
|
||||
return False
|
||||
|
||||
def parse_review(self) -> Union[Approve, Reject]:
|
||||
t = next(self.it, None)
|
||||
if t == '+':
|
||||
return Approve()
|
||||
if t == '-':
|
||||
return Reject()
|
||||
if t == '=':
|
||||
t = next(self.it, None)
|
||||
if not (t and t.isdecimal()):
|
||||
raise CommandError(f"expected PR ID to approve, found {t!r}")
|
||||
|
||||
ids = [int(t)]
|
||||
while self.check_next(','):
|
||||
id = next(self.it, None)
|
||||
if id and id.isdecimal():
|
||||
ids.append(int(id))
|
||||
else:
|
||||
raise CommandError(f"expected PR ID to approve, found {id!r}")
|
||||
return Approve(ids)
|
||||
|
||||
raise CommandError(f"unknown review {t!r}")
|
||||
|
||||
def parse_squash(self) -> MergeMethod:
|
||||
return MergeMethod.SQUASH
|
||||
|
||||
def parse_rebase_ff(self) -> MergeMethod:
|
||||
return MergeMethod.REBASE_FF
|
||||
|
||||
def parse_rebase_merge(self) -> MergeMethod:
|
||||
return MergeMethod.REBASE_MERGE
|
||||
|
||||
def parse_merge(self) -> MergeMethod:
|
||||
return MergeMethod.MERGE
|
||||
|
||||
def parse_retry(self) -> Retry:
|
||||
return Retry()
|
||||
|
||||
def parse_check(self) -> Check:
|
||||
return Check()
|
||||
|
||||
def parse_override(self) -> Override:
|
||||
self.assert_next('=')
|
||||
ci = [next(self.it)]
|
||||
while self.check_next(','):
|
||||
ci.append(next(self.it))
|
||||
return Override(ci)
|
||||
|
||||
def parse_delegate(self) -> Delegate:
|
||||
match next(self.it, None):
|
||||
case '+':
|
||||
return Delegate()
|
||||
case '=':
|
||||
delegates = [next(self.it).lstrip('#@')]
|
||||
while self.check_next(','):
|
||||
delegates.append(next(self.it).lstrip('#@'))
|
||||
return Delegate(delegates)
|
||||
case d:
|
||||
raise CommandError(f"unknown delegation {d!r}")
|
||||
|
||||
def parse_default(self) -> Priority:
|
||||
return Priority.DEFAULT
|
||||
|
||||
def parse_priority(self) -> Priority:
|
||||
return Priority.PRIORITY
|
||||
|
||||
def parse_alone(self) -> Priority:
|
||||
return Priority.ALONE
|
||||
|
||||
def parse_cancel(self) -> CancelStaging:
|
||||
return CancelStaging()
|
||||
|
||||
def parse_skipchecks(self) -> SkipChecks:
|
||||
return SkipChecks()
|
||||
|
||||
def parse_fw(self) -> FW:
|
||||
self.assert_next('=')
|
||||
f = next(self.it, "")
|
||||
try:
|
||||
return FW[f.upper()]
|
||||
except KeyError:
|
||||
raise CommandError(f"unknown fw configuration {f or None!r}") from None
|
||||
|
||||
def parse_ignore(self) -> Limit:
|
||||
return Limit(None)
|
||||
|
||||
def parse_up(self) -> Limit:
|
||||
self.assert_next('to')
|
||||
if limit := next(self.it, None):
|
||||
return Limit(limit)
|
||||
else:
|
||||
raise CommandError("please provide a branch to forward-port to.")
|
||||
|
||||
def parse_close(self) -> Close:
|
||||
return Close()
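
A minimal, self-contained sketch of the dispatch pattern the new `Parser` class above uses: each token is resolved to a `parse_<token>` method via `getattr`, so adding a command is just adding a method. `MiniParser`, its tokenizer, and the two sample commands are simplified stand-ins for illustration, not the real implementation.

```python
import re
from typing import Iterator

def tokenize(line: str) -> Iterator[str]:
    # simplified: split on whitespace, keep '=' and ',' as separate tokens
    for m in re.finditer(r'[=,]|[^\s=,]+', line):
        yield m.group(0)

class MiniParser:
    def __init__(self, line: str) -> None:
        self.it = iter(list(tokenize(line)))

    def __iter__(self) -> Iterator[str]:
        for token in self.it:
            # look the token up as a parse_<token> method on the class
            handler = getattr(type(self), f'parse_{token.replace("-", "_")}', None)
            if handler is None:
                raise ValueError(f"unknown command {token!r}")
            yield handler(self)

    def parse_retry(self) -> str:
        return 'retry'

    def parse_fw(self) -> str:
        assert next(self.it) == '='  # fw must be followed by '=<policy>'
        return f'fw={next(self.it)}'

print(list(MiniParser("retry fw=skipci")))  # → ['retry', 'fw=skipci']
```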
@@ -37,11 +37,10 @@ class Project(models.Model):
        required=True,
        default="hanson", # mergebot du bot du bot du~
        help="Prefix (~bot name) used when sending commands from PR "
             "comments e.g. [hanson retry] or [hanson r+ p=1]",
             "comments e.g. [hanson retry] or [hanson r+ priority]",
    )
    fp_github_token = fields.Char()
    fp_github_name = fields.Char(store=True, compute="_compute_git_identity")
    fp_github_email = fields.Char(store=True, compute="_compute_git_identity")

    batch_limit = fields.Integer(
        default=8, group_operator=None, help="Maximum number of PRs staged together")
@@ -105,7 +104,7 @@ class Project(models.Model):
    def _compute_git_identity(self):
        s = requests.Session()
        for project in self:
            if not project.fp_github_token or (project.fp_github_name and project.fp_github_email):
            if project.fp_github_name or not project.fp_github_token:
                continue

            r0 = s.get('https://api.github.com/user', headers={
@@ -117,25 +116,6 @@ class Project(models.Model):

            user = r0.json()
            project.fp_github_name = user['name'] or user['login']
            if email := user['email']:
                project.fp_github_email = email
                continue

            if 'user:email' not in set(re.split(r',\s*', r0.headers['x-oauth-scopes'])):
                raise UserError("The forward-port github token needs the user:email scope to fetch the bot's identity.")
            r1 = s.get('https://api.github.com/user/emails', headers={
                'Authorization': 'token %s' % project.fp_github_token
            })
            if not r1.ok:
                _logger.error("Failed to fetch forward bot emails for project %s: %s", project.name, r1.text or r1.content)
                continue
            project.fp_github_email = next((
                entry['email']
                for entry in r1.json()
                if entry['primary']
            ), None)
            if not project.fp_github_email:
                raise UserError("The forward-port bot needs a public or primary email set up.")

    def _check_stagings(self, commit=False):
        # check branches with an active staging
@@ -173,18 +153,17 @@ class Project(models.Model):
        if commit:
            self.env.cr.commit()

    def _find_commands(self, comment: str) -> List[Tuple[str, str]]:
    def _find_commands(self, comment: str) -> List[str]:
        """Tries to find all the lines starting (ignoring leading whitespace)
        with either the merge or the forward port bot identifiers.

        For convenience, the identifier *can* be prefixed with an ``@`` or
        ``#``, and suffixed with a ``:``.
        """
        logins = '|'.join(map(re.escape, filter(None, [self.github_prefix, self.fp_github_name])))
        # horizontal whitespace (\s - {\n, \r}), but Python doesn't have \h or \p{Blank}
        h = r'[^\S\r\n]'
        return re.findall(
            fr'^{h}*[@|#]?({logins})(?:{h}+|:{h}*)(.*)$',
            fr'^{h}*[@|#]?{self.github_prefix}(?:{h}+|:{h}*)(.*)$',
            comment, re.MULTILINE | re.IGNORECASE)
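
The `_find_commands` regex above can be exercised standalone; `hanson` is only the example prefix from the field default, the real value comes from the project record:

```python
import re

github_prefix = "hanson"  # example value; per-project in practice
h = r'[^\S\r\n]'  # horizontal whitespace: \s minus \n and \r
pattern = fr'^{h}*[@|#]?{github_prefix}(?:{h}+|:{h}*)(.*)$'

comment = "some discussion text\n@hanson retry\nhanson: r+ priority\n"
# only lines addressed to the bot match; the command tail is captured
print(re.findall(pattern, comment, re.MULTILINE | re.IGNORECASE))
# → ['retry', 'r+ priority']
```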

    def _has_branch(self, name):

@@ -339,7 +339,10 @@ class FreezeWizard(models.Model):
        )

        all_prs = self.release_pr_ids.pr_id | self.bump_pr_ids.pr_id
        all_prs.state = 'merged'
        all_prs.write({
            'merge_date': fields.Datetime.now(),
            'reviewed_by': self.env.user.partner_id.id,
        })
        self.env['runbot_merge.pull_requests.feedback'].create([{
            'repository': pr.repository.id,
            'pull_request': pr.number,

@@ -1,3 +1,5 @@
from __future__ import annotations

import ast
import collections
import contextlib
@@ -5,10 +7,10 @@ import datetime
import itertools
import json
import logging
import pprint
import re
import time
from typing import Optional, Union, List, Literal
from functools import reduce
from typing import Optional, Union, List, Iterator, Tuple

import sentry_sdk
import werkzeug
@@ -16,6 +18,7 @@ import werkzeug
from odoo import api, fields, models, tools
from odoo.exceptions import ValidationError
from odoo.osv import expression
from odoo.tools import html_escape
from . import commands

from .. import github, exceptions, controllers, utils
@@ -219,7 +222,7 @@ All substitutions are tentatively applied sequentially to the input.
        # don't go through controller because try_closing does weird things
        # for safety / race condition reasons which ends up committing
        # and breaks everything
        pr_id.state = 'closed'
        pr_id.closed = True

        self.env.ref('runbot_merge.pr.load.fetched')._send(
            repository=self,
@@ -305,18 +308,37 @@ class Branch(models.Model):


ACL = collections.namedtuple('ACL', 'is_admin is_reviewer is_author')
def enum(model: str, field: str) -> Tuple[str, str]:
    n = f'{model.replace(".", "_")}_{field}_type'
    return n, n
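
The `enum` helper above only derives a PostgreSQL enum type name from a model and field name, returned twice (the same code appears in the diff; restated here standalone for illustration):

```python
def enum(model: str, field: str):
    # derive a pg enum type name from the model/field pair
    n = f'{model.replace(".", "_")}_{field}_type'
    return n, n

print(enum('runbot_merge.pull_requests', 'state'))
# → ('runbot_merge_pull_requests_state_type', 'runbot_merge_pull_requests_state_type')
```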

class PullRequests(models.Model):
    _name = _description = 'runbot_merge.pull_requests'
    _name = 'runbot_merge.pull_requests'
    _description = "Pull Request"
    _inherit = ['mail.thread']
    _order = 'number desc'
    _rec_name = 'number'

    id: int
    display_name: str

    target = fields.Many2one('runbot_merge.branch', required=True, index=True)
    target = fields.Many2one('runbot_merge.branch', required=True, index=True, tracking=True)
    repository = fields.Many2one('runbot_merge.repository', required=True)
    # NB: check that target & repo have same project & provide project related?

    closed = fields.Boolean(default=False, tracking=True)
    error = fields.Boolean(string="in error", default=False, tracking=True)
    skipchecks = fields.Boolean(
        string="Skips Checks",
        default=False, tracking=True,
        help="Forces entire batch to be ready, skips validation and approval",
    )
    cancel_staging = fields.Boolean(
        string="Cancels Stagings",
        default=False, tracking=True,
        help="Cancels current staging on target branch when becoming ready"
    )
    merge_date = fields.Datetime(tracking=True)

    state = fields.Selection([
        ('opened', 'Opened'),
        ('closed', 'Closed'),
@@ -326,41 +348,56 @@ class PullRequests(models.Model):
        # staged?
        ('merged', 'Merged'),
        ('error', 'Error'),
    ], default='opened', index=True)
    ],
        compute='_compute_state', store=True, default='opened',
        index=True, tracking=True, column_type=enum(_name, 'state'),
    )

    number = fields.Integer(required=True, index=True, group_operator=None)
    author = fields.Many2one('res.partner', index=True)
    head = fields.Char(required=True)
    head = fields.Char(required=True, tracking=True)
    label = fields.Char(
        required=True, index=True,
        help="Label of the source branch (owner:branchname), used for "
             "cross-repository branch-matching"
    )
    refname = fields.Char(compute='_compute_refname')
    message = fields.Text(required=True)
    draft = fields.Boolean(default=False, required=True)
    squash = fields.Boolean(default=False)
    draft = fields.Boolean(
        default=False, required=True, tracking=True,
        help="A draft PR can not be merged",
    )
    squash = fields.Boolean(default=False, tracking=True)
    merge_method = fields.Selection([
        ('merge', "merge directly, using the PR as merge commit message"),
        ('rebase-merge', "rebase and merge, using the PR as merge commit message"),
        ('rebase-ff', "rebase and fast-forward"),
        ('squash', "squash"),
    ], default=False)
    ], default=False, tracking=True, column_type=enum(_name, 'merge_method'))
    method_warned = fields.Boolean(default=False)

    reviewed_by = fields.Many2one('res.partner', index=True)
    reviewed_by = fields.Many2one('res.partner', index=True, tracking=True)
    delegates = fields.Many2many('res.partner', help="Delegate reviewers, not intrinsically reviewers but can review this PR")
    priority = fields.Integer(default=2, index=True, group_operator=None)

    overrides = fields.Char(required=True, default='{}')
    statuses = fields.Text(
        compute='_compute_statuses',
        help="Copy of the statuses from the HEAD commit, as a Python literal"
    priority = fields.Selection([
        ('default', "Default"),
        ('priority', "Priority"),
        ('alone', "Alone"),
    ], default='default', index=True, group_operator=None, required=True,
        column_type=enum(_name, 'priority'),
    )

    overrides = fields.Char(required=True, default='{}', tracking=True)
    statuses = fields.Text(help="Copy of the statuses from the HEAD commit, as a Python literal", default="{}")
    statuses_full = fields.Text(
        compute='_compute_statuses',
        help="Compilation of the full status of the PR (commit statuses + overrides), as JSON"
        help="Compilation of the full status of the PR (commit statuses + overrides), as JSON",
        store=True,
    )
    status = fields.Char(compute='_compute_statuses')
    status = fields.Selection([
        ('pending', 'Pending'),
        ('failure', 'Failure'),
        ('success', 'Success'),
    ], compute='_compute_statuses', store=True, column_type=enum(_name, 'status'))
    previous_failure = fields.Char(default='{}')

    batch_id = fields.Many2one('runbot_merge.batch', string="Active Batch", compute='_compute_active_batch', store=True)
@@ -384,15 +421,47 @@ class PullRequests(models.Model):
    repo_name = fields.Char(related='repository.name')
    message_title = fields.Char(compute='_compute_message_title')

    ping = fields.Char(compute='_compute_ping')
    ping = fields.Char(compute='_compute_ping', recursive=True)

    @api.depends('author.github_login', 'reviewed_by.github_login')
    fw_policy = fields.Selection([
        ('default', "Default"),
        ('skipci', "Skip CI"),
    ], required=True, default="default", string="Forward Port Policy")
    source_id = fields.Many2one('runbot_merge.pull_requests', index=True, help="the original source of this FP even if parents were detached along the way")
    parent_id = fields.Many2one(
        'runbot_merge.pull_requests', index=True,
        help="a PR with a parent is an automatic forward port",
        tracking=True,
    )
    root_id = fields.Many2one('runbot_merge.pull_requests', compute='_compute_root', recursive=True)
    forwardport_ids = fields.One2many('runbot_merge.pull_requests', 'source_id')
    limit_id = fields.Many2one('runbot_merge.branch', help="Up to which branch should this PR be forward-ported", tracking=True)

    detach_reason = fields.Char()

    _sql_constraints = [(
        'fw_constraint',
        'check(source_id is null or num_nonnulls(parent_id, detach_reason) = 1)',
        "fw PRs must either be attached or have a reason for being detached",
    )]

    @api.depends('label')
    def _compute_refname(self):
        for pr in self:
            pr.refname = pr.label.split(':', 1)[-1]

    @api.depends(
        'author.github_login', 'reviewed_by.github_login',
        'source_id.author.github_login', 'source_id.reviewed_by.github_login',
    )
    def _compute_ping(self):
        for pr in self:
            s = ' '.join(
                f'@{p.github_login}'
                for p in (pr.author | pr.reviewed_by )
            )
            if source := pr.source_id:
                contacts = source.author | source.reviewed_by | pr.reviewed_by
            else:
                contacts = pr.author | pr.reviewed_by

            s = ' '.join(f'@{p.github_login}' for p in contacts)
            pr.ping = s and (s + ' ')

    @api.depends('repository.name', 'number')
@@ -404,6 +473,11 @@ class PullRequests(models.Model):
            pr.url = str(base.join(path))
            pr.github_url = str(gh_base.join(path))

    @api.depends('parent_id.root_id')
    def _compute_root(self):
        for p in self:
            p.root_id = reduce(lambda _, p: p, self._iter_ancestors())

    @api.depends('message')
    def _compute_message_title(self):
        for pr in self:
@@ -411,7 +485,7 @@ class PullRequests(models.Model):

    @api.depends('repository.name', 'number', 'message')
    def _compute_display_name(self):
        return super(PullRequests, self)._compute_display_name()
        return super()._compute_display_name()

    def name_get(self):
        name_template = '%(repo_name)s#%(number)d'
@@ -438,10 +512,7 @@ class PullRequests(models.Model):

    @property
    def _approved(self):
        return self.state in ('approved', 'ready') or any(
            p.priority == 0
            for p in (self | self._linked_prs)
        )
        return self.state in ('approved', 'ready')

    @property
    def _ready(self):
@@ -462,72 +533,39 @@ class PullRequests(models.Model):
        ]) - self

    # missing link to other PRs
    @api.depends('priority', 'state', 'squash', 'merge_method', 'batch_id.active', 'label')
    @api.depends('state')
    def _compute_is_blocked(self):
        self.blocked = False
        requirements = (
            lambda p: not p.draft,
            lambda p: p.squash or p.merge_method,
            lambda p: p.state == 'ready' \
                or any(pr.skipchecks for pr in (p | p._linked_prs)) \
                and all(pr.state != 'error' for pr in (p | p._linked_prs))
        )
        messages = ('is in draft', 'has no merge method', 'is not ready')
        for pr in self:
            if pr.state in ('merged', 'closed'):
                continue

            linked = pr._linked_prs
            # check if PRs are configured (single commit or merge method set)
            if not (pr.squash or pr.merge_method):
                pr.blocked = 'has no merge method'
                continue
            other_unset = next((p for p in linked if not (p.squash or p.merge_method)), None)
            if other_unset:
                pr.blocked = "linked PR %s has no merge method" % other_unset.display_name
                continue
            blocking, message = next((
                (blocking, message)
                for blocking in (pr | pr._linked_prs)
                for requirement, message in zip(requirements, messages)
                if not requirement(blocking)
            ), (None, None))
            if blocking == pr:
                pr.blocked = message
            elif blocking:
                pr.blocked = f"linked PR {blocking.display_name} {message}"

            # check if any PR in the batch is p=0 and none is in error
            if any(p.priority == 0 for p in (pr | linked)):
                if pr.state == 'error':
                    pr.blocked = "in error"
                other_error = next((p for p in linked if p.state == 'error'), None)
                if other_error:
                    pr.blocked = "linked pr %s in error" % other_error.display_name
                # if none is in error then none is blocked because p=0
                # "unblocks" the entire batch
                continue

            if pr.state != 'ready':
                pr.blocked = 'not ready'
                continue

            unready = next((p for p in linked if p.state != 'ready'), None)
            if unready:
                pr.blocked = 'linked pr %s is not ready' % unready.display_name
                continue

    def _get_overrides(self):
    def _get_overrides(self) -> dict[str, dict[str, str]]:
        if self.parent_id:
            return self.parent_id._get_overrides() | json.loads(self.overrides)
        if self:
            return json.loads(self.overrides)
        return {}
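
`_get_overrides` relies on the PEP 584 dict union operator, where the right-hand operand (the PR's own overrides) wins over the parent's on conflicting keys. A standalone illustration with made-up status contexts:

```python
import json

# parent chain's overrides, then the PR's own (right operand wins on conflict)
parent_overrides = json.loads('{"ci/runbot": {"state": "success"}}')
own_overrides = json.loads(
    '{"legal/cla": {"state": "success"}, "ci/runbot": {"state": "failure"}}'
)
merged = parent_overrides | own_overrides
print(merged['ci/runbot']['state'])  # → failure
```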

    @api.depends('head', 'repository.status_ids', 'overrides')
    def _compute_statuses(self):
        Commits = self.env['runbot_merge.commit']
        for pr in self:
            c = Commits.search([('sha', '=', pr.head)])
            st = json.loads(c.statuses or '{}')
            statuses = {**st, **pr._get_overrides()}
            pr.statuses_full = json.dumps(statuses)
            if not statuses:
                pr.status = pr.statuses = False
                continue

            pr.statuses = pprint.pformat(st)

            st = 'success'
            for ci in pr.repository.status_ids._for_pr(pr):
                v = (statuses.get(ci.context) or {'state': 'pending'})['state']
                if v in ('error', 'failure'):
                    st = 'failure'
                    break
                if v == 'pending':
                    st = 'pending'
            pr.status = st
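
The aggregation loop above can be restated as a pure function (a sketch; `required` stands in for `repository.status_ids._for_pr(pr)` contexts): any error or failure makes the PR a failure, otherwise any missing or pending required status leaves it pending.

```python
def aggregate_status(statuses: dict, required: list) -> str:
    st = 'success'
    for context in required:
        # a missing required context counts as pending
        v = (statuses.get(context) or {'state': 'pending'})['state']
        if v in ('error', 'failure'):
            return 'failure'
        if v == 'pending':
            st = 'pending'
    return st

print(aggregate_status({'ci/runbot': {'state': 'success'}},
                       ['ci/runbot', 'legal/cla']))  # → pending
```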

    @api.depends('batch_ids.active')
    def _compute_active_batch(self):
        for r in self:
@@ -562,6 +600,16 @@ class PullRequests(models.Model):
            'closing': closing,
        })

    def _iter_ancestors(self) -> Iterator[PullRequests]:
        while self:
            yield self
            self = self.parent_id

    def _iter_descendants(self) -> Iterator[PullRequests]:
        pr = self
        while pr := self.search([('parent_id', '=', pr.id)]):
            yield pr
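
`_iter_ancestors` walks `parent_id` links from the PR up to its root, relying on an empty recordset being falsy to terminate. The same shape on plain objects, with `None` ending the walk (an illustrative stand-in, not the ORM version):

```python
from dataclasses import dataclass
from typing import Iterator, Optional

@dataclass
class Node:
    name: str
    parent_id: Optional['Node'] = None

def iter_ancestors(node: Optional[Node]) -> Iterator[Node]:
    # yields the node itself, then each parent up to the root
    while node:
        yield node
        node = node.parent_id

root = Node('a')
mid = Node('b', root)
leaf = Node('c', mid)
print([n.name for n in iter_ancestors(leaf)])  # → ['c', 'b', 'a']
```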
|
||||
|
||||
def _parse_commands(self, author, comment, login):
|
||||
assert self, "parsing commands must be executed in an actual PR"
|
||||
|
||||
@ -574,34 +622,34 @@ class PullRequests(models.Model):
|
||||
)
|
||||
return 'ok'
|
||||
|
||||
def feedback(message: Optional[str] = None, close: bool = False, token: Literal['github_token', 'fp_github_token'] = 'github_token'):
|
||||
def feedback(message: Optional[str] = None, close: bool = False):
|
||||
self.env['runbot_merge.pull_requests.feedback'].create({
|
||||
'repository': self.repository.id,
|
||||
'pull_request': self.number,
|
||||
'message': message,
|
||||
'close': close,
|
||||
'token_field': token,
|
||||
})
|
||||
try:
|
||||
cmds: List[Union[commands.Command, commands.FWCommand]] = [
|
||||
cmds: List[commands.Command] = [
|
||||
ps
|
||||
for bot, line in commandlines
|
||||
for ps in (commands.parse_mergebot(line) if bot.casefold() == self.repository.project_id.github_prefix.casefold() else commands.parse_forwardbot(line))
|
||||
for line in commandlines
|
||||
for ps in commands.Parser(line)
|
||||
]
|
||||
except Exception as e:
|
||||
_logger.info(
|
||||
"error %s while parsing comment of %s (%s): %s",
|
||||
e,
|
||||
author.github_login, author.display_name,
|
||||
login, name,
|
||||
utils.shorten(comment['body'] or '', 50),
|
||||
exc_info=True
|
||||
)
|
||||
feedback(message=f"@{login} {e.args[0]}", token='fp_github_token' if len(e.args) >= 2 and e.args[1] else 'github_token')
|
||||
feedback(message=f"@{login} {e.args[0]}")
|
||||
return 'error'
|
||||
|
||||
is_admin, is_reviewer, is_author = self._pr_acl(author)
|
||||
_source_admin, source_reviewer, source_author = self.source_id._pr_acl(author)
|
||||
|
||||
if not (is_author or any(isinstance(cmd, commands.Override) for cmd in cmds)):
|
||||
if not (is_author or self.source_id or (any(isinstance(cmd, commands.Override) for cmd in cmds) and author.override_rights)):
|
||||
# no point even parsing commands
|
||||
_logger.info("ignoring comment of %s (%s): no ACL to %s",
|
||||
login, name, self.display_name)
|
||||
@ -614,50 +662,75 @@ class PullRequests(models.Model):
|
||||
|
||||
rejections = []
|
||||
for command in cmds:
|
||||
fwbot, msg = False, None
|
||||
msg = None
|
||||
match command:
|
||||
case commands.Approve() if self.draft:
|
||||
msg = "draft PRs can not be approved."
|
||||
case commands.Approve() if is_reviewer:
|
||||
oldstate = self.state
|
||||
newstate = RPLUS.get(self.state)
|
||||
if not author.email:
|
||||
msg = "I must know your email before you can review PRs. Please contact an administrator."
|
||||
elif not newstate:
|
||||
msg = "this PR is already reviewed, reviewing it again is useless."
|
||||
case commands.Approve() if self.parent_id:
|
||||
# rules are a touch different for forwardport PRs:
|
||||
valid = lambda _: True if command.ids is None else lambda n: n in command.ids
|
||||
_, source_reviewer, source_author = self.source_id._pr_acl(author)
|
||||
|
||||
ancestors = list(self._iter_ancestors())
|
||||
# - reviewers on the original can approve any forward port
|
||||
if source_reviewer:
|
||||
approveable = ancestors
|
||||
else:
|
||||
self.state = newstate
|
||||
self.reviewed_by = author
|
||||
_logger.debug(
|
||||
"r+ on %s by %s (%s->%s) status=%s message? %s",
|
||||
self.display_name, author.github_login,
|
||||
oldstate, newstate or oldstate,
|
||||
self.status, self.status == 'failure'
|
||||
)
|
||||
if self.status == 'failure':
|
||||
# the normal infrastructure is for failure and
|
||||
# prefixes messages with "I'm sorry"
|
||||
self.env.ref("runbot_merge.command.approve.failure")._send(
|
||||
repository=self.repository,
|
||||
pull_request=self.number,
|
||||
format_args={'user': login, 'pr': self},
|
||||
)
|
||||
# between the first merged ancestor and self
|
||||
mergeors = list(itertools.dropwhile(
|
||||
lambda p: p.state != 'merged',
|
||||
reversed(ancestors),
|
||||
))
|
||||
# between the first ancestor the current user can review and self
|
||||
reviewors = list(itertools.dropwhile(
|
||||
lambda p: not p._pr_acl(author).is_reviewer,
|
||||
reversed(ancestors),
|
||||
))
|
||||
|
||||
# source author can approve any descendant of a merged
|
||||
# forward port (or source), people with review rights
|
||||
# to a forward port have review rights to its
|
||||
# descendants, if both apply use the most favorable
# (largest number of PRs)
if source_author and len(mergeors) > len(reviewors):
    approveable = mergeors
else:
    approveable = reviewors

if approveable:
    for pr in approveable:
        if not (pr.state in RPLUS and valid(pr.number)):
            continue
        msg = pr._approve(author, login)
        if msg:
            break
else:
    msg = f"you can't {command} you silly little bean."
case commands.Approve() if is_reviewer:
    if command.ids is not None and command.ids != [self.number]:
        msg = f"tried to approve PRs {command.ids} but the current PR is {self.number}"
    else:
        msg = self._approve(author, login)
case commands.Reject() if is_author:
    newstate = RMINUS.get(self.state)
    if self.priority == 0 or newstate:
        if newstate:
            self.state = newstate
        if self.priority == 0:
            self.priority = 1
    batch = self | self._linked_prs
    if cancellers := batch.filtered('cancel_staging'):
        cancellers.cancel_staging = False
    if (skippers := batch.filtered('skipchecks')) or self.reviewed_by:
        if self.error:
            self.error = False
        if self.reviewed_by:
            self.reviewed_by = False
        if skippers:
            skippers.skipchecks = False
        self.env.ref("runbot_merge.command.unapprove.p0")._send(
            repository=self.repository,
            pull_request=self.number,
            format_args={'user': login, 'pr': self},
            format_args={'user': login, 'pr': skippers[:1]},
        )
        self.unstage("unreviewed (r-) by %s", login)
    else:
        msg = "r- makes no sense in the current PR state."
case commands.MergeMethod() as command if is_reviewer:
case commands.MergeMethod() if is_reviewer:
    self.merge_method = command.value
    explanation = next(label for value, label in type(self).merge_method.selection if value == command.value)
    self.env.ref("runbot_merge.command.method")._send(
@@ -665,9 +738,9 @@ class PullRequests(models.Model):
        pull_request=self.number,
        format_args={'new_method': explanation, 'pr': self, 'user': login},
    )
case commands.Retry() if is_author:
    if self.state == 'error':
        self.state = 'ready'
case commands.Retry() if is_author or source_author:
    if self.error:
        self.error = False
    else:
        msg = "retry makes no sense when the PR is not in error."
case commands.Check() if is_author:
@@ -687,13 +760,23 @@ class PullRequests(models.Model):
    })
    delegates.write({'delegate_reviewer': [(4, self.id, 0)]})
case commands.Priority() if is_admin:
    self.priority = int(command)
    if command is commands.Priority.NUKE:
        self.target.active_staging_id.cancel(
            "P=0 on %s by %s, unstaging target %s",
            self.display_name,
            author.github_login, self.target.name,
        )
    self.priority = str(command)
case commands.SkipChecks() if is_admin:
    self.skipchecks = True
    self.reviewed_by = author
    for p in self.batch_id.prs - self:
        if not p.reviewed_by:
            p.reviewed_by = author
case commands.CancelStaging() if is_admin:
    self.cancel_staging = True
    # FIXME: remove this when skipchecks properly affects state,
    # maybe: staging cancellation should then only occur
    # when a cancel_staging PR transitions to ready, or
    # a ready PR is flagged as cancelling staging
    self.target.active_staging_id.cancel(
        "Unstaged by %s on %s",
        author.github_login, self.display_name,
    )
case commands.Override(statuses):
    for status in statuses:
        overridable = author.override_rights\
@@ -716,68 +799,89 @@ class PullRequests(models.Model):
    else:
        msg = f"you are not allowed to override {status!r}."
# FW
case commands.FWApprove():
    fwbot = True
    if not self.source_id:
        msg = "I can only do this on forward-port PRs and this is not one, see {}.".format(
            self.repository.project_id.github_prefix
        )
    elif not self.parent_id:
        msg = "I can only do this on unmodified forward-port PRs, ask {}.".format(
            self.repository.project_id.github_prefix
        )
case commands.Close() if source_author:
    feedback(close=True)
case commands.FW():
    if source_reviewer or is_reviewer:
        (self.source_id or self).fw_policy = command.name.lower()
        match command:
            case commands.FW.DEFAULT:
                message = "Waiting for CI to create followup forward-ports."
            case commands.FW.SKIPCI:
                message = "Not waiting for CI to create followup forward-ports."
            case commands.FW.SKIPMERGE:
                message = "Not waiting for merge to create followup forward-ports."
        feedback(message=message)
    else:
        merge_bot = self.repository.project_id.github_prefix
        # FIXME: classification of messages from child PR :(
        # don't update the root ever
        for pr in (p for p in self._iter_ancestors() if p.parent_id if p.state in RPLUS):
            # only the author is delegated explicitly on the
            pr._parse_commands(author, {**comment, 'body': merge_bot + ' r+'}, login)
case commands.Close() if self.source_id._pr_acl(author).is_reviewer:
    feedback(close=True, token='fp_github_token')
case commands.CI(run):
    pr = (self.source_id or self)
    if pr._pr_acl(author).is_reviewer:
        pr.fw_policy = 'ci' if run else 'skipci'
        feedback(
            message="Waiting for CI to create followup forward-ports." if run else "Not waiting for CI to create followup forward-ports.",
            token='fp_github_token',
        )
    else:
        fwbot = True
        msg = "you can't configure ci."
case commands.Limit(branch):
    fwbot = True
    if is_author:
        ping, msg = self._maybe_update_limit(branch or self.target.name)
        if not ping:
            feedback(message=msg, token='fp_github_token')
            msg = None
    else:
        msg = "you can't set a forward-port limit."
        msg = "you can't configure forward-port CI."
case commands.Limit(branch) if is_author:
    ping, msg = self._maybe_update_limit(branch or self.target.name)
    if not ping:
        feedback(message=msg)
        msg = None
case commands.Limit():
    msg = "you can't set a forward-port limit."
# NO!
case _:
    msg = f"you can't {command}. Skill issue."
if msg is not None:
    rejections.append((fwbot, msg))
    rejections.append(msg)

cmdstr = ', '.join(map(str, cmds))
if not rejections:
    _logger.info("%s (%s) applied %s", login, name, cmds)
    return 'applied ' + ', '.join(map(str, cmds))
    _logger.info("%s (%s) applied %s", login, name, cmdstr)
    self.env.cr.precommit.data['change-author'] = author.id
    return 'applied ' + cmdstr

self.env.cr.rollback()
rejections_list = ''.join(f'\n- {r}' for fw, r in rejections if not fw)
fw_rejections_list = ''.join(f'\n- {r}' for fw, r in rejections if fw)
_logger.info("%s (%s) tried to apply %s%s", login, name, cmds, rejections_list + fw_rejections_list)
rejections_list = ''.join(f'\n- {r}' for r in rejections)
_logger.info("%s (%s) tried to apply %s%s", login, name, cmdstr, rejections_list)
footer = '' if len(cmds) == len(rejections) else "\n\nFor your own safety I've ignored everything in your comment."
if rejections_list:
    rejections = ' ' + rejections_list.removeprefix("\n- ") if rejections_list.count('\n- ') == 1 else rejections_list
    feedback(message=f"@{login}{rejections}{footer}")
if fw_rejections_list:
    rejections = ' ' + fw_rejections_list.removeprefix("\n- ") if fw_rejections_list.count('\n- ') else fw_rejections_list
    feedback(message=f"@{login}{rejections}{footer}", token='fp_github_token')
return 'rejected'

def _approve(self, author, login):
    oldstate = self.state
    newstate = RPLUS.get(self.state)
    msg = None
    if not author.email:
        msg = "I must know your email before you can review PRs. Please contact an administrator."
    elif not newstate:
        msg = "this PR is already reviewed, reviewing it again is useless."
    else:
        self.reviewed_by = author
        _logger.debug(
            "r+ on %s by %s (%s->%s) status=%s message? %s",
            self.display_name, author.github_login,
            oldstate, newstate or oldstate,
            self.status, self.status == 'failure'
        )
        if self.status == 'failure':
            # the normal infrastructure is for failure and
            # prefixes messages with "I'm sorry"
            self.env.ref("runbot_merge.command.approve.failure")._send(
                repository=self.repository,
                pull_request=self.number,
                format_args={'user': login, 'pr': self},
            )
    return msg

def message_post(self, **kw):
    if author := self.env.cr.precommit.data.get('change-author'):
        kw['author_id'] = author
    if message := self.env.cr.precommit.data.get('change-message'):
        kw['body'] = html_escape(message)
    return super().message_post(**kw)

def _message_log(self, **kw):
    if author := self.env.cr.precommit.data.get('change-author'):
        kw['author_id'] = author
    if message := self.env.cr.precommit.data.get('change-message'):
        kw['body'] = html_escape(message)
    return super()._message_log(**kw)

def _pr_acl(self, user):
    if not self:
        return ACL(False, False, False)
@@ -796,29 +900,67 @@ class PullRequests(models.Model):
# could have two PRs (e.g. one open and one closed) at least
# temporarily on the same head, or on the same head with different
# targets
failed = self.browse(())
updateable = self.filtered(lambda p: p.state != 'merged')
updateable.statuses = statuses
for pr in updateable:
    if pr.status == "failure":
        statuses = json.loads(pr.statuses_full)
        for ci in pr.repository.status_ids._for_pr(pr).mapped('context'):
            status = statuses.get(ci) or {'state': 'pending'}
            if status['state'] in ('error', 'failure'):
                pr._notify_ci_new_failure(ci, status)

def modified(self, fnames, create=False, before=False):
    """ By default, Odoo can't express recursive *dependencies* which is
    exactly what we need for statuses: they depend on the current PR's
    overrides, and the parent's overrides, and *its* parent's overrides, ...

    One option would be to create a stored computed field which accumulates
    the overrides as *fields* can be recursive, but...
    """
    if 'overrides' in fnames:
        descendants_or_self = self.concat(*self._iter_descendants())
        self.env.add_to_compute(self._fields['status'], descendants_or_self)
        self.env.add_to_compute(self._fields['statuses_full'], descendants_or_self)
        self.env.add_to_compute(self._fields['state'], descendants_or_self)
    super().modified(fnames, create, before)

@api.depends(
    'statuses', 'overrides', 'target', 'parent_id',
    'repository.status_ids.context',
    'repository.status_ids.branch_filter',
    'repository.status_ids.prs',
)
def _compute_statuses(self):
    for pr in self:
        required = pr.repository.status_ids._for_pr(pr).mapped('context')
        sts = {**statuses, **pr._get_overrides()}
        statuses = {**json.loads(pr.statuses), **pr._get_overrides()}

        success = True
        for ci in required:
            status = sts.get(ci) or {'state': 'pending'}
            result = status['state']
            if result == 'success':
                continue
        pr.statuses_full = json.dumps(statuses, indent=4)

        st = 'success'
        for ci in pr.repository.status_ids._for_pr(pr):
            v = (statuses.get(ci.context) or {'state': 'pending'})['state']
            if v in ('error', 'failure'):
                st = 'failure'
                break
            if v == 'pending':
                st = 'pending'
        pr.status = st

# closed, merged, error should be exclusive, so this should probably be a selection
@api.depends("status", "reviewed_by", 'merge_date', "closed", "error")
def _compute_state(self):
    for pr in self:
        if pr.merge_date:
            pr.state = 'merged'
        elif pr.closed:
            pr.state = "closed"
        elif pr.error:
            pr.state = "error"
        else:
            states = ("opened", "approved", "validated", "ready")
            pr.state = states[bool(pr.reviewed_by) | ((pr.status == "success") << 1)]

success = False
if result in ('error', 'failure'):
    failed |= pr
    pr._notify_ci_new_failure(ci, status)
if success:
    oldstate = pr.state
    if oldstate == 'opened':
        pr.state = 'validated'
    elif oldstate == 'approved':
        pr.state = 'ready'
return failed

def _notify_ci_new_failure(self, ci, st):
    prev = json.loads(self.previous_failure)
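The new `_compute_state` above packs the two independent "ready" conditions into a two-bit index into a state tuple. A standalone sketch (not runbot code, names are illustrative) of that lookup:

```python
# Two-bit state lookup as used by _compute_state: bit 0 = approved (has a
# reviewer), bit 1 = CI success; the index enumerates all four combinations.
STATES = ("opened", "approved", "validated", "ready")

def pr_state(reviewed: bool, status: str) -> str:
    # reviewed contributes bit 0, a successful status contributes bit 1
    return STATES[bool(reviewed) | ((status == "success") << 1)]
```

Both bits set yields `ready`, matching the old explicit `opened -> validated` / `approved -> ready` transitions.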
@@ -865,6 +1007,18 @@ class PullRequests(models.Model):
    )

def _auto_init(self):
    for field in self._fields.values():
        if not isinstance(field, fields.Selection) or field.column_type[0] == 'varchar':
            continue

        t = field.column_type[1]
        self.env.cr.execute("SELECT 1 FROM pg_type WHERE typname = %s", [t])
        if not self.env.cr.rowcount:
            self.env.cr.execute(
                f"CREATE TYPE {t} AS ENUM %s",
                [tuple(s for s, _ in field.selection)]
            )

    super(PullRequests, self)._auto_init()
    # incorrect index: unique(number, target, repository).
    tools.drop_index(self._cr, 'runbot_merge_unique_pr_per_target', self._table)
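`_auto_init` above creates a Postgres ENUM type per non-varchar selection field when it does not exist yet, with the selection keys as the enum values. A hypothetical sketch of the DDL that parameterized statement ends up expressing (the driver renders the tuple parameter as a parenthesized value list):

```python
# Illustration only: build the CREATE TYPE statement a selection field maps
# to; `typname` and the selection pairs are assumed inputs.
def enum_ddl(typname: str, selection: list[tuple[str, str]]) -> str:
    # only the selection keys become enum members, labels are discarded
    values = ", ".join("'%s'" % key for key, _label in selection)
    return f"CREATE TYPE {typname} AS ENUM ({values})"
```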
@@ -885,7 +1039,7 @@ class PullRequests(models.Model):
def create(self, vals):
    pr = super().create(vals)
    c = self.env['runbot_merge.commit'].search([('sha', '=', pr.head)])
    pr._validate(json.loads(c.statuses or '{}'))
    pr._validate(c.statuses or '{}')

    if pr.state not in ('closed', 'merged'):
        self.env.ref('runbot_merge.pr.created')._send(
@@ -911,7 +1065,7 @@ class PullRequests(models.Model):
    ], limit=1)

    return self.env['runbot_merge.pull_requests'].create({
        'state': 'opened' if description['state'] == 'open' else 'closed',
        'closed': description['state'] != 'open',
        'number': description['number'],
        'label': repo._remap_label(description['head']['label']),
        'author': author.id,
@@ -926,30 +1080,65 @@ class PullRequests(models.Model):
def write(self, vals):
    if vals.get('squash'):
        vals['merge_method'] = False
    prev = None
    if 'target' in vals or 'message' in vals:
        prev = {
            pr.id: {'target': pr.target, 'message': pr.message}
            for pr in self
        }
    fields = []
    canceler = vals.get('cancel_staging') or any(p.cancel_staging for p in self)
    if canceler:
        fields.append('state')
        fields.append('skipchecks')
    if 'target' in vals:
        fields.append('target')
    if 'message' in vals:
        fields.append('message')
    prev = {pr.id: {field: pr[field] for field in fields} for pr in self}

    if vals.get('state') == 'ready':
        # skip checks anyway
        vals['skipchecks'] = True
        # if the state is forced to ready, set current user as reviewer
        # and override all statuses
        vals.setdefault('reviewed_by', self.env.user.partner_id.id)
        # override all known statuses just for safety
        vals.setdefault('overrides', json.dumps({
            st.context: {
                'state': 'success',
                'target_url': None,
                'description': f"Forced by @{self.env.user.partner_id.github_login}",
            }
            for st in self.env['runbot_merge.repository.status'].search([
                ('prs', '=', True),
            ])
        }))
    if vals.get('closed'):
        vals['reviewed_by'] = False
    w = super().write(vals)

    newhead = vals.get('head')
    if newhead:
        c = self.env['runbot_merge.commit'].search([('sha', '=', newhead)])
        self._validate(json.loads(c.statuses or '{}'))
        self._validate(c.statuses or '{}')

    if prev:
        for pr in self:
            old_target = prev[pr.id]['target']
    for pr in self:
        old = prev[pr.id]
        if canceler:
            def ready(pr):
                return pr['state'] == 'ready'\
                    or (pr['skipchecks'] and pr['state'] != 'error')
            if pr.cancel_staging and not ready(old) and ready(pr):
                if old['state'] == 'error':  # error -> ready gets a bespoke message
                    pr.target.active_staging_id.cancel(f"retrying {pr.display_name}")
                else:
                    pr.target.active_staging_id.cancel(f"{pr.display_name} became ready")

        if 'target' in vals:
            old_target = old['target']
            if pr.target != old_target:
                pr.unstage(
                    "target (base) branch was changed from %r to %r",
                    old_target.display_name, pr.target.display_name,
                )
        old_message = prev[pr.id]['message']
        if pr.merge_method not in (False, 'rebase-ff') and pr.message != old_message:

        if 'message' in vals:
            if pr.merge_method not in (False, 'rebase-ff') and pr.message != old['message']:
                pr.unstage("merge message updated")
    return w
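The cancel-staging trigger in the new `write()` fires only on a transition into readiness, where `skipchecks` counts as ready unless the PR is in error. A minimal sketch of that predicate pair (assumed semantics, dict-based stand-ins for the PR records):

```python
# Sketch of the write() trigger: a cancel_staging PR that *becomes* ready
# cancels the active staging of its target; an already-ready PR does not.
def ready(pr: dict) -> bool:
    # skipchecks makes the PR ready regardless of statuses, except in error
    return pr['state'] == 'ready' or (pr['skipchecks'] and pr['state'] != 'error')

def should_cancel(old: dict, new: dict) -> bool:
    return new['cancel_staging'] and not ready(old) and ready(new)
```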
@@ -982,8 +1171,6 @@ class PullRequests(models.Model):
    bool_or(pr.state = 'ready' AND NOT pr.link_warned)
    -- one of the others should be unready
    AND bool_or(pr.state != 'ready')
    -- but ignore batches with one of the prs at p0
    AND bool_and(pr.priority != 0)
""")
for [ids] in self.env.cr.fetchall():
    prs = self.browse(ids)
@@ -1070,11 +1257,11 @@ class PullRequests(models.Model):

self.env.cr.execute('''
    UPDATE runbot_merge_pull_requests
    SET state = 'closed'
    SET closed=True, state = 'closed', reviewed_by = null
    WHERE id = %s
''', [self.id])
self.env.cr.commit()
self.modified(['state'])
self.modified(['closed', 'state', 'reviewed_by'])
self.unstage("closed by %s", by)
return True
@@ -1083,11 +1270,6 @@ RPLUS = {
    'opened': 'approved',
    'validated': 'ready',
}
RMINUS = {
    'approved': 'opened',
    'ready': 'validated',
    'error': 'validated',
}

_TAGS = {
    False: set(),
@@ -1345,12 +1527,13 @@ class Commit(models.Model):
for c in self.search([('to_check', '=', True)]):
    try:
        c.to_check = False
        st = json.loads(c.statuses)
        pr = PRs.search([('head', '=', c.sha)])
        if pr:
            pr._validate(st)
            self.env.cr.precommit.data['change-message'] =\
                f"statuses changed on {c.sha}"
            pr._validate(c.statuses)

        stagings = Stagings.search([('head_ids.sha', '=', c.sha)])
        stagings = Stagings.search([('head_ids.sha', '=', c.sha), ('state', '=', 'pending')])
        if stagings:
            stagings._validate()
    except Exception:
@@ -1545,8 +1728,10 @@ class Stagings(models.Model):

def fail(self, message, prs=None):
    _logger.info("Staging %s failed: %s", self, message)
    self.env.cr.precommit.data['change-message'] =\
        f'staging {self.id} failed: {message}'
    prs = prs or self.batch_ids.prs
    prs.write({'state': 'error'})
    prs.error = True
    for pr in prs:
        self.env.ref('runbot_merge.pr.staging.fail')._send(
            repository=pr.repository,
@@ -1661,12 +1846,14 @@ class Stagings(models.Model):
            'reason': str(e.__cause__ or e.__context__ or e)
        })
else:
    self.env.cr.precommit.data['change-message'] =\
        f'staging {self.id} succeeded'
    prs = self.mapped('batch_ids.prs')
    logger.info(
        "%s FF successful, marking %s as merged",
        self, prs
    )
    prs.write({'state': 'merged'})
    prs.merge_date = fields.Datetime.now()

    pseudobranch = None
    if self.target == project.branch_ids[:1]:
@@ -65,19 +65,17 @@ def try_staging(branch: Branch) -> Optional[Stagings]:
    return

priority = rows[0][0]
if priority == 0 or priority == 1:
    # p=0 takes precedence over all else
    # p=1 allows merging a fix inside / ahead of a split (e.g. branch
    # is broken or widespread false positive) without having to cancel
    # the existing staging
if priority == 'alone':
    batched_prs = [pr_ids for _, pr_ids in takewhile(lambda r: r[0] == priority, rows)]
elif branch.split_ids:
    split_ids = branch.split_ids[0]
    _logger.info("Found split of PRs %s, re-staging", split_ids.mapped('batch_ids.prs'))
    batched_prs = [batch.prs for batch in split_ids.batch_ids]
    split_ids.unlink()
else:  # p=2
    batched_prs = [pr_ids for _, pr_ids in takewhile(lambda r: r[0] == priority, rows)]
else:
    # priority, normal; priority = sorted ahead of normal, so always picked
    # first as long as there's room
    batched_prs = [pr_ids for _, pr_ids in rows]

original_heads, staging_state = staging_setup(branch, batched_prs)
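The hunk above replaces the numeric `p=0`/`p=1`/`p=2` logic with the named modes: an `alone` batch is staged exclusively (only the leading run of `alone` rows is kept), while `priority` and `default` batches are simply picked in sort order. An assumption-level sketch of that selection, outside the split-handling branch:

```python
from itertools import takewhile

# rows: (priority, pr_ids) pairs, pre-sorted so 'alone' comes first, then
# 'priority', then 'default' (illustrative stand-in for ready_prs output).
def pick_batches(rows):
    if not rows:
        return []
    priority = rows[0][0]
    if priority == 'alone':
        # keep only the leading run of 'alone' batches: nothing else may
        # share the staging with them
        return [pr_ids for _, pr_ids in takewhile(lambda r: r[0] == priority, rows)]
    # 'priority' sorts ahead of 'default', so priority batches are picked
    # first and the remainder fills up with default ones
    return [pr_ids for _, pr_ids in rows]
```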
@@ -175,7 +173,7 @@ def ready_prs(for_branch: Branch) -> List[Tuple[int, PullRequests]]:
env = for_branch.env
env.cr.execute("""
SELECT
    min(pr.priority) as priority,
    max(pr.priority) as priority,
    array_agg(pr.id) AS match
FROM runbot_merge_pull_requests pr
WHERE pr.target = any(%s)
@@ -191,8 +189,9 @@ def ready_prs(for_branch: Branch) -> List[Tuple[int, PullRequests]]:
    ELSE pr.label
END
HAVING
    bool_or(pr.state = 'ready') or bool_or(pr.priority = 0)
ORDER BY min(pr.priority), min(pr.id)
    bool_and(pr.state = 'ready')
    OR (bool_or(pr.skipchecks) AND bool_and(pr.state != 'error'))
ORDER BY max(pr.priority) DESC, min(pr.id)
""", [for_branch.ids])
browse = env['runbot_merge.pull_requests'].browse
return [(p, browse(ids)) for p, ids in env.cr.fetchall()]
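The new `HAVING` clause changes batch eligibility: instead of "any PR ready or any PR at p=0", a batch now qualifies only when every PR is ready, or when some PR carries `skipchecks` and none is in error. A Python rendition of that predicate (a sketch of the SQL's semantics, not mergebot code):

```python
# Batch eligibility per the new HAVING clause: bool_and(state = 'ready')
# OR (bool_or(skipchecks) AND bool_and(state != 'error')).
def batch_ready(prs: list[dict]) -> bool:
    return (
        all(p['state'] == 'ready' for p in prs)
        or (any(p['skipchecks'] for p in prs)
            and all(p['state'] != 'error' for p in prs))
    )
```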
@@ -408,7 +407,7 @@ def stage(pr: PullRequests, info: StagingSlice, related_prs: PullRequests) -> Tu
    diff.append(('Message', pr.message, msg))

    if invalid:
        pr.write({**invalid, 'state': 'opened', 'head': pr_head})
        pr.write({**invalid, 'reviewed_by': False, 'head': pr_head})
        raise exceptions.Mismatch(invalid, diff)

    if pr.reviewed_by and pr.reviewed_by.name == pr.reviewed_by.github_login:
@@ -10,7 +10,7 @@ import requests
from lxml import html

import odoo
from utils import _simple_init, seen, re_matches, get_partner, Commit, pr_page, to_pr, part_of
from utils import _simple_init, seen, re_matches, get_partner, Commit, pr_page, to_pr, part_of, ensure_one


@pytest.fixture
@ -121,6 +121,35 @@ def test_trivial_flow(env, repo, page, users, config):
        "\n\nSigned-off-by: {reviewer.formatted_email}"\
        .format(repo=repo, reviewer=get_partner(env, users['reviewer']))

    # reverse because the messages are in newest-to-oldest by default
    # (as that's how you want to read them)
    messages = reversed([
        (m.author_id.display_name, m.body, list(zip(
            m.tracking_value_ids.get_old_display_value(),
            m.tracking_value_ids.get_new_display_value(),
        )))
        for m in pr_id.message_ids
    ])

    assert list(messages) == [
        ('OdooBot', '<p>Pull Request created</p>', []),
        ('OdooBot', f'<p>statuses changed on {c1}</p>', [('Opened', 'Validated')]),
        # reviewer approved changing the state and setting reviewer as reviewer
        # plus set merge method
        ('Reviewer', '', [
            ('Validated', 'Ready'),
            ('', 'rebase and merge, using the PR as merge commit message'),
            ('', 'Reviewer'),
        ]),
        # staging succeeded
        (re_matches(r'.*'), f'<p>staging {st.id} succeeded</p>', [
            # set merge date
            (False, pr_id.merge_date + 'Z'),
            # updated state
            ('Ready', 'Merged'),
        ]),
    ]

class TestCommitMessage:
    def test_commit_simple(self, env, repo, users, config):
        """ verify 'closes ...' is correctly added in the commit message
@@ -756,9 +785,17 @@ class TestPREdition:
        ('number', '=', prx.number)
    ]).target == branch_1

    def test_retarget_update_commits(self, env, repo):
        """ Retargeting a PR should update its commits count
    def test_retarget_update_commits(self, env, project, repo):
        """ Retargeting a PR should update its commits count, as well as follow
        the new target's requirements
        """
        project.repo_ids.write({
            'status_ids': [
                (5, 0, 0),
                (0, 0, {'context': 'a', 'branch_filter': [('name', '=', 'master')]}),
                (0, 0, {'context': 'b', 'branch_filter': [('name', '!=', 'master')]}),
            ]
        })
        branch_1 = env['runbot_merge.branch'].create({
            'name': '1.0',
            'project_id': env['runbot_merge.project'].search([]).id,
@@ -767,29 +804,35 @@ class TestPREdition:

        with repo:
            # master is 1 commit ahead of 1.0
            m = repo.make_commit(None, 'initial', None, tree={'m': 'm'})
            repo.make_ref('heads/1.0', m)
            m2 = repo.make_commit(m, 'second', None, tree={'m': 'm2'})
            repo.make_ref('heads/master', m2)
            [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref='heads/1.0')
            [m2] = repo.make_commits(m, Commit('second', tree={'m': 'm2'}), ref='heads/master')

            # the PR builds on master, but is erroneously targeted to 1.0
            c = repo.make_commit(m2, 'first', None, tree={'m': 'm3'})
            prx = repo.make_pr(title='title', body='body', target='1.0', head=c)
            repo.make_commits(m2, Commit('first', tree={'m': 'm3'}), ref='heads/abranch')
            prx = repo.make_pr(title='title', body='body', target='1.0', head='abranch')
            repo.post_status('heads/abranch', 'success', 'a')
        env.run_crons()
        pr = env['runbot_merge.pull_requests'].search([
            ('repository.name', '=', repo.name),
            ('number', '=', prx.number)
        ])
        assert not pr.squash
        assert pr.status == 'pending'
        assert pr.state == 'opened'

        with repo:
            prx.base = 'master'
        assert pr.target == master
        assert pr.squash
        assert pr.status == 'success'
        assert pr.state == 'validated'

        with repo:
            prx.base = '1.0'
        assert pr.target == branch_1
        assert not pr.squash
        assert pr.status == 'pending'
        assert pr.state == 'opened'

        # check if things also work right when modifying the PR then
        # retargeting (don't see why not but...)
@@ -845,6 +888,7 @@ def test_close_staged(env, repo, config, page):
        ('number', '=', prx.number),
    ])
    env.run_crons()
    assert pr.reviewed_by
    assert pr.state == 'ready'
    assert pr.staging_id

@@ -856,6 +900,18 @@ def test_close_staged(env, repo, config, page):
    assert not env['runbot_merge.stagings'].search([])
    assert pr.state == 'closed'
    assert pr_page(page, prx).cssselect('.alert-light')
    assert not pr.reviewed_by

    with repo:
        prx.open()
    assert pr.state == 'validated'
    assert not pr.reviewed_by

    with repo:
        prx.post_comment('hansen r+', config['role_reviewer']['token'])
    assert pr.reviewed_by
    pr.write({'closed': True})
    assert not pr.reviewed_by

def test_forward_port(env, repo, config):
    with repo:
@ -2130,23 +2186,28 @@ class TestPRUpdate(object):
|
||||
repo.update_ref(prx.ref, c2, force=True)
|
||||
assert pr.head == c2
|
||||
|
||||
def test_reopen_update(self, env, repo):
|
||||
def test_reopen_update(self, env, repo, config):
|
||||
with repo:
|
||||
m = repo.make_commit(None, 'initial', None, tree={'m': 'm'})
|
||||
repo.make_ref('heads/master', m)
|
||||
|
||||
c = repo.make_commit(m, 'fist', None, tree={'m': 'c1'})
|
||||
prx = repo.make_pr(title='title', body='body', target='master', head=c)
|
||||
prx.post_comment("hansen r+", config['role_reviewer']['token'])
|
||||
|
||||
pr = to_pr(env, prx)
|
||||
assert pr.state == 'approved'
|
||||
assert pr.reviewed_by
|
||||
with repo:
|
||||
prx.close()
|
||||
assert pr.state == 'closed'
|
||||
assert pr.head == c
|
||||
assert not pr.reviewed_by
|
||||
|
||||
with repo:
|
||||
prx.open()
|
||||
assert pr.state == 'opened'
|
||||
assert not pr.reviewed_by
|
||||
|
||||
with repo:
|
||||
c2 = repo.make_commit(c, 'first', None, tree={'m': 'cc'})
|
||||
@ -2393,6 +2454,7 @@ class TestPRUpdate(object):
|
||||
env.run_crons('runbot_merge.process_updated_commits')
|
||||
assert pr_id.message == 'title\n\nbody'
|
||||
assert pr_id.state == 'ready'
|
||||
old_reviewer = pr_id.reviewed_by
|
||||
|
||||
# TODO: find way to somehow skip / ignore the update_ref?
|
||||
with repo:
|
||||
@ -2413,10 +2475,12 @@ class TestPRUpdate(object):
|
||||
# in a "ready" state
|
||||
pr_id.write({
|
||||
'head': c,
|
||||
'state': 'ready',
|
||||
'reviewed_by': old_reviewer.id,
|
||||
'message': "Something else",
|
||||
'target': other.id,
|
||||
})
|
||||
assert pr_id.head == c
|
||||
assert pr_id.state == "ready"
|
||||
|
||||
env.run_crons()
|
||||
|
||||
@ -2425,8 +2489,8 @@ class TestPRUpdate(object):
|
||||
assert pr_id.head == c2
|
||||
assert pr_id.message == 'title\n\nbody'
|
||||
assert pr_id.target.name == 'master'
|
||||
assert pr.comments[-1]['body'] == """\
|
||||
@{} @{} we apparently missed updates to this PR and tried to stage it in a state \
|
||||
assert pr.comments[-1]['body'] == f"""\
|
||||
@{users['user']} we apparently missed updates to this PR and tried to stage it in a state \
|
||||
which might not have been approved.
|
||||
|
||||
The properties Head, Target, Message were not correctly synchronized and have been updated.
|
||||
@ -2435,8 +2499,8 @@ The properties Head, Target, Message were not correctly synchronized and have be
|
||||
|
||||
```diff
|
||||
Head:
|
||||
- {}
|
||||
+ {}
|
||||
- {c}
|
||||
+ {c2}
|
||||
|
||||
Target branch:
|
||||
- somethingelse
|
||||
@ -2454,7 +2518,7 @@ The properties Head, Target, Message were not correctly synchronized and have be
|
||||
Note that we are unable to check the properties Merge Method, Overrides, Draft.
|
||||
|
||||
Please check and re-approve.
|
||||
""".format(users['user'], users['reviewer'], c, c2)
|
||||
"""
|
||||
|
||||
# if the head commit doesn't change, that part should still be valid
|
||||
with repo:
|
||||
@ -2465,8 +2529,8 @@ Please check and re-approve.
|
||||
|
||||
assert pr_id.message == 'title\n\nbody'
|
||||
assert pr_id.state == 'validated'
|
||||
assert pr.comments[-1]['body'] == """\
|
||||
@{} @{} we apparently missed updates to this PR and tried to stage it in a state \
|
||||
assert pr.comments[-1]['body'] == f"""\
|
||||
@{users['user']} we apparently missed updates to this PR and tried to stage it in a state \
|
||||
which might not have been approved.
|
||||
|
||||
The properties Message were not correctly synchronized and have been updated.
|
||||
@ -2486,11 +2550,11 @@ The properties Message were not correctly synchronized and have been updated.
|
||||
Note that we are unable to check the properties Merge Method, Overrides, Draft.
|
||||
|
||||
Please check and re-approve.
|
||||
""".format(users['user'], users['reviewer'])
|
||||
"""
|
||||
|
||||
pr_id.write({
|
||||
'head': c,
|
||||
'state': 'ready',
|
||||
                'reviewed_by': old_reviewer.id,
                'message': "Something else",
                'target': other.id,
                'draft': True,
@@ -2695,6 +2759,9 @@ class TestBatching(object):
    def test_batching_pressing(self, env, repo, config):
        """ "Pressing" PRs should be selected before normal & batched together
        """
        # by limiting the batch size to 3 we allow both high-priority PRs, but
        # a single normal priority one
        env['runbot_merge.project'].search([]).batch_limit = 3
        with repo:
            m = repo.make_commit(None, 'initial', None, tree={'a': 'some content'})
            repo.make_ref('heads/master', m)
@@ -2704,51 +2771,47 @@ class TestBatching(object):

            pr11 = self._pr(repo, 'Pressing1', [{'x': 'x'}, {'y': 'y'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
            pr12 = self._pr(repo, 'Pressing2', [{'z': 'z'}, {'zz': 'zz'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
            pr11.post_comment('hansen priority=1', config['role_reviewer']['token'])
            pr12.post_comment('hansen priority=1', config['role_reviewer']['token'])

        pr21, pr22, pr11, pr12 = prs = [to_pr(env, pr) for pr in [pr21, pr22, pr11, pr12]]
        assert pr21.priority == pr22.priority == 2
        assert pr11.priority == pr12.priority == 1

            pr11.post_comment('hansen priority', config['role_reviewer']['token'])
            pr12.post_comment('hansen priority', config['role_reviewer']['token'])
        # necessary to project commit statuses onto PRs
        env.run_crons()

        pr21, pr22, pr11, pr12 = prs = [to_pr(env, pr) for pr in [pr21, pr22, pr11, pr12]]
        assert pr11.priority == pr12.priority == 'priority'
        assert pr21.priority == pr22.priority == 'default'
        assert all(pr.state == 'ready' for pr in prs)
        assert not pr21.staging_id

        staging = ensure_one(env['runbot_merge.stagings'].search([]))
        assert staging.pr_ids == pr11 | pr12 | pr21
        assert not pr22.staging_id
        assert pr11.staging_id
        assert pr12.staging_id
        assert pr11.staging_id == pr12.staging_id
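The selection behaviour `test_batching_pressing` asserts (both `priority` PRs staged first, the remaining slot filled from `default` up to `batch_limit`) can be sketched as follows. This is a hypothetical standalone model for illustration, not the mergebot's actual staging code; the `select_for_staging` helper and dict shape are invented here.

```python
# Hypothetical sketch of the staging selection exercised above:
# 'priority' PRs are picked first; if they don't fill the staging,
# the remainder is topped up with 'default' PRs.
def select_for_staging(prs, batch_limit):
    # stable sort: 'priority' PRs (key False) come before 'default' (key True)
    ranked = sorted(prs, key=lambda pr: pr["priority"] != "priority")
    return ranked[:batch_limit]

prs = [
    {"name": "pr21", "priority": "default"},
    {"name": "pr22", "priority": "default"},
    {"name": "pr11", "priority": "priority"},
    {"name": "pr12", "priority": "priority"},
]
staged = select_for_staging(prs, batch_limit=3)
# matches the test's `staging.pr_ids == pr11 | pr12 | pr21`
assert [p["name"] for p in staged] == ["pr11", "pr12", "pr21"]
```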
    def test_batching_urgent(self, env, repo, config):
        with repo:
            m = repo.make_commit(None, 'initial', None, tree={'a': 'some content'})
            repo.make_ref('heads/master', m)

            pr21 = self._pr(repo, 'PR1', [{'a': 'AAA'}, {'b': 'BBB'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
            pr22 = self._pr(repo, 'PR2', [{'c': 'CCC'}, {'d': 'DDD'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])

            pr11 = self._pr(repo, 'Pressing1', [{'x': 'x'}, {'y': 'y'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
            pr12 = self._pr(repo, 'Pressing2', [{'z': 'z'}, {'zz': 'zz'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])
            pr11.post_comment('hansen priority=1', config['role_reviewer']['token'])
            pr12.post_comment('hansen priority=1', config['role_reviewer']['token'])
            pr11.post_comment('hansen NOW', config['role_reviewer']['token'])
            pr12.post_comment('hansen NOW', config['role_reviewer']['token'])

        # stage PR1
        # stage current PRs
        env.run_crons()
        p_11, p_12, p_21, p_22 = \
            [to_pr(env, pr) for pr in [pr11, pr12, pr21, pr22]]
        assert not p_21.staging_id or p_22.staging_id
        assert p_11.staging_id and p_12.staging_id
        assert p_11.staging_id == p_12.staging_id
        staging_1 = p_11.staging_id
        p_11, p_12 = \
            [to_pr(env, pr) for pr in [pr11, pr12]]
        sm_all = p_11 | p_12
        staging_1 = sm_all.staging_id
        assert staging_1
        assert len(staging_1) == 1

        # no statuses run on PR0s
        with repo:
            pr01 = self._pr(repo, 'Urgent1', [{'n': 'n'}, {'o': 'o'}], user=config['role_user']['token'], reviewer=None, statuses=[])
            pr01.post_comment('hansen priority=0 rebase-merge', config['role_reviewer']['token'])
            pr01.post_comment('hansen NOW! rebase-merge', config['role_reviewer']['token'])
        p_01 = to_pr(env, pr01)
        assert p_01.state == 'opened'
        assert p_01.priority == 0
        assert p_01.state == 'approved'
        assert p_01.priority == 'alone'
        assert p_01.skipchecks == True

        env.run_crons()
        # first staging should be cancelled and PR0 should be staged
@@ -2757,8 +2820,89 @@ class TestBatching(object):
        assert not p_11.staging_id and not p_12.staging_id
        assert p_01.staging_id

        # make the staging fail
        with repo:
            repo.post_status('staging.master', 'failure', 'ci/runbot')
        env.run_crons()
        assert p_01.state == 'error'
        assert not p_01.staging_id.active
        staging_2 = ensure_one(sm_all.staging_id)
        assert staging_2 != staging_1

        with repo:
            pr01.post_comment('hansen retry', config['role_reviewer']['token'])
        env.run_crons()
        # retry should have re-triggered cancel-staging
        assert not staging_2.active
        assert p_01.staging_id.active

        # make the staging fail again
        with repo:
            repo.post_status('staging.master', 'failure', 'ci/runbot')
        env.run_crons()

        assert not p_01.staging_id.active
        assert p_01.state == 'error'
        staging_3 = ensure_one(sm_all.staging_id)
        assert staging_3 != staging_2

        # check that updating the PR resets it to ~ready
        with repo:
            repo.make_commits(
                'heads/master',
                Commit("urgent+", tree={'y': 'es'}),
                ref="heads/Urgent1",
            )
        env.run_crons()
        assert not staging_3.active
        assert p_01.state == 'opened'
        assert p_01.priority == 'alone'
        assert p_01.skipchecks == True
        assert p_01.staging_id.active

        # r- should unstage, re-enable the checks and switch off staging
        # cancellation, but leave the priority
        with repo:
            pr01.post_comment("hansen r-", config['role_reviewer']['token'])
        env.run_crons()

        staging_4 = ensure_one(sm_all.staging_id)
        assert staging_4 != staging_3

        assert not p_01.staging_id.active
        assert p_01.state == 'opened'
        assert p_01.priority == 'alone'
        assert p_01.skipchecks == False
        assert p_01.cancel_staging == False

        p_01.cancel_staging = True
        # FIXME: cancel_staging should only cancel when the PR is or
        # transitions to ready
        # assert staging_4.active
        # re-staging, should not be necessary
        env.run_crons()

        staging_5 = ensure_one(sm_all.staging_id)
        assert staging_5.active
        # cause the PR to become ready the normal way
        with repo:
            pr01.post_comment("hansen r+", config['role_reviewer']['token'])
            repo.post_status(p_01.head, 'success', 'legal/cla')
            repo.post_status(p_01.head, 'success', 'ci/runbot')
        env.run_crons()

        # a cancel_staging pr becoming ready should have cancelled the staging,
        # and because the PR is `alone` it should... have been restaged alone,
        # without the ready non-alone PRs
        assert not sm_all.staging_id.active
        assert p_01.staging_id.active
        assert p_01.state == 'ready'
        assert p_01.priority == 'alone'
        assert p_01.skipchecks == False
        assert p_01.cancel_staging == True
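The `alone` mode asserted above (the urgent PR restaged by itself, "without the ready non-alone PRs") can be sketched as a small standalone model. This is an illustration of the semantics described in the commit message, not the mergebot's implementation; `select_for_staging` and the dict shape are invented here.

```python
# Hypothetical sketch of the 'alone' staging mode: when any ready PR is
# 'alone', the staging is built from 'alone' PRs only; otherwise all
# ready PRs are eligible.
def select_for_staging(ready_prs):
    alone = [p for p in ready_prs if p["priority"] == "alone"]
    return alone if alone else ready_prs

ready = [
    {"name": "p_01", "priority": "alone"},
    {"name": "p_11", "priority": "priority"},
    {"name": "p_21", "priority": "default"},
]
# only the 'alone' PR makes it into the staging
assert [p["name"] for p in select_for_staging(ready)] == ["p_01"]
```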
    def test_batching_urgenter_than_split(self, env, repo, config):
        """ p=0 PRs should take priority over split stagings (processing
        """ p=alone PRs should take priority over split stagings (processing
        of a staging having CI-failed and being split into sub-stagings)
        """
        with repo:
@@ -2789,7 +2933,7 @@ class TestBatching(object):
        # during restaging of pr1, create urgent PR
        with repo:
            pr0 = self._pr(repo, 'urgent', [{'a': 'a', 'b': 'b'}], user=config['role_user']['token'], reviewer=None, statuses=[])
            pr0.post_comment('hansen priority=0', config['role_reviewer']['token'])
            pr0.post_comment('hansen NOW!', config['role_reviewer']['token'])
        env.run_crons()

        # TODO: maybe just deactivate stagings instead of deleting them when canceling?
@@ -2810,7 +2954,7 @@ class TestBatching(object):
        # no statuses run on PR0s
        with repo:
            pr01 = self._pr(repo, 'Urgent1', [{'n': 'n'}, {'o': 'o'}], user=config['role_user']['token'], reviewer=None, statuses=[])
            pr01.post_comment('hansen priority=0', config['role_reviewer']['token'])
            pr01.post_comment('hansen NOW!', config['role_reviewer']['token'])
        p_01 = to_pr(env, pr01)
        p_01.state = 'error'

@@ -2871,7 +3015,7 @@ class TestBatching(object):
        env.run_crons('runbot_merge.process_updated_commits', 'runbot_merge.merge_cron', 'runbot_merge.staging_cron')
        assert pr2.state == 'merged'

class TestReviewing(object):
class TestReviewing:
    def test_reviewer_rights(self, env, repo, users, config):
        """Only users with review rights will have their r+ (and other
        attributes) taken in account
@@ -3048,23 +3192,23 @@ class TestReviewing(object):
        ])

        with repo:
            prx.post_review('COMMENT', "hansen priority=1", config['role_reviewer']['token'])
        assert pr.priority == 1
            prx.post_review('COMMENT', "hansen priority", config['role_reviewer']['token'])
        assert pr.priority == 'priority'
        assert pr.state == 'opened'

        with repo:
            prx.post_review('APPROVE', "hansen priority=2", config['role_reviewer']['token'])
        assert pr.priority == 2
            prx.post_review('APPROVE', "hansen default", config['role_reviewer']['token'])
        assert pr.priority == 'default'
        assert pr.state == 'opened'

        with repo:
            prx.post_review('REQUEST_CHANGES', 'hansen priority=1', config['role_reviewer']['token'])
        assert pr.priority == 1
            prx.post_review('REQUEST_CHANGES', 'hansen priority', config['role_reviewer']['token'])
        assert pr.priority == 'priority'
        assert pr.state == 'opened'

        with repo:
            prx.post_review('COMMENT', 'hansen r+', config['role_reviewer']['token'])
        assert pr.priority == 1
        assert pr.priority == 'priority'
        assert pr.state == 'approved'

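The commit message says the reworked command set now hard-errors: a line containing any unknown or invalid token is rejected wholesale instead of having its non-command words silently ignored. A minimal sketch of that parsing rule, under the assumption of a small invented command vocabulary (`parse_commands` and `KNOWN` are hypothetical, not the mergebot's API):

```python
# Hypothetical sketch of "hard error" command parsing: one unknown
# token invalidates the entire command line, so prose mixed into a
# command line can no longer be partially interpreted.
KNOWN = {"r+", "r-", "retry", "default", "priority", "alone",
         "skipchecks", "cancel", "rebase-merge"}

def parse_commands(line):
    tokens = line.split()
    unknown = [t for t in tokens if t not in KNOWN]
    if unknown:
        # reject everything, execute nothing
        raise ValueError(f"unknown command(s): {', '.join(unknown)}")
    return tokens

assert parse_commands("r+ priority") == ["r+", "priority"]
```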
    def test_no_email(self, env, repo, users, config, partners):
@@ -3101,6 +3245,28 @@ class TestReviewing(object):
        env.run_crons()
        assert to_pr(env, pr).state == 'approved'

    def test_skipchecks(self, env, repo, users, config):
        """Skipcheck makes the PR immediately ready (if it's not in error or
        something)
        """
        with repo:
            [m, _] = repo.make_commits(
                None,
                Commit("initial", tree={'m': 'm'}),
                Commit("second", tree={"m2": "m2"}),
                ref="heads/master"
            )

            [c1] = repo.make_commits(m, Commit('first', tree={'m': 'c1'}))
            pr = repo.make_pr(title='title', target='master', head=c1)
            pr.post_comment('hansen skipchecks', config['role_reviewer']['token'])
        env.run_crons()

        pr_id = to_pr(env, pr)
        # assert pr_id.state == 'ready'
        assert not pr_id.blocked
        # since the pr is not blocked it should have been staged by the relevant cron
        assert pr_id.staging_id

class TestUnknownPR:
    """ Sync PRs initially looked excellent but aside from the v4 API not
@@ -3157,7 +3323,7 @@ class TestUnknownPR:
            (users['reviewer'], 'hansen r+'),
            (users['reviewer'], 'hansen r+'),
            seen(env, prx, users),
            (users['user'], f"@{users['user']} @{users['reviewer']} I didn't know about this PR and had to "
            (users['user'], f"@{users['user']} I didn't know about this PR and had to "
                            "retrieve its information, you may have to "
                            "re-approve it as I didn't see previous commands."),
        ]
@@ -3213,7 +3379,7 @@ class TestUnknownPR:
        # reviewer is set because fetch replays all the comments (thus
        # setting r+ and reviewer) but then syncs the head commit thus
        # unsetting r+ but leaving the reviewer
            (users['user'], f"@{users['user']} @{users['reviewer']} I didn't know about this PR and had to retrieve "
            (users['user'], f"@{users['user']} I didn't know about this PR and had to retrieve "
                            "its information, you may have to re-approve it "
                            "as I didn't see previous commands."),
        ]
@@ -3576,41 +3742,6 @@ class TestRMinus:
        assert pr2.state == 'validated', "state should have been reset"
        assert not env['runbot_merge.split'].search([]), "there should be no split left"

    def test_rminus_p0(self, env, repo, config, users):
        """ In and of itself r- doesn't do anything on p=0 since they bypass
        approval, so unstage and downgrade to p=1.
        """

        with repo:
            m = repo.make_commit(None, 'initial', None, tree={'m': 'm'})
            repo.make_ref('heads/master', m)

            c = repo.make_commit(m, 'first', None, tree={'m': 'c'})
            prx = repo.make_pr(title='title', body=None, target='master', head=c)
            repo.post_status(prx.head, 'success', 'ci/runbot')
            repo.post_status(prx.head, 'success', 'legal/cla')
            prx.post_comment('hansen p=0', config['role_reviewer']['token'])
        env.run_crons()

        pr = env['runbot_merge.pull_requests'].search([
            ('repository.name', '=', repo.name),
            ('number', '=', prx.number),
        ])
        assert pr.priority == 0
        assert pr.staging_id

        with repo:
            prx.post_comment('hansen r-', config['role_reviewer']['token'])
        env.run_crons()
        assert not pr.staging_id, "pr should have been unstaged"
        assert pr.priority == 1, "priority should have been downgraded"
        assert prx.comments == [
            (users['reviewer'], 'hansen p=0'),
            seen(env, prx, users),
            (users['reviewer'], 'hansen r-'),
            (users['user'], "PR priority reset to 1, as pull requests with priority 0 ignore review state."),
        ]

class TestComments:
    def test_address_method(self, repo, env, config):
        with repo:
@@ -5,6 +5,8 @@ source branches).
When preparing a staging, we simply want to ensure branch-matched PRs
are staged concurrently in all repos
"""
import functools
import operator
import time
import xmlrpc.client

@@ -775,8 +777,8 @@ class TestMultiBatches:
        prs[2][0] | prs[2][1] | prs[3][0] | prs[3][1] | prs[4][0]

def test_urgent(env, repo_a, repo_b, config):
    """ Either PR of a co-dependent pair being p=0 leads to the entire pair
    being prioritized
    """ Either PR of a co-dependent pair being prioritised leads to the entire
    pair being prioritized
    """
    with repo_a, repo_b:
        make_branch(repo_a, 'master', 'initial', {'a0': 'a'})
@@ -784,19 +786,31 @@ def test_urgent(env, repo_a, repo_b, config):

        pr_a = make_pr(repo_a, 'batch', [{'a1': 'a'}, {'a2': 'a'}], user=config['role_user']['token'], reviewer=None, statuses=[])
        pr_b = make_pr(repo_b, 'batch', [{'b1': 'b'}, {'b2': 'b'}], user=config['role_user']['token'], reviewer=None, statuses=[])
        pr_c = make_pr(repo_a, 'C', [{'c1': 'c', 'c2': 'c'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'],)
        pr_c = make_pr(repo_a, 'C', [{'c1': 'c', 'c2': 'c'}], user=config['role_user']['token'], reviewer=config['role_reviewer']['token'])

        pr_a.post_comment('hansen rebase-merge', config['role_reviewer']['token'])
        pr_b.post_comment('hansen rebase-merge p=0', config['role_reviewer']['token'])
        pr_b.post_comment('hansen rebase-merge alone skipchecks', config['role_reviewer']['token'])
    env.run_crons()
    # should have batched pr_a and pr_b despite neither being reviewed or
    # approved
    p_a, p_b = to_pr(env, pr_a), to_pr(env, pr_b)
    p_c = to_pr(env, pr_c)

    p_a, p_b, p_c = to_pr(env, pr_a), to_pr(env, pr_b), to_pr(env, pr_c)
    assert not p_a.blocked
    assert not p_b.blocked

    assert p_a.staging_id and p_b.staging_id and p_a.staging_id == p_b.staging_id,\
        "a and b should be staged despite neither being reviewed or approved"
    assert p_a.batch_id and p_b.batch_id and p_a.batch_id == p_b.batch_id,\
        "a and b should have been recognised as co-dependent"
    assert not p_c.staging_id

    with repo_a:
        pr_a.post_comment('hansen r-', config['role_reviewer']['token'])
    env.run_crons()
    assert not p_b.staging_id.active, "should be unstaged"
    assert p_b.priority == 'alone', "priority should not be affected anymore"
    assert not p_b.skipchecks, "r- of linked pr should have un-skipcheck-ed this one"
    assert p_a.blocked
    assert p_b.blocked

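`test_urgent` relies on co-dependent batching: PRs sharing a branch label across repositories form one batch, and batch-level flags such as `skipchecks` make every PR in the batch ready. A minimal sketch of that rule, with an invented `batch_ready` helper and dict shape (not the mergebot's actual model):

```python
# Hypothetical sketch: skipchecks overrides both statuses and approval
# for the whole batch; otherwise every PR must be approved with CI green.
def batch_ready(batch):
    if any(pr.get("skipchecks") for pr in batch):
        return True
    return all(pr["approved"] and pr["ci_ok"] for pr in batch)

# pr_b's `skipchecks` makes its co-dependent pr_a stageable too
batch = [
    {"repo": "repo_a", "approved": False, "ci_ok": False},
    {"repo": "repo_b", "approved": False, "ci_ok": False, "skipchecks": True},
]
assert batch_ready(batch)
```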
class TestBlocked:
    def test_merge_method(self, env, repo_a, config):
        with repo_a:
@@ -854,8 +868,8 @@ class TestBlocked:
    def test_linked_unready(self, env, repo_a, repo_b, config):
        """ Create a PR A linked to a non-ready PR B,
        * A is blocked by default
        * A is not blocked if A.p=0
        * A is not blocked if B.p=0
        * A is not blocked if A.skipci
        * A is not blocked if B.skipci
        """
        with repo_a, repo_b:
            make_branch(repo_a, 'master', 'initial', {'a0': 'a'})
@@ -868,13 +882,11 @@ class TestBlocked:
        pr_a = to_pr(env, a)
        assert pr_a.blocked

        with repo_a: a.post_comment('hansen p=0', config['role_reviewer']['token'])
        with repo_a: a.post_comment('hansen skipchecks', config['role_reviewer']['token'])
        assert not pr_a.blocked
        pr_a.skipchecks = False

        with repo_a: a.post_comment('hansen p=2', config['role_reviewer']['token'])
        assert pr_a.blocked

        with repo_b: b.post_comment('hansen p=0', config['role_reviewer']['token'])
        with repo_b: b.post_comment('hansen skipchecks', config['role_reviewer']['token'])
        assert not pr_a.blocked

def test_different_branches(env, project, repo_a, repo_b, config):
@@ -1126,6 +1138,8 @@ def test_freeze_complete(env, project, repo_a, repo_b, repo_c, users, config):
    * check that freeze goes through
    * check that reminder is shown
    * check that new branches are created w/ correct parent & commit info
    * check that PRs (freeze and bump) are part of synthetic stagings so
      they're correctly accounted for in the change history
    """
    project.freeze_reminder = "Don't forget to like and subscribe"

@@ -1215,22 +1229,25 @@ def test_freeze_complete(env, project, repo_a, repo_b, repo_c, users, config):
    assert r['res_model'] == 'runbot_merge.project'
    assert r['res_id'] == project.id

    release_pr_ids = functools.reduce(operator.add, release_prs.values())
    # stuff that's done directly
    for pr_id in release_prs.values():
        assert pr_id.state == 'merged'
    assert all(pr_id.state == 'merged' for pr_id in release_pr_ids)
    assert pr_bump_id.state == 'merged'

    # stuff that's behind a cron
    env.run_crons()

    # check again to be sure
    assert all(pr_id.state == 'merged' for pr_id in release_pr_ids)
    assert pr_bump_id.state == 'merged'

    assert pr_rel_a.state == "closed"
    assert pr_rel_a.base['ref'] == '1.1'
    assert pr_rel_b.state == "closed"
    assert pr_rel_b.base['ref'] == '1.1'
    assert pr_rel_c.state == "closed"
    assert pr_rel_c.base['ref'] == '1.1'
    for pr_id in release_prs.values():
        assert pr_id.target.name == '1.1'
    assert all(pr_id.target.name == '1.1' for pr_id in release_pr_ids)

    assert pr_bump_a.state == 'closed'
    assert pr_bump_a.base['ref'] == 'master'
@@ -252,3 +252,27 @@ def test_merge_emptying_commits(env, project, make_repo, setreviewers, users, co
    assert pr3.comments[3:] == [
        (users['user'], f"{ping} unable to stage: results in an empty tree when merged, might be the duplicate of a merged PR.")
    ]

def test_force_ready(env, make_repo, project, setreviewers, config):
    repo = make_repo('repo')
    project.write({'repo_ids': [(0, 0, {
        'name': repo.name,
        'group_id': False,
        'required_statuses': 'default',
    })]})
    setreviewers(*project.repo_ids)

    with repo:
        [m] = repo.make_commits(None, Commit('initial', tree={'m': 'm'}), ref="heads/master")

        [c] = repo.make_commits(m, Commit('first', tree={'m': 'c1'}), ref="heads/other")
        pr = repo.make_pr(title='title', body='body', target='master', head=c)
    env.run_crons()

    pr_id = to_pr(env, pr)
    pr_id.state = 'ready'

    assert pr_id.state == 'ready'
    reviewer = env['res.users'].browse([env._uid]).partner_id
    assert pr_id.reviewed_by == reviewer
    assert pr_id.overrides
runbot_merge/tests/test_project_toggles.py (new file)
@@ -129,35 +129,77 @@
                <h1>
                    <field name="repository"/>#<field name="number"/>
                </h1>
                <h2>
                    <field name="state"/>
                    <span attrs="{'invisible': [('blocked', '=', False)]}">
                        (<field name="blocked"/>)
                    </span>
                </h2>
            </div>
            <!-- massively impactful status items -->
            <group>
                <group>
                    <group>
                        <field name="skipchecks" widget="boolean_toggle"/>
                    </group>
                    <group>
                        <field name="cancel_staging" widget="boolean_toggle"/>
                    </group>
                </group>
            <!-- main PR metadata -->
            <group>
                <group name="metadata">
                    <field name="target"/>
                    <field name="state"/>
                    <field name="author"/>
                </group>
                <group>
                    <field name="label"/>
                    <field name="author"/>
                    <field name="head"/>
                </group>
                <group name="merging">
                    <field name="priority"/>
                    <field name="merge_method"/>
                    <field name="squash"/>
                    <field name="draft"/>
                    <field string="Up To" name="limit_id"/>
                    <field string="Forward-Port Policy" name="fw_policy"/>
                </group>
            </group>
            <group>
                <group colspan="4">
                    <field name="head"/>
                    <field name="statuses"/>
            <!-- influencers -->
            <group string="State">
                <group>
                    <field name="merge_date"/>
                    <field name="reviewed_by"/>
                    <field name="error"/>
                    <field name="closed"/>
                </group>
                <group colspan="4">
                <group name="status">
                    <field name="status"/>
                    <field name="statuses"/>
                    <field name="overrides"/>
                </group>
            </group>
            <group>
                <group colspan="4" string="Message">
            <group string="Forward Porting">
                <group>
                    <field string="Original PR" name="source_id"/>
                    <field name="parent_id"/>
                    <field
                        attrs="{'invisible': [('parent_id', '=', False), ('source_id', '!=', False)]}"
                        string="Detached because" name="detach_reason" readonly="1"/>
                </group>
                <group>
                    <field name="forwardport_ids" nolabel="1" readonly="True">
                        <tree>
                            <field name="target" string="Branch"/>
                            <field name="number"/>
                        </tree>
                    </field>
                </group>
            </group>
            <group string="Message">
                <group colspan="4">
                    <field name="message" nolabel="1"/>
                </group>
            </group>
            <group>
                <group colspan="4" string="Delegates">
            <group string="Delegates">
                <group colspan="4">
                    <field name="delegates" nolabel="1">
                        <tree>
                            <button type="object" name="get_formview_action" icon="fa-external-link"/>
@@ -168,6 +210,10 @@
                        </group>
                    </group>
                </sheet>
                <div class="oe_chatter">
                    <field name="message_follower_ids" widget="mail_followers"/>
                    <field name="message_ids" widget="mail_thread"/>
                </div>
            </form>
        </field>
    </record>
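The `fw_policy` field surfaced in the form view above takes the three choices described in the commit message (`default`, `skipci`, `skipmerge`). Their effect on when forward ports are created can be sketched as follows; this is a hypothetical standalone model with an invented `forward_port_trigger` helper, not the mergebot's code:

```python
# Hypothetical sketch of the forward-port policy choices:
# - default:   wait for merge, and for required statuses before each next port
# - skipci:    wait for merge, then create all forward ports immediately
# - skipmerge: create all forward ports without waiting for the merge at all
def forward_port_trigger(policy, merged, ci_passed):
    if policy == "skipmerge":
        return True
    if policy == "skipci":
        return merged
    return merged and ci_passed  # "default"

assert forward_port_trigger("skipmerge", merged=False, ci_passed=False)
```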