- Prefix branch names with their project, as branches can be hard to
  differentiate when filtering by them or trying to set targets, given
  some names are extremely common (e.g. `master`/`main`) and not all
  fields are filtered by project (or even can be).
- Add a proper menu item and list view for batches, maybe it'll be of
use one day.
- Promote label in PR search, as it's more likely to be needed than
  author or target.
- Put PRs first in the mergebot menu, as they're *by far* the most
  likely item to look for, unless one is looking for a staging in
  order to cancel it.
Previously, PR descriptions were displayed as raw text in the PR
dashboard. While not wrong per se, this was pretty ugly and not always
convenient, as e.g. links had to be copied by hand.
Push descriptions through pymarkdown for rendering, with a few
customisations:
- Enabled footnotes, tables, and fenced code blocks because GFM has
  those. This doesn't quite put pymarkdown's base behaviour on par
  with GFM (and py-gfm ultimately gave up on that effort, moving to
  just wrapping github's own markdown renderer instead).
- Don't allow raw HTML, as it's too much of a hassle to do correctly,
  and very few people ever use it (mostly me, I think).
- Added a bespoke handler / renderer for github-style references (a
  sketch of the wiring follows this list).
  Note: it uses positional captures because it started that way, and
  named captures are not removed from the positional sequence, so
  mixing and matching is not very useful; plus python does not
  support identically named groups (even mutually exclusive ones), so
  all 4 repo captures and all 3 issue number captures would need
  different names...
- And added a second bespoke handler for our own opw/issue references
leading to odoo.com, that's something we can't do via github[^1] so
it's a genuine value-add.
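Roughly how such a handler plugs into python-markdown's extension
API; the pattern, URL, and names below are illustrative, not the
actual implementation:

```python
from xml.etree import ElementTree as etree

import markdown
from markdown.extensions import Extension
from markdown.inlinepatterns import InlineProcessor

class GithubRefProcessor(InlineProcessor):
    """Turns `#123`-style references into links (URL is illustrative)."""
    def handleMatch(self, m, data):
        el = etree.Element('a')
        el.set('href', f'https://github.com/odoo/odoo/issues/{m.group(1)}')
        el.text = m.group(0)
        return el, m.start(0), m.end(0)

class GithubRefs(Extension):
    def extendMarkdown(self, md):
        # priority 175 slots the pattern ahead of most built-ins
        md.inlinePatterns.register(
            GithubRefProcessor(r'#(\d+)', md), 'github-ref', 175)

renderer = markdown.Markdown(
    extensions=['footnotes', 'tables', 'fenced_code', GithubRefs()],
)
print(renderer.convert("See #42 for details."))
```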
Fixes #889
[^1]: github can do it (though possibly not with the arbitrary
unspecified nonsense I got when I tried to list some of the
reference styles, some folks need therapy), but it's not available
on our plan
I thought I'd removed the error message when approving an already
approved PR but apparently not?
However we can improve the message in that specific case, to make the
expected operation clearer.
Fixes #906
The goal is to reduce maintenance and odd disk interactions &
concurrency issues, by not creating concurrent clones, not having to
push forks back into the repository, etc. It also removes the need to
clean up "scratch" working copies, though that doesn't look to have
been an issue in a while.
The work is done on isolated objects without using or mutating refs,
so even concurrent work should not be a problem.
This turns out to not be any more verbose (less so if anything) than
using `cherry-pick`, as that is not really designed for scripted /
non-interactive use, or for squashing commits thereafter. Working
directly with trees and commits is quite a bit cleaner even without a
ton of helpers.
Much of the credit goes to Julia Evans for [their investigation of
3-way merges as the underpinnings of cherry-picking][3-way merge],
this would have been a lot more difficult if I'd had to rediscover the
merge-base trick independently.
A few things have been changed by this:
- The old trace/stderr from cherry-pick has disappeared, as it was
  generated by cherry-pick itself; for non-interactive use it's kinda
  useless anyway, so I probably should have looked into removing it
  earlier (I think its main use was investigating the inflateinit
  issue).
- Erroring on emptied commits has to be hand-rolled, as `merge-tree`
  couldn't care less; this is not hard but is a bit annoying.
- `merge-tree`'s conflict information only references raw commits,
  which makes sense, but requires updating a bunch of tests. Then
  again, so does the fact that it *usually* doesn't send anything to
  stderr, so most of the stderr expectations are going away.
Conveniently, `merge-tree` merges the conflict markers directly into
the files / tree, so we don't have to mess about moving them back out
of the repository and into the working copy as I assume cherry-pick
does, which means we don't have to try and commit them back in
either. That is a huge part of the gain over faffing about with the
working copy.
Fixes #847
[3-way merge]: https://jvns.ca/blog/2023/11/10/how-cherry-pick-and-revert-work/
After seeing it be used, I foresee confusion around the current
behaviour (where it sets the limit), as one would expect the `fw=`
flags to affect one another when it looks like that would make sense:
e.g. no/default/skipci/skipmerge all specify how to forward-port, so
`fw=default` not doing anything after you've said `fw=no` (possibly
by mistake) would be fucking weird.
Also since the author can set limits, allow them to reset the fw
policy to default (keep skipci for reviewers), and for @d-fence add a
`fw=disabled` alias.
Fixes #902
Although the handling of forward ports on disabled branches was
improved in 94fe0329b4 in order to avoid
losing them or needing to port them manually, because it goes through
`_schedule_fw_followup` some of the checks *that method* performs were
missed, most notably that it only ports batches when no PRs are
detached.
This is an issue if we need to force the port because of a branch
being deactivated: the forward-port could have stopped there due to a
conflict, in which case it's always going to be detached.
Thus the `force_fw` flag should also override the parenting state
check.
Also while at it, make `force_fw` a regular flag; I don't understand
why I made it a context value in the first place, as it's only passed
from one location, and that's directly calling the one function which
uses it...
Fixes #897
If a PR is cancel=staging, even if it's not the most
urgent (priority=alone), odds are good it's being staged to fix the
split. And even if it's not, it probably can't hurt.
So cancel splits in order to stage it. This may be slightly harmful
if the split is legit and has nothing to do with the PR being
prioritised, but that seems like the less likely scenario. And having
to update staging priorities on the fly seems like a bad idea. Though
obviously it might do nothing if the PRs are in "default" priority.
Also simplify the unstage trigger from the PRs becoming ready:
- the user is useless as it's always the system user
- the batch id is not really helpful
Noticed that while writing up the docs on the wiki; it seems like an
unnecessary restriction, and an inconvenient one to boot: the author
could r+, then realise they forgot an update they needed to make on
the fw, so they should be able to cancel the staging without needing
a reviewer.
d4fa1fd353 added tracking to changes
from *comments* (as well as a few hacks around authorship transfer),
however it missed two things:
First, it set the `change-author` during comments handling only, so
changes from the `PullRequest` hook (e.g. open, synchronise, close,
edit) don't get attributed to their actual source, and instead just
fall back to uid(1). This is easy enough to fix as the `sender` is
always provided; that can be resolved to a partner, which is then set
as the author of whatever changes happen.
Second, I actually missed one of the message hooks: there's both
`_message_log` and `_message_log_batch` and they don't call one
another, so both have to be overridden in order for tracking to be
consistent. In this case, specifically, the *creation* of a tracked
object goes through `_message_log_batch` (since that's a very generic
message and so works on every tracked object created during the
transaction... even though batch has a message per record anyway...)
while *updates* go through `_message_log`.
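A sketch of the double override; the context key is from the
description above, and the `_tracking_author` helper is hypothetical:

```python
from odoo import models

class MailThread(models.AbstractModel):
    _inherit = 'mail.thread'

    def _tracking_author(self):
        # hypothetical helper: the partner resolved from the webhook
        # `sender` is assumed to be stashed in the context
        return self.env.context.get('change-author')

    def _message_log(self, **kwargs):
        if author := self._tracking_author():
            kwargs['author_id'] = author
        return super()._message_log(**kwargs)

    # `_message_log` and `_message_log_batch` don't call one another,
    # so both must be overridden for tracking to stay consistent
    def _message_log_batch(self, bodies, **kwargs):
        if author := self._tracking_author():
            kwargs['author_id'] = author
        return super()._message_log_batch(bodies, **kwargs)
```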
Fixes #895
- When a redundant approval is sent to a PR, notify but don't ignore
  the entire command set; there's no actual risk.
- Indicate that the entire comment was ignored when finding something
  which does not parse.
Fixes #892, fixes #893
The commit cron needs to be triggered any time we:
- create a new commit
- update a commit to set its `to_check`
So do that in create and write as well as the SQL query in the
webhook handler.
This should mean we don't need the periodic cron anymore, but for
safety's sake keep it running every 30 minutes for now.
TBF even if we miss triggers, the next `status` webhook hitting will
check all the relevant commits anyway...
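In ORM terms the triggering might look something like this; the model
name and cron xmlid are placeholders:

```python
from odoo import api, models

class Commit(models.Model):
    _inherit = 'runbot_merge.commit'  # placeholder model name

    @api.model_create_multi
    def create(self, vals_list):
        records = super().create(vals_list)
        # a new commit always needs checking
        self.env.ref('runbot_merge.process_updated_commits')._trigger()
        return records

    def write(self, vals):
        res = super().write(vals)
        if vals.get('to_check'):
            # `to_check` was just set, wake the commit cron
            self.env.ref('runbot_merge.process_updated_commits')._trigger()
        return res
```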
This is useful to repro issues.
60c4b5141d added `inverse=readonly`
hooks to various newly computed fields to ensure they can not be
*written* to, either overwriting the content (stored) or being
silently dropped (non-stored).
However, because they're `inverse` hooks, this had the effect of
making them writeable from the backend UI, since the ORM uses
`inverse` as a signal that the field is writeable. This then caused
the web client to send values for those fields, which are not
necessarily even visible in the form, leading to write errors when
trying to save a PR creation.
By marking the fields as `readonly` explicitly we make sure that
doesn't happen, and we can create PRs from the backend UI (kinda, I
think the label is still an issue).
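The shape of the fix, roughly; the model and field here are
illustrative, not the actual code:

```python
from odoo import fields, models
from odoo.exceptions import UserError

def readonly(message):
    """Inverse hook rejecting any direct write; a sketch of the idea,
    not the actual helper."""
    def _inverse(_records):
        raise UserError(message)
    return _inverse

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'  # placeholder model name

    # `inverse=` keeps writes from being silently dropped, while the
    # explicit `readonly=True` keeps the web client from treating the
    # field as writeable just because an inverse exists
    display_status = fields.Char(
        compute='_compute_display_status',
        inverse=readonly("this field is computed"),
        readonly=True,
    )
```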
The method was not marked as a create method, so it did not allow
creating commits via the UI (annoying for testing / reproducing
issues involving statuses).
If a PR gets approved *then* fails CI, there should be a notification
warning the author & reviewer since
48e08b657b, it even has a test, which
passes (in fact it has *two*, one of which is redundant, so merge
`test_ci_failure_after_review` into the later `test_ci_approved`).
*However* this is in runbot_merge; turns out in
fafa7ef437 some refactoring was done in
order to override the notification and customise it for *forward
ports* with a failed status... except that override never called its
`super()`, so as soon as forwardport is installed the base
notification stops working, and that's been that since October
2019 (it had been added in March that year, ignoring deployment lag).
This can be revealed by adding the corresponding check in the
*forwardport* tests, revealing the failure.
This was a pain to track down, thankfully it reproduced relatively
easily locally.
While this could be resolved in the override, might as well fold it
into the base method in furtherance of #789: the mergebot is only
used by odoo, and only with both modules combined, so splitting them
is not useful. And furthermore, things should work fine with the
forwardport module installed but unused.
Fixes #894
- Instead of warning about the merge method only on ready PRs, also
  warn on *approved* ones (but exclude staged just cuz), as that's
  really when the user wants to know that they forgot to set the
  merge method
- The cron only triggers hourly, but *if* a user approves a PR *and*
the merge method is not set yet, chances are good they'll need a
reminder (if they `r+ rebase-merge` or w/e the cron will just ignore
the PR and it's no skin off our back), so `_trigger` the cron for
validation.
- Also do the same when skipchecks is set as it's very similar.
In reality we might want to hook off of the state transitioning to
reviewed, but I'm not sure there's a good way to do that (triggering
a cron inside a compute doesn't seem like a good idea).
Update a pair of tests which would approve a multi-commit PR without
setting a merge method, just because the helper they use to build the
PR happens to create multiple commits.
Fix #891
This is a minor issue, as the PRs of a commit are only listed from
the form view so the compute is pretty much always called with a
single record, but it's still an unforced error which can easily be
fixed.
`_schedule_fp_followup` correctly iterates on `self`, however some of
the per-iteration work did not handle that correctly, and would try to
access fields on `self`.
Thankfully in most cases it only works on one batch at a time
anyway, *however* if multiple PRs share a HEAD (which is weird but...)
then `_validate` is called on multiple PRs, which through the
forwardport override leads to `_schedule_fp_followup` being called on
multiple batches, and failing when trying to access the `fw_policy`.
Fix by avoiding the misuse of `self` in the two locations where it's
doing something other than accessing `env` (sketched below).
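In miniature, the shape of the bug and of the fix; the model name and
the per-batch logic are placeholders:

```python
from odoo import models

class Batch(models.Model):
    _inherit = 'runbot_merge.batch'  # placeholder model name

    def _schedule_fp_followup(self):
        for batch in self:
            # buggy: `self` holds every batch being processed, so any
            # field access on it raises an "expected singleton" error:
            #     if self.fw_policy != 'skipci': ...
            # fixed: per-iteration work goes through the loop variable
            if batch.fw_policy != 'skipci':
                continue
            ...  # follow-up scheduling elided
```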
Without fw-bot being its bearer, "ignore" is a lot less clear than it
used to be, as it looks to be asking to ignore the PR entirely (as if
it were targeted to an unmanaged branch).
Deprecate this command, and tack on the shortcut to the fw
subcommand. It is slightly sub-par as technically it does not quite
fit with the other subcommands, and furthermore can't be disabled via
fw=default... although maybe it could be? Maybe instead of setting the
limit fw=no could set that value to the forwardport mode, and the
fw_policy users could check that? It would require some more finessing
tho:
- `DEFAULT` would need to be accessible to the author as well as the
reviewers so the author could toggle between `NO` and `DEFAULT`.
- There should probably be a warning of some sort when setting a limit
to an unportable PR.
- The dashboards would need to take `NO` in account (though I guess
that's just defaulting the limit to the target).
And filter it to only consider branches in the same project as the PR,
and a lower sequence than its target. That way it's harder to fuck up
when trying to set limits from the backend.
Setting the PR state directly really doesn't work, as it doesn't get
saved correctly (and can get overwritten by any of its dependencies,
of which there are many).
This caused setting odoo/odoo#165777 in error to fail, leading to it
being re-staged (and failing) repeatedly, and the PR being spammed
with comments.
- create a more formal helper for preventing directly setting
  computed fields (without an actual inverse)
- replace direct state setting by setting the corresponding
  dependency, e.g. `error` for error and `skipchecks` to force a PR
  to ready
- add a `skipchecks` inverse to the PR so it can also set itself as
  reviewed, which is convenient (a sketch follows this list); might
  be worth also adding stuff to `Batch.write`
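A rough sketch of what that inverse could look like; the field and
model names are assumed from context, not the actual code:

```python
from odoo import models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'  # placeholder model name

    def _inverse_skipchecks(self):
        for pr in self:
            # skipchecks lives on the batch, push the value up
            pr.batch_id.skipchecks = pr.skipchecks
            # skipping checks implies the PR counts as reviewed, so
            # record the current user as reviewer if there is none
            if pr.skipchecks and not pr.reviewed_by:
                pr.reviewed_by = self.env.user.partner_id
```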
Currently webhook secrets are configured per *project*, which is an
issue both because different repositories may have different
administrators, thus creating safety concerns, and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary for multiple projects to
share the same secret, which in turn spreads that secret across more
repositories, compounding the first concern.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories; it's cleaner and easier to manage
and set up progressively.
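For context, this is the standard shape of the verification those
secrets feed into (github's documented `X-Hub-Signature-256` scheme):

```python
import hashlib
import hmac

def validate_signature(secret: bytes, payload: bytes, signature: str) -> bool:
    """Checks a github webhook delivery against a repository's secret.

    github signs the raw payload with HMAC-SHA256 and sends the result
    as `sha256=<hexdigest>` in the `X-Hub-Signature-256` header.
    """
    digest = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f'sha256={digest}', signature)
```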
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad hoc were also moved to the
fixture. But for most of the ad hoc setup in `runbot_merge`, as well
as `forwardport` where it's all ad hoc, event sources setup was just
appended as-is. This should probably be cleaned up at some point,
with the various requirements collected and organised into a small
set of fixtures doing the job more uniformly.
Fixes #887
Add support for validating *stagings* over RPC rather than via
webhook. This may later be expanded to PRs as well.
The core motivation for this is to avoid bouncing through github which
sometimes drops the ball on statuses, and it's frustrating to have a
staging time out because GH fucked up.
Implemented via RPC, requiring both the staging itself (by id) and
the head commit being affected, as that is necessary to know which
CIs are required for that head and to correctly report cross-branch
on the various PRs.
Fix #881 (kinda)
Rather than compute staging state directly from commit statuses, copy
statuses into the staging's `statuses_cache` then compute state based
on that. Also refactor `statuses` and `staging_end` to be computed
based on the statuses cache instead of the commits (or `state`
update).
The goal is to allow non-webhook validation of stagings, for direct
communications between the mergebot and the CI (mostly runbot).
Computed on the fly for now. Formatted nicely in the frontend; there
does not seem to be any sort of duration widget in the backend, so
just display the integer number of seconds.
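A sketch of the on-the-fly compute; `staged_at` and the model name
are assumptions:

```python
from odoo import api, fields, models

class Stagings(models.Model):
    _inherit = 'runbot_merge.stagings'  # placeholder model name

    duration = fields.Integer(compute='_compute_duration')

    @api.depends('staged_at', 'staging_end')
    def _compute_duration(self):
        for staging in self:
            if staging.staged_at and staging.staging_end:
                delta = staging.staging_end - staging.staged_at
                staging.duration = int(delta.total_seconds())
            else:
                # still running (or never staged), nothing to report
                staging.duration = 0
```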
Fixes #865
if rebased. Untouched commits (straight merge) remain unaltered, but
all rebased or squashed commits now get signoff and `Related` headers
added on top of the previously added `part-of`.
Implement by generalising `_build_merge_message` to `_build_message`
and having `add_self_references` delegate to it, which removes some
of the redundancy / differential handling.
Update the `part_of` helper to also add the S-O-B header to the PR,
although it currently does not reference the entire forward port
chain.
Fixes #876
Shove a bunch of stuff in notebook tabs, add a few
affordances (e.g. github and frontend links, links from m2m), surface
a few missing fields.
Hopefully makes the backend form both easier to navigate and easier to
administrate from.
Displays the entire batch set as a table, along both the
repository (linked PRs) and branch (forward ports) axes. Should
provide a much more complete overview.
Adds a copy of the dashboard as a raster render, to link from the PR:
as usual SVG is shit, content-based viewboxes are hell, and having to
duplicate the entire CSS because `<img/>`-linked CSS can't run is
gross. And there's no payoff since the image is not interactive
anyway.
Performing manual ad hoc table rendering via pillow is not
significantly worse; it works fine, and it's possible to do *really*
good conditional request handling (hopefully) because I've basically
got all the information I need right here.
In fact it might make sense to upgrade the regular HTML page with
similar conditional request handling, at least for the last-update
bit if not the etag.
Fixes #771, fixes #770
Merged PRs should have a batch which should have a staging, this makes
the treatment uniform across the board and avoids funky data which is
hard to place or issues when reconstructing history.
Also create synthetic batches & stagings for older freezes (and bumps)
Initially wanted to skip this only for FW PRs, but after some thinking
I feel this info could still be valuable even for non-fw PRs which
were never merged in the first place.
Requires a few adjustments to not break *everything*: `batch.prs`
excludes closed PRs by default, as most processes don't expect to be
faced with a closed PR inside a batch, and we *especially* want to
avoid that before the batch is merged (as we'd risk staging a closed
PR).
However, since PRs don't get removed from batches anymore (and
batches don't get deleted when they have no PRs), we may now have a
bunch of batches whose PRs (usually a single one) are all closed;
this has two major side-effects:
- a new PR may get attached to an old batch full of closed PRs (as
batches are filtered out on being *merged*), which is weird
- the eventual list of batches gets polluted with a bunch of
irrelevant batches which are hard to filter out
The solution is to reintroduce an `active` field, as a stored compute
field based on the state of batch PRs. This way if all PRs of a batch
are closed it switches to inactive, and is automatically filtered out
by search which solves both issues.
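Roughly, the reintroduced field; the `all_prs` / `closed` names are
assumptions standing in for the unfiltered relation and PR state:

```python
from odoo import api, fields, models

class Batch(models.Model):
    _inherit = 'runbot_merge.batch'  # placeholder model name

    active = fields.Boolean(compute='_compute_active', store=True)

    @api.depends('all_prs.closed')
    def _compute_active(self):
        for batch in self:
            # all PRs closed -> inactive, and thus filtered out of
            # searches (and new-PR attachment) by default
            batch.active = any(not pr.closed for pr in batch.all_prs)
```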
Batch ordering in stagings is important in order to correctly
reconstitute the full project history.
In the old mergebot, since batches were created on the fly during
staging, this information was reified by the batch ids. But since
batch ids are now persistent and there is no relationship between the
creation of a batch and its merging (especially not relative to other
batches), it's an issue, as reconstituting sub-staging git history
would be impossible.
Which is not the worst, but is not great.
The solution is to reify the join table between stagings and batches
in order for *that* to keep history (simply via the sequential PK),
and in converting to the new system carefully generate the new links
in an order matching the old batch ids.
Given a batch which has been merged, and been forward-ported, to
multiple branches (because skipci was set or ci passed on the repos
the batch covers).
There might come the need to add a PR for one of the uncovered
repos. This raises the question of what to do with it: since the
forward ports for the batch already exist, it's not going to get
forward-ported normally, nor would we necessarily want it to be.
Options are:
- don't do anything, such additions don't get ported, this is
incongruous and unexpected as by default PRs are forward-ported, and
if the batch wasn't an intermediate (but e.g. a conflict) it
probably would be ported forward
- port on merge, this allows configuring the PR properly (as it might
  need its own limit) but it means further batches may get
  unexpectedly merged (or at least retried) without the additional PR
  even though we likely want it in
- immediately port the additional PR on creation, this makes the limit
harder or impossible to configure but it makes the *batch sequence*
more consistent
We ended up selecting the latter, it feels closer to the updates
system, and it creates more consistent batches through the
sequence. It's also technically easier to ad-hoc port a PR through a
bunch of branches than it is to update the "normal" forward-port
process to handle partial fixups.
Test and refine the handling of batch forward ports around branch
deactivation, especially with differentials. Notably, fix an error in
the conversion of the FW process to batches: individual PR limits
were not correctly taken into account during forward port unless
*all* PRs were done, even though that is a primary motivation for the
change.
Partial forward porting should now work correctly, and the detection
and handling of differential next target should be better handled to
boot.
Significantly rework the interplay between batches and PRs being
closed in order to maintain the sequencing / consistency of forward
port sequences: previously a batch would get deleted if all its PRs
were closed, but that is an issue when it is part of a forward port
sequence, as we then lose information.
Instead, detach the PRs from the batch as before but have the batch
skip unlinking if it has historical value (parent or child
batch). Currently the batch's state is a bit weird as it doesn't get
merged, but...
While at it, significantly simplify `_try_closing` as it turns out to
have a ton of incidental / historical complexity from old attempts at
fixing concurrency issues, which should not be necessary anymore and
in fact actively interfere with the new and more compute-heavy state
of things.
Thank god I have a bunch of tests because once again I forgot / missed
a bunch of edge cases in doing the conversion, which the tests
caught (sadly that means I almost certainly broke a few untested edge
cases).
Important notes:
Handling of parent links
------------------------
Unlike PRs, batches don't ever lose their parent info; the link is
permanent, which is convenient for trawling through a forward port
sequence (currently implemented very inefficiently, but maybe we'll
optimise that in the future).
However this means a batch having a parent and the batch's PRs having
parents are slightly different pieces of information. One of the edge
cases I missed is that of conflicting PRs, which are deparented and
have to be merged by hand before being forward-ported further; I had
originally replaced the checks on a PR and its siblings having
parents with a check on just the batch.
Batches & targets
-----------------
Batches were originally conceived as being fixed to a target, with
PRs having that target; a PR being retargeted would move it from one
batch to another.
As it turns out this does not work in the case where people retarget
forward-port PRs, which I know they do because of
#551 (2337bd8518). I could not think of
a good way to handle this issue as is, so I scrapped the moving-PRs
thing; instead, one of the coherence checks of a batch being ready is
that all its PRs have the same target, and a batch only has a target
if all its PRs have the same target.
It's possible for somewhat odd effects to arise, notably if a PR is
closed (removed from batch), the other PRs are retargeted, and the new
PR is reopened, it will now be on a separate batch even if it also
gets retargeted. This is weird. I don't quite know how I should handle
it, maybe batches could merge if they have the same target and label?
however batches don't currently have a label so...
Improve limits
--------------
Keep limits on the PRs rather than lifting them to the batch: if we
can add/remove PRs from batches, having different limits on different
PRs of the same batch is reasonable.
Also leave the limit unset by default: previously, the limit was
eagerly set to the tip (accessible) branch. That doesn't really seem
necessary, so stop doing that.
Also remove completely unnecessary `max` when trying to find a PR's
next target: `root` is either `self` or `self.source_id`, so it should
not be possible for that to have a later target.
And for now ensure the limits are consistent per batch: a PR defaults
to the limit of their batch-mate if they don't have one, and if a
limit is set via command it's set on all PRs of a batch.
This commit does not allow differential limits via commands, they are
allowed via the backend but not really tested. The issue is mostly
that it's not clear what the UX should look like to have clear and not
super error prone interactions. So punt on it for now, and hopefully
there's no hole I missed which will create inconsistent batches.
In case of PRs not being ready, don't just say the PRs are waiting
for CI when they might be unreviewed, and distinguish between
*waiting* for CI (pending) and having failed CI.
It's a bit weird and inconsistent to have a PR being staged while
unreviewed or unapproved or w/e. If we compute the state based on
skipchecks then everything is consistent.
Also remove the implicit override of all statuses when explicitly
marking the PR as `ready`; it risks creating difficult-to-understand
states, and it's unnecessary since `skipchecks` gets set.
Also, as with setting skipchecks, this sets the current user as
reviewer on all PRs of the batch without a reviewer.
Move staging cancellation to the batch, remove its (complicated)
handling from the PRs.
This loses some precision in the cancellation message, but that could
likely be recovered (in part) by adding more precise checks &
diagnostic extractions in the compute.
Because `alone` (formerly p != 2) is selected before split PRs, if a
prioritised PR gets split (or a split PR gets prioritised) it will be
staged once as prioritised, and again because of the split.
Improve the selection of ready batches to exclude split batches
upstream, such that they don't have to be rechecked over and over, and
their priorities don't cause us issues.
Simplifies the `ready_prs` query a bit and allows it to be converted
to an ORM search by moving the priority check outside. This also
means the caller needs nowhere near as much post-processing of the
records list as before.
`ready_prs` now returns *either* the "alone" batches, or the non-alone
batches, rather than mixing both into a single sequence. This requires
correctly applying the search filters to not retrieve priority of
batches in error or targeting other branches.
Staging readiness is a batch-level concern, and many of the markers
are already there, though a few need to be aggregated from the
PRs. As such, staging has no reason to be performed in terms of PRs
anymore; it should be performed via batches directly.
There is a bit of a mess in order not to completely fuck up when
retargeting PRs (implicitly via the freeze wizard, or explicitly), as
for now we're moving PRs between batches in order to keep the
batches *mostly* target-bound.
Some of the side-effects in managing the coherence of the targeting
and moving PRs between batches is... not great. This might need to be
revisited and cleaned up with those scenarios better considered.
- `merge_date` should be common to an entire batch, so move it there
- remove `Batch.active` which should probably have been removed when
batches were made persistent (can eventually re-add as a proxy for
`merge_date` being set maybe, but for now removing it seems a better
way to catch mistakes)
- update various sites to use `Batch.merge_date` instead of
`Batch.active`