And unconditionally unstage when the HEAD of a PR is synchronised.
While a rebuild on a PR which was previously staged can be a false
positive (e.g. because it hit a non-deterministic test the second time
around) it can also be legitimate e.g. auto-rebase of an old PR to
check it out. In that case the PR should be unstaged.
Furthermore, as soon as the PR gets rebuilt it goes back into
`approved` (because the status goes to pending and currently there's
no great way to suppress that in the rebuild case without also fucking
it up for the sync case). This would cause a sync on the PR to be
missed (in that it would not unstage the PR), which is broken. Fix
that by not checking the state of the PR beforehand, it seems to be an
unnecessary remnant of older code, and not really an optimisation (or
at least one likely not worth bothering with, especially as we then
proceed to perform a full PR validation anyway).
Fixes#950
The UX around the split of limit and forward port policy (and
especially moving "don't forward port" to the policy) was not really
considered and caused confusion for both me and devs: after having
disabled forward porting, updating the limit would not restore it, but
there would be no indication of such an issue.
This caused odoo/enterprise#68916 to not be forward ported at merge
(despite looking for all the world like it should be), and while
updating the limit post-merge did force a forward-port, that
inconsistency was just as jarring (also not helped by being unable to
create an fw batch via the backend UI because reasons, see
10ca096d86).
Fix this along most axes:
- Notify the user and reset the fw policy if the limit is updated
while `fw=no`.
- Trigger a forward port if the fw policy is updated (from `no`) on a
merged PR, currently only sources.
- Add check & warning to limit update process so it does *not* force a
port (though maybe it should under the assumption that we're
updating the limit anyway? We'll see).
Fixes#953
- rather than enumerate states, forward-porting should just check if
the statuses are successful on a PR
- for the same consistency reasons explained in
f97502e503, `skipchecks` should force
the status of a PR to `success`: it is very odd that a PR would be
ready without being validated...
Today (or really a month ago) I learned: when giving git a symbolic
ref (e.g. a ref name), if it's ambiguous then
1. If `$GIT_DIR/$name` exists, that is what you mean (this is usually
useful only for `HEAD`, `FETCH_HEAD`, `ORIG_HEAD`, `MERGE_HEAD`,
`REBASE_HEAD`, `REVERT_HEAD`, `CHERRY_PICK_HEAD`, `BISECT_HEAD` and
`AUTO_MERGE`)
2. otherwise, `refs/$name` if it exists
3. otherwise, `refs/tags/$name` if it exists
4. otherwise, `refs/heads/$name` if it exists
5. otherwise, `refs/remotes/$name` if it exists
6. otherwise, `refs/remotes/$name/HEAD` if it exists
This means if a tag and a branch have the same name and only the name
is provided (not the full ref), git will select the tag, which gets
very confusing for the mergebot as it now tries to rebase onto the tag
(which, because that's not fun otherwise, was not even on the branch of
the same name).
Fix by providing full refs to `rev-parse` when trying to retrieve the
head of the target branches. And as a defense in depth opportunity,
also exclude tags when fetching refs by spec: apparently fetching a
specific commit does not trigger the retrieval of tags, but any sort
of spec will see the tags come along for the ride even if the tags are
not in any way under the fetched ref e.g. `refs/heads/*` will very
much retrieve the tags despite them being located at `refs/tags/*`.
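As a sketch of both defenses, assuming a subprocess-based git
wrapper (illustrative, not the actual mergebot helper):

```python
import subprocess

def branch_head(repo_dir: str, branch: str) -> str:
    # full ref: a bare name would resolve refs/tags/<branch> *before*
    # refs/heads/<branch> if an identically-named tag exists
    return subprocess.run(
        ['git', '-C', repo_dir, 'rev-parse', f'refs/heads/{branch}'],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def fetch_heads(repo_dir: str, remote: str) -> None:
    # --no-tags: without it, fetching by refspec drags refs/tags/*
    # along even though no tag lives under refs/heads/*
    subprocess.run(
        ['git', '-C', repo_dir, 'fetch', '--no-tags', remote,
         '+refs/heads/*:refs/heads/*'],
        check=True,
    )
```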
Fixes#922
This is an approximation under the assumption that stored computes
update the `write_date`, and that there's not much else that will be
computed on a batch.
Eventually it might be a good idea for this to be a proper field,
computed alongside the unblocking of the batch.
Fixes#932
Rather than only setting `staging_end` on status change, set it when
the staging gets deactivated. This way cancelling a staging (whether
explicitly or via a PR update) will also end it, as will a staging
timeout, etc., rather than keeping the counter running.
Fixes#931
Reminding users that `r-` on a forward port only unreviews *that*
forwardport is useful, as `r+;r-` is not a no-op (all preceding
siblings are still reviewed).
However it is useless if every sibling is either unapproved or already
merged. So avoid sending the warning in that case.
Fixes#934
In some cases, feedback to the PR author that an r+ is redundant went
missing.
This turns out to be due to the convoluted handling of approval on
forward-ports, and the fact that the target PR is treated
exactly like its ancestors: if the PR is already approved the approval
is not even attempted (and so no feedback if it's incorrect).
Straighten up this bit and add a special case for the PR being
commented on, it should have the usual feedback if in error or already
commented on.
Furthermore, update `PullRequests._pr_acl` to kinda work out of the
box for forward-port: if the current PR is a forward port,
`is_reviewer` should check delegation on all ancestors, there doesn't
seem to be any reason to split "source_reviewer", "parent_reviewer",
and "is_reviewer".
Fixes#939
- Odoo 17 seems to not be adjusting `nolabel` fields to be `colspan=2`
by default, so every such occurrence has to be adjusted by hand or
it gets squeezed into just the labels column.
- Because of the loss of readonly mode, some fields / setups which
previously looked ugly only during the rare edit (e.g. PR titles) now
look ugly all the time. Rework layout and force them to always be
readonly (hopefully we won't need to edit those).
- This is compounded by unfortunate styling I can't find how to
override e.g. char fields are 100% width even if readonly.
- `<header>` system requires some workarounds to have the right layout
and spacing (notably `header` has a bunch of awful rules which we
need to work around via an interstitial div to set up our own
flexbox).
- reduce font-size and bold-ness a hair as it causes issues in the
backend
- remove font adjustment on root object
- add `text-bg-primary` in the outstanding FP view, apparently BS5 does
not do anything like that by default when setting `bg-primary` for
some reason so the current team or user appears in black on dark
blue leading to sub-par readability
Turns out having the same ids for the feedback template and feedback
object actions was kinda dumb, who'd have thought?
Split them apart so I can get both objects in my menu...
Fix a few issues with migrated bootstrap classes (left -> start, right
-> end), revert a bunch of shitty colors from standard, fixup
backgrounds.
Tried to remove the background overrides, what with having used
variables, but it goes completely wrong for info and I can't be
arsed (also force primary alerts for the same reason, I don't
understand what either bootstrap or standard does and how to override
it properly but it's shit). It'll keep.
It was all broken and the web client couldn't even load.
The logic is basically the same, except the new client doesn't support
forcing onchanges maybe, so we need to `read()` the entire thing if
the form is not modified.
`message_main_attachment_id` removed from `mail.thread` in
odoo/odoo@f8c7f2e5bb (16.2), however
precise inheritance tracking was only added in
odoo/odoo@b27eb20a41 (17.1), so
stock migration only handles stock models for this migration.
Broken (can't run odoo at all):
- In Odoo 17.0, the `pre_init_hook` takes an env, not a cursor, update
`_check_citext`.
- Odoo 17.0 rejects `@attrs` and doesn't say where they are or how to
update them, fun, hunt down `attrs={'invisible': ...` and try to fix
them.
- Odoo 17.0 warns on non-multi creates, update them, most were very
reasonable, one very wasn't.
Test failures:
- Odoo 17.0 deprecates `name_get` and doesn't use it as a *source*
anymore, replace its overrides with overrides of `_compute_display_name`.
- Multiple tracking changes:
- `_track_set_author` takes a `Partner` not an id.
- `_message_compute_author` still requires overriding in order to
handle record creation, which in standard doesn't support author
overriding.
- `mail.tracking.value.field_type` has been removed, the field type
now needs to be retrieved from the `field_id`.
- Some tracking orderings have changed and require adjusting a few
tests.
Also added a few flushes before SQL queries which are not (obviously
at least) at the start of a cron or controller, no test failure
observed but better safe than sorry (probably).
odoo/odoo@1341d52 added native support for tracking / overriding the
tracking message, so we don't need `change-message` anymore. The calls
had to be modified a bit as `_track_set_log_message` has to be invoked
on the records for which the tracking message is being set / updated.
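The call sites end up shaped something like this (the message and
write are illustrative):

```python
# `_track_set_log_message` binds the message to specific records, so
# it must be called on the recordset about to be written
prs = env['runbot_merge.pull_requests'].browse(pr_ids)
prs._track_set_log_message("staging failed")
prs.write({'state': 'error'})
```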
Sadly the same for authors was only added in 16.3 *and* it's
unsuitable for our needs as we want to set the author without knowing
the scope of affected records (at least in the controller).
That way hopefully in 17.0 we can remove the `_message_compute_author`
override and things should work out. And maybe for 19.0 we can get the
ability to set a per-model (or even global) fallback author into
standard.
Breakages:
- the entire http.py API was updated, requiring fixing the uses of
`request.jsonrequest` and the patches to `WebRequest` to hook in
sentry
- `fontawesome` was moved
- `*[@groups]` are now completely removed from the view if not
matching, so any field inside of them which needs to be used outside
(e.g. attrs) has to be added as invisible outside the element
- discuss removed the mail tracking value helpers from RPC in
odoo/odoo#88547, so reimplement locally (and better)
Because the first operation the notification task performs is updating
the commit, this has to be flushed in order to read back the commit
hash (probably fixed in later versions).
This means if updating the flag triggers a serialization
failure (which happens and is normal) we transition to the `exception`
handler, which *tries to retrieve the sha again* and we get an "in
failed transaction" error on top of the current "serialization
failure", leading to a giant traceback.
Also an unhelpful one since a serialization failure is expected in
this cron.
- Move the reads with no write dependency out of the `try`, there's no
reason for them to fail, and notably memoize the `sha`.
- Split the handling of serialization failures out of the normal one,
and just log an `info` (we could actually log nothing, probably), as
sketched after this list.
- Also set the precommit data just before the commit, in case staging
tracking is ever a thing which could use it.
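As a sketch of the resulting flow (the ORM objects come from the
surrounding cron and are assumed):

```python
import logging
from psycopg2 import errors

_logger = logging.getLogger(__name__)

sha = commit.sha  # no write dependency: memoize outside the try
try:
    commit.write({'to_check': False})
    env.cr.commit()
except errors.SerializationFailure:
    # expected under concurrency, not worth a traceback
    _logger.info("serialization failure updating %s, will retry", sha)
    env.cr.rollback()
```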
Fixes#909
Detailed statuses are useful in the actual PR dashboard as that allows
direct access to the builds, however in the PR where it's only a
picture it's useless, so fold that information. Also fold it when a PR
is staged.
And while at it add a note / sub-title that the PR is staged.
Fixes#919
The weekly maintenance would not prune refs. This is not an issue on
odoo/odoo because development branches are in a separate repository,
thus never fetched (we push to them but only using local commits and
remote refs).
However on repos like odoo/documentation the reference and development
branches are collocated, the lack of pruning thus keeps every
development branch alive locally, even years after the branch has been
deleted in the repository.
By pruning remote-tracking refs before GC-ing, we should have cleaner
local clones, and better packing.
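The change amounts to something like this, assuming the maintenance
job shells out to git:

```python
import subprocess

def maintain(repo_dir: str) -> None:
    # drop remote-tracking refs whose upstream branch is gone...
    subprocess.run(['git', '-C', repo_dir, 'remote', 'prune', 'origin'],
                   check=True)
    # ...then repack: with the dead refs gone, their objects become
    # unreachable and can actually be collected
    subprocess.run(['git', '-C', repo_dir, 'gc'], check=True)
```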
`test_inconsistent_target` was, appropriately, inconsistent, but would
only fail a very small fraction of the time: the issue is that a PR
would switch target between `other` and `master` assuming neither was
an intrinsic blocker *but* the branches are created independently,
just with the same content.
This means if a second ticked over between the creation of the
`master` branch's commit and that of `other`, they would get different
commit hashes (because different timestamp), thus the PR would get 2
commits (or complete nonsense) when targeted to `other`, and the PR
itself would be blocked for lack of a merge method.
The solution is to be slightly less lazy, and create `other` from
`master` instead of copy/pasting the `make_commits` directive. This
means the PR has the exact same number of commits whether targeted to
`master` or `other`, and we now test what we want to test 60 seconds
out of every minute.
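A sketch of the fix using the suite's helpers (exact signatures
assumed):

```python
# one commit object, two refs: the hashes can't diverge on a timestamp
[c] = repo.make_commits(
    None, Commit('initial', tree={'m': 'm'}), ref='heads/master')
repo.make_ref('heads/other', c)
```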
Since every cron runs on a fresh database, on the first `run_crons`
every single cron in the db will run even though almost none of them
is relevant.
Aside from the slight inefficiency, this creates unnecessary extra
garbage in the test logs.
By setting the `nextcall` of all crons to infinity in the template we
avoid this issue, only triggered crons (or the crons whose nextcall we
set ourselves) will trigger during calls.
This requires adjusting the branch cleanup cron slightly: it didn't
correctly handle the initial run (`lastcall` being false).
With the trigger-ification pretty much complete the only cron that's
still routinely triggered explicitly is the cross-pr check, and it's
the same in all modules, so there's no cause to keep an overridable
fixture.
The staging cron turns out to be pretty reasonable to trigger, as we
already have a handler on the transition of a batch to `not blocked`,
which is exactly when we want to create a staging (that and the
completion of the previous staging).
The batch transition is in a compute which is not awesome, but on the
flip side we also cancel active stagings in that exact scenario (if it
applies), so that matches.
The only real finesse is that one of the tests wants to observe the
instant between the end of a staging (and creation of splits) and the
start of the next one, which because the staging cron is triggered by
the failure of the previous staging is now "atomic", requiring
disabling the staging cron, which means the trigger is skipped
entirely. So this requires triggering the staging cron by hand.
The merge cron is the one in charge of checking staging state and
either integrating the staging into the reference branch (if
successful) or cancelling the staging (if failed).
The most obvious trigger for the merge cron is a change in staging
state from the states computation (transition from pending to either
success or failure). Explicitly cancelling / failing a staging marks
it as inactive so the merge cron isn't actually needed.
However another major trigger is *timeout*, which doesn't have a
trivial signal. Instead, it needs to be hooked on the `timeout_limit`,
and has to be re-triggered at every update to the `timeout_limit`,
which in normal operations is mostly from "pending" statuses bumping
the timeout limit. In that case, `_trigger` at the `timeout_limit` as
that's where / when we expect a status change.
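A sketch of that hook (cron xmlid and surrounding model are assumed):

```python
def write(self, vals):
    r = super().write(vals)
    if 'timeout_limit' in vals:
        # re-arm the merge cron for the exact instant at which the
        # staging could time out
        self.env.ref('runbot_merge.merge_cron')._trigger(
            at=fields.Datetime.to_datetime(vals['timeout_limit']))
    return r
```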
Worst case scenario with this is we have parasitic wakeups of this
cron, but having half a dozen unnecessary wakeups in an hour is still
probably better than having a wakeup every minute.
These are pretty simple to convert as they are straightforward: an
item is added to a work queue (table), then a cron regularly scans
through the table executing the items and deleting them.
That means the cron trigger can just be added on `create` and things
should work out fine.
There are just two wrinkles in the port_forward cron:
- It can be requeued in the future, so needs a conditional triggering
in `write` (see the sketch after this list).
- It is disabled during freeze (maybe something to change), as a
result triggers don't enqueue at all, so we need to trigger the cron
immediately after the freeze re-enables it.
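For the simple case this boils down to something like the following
sketch (cron xmlid and requeue field are illustrative):

```python
@api.model_create_multi
def create(self, vals_list):
    items = super().create(vals_list)
    # a new work item means the cron has something to do
    self.env.ref('forwardport.port_forward')._trigger()
    return items

def write(self, vals):
    r = super().write(vals)
    if vals.get('retry_after'):  # requeued into the future
        self.env.ref('forwardport.port_forward')._trigger(
            at=fields.Datetime.to_datetime(vals['retry_after']))
    return r
```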
Mergebot / forwardport crons need to run in a specific ordering in
order to flow into one another correctly. The default ordering being
unspecified, it was not possible to use the normal cron
runner (instead of the external driver running crons in sequence one
at a time). This can be fixed by setting *sequences* on crons, as the
cron runner (`_process_jobs`) will use that order to acquire and run
crons.
Also override `_process_jobs` however: the built-in cron runner
fetches a static list of ready crons, then runs that.
This is fine for normal situation where the cron runner runs in a loop
anyway, but it's an issue for the tests, as we expect that cron A can
trigger cron B, and we want cron B to run *right now* even if it
hadn't been triggered before cron A ran.
We can replace `_process_job` with a cut down version which does
that (cut down because we don't need most of the error handling /
resilience, there's no concurrent workers, there's no module being
installed, versions must match, ...). This allows e.g. the cron
propagating commit statuses to trigger the staging cron, and both will
run within the same `run_crons` session.
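A self-contained toy of the difference (not Odoo code): a static
snapshot of ready jobs misses anything triggered mid-pass, a
re-checking loop does not.

```python
import heapq, itertools

counter = itertools.count()  # tie-breaker: equal sequences must not compare fns
jobs = []

def trigger(sequence, fn):
    heapq.heappush(jobs, (sequence, next(counter), fn))

def run_ready():
    # re-check the queue after every job instead of snapshotting it up
    # front, so a job enqueued by an earlier job runs in the same pass
    while jobs:
        _, _, fn = heapq.heappop(jobs)
        fn()

trigger(20, lambda: print("staging cron"))
trigger(10, lambda: (print("status cron"),
                     trigger(30, lambda: print("merge cron"))))
run_ready()  # status cron, staging cron, merge cron
```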
Something I didn't touch is that `_process_jobs` internally creates
completely new environments so there is no way to pass context into
the cron jobs anymore (whereas it works for `method_direct_trigger`),
this means the context values have to be shunted elsewhere for that
purpose which is gross. But even though I'm replacing `_process_jobs`,
this seems a bit too much of a change in cron execution semantics. So
left it out.
While at it tho, silence the spammy `py.warnings` stuff I can't do
much about.
Merge errors are logical failures, not technical, it doesn't make
sense to log them out because there's nothing to be done technically,
a PR having consistency issues or a conflict is "normal". As such
those messages are completely useless and just take unnecessary space
in the logs, making their use more difficult.
Instead of sending them to logging, log staging attempts to the PR
itself, and only do normal logging of the operation as an indicative
item. And remove a bunch of `expect_log_errors` which don't stand
anymore.
While at it, fix a missed issue in forward porting: if the `root.head`
doesn't exist in the repo its `fetch` will immediately fail before
`cat-file` can even run, so the second one is redundant and the first
one needs to be handled properly. Do that. And leave checking
for *that* specific condition as a logged-out error as it should mean
there's something very odd with the repository (how can a pull request
have a head we can't fetch?)
Apparently I'd already fixed that in
286c1fdaee but it has yet to be
deployed.
While at it, add a feedback message to clarify that, unlike `r+`, `r-`
on forward ports does *not* propagate.
Fixes#912
The PR dashboard picture provides a great overview of the batch state
both horizontally and vertically *but* apparently people can't for the
life of them go check the actual dashboard when things don't line
up. So expand the "current batch" to a view more similar to the
dashboard *page*, which gives details of the sub-checks being
performed and whether they are or are not fulfilled.
Fixes#908
The previous version worked but was extremely plodding and
procedural. Initially I wanted to compose the table in a single pass
but that turns out not to really be possible as the goal for #908 is
to have a "drawer" for extended information about the current batch:
this means different cells of the same row can have different heights,
so we can't one-pass the image either vertically (later cells of the
current column might be wider) or horizontally (later cells of the
current row might be taller).
However what can be done is give the entire thing *structure*,
essentially defining a very cut down and ad-hoc layout system before
committing the layout to raster.
This also deduplicates formatting and labelling information which was
previously in both the computation first step and the rasterisation
second step.
Extract current table generation into a separate function, add another
function to render an alert / list of PR targets if the batch is
not consistent.
This means an extra pass on the table contents to precompute the image
size, but we can delay loading fonts until after etag computation
which might be a bigger gain all things considered: there aren't many
cells in most PR tables, but fonts are rather expensive to
load (I should probably load them at import and cache them in the module...)
From the previous version of `_compute_target` this was clearly
intended, otherwise the fallback makes no sense, but just as clearly I
completely missed / forgot about it halfway through (and the lack of a
test didn't help).
The compute is also way overcomplicated, it's not clear why (the only
explanation I can think of is that an intermediate version had a
string target but if that ever happened it was squashed away).
- Update branch name to prefix with project as it can be hard to
differentiate when filtering by or trying to set targets, given some
targets are extremely common (e.g. `master`/`main`) and not all
fields are filtered by project (or even can be).
- Add a proper menu item and list view for batches, maybe it'll be of
use one day.
- Upgrade label in PR search, it's more likely to be needed than
author or target.
- Put PRs first in the mergebot menu, as it's *by far* the most likely
item to look for, unless it's staging in order to cancel one.
This color was altered in 232aa271b0, it
was moved from a cyan-ish green to a yellow quite close to the warning
color.
There is no explanation why, the commit concerns itself with *PR*
dashboards but this class / color is only used on the main
dashboard. It may have been a victim of the color refactoring in that
commit and I fucked up.
This is very disagreeable as it shows up as basically a warning
between the end of staging and it actually getting merged. Roll this
change back to a green-cyan.
Previously PR descriptions were displayed as raw text in the PR
dashboard. While not wrong per se, this was pretty ugly and not always
convenient as e.g. links had to be copied by hand.
Push descriptions through pymarkdown for rendering them, with a few
customisations:
- Enabled footnotes & tables & fenced code blocks because GFM has
that, this doesn't quite put pymarkdown's base behaviour on par with
gfm (and py-gfm ultimately gave up on that effort moving to just
wrap github's own markdown renderer instead).
- Don't allow raw html because too much of a hassle to do it
correctly, and very few people ever do it (mostly me I think).
- Added a bespoke handler / renderer for github-style references.
Note: uses positional captures because it started that way and named
captures are not removed from that sequence so mixing and matching
is not very useful, plus python does not support identically named
groups (even exclusive) so all 4 repo captures and all 3 issue
number captures would need different names...
- And added a second bespoke handler for our own opw/issue references
leading to odoo.com, that's something we can't do via github[^1] so
it's a genuine value-add.
Fixes#889
[^1]: github can do it (though possibly not with the arbitrary
unspecified nonsense I got when I tried to list some of the
reference styles, some folks need therapy), but it's not available
on our plan
I thought I'd removed the error message when approving an already
approved PR but apparently not?
However we can improve the message in that specific case, to make the
expected operation clearer.
Fixes#906
The goal is to reduce maintenance and odd disk interactions &
concurrency issues, by not creating concurrent clones, not having to
push forks back in the repository, etc... it also removes the need to
clean up "scratch" working copies, though that doesn't look to have
been an issue in a while.
The work is done on isolated objects without using or mutating refs,
so even concurrent work should not be a problem.
This turns out to not be any more verbose (less so if anything) than
using `cherry-pick`, as that is not really designed for scripted /
non-interactive use, or for squashing commits thereafter. Working
directly with trees and commits is quite a bit cleaner even without a
ton of helpers.
Much of the credit goes to Julia Evans for [their investigation of
3-way merges as the underpinnings of cherry-picking][3-way merge],
this would have been a lot more difficult if I'd had to rediscover the
merge-base trick independently.
A few things have been changed by this:
- The old trace/stderr from cherrypick has disappeared as it's
generated by cherrypick, but for a non-interactive use it's kinda
useless anyway so I probably should have looked into removing it
earlier (I think the main use was investigation of the inflateinit
issue).
- Error on emptied commits has to be hand-rolled as `merge-tree`
couldn't care less, this is not hard but is a bit annoying.
- `merge-tree`'s conflict information only references raw commits,
which makes sense, but requires updating a bunch of tests. Then
again so does the fact that it *usually* doesn't send anything to
stderr, so that's usually disappearing.
Conveniently `merge-tree` merges the conflict markers directly into
the files / tree so we don't have to mess about moving them back out of
the repository and into the working copy as I assume cherry-pick does,
which means we don't have to try and commit them back in either. That
is a huge part of the gain over faffing about with the working copy.
Fixes#847
[3-way merge]: https://jvns.ca/blog/2023/11/10/how-cherry-pick-and-revert-work/
Automating via parameters is riskier as we can hit the CLI
limitations (cf 0a839a4857). Going
through stdin is a lot safer and cleaner when automating, and it's not
much of an imposition here.
After seeing it be used, I foresee confusion around the current
behaviour (where it sets the limit), as one would expect the `fw=`
flags to affect one another when it looks like that would make sense
e.g. no/default/skipci/skipmerge all specify how to forward port, so
`fw=default` not doing anything after you've said `fw=no` (possibly by
mistake) would be fucking weird.
Also since the author can set limits, allow them to reset the fw
policy to default (keep skipci for reviewers), and for @d-fence add a
`fw=disabled` alias.
Fixes#902
Although the handling of forward ports on disabled branches was
improved in 94fe0329b4 in order to avoid
losing or needing to manually port such, because it goes through
`_schedule_fw_followup` some of the checks *that method* performs were
overlooked, most notably that it only ports batches when no PRs are
detached.
This is an issue if we need to force the port because of a branch
being deactivated: the forward-port could have stopped there due to a
conflict, in which case it's always going to be detached.
Thus the `force_fw` flag should also override the parenting state
check.
Also while at it make `force_fw` a regular flag, I don't understand
why I made it into a context value in the first place, it's only
passed from one location and that's directly calling the one function
which uses it...
Fixes#897
If a PR is `cancel=staging`, even if it's not the most
urgent (`priority=alone`), odds are good it's being staged to fix the
split. And even if it's not, it probably can't hurt.
So cancel splits in order to stage it. This may be slightly harmful if
the split is legit and has nothing to do with the PR being
prioritised, but that seems like the less likely scenario. And having
to update staging priorities on the fly seems like a bad idea. Though
obviously it might do nothing if the PRs are in "default" priority.
Also simplify the unstage trigger from the PRs becoming ready:
- the user is useless as it's always the system user
- the batch id is not really helpful
Comments which are too long cause `logging` itself to crash, which
kinda sucks. And long comments seem very unlikely to have anything for
the mergebot to do besides.
So just ignore them at intake. Limit is set to 5000 because there
needs to be a limit somewhere and that's about the extent of it.
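The guard itself is about as simple as it sounds (names illustrative):

```python
MAX_COMMENT_LENGTH = 5000

def worth_parsing(body: str) -> bool:
    # a comment this long has nothing actionable for the mergebot, and
    # could crash `logging` downstream anyway
    return len(body) <= MAX_COMMENT_LENGTH
```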
Noticed that while writing up the docs on the wiki, seems like an
unnecessary restriction, and an inconvenient one to boot: the author
could r+, then realize they forgot to do an update they needed to do
on the fw, so they should be able to cancel the staging without
needing a reviewer.
On forward-porting, odoo/odoo#170183 generates a conflict on pretty
much every one of the 1111 files it touches: they are modify/delete
conflicts, each of which generates a conflict message over 200 bytes
long, adding up to over 200kB of output.
For this specific scenario, the commit message was being passed
through arguments to the `git` command, resulting in a command line
exceeding `MAX_ARG_STRLEN`[^1]. The obvious way to fix this is to pass
the commit message via stdin as is done literally in the line above
where we just copy a non-generated commit message.
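For reference, `git commit-tree` reads its message from stdin when no
`-m`/`-F` is given, so the fix can look like this sketch (assuming a
subprocess wrapper):

```python
import subprocess

def commit_tree(repo_dir, tree, parents, message):
    cmd = ['git', '-C', repo_dir, 'commit-tree', tree]
    for parent in parents:
        cmd += ['-p', parent]
    # message goes through stdin: argv stays tiny no matter how many
    # conflict notices pile up in the generated message
    out = subprocess.run(cmd, input=message, text=True,
                         capture_output=True, check=True)
    return out.stdout.strip()  # sha of the new commit object
```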
However I don't think hundreds of kbytes worth of stdout[^2] is of any
use, so shorten that a bit, and stderr while at it.
Don't touch the commit message size for now, possibly forever, but
note that some log diving reveals a commit with a legit 18kB message
(odoo/odoo@42a3b704f7) so if we want to
restrict that the limit should be at least 32k, and possibly 64. But
it might be a good idea to make that limit part of the ready / merge
checks too, rather than cut things off or trigger errors during
staging.
Fixes#900
[^1]: Most resources on "Argument list too long" reference `ARG_MAX`,
but on both my machine and the server it is 2097152 (25% of the
default stack), which is ~10x larger than the commit message we
tried to generate. The actual limit is `MAX_ARG_STRLEN` which
can't be queried directly but is essentially hard-coded to
PAGE_SIZE * 32 = 128kiB, which tracks.
[^2]: Somewhat unexpectedly, that's where `git-cherry-pick` sends the
conflict info.
d4fa1fd353 added tracking to changes
from *comments* (as well as a few hacks around authorship transfer),
however it missed two things:
First, it set the `change-author` during comments handling only, so
changes from the `PullRequest` hook e.g. open, synchronise, close,
edit, don't get attributed to their actual source, and instead just
fall back to uid(1). This is easy enough to fix as the `sender` is
always provided, that can be resolved to a partner which is then set
as the author of whatever changes happen.
Second, I actually missed one of the message hooks: there's both
`_message_log` and `_message_log_batch` and they don't call one
another, so both have to be overridden in order for tracking to be
consistent. In this case, specifically, the *creation* of a tracked
object goes through `_message_log_batch` (since that's a very generic
message and so works on every tracked object created during the
transaction... even though batch has a message per record anyway...)
while *updates* go through `_message_log`.
Fixes#895
- When a redundant approval is sent to a PR, notify but don't ignore
the entire command set, there's no actual risk.
- Indicate that the entire comment was ignored when finding something
which does not parse.
Fixes#892, fixes#893
The commit cron needs to be triggered any time we:
- create a new commit
- update a commit to set its `to_check`
So do that in create and write as well as the SQL query in the
webhook handler.
This should mean we don't need the periodic cron anymore, but for
safety's sake keep running it every 30 minutes for now.
TBF even if we miss triggers, the next `status` webhook hitting will
check all the relevant commits anyway...
This is useful to repro issues.
60c4b5141d added `inverse=readonly`
hooks to various newly computed fields to ensure they cannot be *written*
to, either overwriting the content (stored) or silently being
dropped (non-stored).
However because they're `inverse` hooks this had the effect of making
them writeable from the backend UI since the ORM uses `inverse` as a
signal to make the field writeable. This then caused the web client to
send stuff for those fields, which are not necessarily even visible in
the form, leading to write errors when trying to save a PR creation.
By marking the fields as `readonly` explicitly we make sure that
doesn't happen, and we can create PRs from the backend UI (kinda, I
think the label is still an issue).
The method was not marked as a create, as a result it did not allow
creating commits via the UI (annoying for testing / reproducing
issues involving statuses).
If a PR gets approved *then* fails CI, there should be a notification
warning the author & reviewer since
48e08b657b, it even has a test, which
passes (in fact it has *two*, one of which is redundant, so merge
`test_ci_failure_after_review` into the later `test_ci_approved`).
*However* this is in runbot_merge, turns out in
fafa7ef437 some refactoring was done in
order to override the notification and customise it for *forward
ports* with a failed status... except that override never called its
`super()`, so as soon as forwardport is installed the base
notification stops working, and that's been that since October
2019 (had been added in March that year, ignoring deployment lag).
This can be revealed by adding the corresponding check in the
*forwardport* tests, revealing the failure.
This was a pain to track down, thankfully it reproduced relatively
easily locally.
While this could be resolved in the override, might as well fold it
into the base method in furtherance of #789: the mergebot is only
used by odoo, and only with both modules combined, so splitting them
is not useful. And furthermore things should work fine with
forwardport installed but unused.
Fixes#894
The backend links in the PR dashboard were gated behind the
`group_user` (internal user) group, however turns out while internal
users have read access to PRs they don't have access to ancillary
objects (e.g. batches, stagings, the link between stagings and
batches), and I think the only way to fix the issue would be to move
it to an optional inheritance (inheritance + group), because `groups`
on view nodes only hides the content without removing it.
I believe in more recent Odoo versions this actually works correctly,
so that might actually be more of an incentive to upgrade...
Previous version would always hide the title if the PR was
blocked (e.g. blocked or failed), turns out there are people who
actually use the PR title on the main dashboard, so suppressing that
is inconvenient for them.
Try to show the PR title if available, and add the blocked message if
present.
- Instead of warning about the merge method on ready PRs, also warn on
*approved* (but exclude staged just cuz), as that's really when the
user wants to know that they forgot to set the merge method
- The cron only triggers hourly, but *if* a user approves a PR *and*
the merge method is not set yet, chances are good they'll need a
reminder (if they `r+ rebase-merge` or w/e the cron will just ignore
the PR and it's no skin off our back), so `_trigger` the cron for
validation.
- Also do the same when skipchecks is set as it's very similar.
In reality we might want to hook off of the state transitioning to
reviewed but I'm not sure there's good ways to do that (triggering a
cron inside a compute doesn't seem like a good idea).
Update a pair of tests which would approve a multi-commit PR without
setting a merge method, just because the helper they use to build the
PR happens to create multiple commits.
Fix#891
This is a minor issue as the PRs of a commit are only listed from the
form, so the compute is pretty much always called with a single record,
but it's still an unforced error which can easily be fixed.
`_schedule_fp_followup` correctly iterates on `self`, however some of
the per-iteration work did not handle that correctly, and would try to
access fields on `self`.
Thankfully in most cases it only works on one batch at a time
anyway, *however* if multiple PRs share a HEAD (which is weird but...)
then `_validate` is called on multiple PRs, which through the
forwardport override leads to `_schedule_fp_followup` being called on
multiple batches, and failing when trying to access the `fw_policy`.
Fix by avoiding the misuse of `self` in the two locations where it's
doing something other than accessing `env`.
Without fw-bot being its bearer, "ignore" is a lot less clear than it
used to be, as it looks to be asking to ignore the PR entirely (as if it
was targeted to an unmanaged branch).
Deprecate this command, and tack on the shortcut to the fw
subcommand. It is slightly sub-par as technically it does not quite
fit with the other subcommands, and furthermore can't be disabled via
fw=default... although maybe it could be? Maybe instead of setting the
limit fw=no could set that value to the forwardport mode, and the
fw_policy users could check that? It would require some more finessing
tho:
- `DEFAULT` would need to be accessible to the author as well as the
reviewers so the author could toggle between `NO` and `DEFAULT`.
- There should probably be a warning of some sort when setting a limit
to an unportable PR.
- The dashboards would need to take `NO` in account (though I guess
that's just defaulting the limit to the target).
Replace the unclear "unchecked" and "unreviewed" by "missing statuses"
and "missing r+", which are hopefully clearer as they better match
other lingo.
Also increase font for attributes, as size 10 was a bit small.
And finally add staging state to caching key, to differentiate "ready"
from "staged" pictures in gh's cache. "ready" should not be necessary
as it ought be implied by the label.
And filter it to only consider branches in the same project as the PR,
and a lower sequence than its target. That way it's harder to fuck up
when trying to set limits from the backend.
Currently this just silently returns a 404. Since repos are gated by
default (only accessible to internal users) this can get very
confusing when trying to setup a new repo or when forgetting this
information when writing tests.
Seems like a good idea to better keep track of the log of an Odoo used
for testing, and avoid silently ignoring logged errors.
- intercept odoo's stderr via a pipe, that way we can still write it
back out and pytest is able to read & buffer it, pytest's capfd
would not work correctly: it breaks output capturing (and printing
on failure); and because of the way it hooks in it's unable to
capture from subprocesses inheriting the standard stream, cf
pytest-dev/pytest#4428
- update the env fixture to check that the odoo log doesn't have any
exception on failure
- make that check conditional on the `expect_log_errors` marker, this
way we can mark tests for which we expect errors to be logged, and
assert that that does happen
Setting the PR state directly really doesn't work as it doesn't
correctly save (and can get overwritten by any dependency of which
there are many).
This caused setting odoo/odoo#165777 in error to fail, leading to it
being re-staged (and failing) repeatedly, and the PR being spammed
with comments.
- create a more formal helper for preventing direct writes to computed
fields lacking an actual inverse (see the sketch after this list)
- replace direct state setting by setting the corresponding dependency
e.g. `error` for error and `skipchecks` to force a PR to ready
- add a `skipchecks` inverse to the PR so it can also set itself as
reviewed, and is convenient, might be worth also adding stuff to
`Batch.write`
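A minimal sketch of such a helper (assuming the usual Odoo imports):

```python
from odoo.exceptions import UserError

def readonly(records):
    # used as `inverse=`: a direct write to the computed field errors
    # out instead of being silently dropped or clobbering the compute
    raise UserError(f"{records._name} can not be written to directly")

# e.g. state = fields.Selection(..., compute='_compute_state',
#                               store=True, inverse=readonly)
```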
Because one of the previous commits adds the duration of the staging
to the staging dropdown toggles, it's now much longer, and by default
the text does not wrap so it looks like shit and goes completely out
of the column, "CSS is awesome" style.
Update the style of the dropdown toggles specifically to allow text
wrapping. Also align them left instead of centering, because the text
makes a centered layout super ugly.
44084e303c changed the interpretation
and schema of the `statuses_cache` field on stagings, but I forgot to
add a migration, so it would just blow up on opening the home
dashboard or the staging lists.
The dashboard can be a bit unclear as to the state of a PR when
everything's gone well. Make it more clear / explicit that it's ready
or staged.
Fixes#888
Currently webhook secrets are configured per *project* which is an
issue both because different repositories may have different
administrators and thus creates safety concerns, and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary to have multiple projects
share the same secret as well, which then spreads that secret across
even more repositories, compounding the first issue.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories, it's cleaner and easier to manage
and set up progressively.
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad-hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as `forwardport` where it's all ad-hoc, event sources setup was just
appended as-is. This should probably be cleaned up at some point, with
the various requirements collected and organised into a small set of
fixtures doing the job more uniformly.
Fixes#887
Using a regex as the pattern is quite frustrating due to all the
escaping necessary, which in this refactoring I found out I'd missed,
multiple times.
Convert the pattern to something bespoke but not too complicated, we
may want to add anchoring support and a bit more finesse in the
future, but for now straightforward "holes" seem to work well. I've
added support for capturing and even named groups, even if this is as
yet unnecessary and unused.
Fixes#861
I have been convinced that this might be an improvement to the affairs
of the people: originally the message was sent to the source PR so we
wouldn't have to ping the author & reviewer and to limit the amount of
spam, *however*:
- we ended up adding pings anyway
- it also pings the followers of the source PR
- it increases the size of the original discussion (especially if it
was originally long)
- it adds steps to fixing the issue as you need to bounce from the
source to the forward ports
Note that this might still notify a lot of people as they might be
made followers of the forward ports automatically, and it increases
the messaging load of the forwardbot significantly. But we'll see how
things go. Worst case scenario, we can revert it back.
Fixes#836
Add the ability to validate *stagings* over RPC rather than via
webhook. This may later be expanded to PRs as well.
The core motivation for this is to avoid bouncing through github which
sometimes drops the ball on statuses, and it's frustrating to have a
staging time out because GH fucked up.
Implemented via RPC, requiring both the staging itself (by id) and the
head commit being affected, as that is necessary to know what CIs are
required for that head and correctly report cross branch on the
various PRs.
Fix#881 (kinda)
Rather than compute staging state directly from commit statuses, copy
statuses into the staging's `statuses_cache` then compute state based
on that. Also refactor `statuses` and `staging_end` to be computed
based on the statuses cache instead of the commits (or `state`
update).
The goal is to allow non-webhook validation of stagings, for direct
communications between the mergebot and the CI (mostly runbot).
Github makes it painfully difficult to access the statuses (especially
their URL / related build) once a PR has been merged, as it's
necessary to find the last non-staging commit mention / update in
order to find its statuses checkbox thingie, open that, and access the
statuses.
The mergebot has all the links, so it can just display them in the
merged mode as well rather than only display them in open mode. That
way even on a merged PR the statuses are just two clicks away.
Fixes#873
Computed on the fly for now. Formatted nicely in the frontend, there
does not seem to be any sort of duration widget in the backend so
just display the integer number of seconds.
Fixes#865
if rebased. Untouched commits (straight merge) remain unaltered, but
all rebased or squashed commits now get signoff and `Related` headers
added on top of the previously added `part-of`.
Implement by generalising `_build_merge_message` to `_build_message`
and having `add_self_references` delegate to it, removes some of the
redundancy / differential handling.
Update the `part_of` helper to also add the S-O-B header to the PR,
although it currently does not reference the entire forward port
chain.
Fixes#876
Shove a bunch of stuff in notebook tabs, add a few
affordances (e.g. github and frontend links, links from m2m), surface
a few missing fields.
Hopefully makes the backend form both easier to navigate and easier to
administrate from.
Displays the entire batch set as a table, along both
repository (linked PRs) and branch (forward ports). Should provide a
much more complete overview.
Adds a copy of the dashboard as a raster render, to link from the PR:
as usual SVG is shit, content-based viewboxes are hell and having to
duplicate the entire CSS because `<img/>`-linked CSS can't run is
gross. And there's no payoff since the image is not interactive
anyway.
Performing manual ad-hoc table rendering via pillow is not
significantly worse, it works fine and it's possible to do *really*
good conditional request handling (hopefully) because I've basically
got all the information I need right here.
In fact it might make sense to upgrade the regular HTML page with
similar conditional request handling, at least for the last-update
bit if not the etag.
Fixes #771, fixes #770
Merged PRs should have a batch which should have a staging, this makes
the treatment uniform across the board and avoids funky data which is
hard to place or issues when reconstructing history.
Also create synthetic batches & stagings for older freezes (and bumps)
Initially wanted to skip this only for FW PRs, but after some thinking
I feel this info could still be valuable even for non-fw PRs which
were never merged in the first place.
Requires a few adjustments to not break *everything*: `batch.prs`
excludes closed PRs by default as most processes don't expect to be
faced with closed PRs inside a batch, and we *especially* want to
avoid that before the batch is merged (as we'd risk staging a closed
PR).
However since PRs don't get removed from batches anymore (and batches
don't get deleted when they have no PRs) we now may have a bunch of
batches whose PRs (usually a single one) are all closed, this has two
major side-effects:
- a new PR may get attached to an old batch full of closed PRs (as
batches are filtered out on being *merged*), which is weird
- the eventual list of batches gets polluted with a bunch of
irrelevant batches which are hard to filter out
The solution is to reintroduce an `active` field, as a stored compute
field based on the state of batch PRs. This way if all PRs of a batch
are closed it switches to inactive, and is automatically filtered out
by search which solves both issues.
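A sketch of that compute (the unfiltered PR relation is hypothetical):

```python
active = fields.Boolean(compute='_compute_active', store=True)

@api.depends('all_prs.closed')  # `all_prs`: assumed unfiltered relation
def _compute_active(self):
    for batch in self:
        batch.active = any(not pr.closed for pr in batch.all_prs)
```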
Batch ordering in stagings is important in order to correctly
reconstitute the full project history.
In the old mergebot, since batches are created on the fly during
staging this information is reified by the batch ids. But since batch
ids are now persistent and there is no relationship between the
creation of a batch and its merging (especially not relative to other
batches) it's an issue as reconstituting sub-staging git history would
be impossible.
Which is not the worst, but is not great.
The solution is to reify the join table between stagings and batches
in order for *that* to keep history (simply via the sequential PK),
and in converting to the new system carefully generate the new links
in an order matching the old batch ids.
Given a batch which has been merged, and been forward-ported, to
multiple branches (because skipci was set or ci passed on the repos
the batch covers).
There might come the need to add a PR for one of the uncovered
repos. This raises the question of what to do with it: since the
forward-ports for the batch already exist it's not going to get
forward-ported normally, nor may we necessarily want it to.
Options are:
- don't do anything, such additions don't get ported, this is
incongruous and unexpected as by default PRs are forward-ported, and
if the batch wasn't an intermediate (but e.g. a conflict) it
probably would be ported forward
- port on merge, this allows configuring the PR properly (as it might
need its own limit) but it means further batches may get
unexpectedly merged (or at least readied) without the additional PR
even though we likely want it in
- immediately port the additional PR on creation, this makes the limit
harder or impossible to configure but it makes the *batch sequence*
more consistent
We ended up selecting the latter, it feels closer to the updates
system, and it creates more consistent batches through the
sequence. It's also technically easier to ad-hoc port a PR through a
bunch of branches than it is to update the "normal" forward-port
process to handle partial fixups.
This is definitely non-trivial, due to the structural changes and the
amounts of stuff to move around (e.g. lift from PR to batch), as well
as the reification of previously non-existent relations (batches,
batch history, ...) which sometimes uncovers inconsistencies in the
current state of the mergebot (some of which are the result of bugs,
the bug got fixed but the nonsense it generated was left untouched).
Test and refine the handling of batch forward ports around branch
deactivation, especially with differential. Notably, fix an error in
the conversion of the FW process to batches: individual PR limits were
not correctly taken into account during forward port unless *all* PRs
were done, even though that is a primary motivation for the
change.
Partial forward porting should now work correctly, and the detection
and handling of differential next target should be better handled to
boot.
Significantly rework the interplay between batches and PRs being
closed in order to maintain sequencing / consistency of forward port
sequences: previously a batch would get deleted if all its PRs are
closed, but that is an issue when it is part of a forward port
sequence as we now lose information.
Instead, detach the PRs from the batch as before but have the batch
skip unlinking if it has historical value (parent or child
batch). Currently the batch's state is a bit weird as it doesn't get
merged, but...
While at it, significantly simplify `_try_closing` as it turns out to
have a ton of incidental / historical complexity from old attempts at
fixing concurrency issues, which should not be necessary anymore and
in fact actively interfere with the new and more compute-heavy state
of things.
Thank god I have a bunch of tests because once again I forgot / missed
a bunch of edge cases in doing the conversion, which the tests
caught (sadly that means I almost certainly broke a few untested edge
cases).
Important notes:
Handling of parent links
------------------------
Unlike PRs, batches don't lose their parent info ever, the link is
permanent, which is convenient to trawl through a forward port
(currently implemented very inefficiently, maybe we'll optimise that
in the future).
However this means the batch having a parent and the batch's PRs
having parents are slightly different pieces of information. One of the
edge cases I missed is that of conflicting PRs, which are deparented
and have to be merged by hand before being forward ported further: I
had originally replaced the checks on a PR and its sibling having
parents by just the batch.
Batches & targets
-----------------
Batches were originally conceived as being fixed to a target and PRs
having that target, a PR being retargeted would move it from one batch
to an other.
As it turns out this does not work in the case where people retarget
forward-port PRs, which I know they do because #551
(2337bd8518). I could not think of a
good way to handle this issue as is, so scrapped the moving PRs thing,
instead one of the coherence checks of a batch being ready is that all
its PRs have the same target, and a batch only has a target if all its
PRs have the same target.
It's possible for somewhat odd effects to arise, notably if a PR is
closed (removed from batch), the other PRs are retargeted, and the new
PR is reopened, it will now be on a separate batch even if it also
gets retargeted. This is weird. I don't quite know how I should handle
it, maybe batches could merge if they have the same target and label?
however batches don't currently have a label so...
Improve limits
--------------
Keep limits on the PRs rather than lifting them to the batch: if we
can add/remove PRs of batches, having different limits on different
PRs of the same batch is reasonable.
Also leave limit unset by default: previously, the limit was eagerly
set to the tip (accessible) branch. That doesn't really seem
necessary, so stop doing that.
Also remove completely unnecessary `max` when trying to find a PR's
next target: `root` is either `self` or `self.source_id`, so it should
not be possible for that to have a later target.
And for now ensure the limits are consistent per batch: a PR defaults
to the limit of their batch-mate if they don't have one, and if a
limit is set via command it's set on all PRs of a batch.
This commit does not allow differential limits via commands, they are
allowed via the backend but not really tested. The issue is mostly
that it's not clear what the UX should look like to have clear and not
super error prone interactions. So punt on it for now, and hopefully
there's no hole I missed which will create inconsistent batches.
In case of PRs not being ready, don't just say the PRs are waiting for
CI even though they might be unreviewed, and distinguish
between *waiting* for CI (pending) and having failed CI.
It's a bit weird and inconsistent to have a PR being staged while
unreviewed or unapproved or w/e. If we compute the state based on
skipchecks then everything is consistent.
Also remove the implicit override of all statuses when explicitly
marking the pr as `ready`, it risks creating difficult to understand
states, and it's unnecessary since `skipchecks` gets set.
Also as with setting skipchecks, sets the current user as reviewer on
all PRs of the batch without a reviewer.
- remove the `legal/cla` and `ci/runbot` context names, which I use a
lot for historical reasons but fundamentally they're not useful to
the tests, the `default` context is generally simpler.
- remove `make_branch` helper as we don't actually use branch
protection and at the end of the day it doesn't do much else
- convert a few explicit PR lookups to the project-wide `to_pr` helper
Move staging cancellation to the batch, remove its (complicated)
handling from the PRs.
This loses some precision in the cancellation message, but that could
likely be recovered (in part) by adding more precise checks &
diagnostic extractions in the compute.
Because `alone` (formerly p != 2) is selected before split PRs, if a
prioritised PR gets split (or a split PR gets prioritised) it will be
staged once as prioritised, and again because split.
Improve the selection of ready batches to exclude split batches
upstream, such that they don't have to be rechecked over and over, and
their priorities don't cause us issues.
Simplifies the `ready_prs` query a bit and allows it to be converted
to an ORM search, by moving the priority check outside. This also
allows the caller to not need to post-process the records list
anywhere near the previous state of affairs.
`ready_prs` now returns *either* the "alone" batches, or the non-alone
batches, rather than mixing both into a single sequence. This requires
correctly applying the search filters to not retrieve priority of
batches in error or targeting other branches.
Staging readiness is a batch-level concern, and many of the markers
are already there though a few need to be aggregated from the PRs. As
such, staging has no reason to be performed in terms of PRs anymore,
it should be performed via batches directly.
There is a bit of a mess in order not to completely fuck up when
retargeting PRs (implicitly via freeze wizard, or explicitly) as for
now we're moving PRs between batches in order to keep the
batches *mostly* target-bound.
Some of the side-effects in managing the coherence of the targeting
and moving PRs between batches is... not great. This might need to be
revisited and cleaned up with those scenarios better considered.
- `merge_date` should be common to an entire batch, so move it there
- remove `Batch.active` which should probably have been removed when
batches were made persistent (can eventually re-add as a proxy for
`merge_date` being set maybe, but for now removing it seems a better
way to catch mistakes)
- update various sites to use `Batch.merge_date` instead of
`Batch.active`
This probably has latent bugs, and is only the start of the road to v2
(#789): PR batches are now created up-front (alongside the PR), with
PRs attached and detached as needed, hopefully such that things are
not broken (tests pass, but...). This required a fair number of
adjustments to code not taking batches into account, or creating
batches on the fly.
`PullRequests.blocked` has also been updated to rely on the batch to
get its batch-mates, such that it can now be a stored field with the
right dependencies.
The next step is to better leverage this change:
- move cross-PR state up to the batch (e.g. skipchecks, priority, ...)
- add fw info to the batch, perform forward-ports batchwise in order
to avoid redundant batch-selection work, and allow altering batches
during fw (e.g. adding or removing PRs)
- use batches to select stagings
- maybe expose staging history of a batch?
Not sure it's going to be useful but it's hard to know if we can't
test it. The intent is mostly the ability to prioritize throughput (or
attempt to) during high-load events, if we can favour staging N
new batches over a split's N/2 we might be able to merge more crap.
But maybe not, we'll see; either way it's now here and seems to more
or less work.
Fixes#798
Because the mergebot crons are on such a tight schedule, and just
them finding out they have nothing to do can take a while, disabling
them can be a chore. Disabling staging via the project is much less
likely to cause issues as the projects don't normally (or ever?) get
exclusively locked, so they can generally be written to at any moment.
Furthermore, if we ever get in a situation where we have multiple
active projects (not really the case currently, we have multiple
projects but only one is really active) it's less disruptive to
disable stagings on a single specific project.
Fixes#860
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to hard
error (and ignoring the entire thing) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now been split along all the axes, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
and `alone`,
- `default` means PRs are picked in an unspecified order when
creating a staging, if nothing better is available
- `priority` means PRs are picked first when staging; however if
`priority` PRs don't fill the staging, the rest will be filled with
`default` (this mode did not previously exist)
- `alone` means the PRs are picked first, before splits, and only
`alone` PRs can be part of the staging (which usually matches the
mode name)
- `skipchecks` overrides both statuses and approval checks, for the
batch, something previously implied in `p=0`, but now
independent. Setting `skipchecks` basically makes the entire batch
`ready`.
For consistency this also sets the reviewer implicitly: since
skipchecks overrides both statuses *and approval*, whoever enables
this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
ready again; previously this was also implied (in a more restricted
form) by setting `p=0`
FWBot removal
=============
While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot, the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply changes bot
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
the forward-port policy,
- `fw=default` is the old `ci` and resets the policy to the default,
that is: wait for the PR to be merged to create forward ports, and
for the required statuses on each forward port to be received
before creating the next
- `fw=skipci` is the old `skipci`, it waits for the merge of the
base PR but then creates all the forward ports immediately (unless
it gets a conflict)
- `fw=skipmerge` immediately creates all the forward ports, without
even waiting for the PR to be merged
This is a completely new mode, and may be rather broken as until
now the 'bot has always assumed the source PR had been merged.
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parents with different access rights".
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the
reference data gets updated naturally, flowing into the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
*that* becomes the validation state
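As a rough sketch of how the derivation could look (a hedged
simplification with assumed state names and ordering; the real
compute handles more cases):

    state = fields.Selection(
        [('opened', "Opened"), ('validated', "Validated"),
         ('approved', "Approved"), ('ready', "Ready"),
         ('merged', "Merged"), ('error', "Error"), ('closed', "Closed")],
        compute='_compute_state', store=True)

    @api.depends('closed', 'error', 'merge_date', 'reviewed_by', 'status')
    def _compute_state(self):
        for pr in self:
            if pr.closed:
                pr.state = 'closed'
            elif pr.merge_date:
                pr.state = 'merged'
            elif pr.error:
                pr.state = 'error'
            elif pr.reviewed_by and pr.status == 'success':
                pr.state = 'ready'
            elif pr.reviewed_by:
                pr.state = 'approved'
            elif pr.status == 'success':
                pr.state = 'validated'
            else:
                pr.state = 'opened'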
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
One notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
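The shape of the hack, as a hedged sketch (the field names here are
hypothetical stand-ins):

    def modified(self, fnames, create=False, before=False):
        super().modified(fnames, create=create, before=before)
        if 'overrides' in fnames:
            # the ORM can't declare "my `statuses` depend on my
            # parent's `overrides`", so propagate the notification
            # down the ancestry chain by hand (stops at the leaves)
            children = self.search([('parent_id', 'in', self.ids)])
            if children:
                children.modified(['overrides'])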
unconditional followup scheduling
---------------------------------
Previously, scheduling the forward-port followup was contingent on the
FW policy, but that's not actually correct if the new PR is
*immediately* validated (which can happen now that the field is
computed, if there are no required statuses *or* all of the required
statuses are overridden by an ancestor) as nothing will trigger the
state change and thus the scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports, although it's a bit more
expensive and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
so on retry, not just when initially set.
This was not handled at all previously, so a PR in P0 going into
error due to e.g. a non-deterministic bug would be retried while still
p=0, but a current staging would not get cancelled. Same when a PR
in p=0 goes into error because something failed, then is updated
with a fix.
- Add tracking to a bunch of relevant PR fields.
Post-mortem analysis currently generally requires going through the
text logs to see what happened, which is annoying.
There is a nondeterminism / inconsistency in the tracking which
sometimes leads the admin user to trigger tracking before the bot
does, leading to the staging tracking being attributed to them
during tests; this is shoved under the carpet by ignoring the user to
whom that tracking is attributed.
When multiple users update tracked fields in the same transaction
all the changes are attributed to the first one having triggered
tracking (?), and I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and never ran (because
I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
fixes#673, fixes#309, fixes#792, fixes#846 (probably)
- move all commands parsing to runbot_merge as part of the long-term
unification effort (#789)
- set up an actual parser-ish structure to parse the commands to
something approaching a sum type (fixes#507)
- this is mostly prep for reworking the commands set (#673), although
*strict command parsing* has been implemented (cf update to
`test_unknown_commands`)
Sadly m2ms don't support tracking, so add a bunch of ad-hoc tracking
to the override rights in order to know who, what, when at least.
Do the same for the review rights although maybe tracking works for
those.
It might not be a huge amount of extra work since we're never actually
retrieving the rows, but it still seems completely unnecessary.
Sadly we can't do something cleaner like an aggregation, because
aggregating requires moving the locking query to a subquery, and
experimentally that seems slower than just ignoring / discarding the
result set.
If an untracked PR is closed, especially on an inactive or untracked
branch, the closer (or author) almost certainly doesn't care to receive
3 different notifications on the subject.
The fix requires a schema change in order to track that we're fetching
the PR due to a `closed` event, as in other cases we may still want to
notify the user that we received the request (and it just happened to
resolve to a closed PR).
Fixes#857
- correctly handle projects without a secret set: we don't want the
requests to blow up by trying to `strip()` a `False` or `None`, that
is dumb, who would do that?
- provide better reporting on signature mismatch: which repo we tried
to access, and the full list of headers
- log when there was no signature matching, either because there was
no signature in the request and no secret on the project, or because
the request is signed but no secret is configured on the repo
`gc --prune` cannot take a *separate* parameter, it has to be part of
the same arg (the `=` is not optional), otherwise the `gc` call blows
up.
So use the positional form of the git command to generate the correct
invocation: the Python-level `foo=bar` generates a split-style option
in two args, which does not please git.
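To illustrate, assuming a wrapper where keyword arguments become
options (approximated, not the actual API):

    git.gc(prune='now')    # -> ['git', 'gc', '--prune', 'now']: gc errors out
    git.gc('--prune=now')  # -> ['git', 'gc', '--prune=now']: works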
Before this, we would check if a repository had a name and run
maintenance on it, leading to repeated (but unnoticed until now
because I didn't monitor it) tracebacks as the maintenance cron would
fail to find the local repo then run maintenance on nowhere anyway.
Also augment the repo-finding process to try and get better
information about what it's doing when it fails, rather than failing
completely silently.
The signature validation code seems correct, but there are validation
failures in production, so increase logging around webhook requests to
try and diagnose things better:
- dump the *entire* body to the github_requests logfile
- add the received & computed signatures to the log error
Turns out I've always been mistaken about the handling of quotes
*inside* shell parameters: apparently they are always consumed by the
shell unless nested, so
--foo="bar"
reaches the underlying program as
--foo=bar
This means when using subprocess (without shell=True), adding the
quotes leads to mishandling of the parameters (as the subprocess now
has quotes it's not equipped to deal with).
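A quick illustration with Python's subprocess:

    import subprocess
    # through a shell, the shell consumes the quotes:
    subprocess.run('git show -s --pretty="format:%an" HEAD', shell=True)
    # git receives --pretty=format:%an and prints e.g.: Author Name
    # without a shell, the quotes reach git verbatim:
    subprocess.run(['git', 'show', '-s', '--pretty=format:"%an"', 'HEAD'])
    # git receives --pretty=format:"%an" and prints: "Author Name"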
This exact error was made in the `--pretty` parameter of git show:
locally it results in the author name and the committer email being
terminated by double quotes, although somehow other layers seem to
exclude those from the end result (I assume `commit-tree` strips the
quotes from the envvars under the assumption that users can mistakenly
quote them, or something?).
Anyway while it does not seem harmful (so far), better safe than
sorry.
Add intermediate forks to a pair of tests, because github now (?)
requires being able to write on a branch to create a PR from it, so
the non-collaborator reviewers were not able to create a PR from a
branch created by the user.
Github delivery delays keep getting worse. Depending on what comes
before `to_pr`, this leads it to fail more often, as it runs before the
PR it's looking for has been signaled to the mergebot.
In order to mitigate this issue, add a wait loop in `to_pr`, waiting
up to 4 seconds for the PR it's looking for before aborting.
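A sketch of the loop (model and helper names approximate):

    import time

    def to_pr(env, pr, *, attempts=4):
        # wait for the PR webhook to have been delivered & processed
        for _ in range(attempts):
            pr_ids = env['runbot_merge.pull_requests'].search([
                ('repository.name', '=', pr.repo.name),
                ('number', '=', pr.number),
            ])
            if pr_ids:
                return pr_ids
            time.sleep(1)
        raise TimeoutError(f"Unable to find PR {pr.repo.name}#{pr.number}")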
Also replace manual lookups by `to_pr` in every method of
`TestPRUpdate` while at it since it hit a few of the issues. And
remove the xfail test case since it seems unlikely github will change
tack (maybe? could be worth testing to be sure).
Reverts commit 85a7890023 which
untrimmed the commits: while it's *probably* true that git and
github's APIs differ in their treatment of whitespace (in that git
pretty much always terminates the commit message with a newline while
github does not, as far as I understand, though I didn't really
validate it), the issue was that github also trims on *output* when
fetching over the API, something the fake did not do.
So rather than update the test data I should have fixed the fake, but
I failed to realise that at the time. I only realised when I decided
to re-run against github actual (something I rarely do anymore as it's
painfully slow) and it went on to choke on every message I'd updated.
The logging line was copied over from the github-api version, but it
was not correctly fixed up to match, leading to a lot of spam on
stderr when debug is enabled (aka spams journalctl on the production
server).
Splat the logging call out of `rebase` and into the various callers,
so they have access to the PR object to log it.
Forgot to bump the version when creating the migration. Also convert
the migration to a single SQL query, although the migration will never
run because I ran the query manually to fix things up after finding
out the data was "dirty": the new code (assuming only modern
statuses) was merged without the migration having run.
Thankfully it looks like the impact was not too severe (because the
legacy statuses should only be present on very old commits / PRs). I
don't remember when I deployed the update, but apparently just a pair
of PRs got affected: their `previous_failure` was the old
style and thus broke the "new failure" check.
Forgot to deref the id of the staging we're trying to lock, so the
specific case where we start a freeze with a bump PR and an
outstanding staging in master would instantly blow up.
The low-level APIs used by the staging process don't do any merge
check, so because of the way Git works it's possible for them to merge
commits which have content as empty commits, e.g. if something was
merged then backported and the backport was merged on top. This should
trigger a merge failure as we don't really want to merge newly
empty commits. This is a feature which some high-level git commands
kind-of support, e.g. by default `git rebase --interactive` will ask
about newly empty commits.
Take care to allow merging already-empty commits, as these do have
uses, e.g. signaling, freezes, ...
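The check itself is cheap with plumbing: a commit is "newly empty" if
its rebuilt tree matches its new parent's tree even though the
original commit did introduce changes. A hedged sketch (the `git(...)`
wrapper and names are assumptions):

    def check_not_newly_empty(git, original, new_parent, new_tree):
        if new_tree != git('rev-parse', f'{new_parent}^{{tree}}'):
            return  # the staged commit has content, all good
        if git('rev-parse', f'{original}^{{tree}}') \
                == git('rev-parse', f'{original}~1^{{tree}}'):
            return  # was already empty (signaling, freeze, ...), allowed
        raise MergeError(f"commit {original} would become empty")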
Fixes#809
Prepares the possibility of either more direct communication with the
CI platform(s) or just assuming CI has gotten reliable enough and
colleagues intelligent enough that this is not an issue anymore
because they've stopped pushing empty branches (which we know is not
the case).
Fixes#806
During the 17.0 freezeathon, the freeze wizard blew up with
    MergeError: merge-tree: {oid} - not something we can merge
Turns out when freezes were moved to local
(4d2c0f86e1) I forgot to fetch the heads
of the release and bump PRs into the local repo, so rebasing them atop
their branch would fail because the local repository would just not
find the object being rebased.
I had missed that case in testing as well, but in fairness even if I
had tried testing it I'd likely have missed it: implementation
limitations (shortcuts) of dummy central mean it currently ignores
what objects the client requests and bundles everything it can find
associated with the repository (meaning it sends the entire network).
This is not usually an issue because the test repos are pretty small,
but it means the client can have objects it should not have, because it
never requested them and might not even be supposed to be aware of
their existence.
Anyway solve by doing the obvious: fetch the heads of the release and
bump PRs at the same time we update the branch being forked off. Also
update the freeze tests to trigger the issue (by creating the release
/ bump PRs in different repos) and run the tests against github
actual to make sure we can actually see them fail (correctly, with the
merge error we expect, not via errors in the test), and that we do fix
them.
Fixes#821
Currently, once a source PR has been merged it's not possible to set
or update a limit, which can be inconvenient (e.g. people might have
forgotten to set it, or realise after the fact that setting one is not
useful, or might realise after the fact that they should *unset* it).
This PR relaxes that constraint (which is not just a relaxation as it
requires a bunch of additional work and validation); it should now be
possible to:
- update the limit to any target, as long as that target is at or
above the current forwardport tip's
- with the exception of a tip being forward ported, as that can't be
cancelled
- resume a forward port stopped by a previously set limit (just
increase it to whatever)
- set a limit from any forward-port PR
- set a limit from a source, post-merge
The process should also provide pretty chatty feedback:
- feedback on the source and closest root
- feedback about adjustments if any (e.g. when setting the limit to B
but there's already a forward port to C, the 'bot should set the limit
to C and tell you that it happened and why)
Fixes#506
If a branch `foo` is disabled, then `tmp.foo` and `staging.foo` become
unnecessary (with #247 fixed the tmp refs are not used for creating
stagings anymore, but for now they're still used for the "safety
dance" of merging a successful staging into the corresponding
mainline).
Fixes#605
Per the Github webhook documentation:
1. sha1 signatures are deprecated; github recommends sha256 (though
that's unlikely to be a concern anyway), and dummy-central supports
both so it should be no issue.
> If possible, we recommend that you use the x-hub-signature-256
> header for improved security.
2. Non-ascii secrets are supported and should be utf8-encoded to
compute signatures... that's not actually documented, as the github
docs only mention payload encoding, but it seems to make sense anyway.
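For reference, the sha256 validation boils down to this (a sketch
following github's documented scheme):

    import hashlib
    import hmac

    def valid_signature(secret: str, payload: bytes, header: str) -> bool:
        # utf8-encode the secret, hash the payload as received
        digest = hmac.new(secret.encode(), payload, hashlib.sha256)
        return hmac.compare_digest('sha256=' + digest.hexdigest(), header or '')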
Also improve the warning message by replacing the signature (which is
useless) with the delivery id (which could allow introspecting the hook
or something).
Currently a user is not notified that the parent of a detached PR
needs to be independently approved and may miss that information. Add
a notification to *that* PR as well.
Fixes#788
The github API has gotten a lot more constraining (with rate
restrictions being newly enforced or added somewhat out of nowhere),
and as importantly a lot less reliable. So move the staging process
off of github and perform it locally, similar to the forward-porting
process (whose repo cache is being reused for this).
Fixes#247
Probably less necessary than for the regular staging stuff, but might
as well while at it.
Requires updating one of the tests to generate a non-ff push, as
O_CREAT doesn't exist at the git level, and the client (and it is
client-side) only protects against force pushes. So there is no way to
trigger an issue with just the creation of the new branch, it needs to
exist *and point to a non-ancestor commit*.
Also remove a sleep in the ref update loop as there are no ref updates
anymore, until the very final sync via git.
NB: maybe it'd be possible to push both bump and release PRs together
for each repo, but getting which update failed in case of failure
seems difficult.
It has been a consideration for a while, but the pain of subtly
interacting with git via the ignominious CLI kept it back. Then ~~the
fire nation attacked~~ github got more and more tight-fisted (and in
some ways less reliable) with their API.
Staging pretty much just interacts with the git database, so it's both
a facultative github operator (it can just interact with git directly)
and a big consumer of API requests (because the git database endpoints
are very low level so it takes quite a bit of work to do anything
especially when high-level operations like rebase have to be
replicated by hand).
Furthermore, an issue has also been noticed which can be attributed to
using the github API (and that API's reliability getting worse): in
some cases github will fail to propagate a ref update / reset, so when
staging 2 PRs it's possible that the second one is merged on top of
the temporary branch of the first one, yielding a kinda broken commit
(in that it's a merge commit with a broken commit message) instead of
the rebase / squash commit we expected.
As it turns out it's a very old issue, but it only happened very early
on, so it was misattributed and not (sufficiently) guarded against:
- 41bd82244bb976bbd4d4be5e7bd792417c7dae6b (October 8th 2018) was
spotted but thought to be a mergebot issue (might have been one of
the opportunities where ref-checks were added though I can't find
any reference to the commit in the runbot repo).
- 2be25052e147b151d1d8a5bc73cceb351586ce03 (October 15th, 2019) was
missed (or ignored).
- 5a9fe7a7d05a9df7186072a7bffd60c6b428fd0e (July 31st, 2023) was
spotted, but happened at a moment where everything kinda broke
because of github rate-limiting ref updates, so the forensics were
difficult and it was attributed to rate limiting issues.
- f10d03bf0f2e8f88f62a5d8356b84f714196130f (August 24th, 2023) broke
the camel's back (and the head block): the logs were not too
interspersed with other garbage and pretty clear that github ack'd a
ref update, returned the correct oid when checking the ref, then
returned the wrong oid when fetching it later on.
No Working Copy
===============
The working copy turns out to not be necessary, the plumbing commands
we *need* work just fine on a bare repository.
Working without a WC means we had to reimplement the high level
operations (rebase) by hand much as we'd done previously, *but* we
needed to do that anyway as git doesn't seem to provide any way to
retrieve the mapping when rebasing/cherrypicking, and cherrypicking by
commit doesn't work well as it can't really find the *merge base* it
needs.
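For instance, "rebasing" a single commit onto a new base needs no
working copy at all; a sketch assuming git 2.38+ (for `merge-tree
--write-tree`) and a `git(...)` wrapper returning stripped stdout:

    def rebase_one(git, commit, onto, message):
        # three-way merge of `commit` onto `onto`, with commit^ as the
        # merge base, entirely within the object database
        tree = git('merge-tree', '--write-tree',
                   f'--merge-base={commit}^', onto, commit)
        # record the merged tree as a new commit parented on `onto`
        return git('commit-tree', tree, '-p', onto, '-m', message)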
Forward-porting can almost certainly be implemented similarly (with
some overhead), issue #803 has been opened to keep track of the idea.
No TMP
======
The `tmp.` branches are no more: the process of creating stagings is
based entirely around oids. If staging something fails we can just
abandon the oids (they'll be collected by the weekly GC); we only
need to update the staging branches at the very end of the process.
This simplifies things a fair bit.
For now we have stopped checking for visibility / backoff as we're
pushing via git; hopefully it is a more reliable reference than the
API.
Commit Message Formatting
=========================
There's some unfortunate churn in the test, as the handling of
trailing newlines differs between github's APIs and git itself.
Fixes#247
PS: It might be a good idea to use pygit2 instead of the CLI
eventually, the library is typed which is nice, and it avoids
shelling out although that's really unlikely to be a major cost.
Necessary to create commits *as* the mergebot without going through
the github API. Copy of the improved version from forwardport. *Not*
an override, to avoid unnecessarily triggering one or the other which
is confusing and weird.