The `to_pr` helper was added a *while* ago to replace the pretty
verbose process of looking up the PR with the right number in the right
repository after a `make_pr`. However, while I did some ad-hoc
replacement of existing `search` calls as I had to work with existing
tests, I never did a full search-and-replace to try and excise searches
from the test suite.
This is now done. I've undoubtedly missed a few, but a hundred-odd
lines of code have been simplified.
If a comment causes an unknown PR to be fetched, it's a bit odd to
ping the author (and possibly reviewer) anyway as they're not super
concerned (and technically we could be ignoring the purported /
attempted reviewer).
So if a fetch job was created because of a comment, remember the
comment author and ping *them* instead of using the default ping
policy.
Fixes #981
Because of the false negatives due to github's reordering of events on
retargeting, blocking on merge methods can be rather frustrating for the
user, as what's happening and how to solve it isn't clear in that case.
Keep the warnings and all, but remove the blocking factor: if a PR
doesn't have a merge method and is not single-commit, just skip it on
staging. This way, PRs which are actually single-commit will stage
fine even if the mergebot thinks they shouldn't be.
Fixes #957
By updating the staging timeout every time we run `_compute_state` and
still have a `pending` status, we'd actually bump the timeout *on
every successful CI* except for the last one. Which was never the
intention and can add an hour or two to the mergebot-side timeout.
Instead, add an `updated_at` attribute to statuses (name taken from
the webhook payload even though we don't use that), read it when we
see `pending` required statuses, and update the staging timeout based
on that if necessary.
That way as long as we don't get *new* pending statuses, the timeout
doesn't get updated.
Fixes #952
- Switch to just `default` ci.
- Autouse fixture to init the master branch.
- Switch to `make_commits`.
- Merge `test_reopen_update` and `test_update_closed_revalidate` into
`test_update_closed`: the former did basically nothing useful and
the latter could easily be folded into `test_update_closed` by just
validating the new commit.
Broken (can't run odoo at all):
- In Odoo 17.0, the `pre_init_hook` takes an env, not a cursor; update
`_check_citext` accordingly (see the sketch after this list).
- Odoo 17.0 rejects `@attrs` and doesn't say where they are or how to
update them, fun, hunt down `attrs={'invisible': ...` and try to fix
them.
- Odoo 17.0 warns on non-multi creates, update them, most were very
reasonable, one very wasn't.
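A minimal sketch of the hook update, assuming `_check_citext` only
needs the cursor (its body here is purely illustrative):
```python
# Odoo 17.0: pre_init_hook receives an Environment rather than a Cursor
def _check_citext(env):
    cr = env.cr  # the cursor is now reached through the env
    cr.execute("SELECT 1 FROM pg_extension WHERE extname = 'citext'")
    if not cr.rowcount:
        raise AssertionError("citext extension is required")  # illustrative
```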
Test failures:
- Odoo 17.0 deprecates `name_get` and doesn't use it as a *source*
anymore; replace `name_get` overrides with overrides of
`_compute_display_name` (see the sketch after this list).
- Multiple tracking changes:
- `_track_set_author` takes a `Partner` not an id.
- `_message_compute_author` still requires overriding in order to
handle record creation, which in standard doesn't support author
overriding.
- `mail.tracking.value.field_type` has been removed, the field type
now needs to be retrieved from the `field_id`.
- Some tracking orderings have changed and require adjusting a few
tests.
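As an illustration of the `name_get` conversion (model and display
format entirely made up):
```python
from odoo import api, fields, models

class Thing(models.Model):
    _name = 'example.thing'  # illustrative model
    name = fields.Char()

    # previously: def name_get(self): return [(t.id, f"Thing {t.name}") for t in self]
    @api.depends('name')
    def _compute_display_name(self):
        for thing in self:
            thing.display_name = f"Thing {thing.name}"
```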
Also added a few flushes before SQL queries which are not (obviously
at least) at the start of a cron or controller; no test failure was
observed, but better safe than sorry (probably).
Breakages:
- the entire http.py API was updated, requiring fixes to the uses of
`request.jsonrequest` and to the patches on `WebRequest` used to hook
in sentry
- `fontawesome` was moved
- `*[@groups]` are now completely removed from the view if not
matching, so any field inside of them which needs to be used outside
(e.g. attrs) has to be added as invisible outside the element
- discuss removed the mail tracking value helpers from RPC in
odoo/odoo#88547, so reimplement locally (and better)
The staging cron turns out to be pretty reasonable to trigger, as we
already have a handler on the transition of a batch to `not blocked`,
which is exactly when we want to create a staging (that and the
completion of the previous staging).
The batch transition is in a compute which is not awesome, but on the
flip side we also cancel active stagings in that exact scenario (if it
applies), so that matches.
The only real finesse is that one of the tests wants to observe the
instant between the end of a staging (and creation of splits) and the
start of the next one, which, because the staging cron is triggered by
the failure of the previous staging, is now "atomic". This requires
disabling the staging cron, which means the trigger is skipped
entirely, so the staging cron has to be triggered by hand.
The merge cron is the one in charge of checking staging state and
either integrating the staging into the reference branch (if
successful) or cancelling the staging (if failed).
The most obvious trigger for the merge cron is a change in staging
state from the states computation (transition from pending to either
success or failure). Explicitly cancelling / failing a staging marks
it as inactive so the merge cron isn't actually needed.
However another major trigger is *timeout*, which doesn't have a
trivial signal. Instead, it needs to be hooked on the `timeout_limit`,
and has to be re-triggered at every update to the `timeout_limit`,
which in normal operations mostly comes from "pending" statuses bumping
the timeout limit. In that case, `_trigger` at the `timeout_limit`, as
that's where / when we expect a status change.
The worst case scenario with this is parasitic wakeups of the cron,
but having half a dozen unnecessary wakeups in an hour is still
probably better than having a wakeup every minute.
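A hedged sketch of that re-arming, assuming a `write` override on
stagings and an illustrative cron xmlid:
```python
from odoo import fields, models

class Stagings(models.Model):
    _inherit = 'runbot_merge.stagings'

    def write(self, vals):
        res = super().write(vals)
        if 'timeout_limit' in vals:
            # wake the merge cron (xmlid illustrative) when the staging would time out
            self.env.ref('runbot_merge.merge_cron')._trigger(
                at=fields.Datetime.to_datetime(vals['timeout_limit']))
        return res
```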
These are pretty simple to convert as they are straightforward: an
item is added to a work queue (table), then a cron regularly scans
through the table executing the items and deleting them.
That means the cron trigger can just be added on `create` and things
should work out fine.
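A sketch of the pattern, with made-up names:
```python
from odoo import api, fields, models

class WorkQueue(models.Model):
    _name = 'example.queue'  # illustrative work-queue model
    payload = fields.Char()

    @api.model_create_multi
    def create(self, vals_list):
        items = super().create(vals_list)
        # enqueueing work immediately wakes the processing cron
        self.env.ref('example.process_queue_cron')._trigger()
        return items
```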
There are just two wrinkles in the port_forward cron:
- It can be requeued in the future, so it needs conditional triggering
in `write`.
- It is disabled during freezes (maybe something to change); as a
result triggers don't enqueue at all, so we need to trigger the cron
immediately after the freeze re-enables it.
Merge errors are logical failures, not technical ones; it doesn't make
sense to log them out because there's nothing to be done technically:
a PR having consistency issues or a conflict is "normal". As such
those messages are completely useless and just take unnecessary space
in the logs, making them more difficult to use.
Instead of sending them to logging, log staging attempts to the PR
itself, and only do normal logging of the operation as an indicative
item. Also remove a bunch of `expect_log_errors` markers which no
longer apply.
While at it, fix a missed issue in forward porting: if the `root.head`
doesn't exist in the repo, its `fetch` will immediately fail before
`cat-file` can even run, so the second check is redundant and the first
one needs to be handled properly. Do that. And leave checking
for *that* specific condition as a logged error, as it should mean
there's something very odd with the repository (how can a pull request
have a head we can't fetch?).
- Prefix branch names with the project, as it can be hard to
differentiate when filtering by or trying to set targets, given some
targets are extremely common (e.g. `master`/`main`) and not all
fields are filtered by project (or even can be).
- Add a proper menu item and list view for batches, maybe it'll be of
use one day.
- Upgrade label in PR search, it's more likely to be needed than
author or target.
- Put PRs first in the mergebot menu, as it's *by far* the most likely
item to look for, unless it's staging in order to cancel one.
I thought I'd removed the error message when approving an already
approved PR but apparently not?
However we can improve the message in that specific case, to make the
expected operation clearer.
Fixes #906
d4fa1fd353 added tracking to changes
from *comments* (as well as a few hacks around authorship transfer),
however it missed two things:
First, it set the `change-author` during comments handling only, so
changes from the `PullRequest` hook (e.g. open, synchronise, close,
edit) didn't get attributed to their actual source, and instead just
fell back to uid(1). This is easy enough to fix, as the `sender` is
always provided; that can be resolved to a partner, which is then set
as the author of whatever changes happen.
Second, I actually missed one of the message hooks: there's both
`_message_log` and `_message_log_batch` and they don't call one
another, so both have to be overridden in order for tracking to be
consistent. In this case, specifically, the *creation* of a tracked
object goes through `_message_log_batch` (since that's a very generic
message and so works on every tracked object created during the
transaction... even though batch has a message per record anyway...)
while *updates* go through `_message_log`.
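Something along these lines (signatures elided, the author resolution
helper is hypothetical):
```python
from odoo import models

class Tracked(models.AbstractModel):
    _inherit = 'mail.thread'

    # _message_log and _message_log_batch don't call one another,
    # so the author override must be applied to both
    def _message_log(self, *args, **kwargs):
        kwargs.setdefault('author_id', self._tracking_author_id())  # hypothetical helper
        return super()._message_log(*args, **kwargs)

    def _message_log_batch(self, *args, **kwargs):
        kwargs.setdefault('author_id', self._tracking_author_id())
        return super()._message_log_batch(*args, **kwargs)
```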
Fixes #895
- When a redundant approval is sent to a PR, notify but don't ignore
the entire command set, there's no actual risk.
- Indicate that the entire comment was ignored when finding something
which does not parse.
Fixes #892, fixes #893
If a PR gets approved *then* fails CI, there should be a notification
warning the author & reviewer since
48e08b657b, it even has a test, which
passes (in fact it has *two*, one of which is redundant, so merge
`test_ci_failure_after_review` into the later `test_ci_approved`).
*However* this is in runbot_merge; it turns out that in
fafa7ef437 some refactoring was done in
order to override the notification and customise it for *forward
ports* with a failed status... except that override never called its
`super()`, so as soon as forwardport is installed the base
notification stops working, and that's been that since October
2019 (it had been added in March that year, ignoring deployment lag).
This can be revealed by adding the corresponding check in the
*forwardport* tests, revealing the failure.
This was a pain to track down, thankfully it reproduced relatively
easily locally.
While this could be resolved in the override, might as well fold it
into the base method in furtherance of #789: the mergebot is only
used by odoo, and only with both modules combined, so splitting them
is not useful. And furthermore things should work fine with the
forwardport module installed but unused.
Fixes #894
- Instead of warning about the merge method on ready PRs, also warn on
*approved* PRs (but exclude staged, just cuz), as that's really when
the user wants to know that they forgot to set the merge method
- The cron only triggers hourly, but *if* a user approves a PR *and*
the merge method is not set yet, chances are good they'll need a
reminder (if they `r+ rebase-merge` or w/e the cron will just ignore
the PR and it's no skin off our back), so `_trigger` the cron for
validation.
- Also do the same when skipchecks is set as it's very similar.
In reality we might want to hook off of the state transitioning to
reviewed, but I'm not sure there's a good way to do that (triggering a
cron inside a compute doesn't seem like a good idea).
Update a pair of tests which would approve a multi-commit PR without
setting a merge method, just because the helper they use to build the
PR happens to create multiple commits.
Fix #891
Seems like a good idea to better keep track of the log of an Odoo used
for testing, and avoid silently ignoring logged errors.
- intercept odoo's stderr via a pipe (see the sketch after this list);
that way we can still write it back out, and pytest is able to read &
buffer it. pytest's capfd would not work correctly: it breaks output
capturing (and printing on failure), and because of the way it hooks
in it's unable to capture from subprocesses inheriting the standard
streams, cf pytest-dev/pytest#4428
- update the env fixture to check that the odoo log doesn't have any
exception on failure
- make that check conditional on the `expect_log_errors` marker, this
way we can mark tests for which we expect errors to be logged, and
assert that that does happen
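A rough sketch of the interception, assuming odoo runs as a subprocess
started by the test harness (the command line is assumed):
```python
import subprocess, sys, threading

odoo_cmd = ['odoo', '--log-level=info']  # assumed command line
proc = subprocess.Popen(odoo_cmd, stderr=subprocess.PIPE, text=True)
captured = []

def _pump():
    # tee: write each line back out (so it still shows up) while keeping
    # a copy the `env` fixture can scan for logged errors on failure
    for line in proc.stderr:
        captured.append(line)
        sys.stderr.write(line)

threading.Thread(target=_pump, daemon=True).start()
```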
Setting the PR state directly really doesn't work, as it doesn't
correctly save (and can get overwritten by any of its dependencies, of
which there are many).
This caused setting odoo/odoo#165777 in error to fail, leading to it
being re-staged (and failing) repeatedly, and the PR being spammed
with comments.
- create a more formal helper for preventing directly setting computed
fields which don't have an actual inverse (sketch below)
- replace direct state setting by setting the corresponding dependency,
e.g. `error` for error and `skipchecks` to force a PR to ready
- add a `skipchecks` inverse to the PR so it can also set itself as
reviewed, which is convenient; might be worth also adding stuff to
`Batch.write`
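A sketch of such a guard (names and selection values illustrative):
```python
from odoo import exceptions, fields, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    state = fields.Selection(
        selection=[('opened', "Opened"), ('error', "Error"), ('ready', "Ready")],
        compute='_compute_state', inverse='_forbid_direct_write', store=True)

    def _forbid_direct_write(self):
        # without an inverse, a direct write appears to succeed and is then
        # silently clobbered by the next recompute of any dependency
        raise exceptions.UserError("state is computed, set `error` or `skipchecks` instead")
```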
Currently webhook secrets are configured per *project* which is an
issue both because different repositories may have different
administrators and thus creates safety concerns, and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary to have multiple projects
share the same secret as well, which then spreads the secret across
even more repositories, compounding the first concern.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories, it's cleaner and easier to manage
and set up progressively.
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad-hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as in `forwardport` where it's all ad-hoc, the event sources setup was
just appended as-is. This should probably be cleaned up at some point,
with the various requirements collected and organised into a small set
of fixtures doing the job more uniformly.
Fixes #887
Using a regex as the pattern is quite frustrating due to all the
escaping necessary, which in this refactoring I found out I'd missed,
multiple times.
Convert the pattern to something bespoke but not too complicated; we
may want to add anchoring support and a bit more finesse in the
future, but for now straightforward "holes" seem to work well. I've
added support for capturing and even named groups, even if this is as
yet unnecessary and unused.
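The idea, roughly (the hole syntax shown is illustrative):
```python
import re

def compile_pattern(pattern: str) -> re.Pattern:
    # `{}` is an anonymous hole, `{name}` a named one,
    # everything else is matched literally (no escaping needed)
    parts = []
    for token in re.split(r'(\{\w*\})', pattern):
        if token.startswith('{') and token.endswith('}'):
            name = token[1:-1]
            parts.append(f'(?P<{name}>.*?)' if name else '(.*?)')
        else:
            parts.append(re.escape(token))
    return re.compile(''.join(parts))

m = compile_pattern('staging {id} ({target})').match('staging 42 (master)')
assert m and m['id'] == '42' and m['target'] == 'master'
```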
Fixes #861
Add support for validating *stagings* over RPC rather than via
webhook. This may later be expanded to PRs as well.
The core motivation for this is to avoid bouncing through github which
sometimes drops the ball on statuses, and it's frustrating to have a
staging time out because GH fucked up.
Implemented via RPC, requiring both the staging itself (by id) and the
head commit being affected, as that is necessary to know which CIs are
required for that head and to correctly report cross-branch on the
various PRs.
Fix #881 (kinda)
If rebased: untouched commits (straight merge) remain unaltered, but
all rebased or squashed commits now get signoff and `Related` headers
added on top of the already previously added `part-of`.
Implement by generalising `_build_merge_message` to `_build_message`
and having `add_self_references` delegate to it, which removes some of
the redundancy / differential handling.
Update the `part_of` helper to also add the S-O-B header to the PR,
although it currently does not reference the entire forward port
chain.
Fixes #876
Displays the entire batch set as a table, along both the
repository (linked PRs) and branch (forward ports) axes. Should
provide a much more complete overview.
Adds a copy of the dashboard as a raster render, to link from the PR:
as usual SVG is shit, content-based viewboxes are hell and having to
duplicate the entire CSS because `<img/>`-linked CSS can't run is
gross. And there's no payoff since the image is not interactible
anyway.
Performing manual ad-hoc table rendering via pillow is not
significantly worse, it works fine and it's possible to do *really*
good conditional request handling (hopefully) because I've basically
got all the information I need right here.
In fact it might make sense to upgrade the regular HTML page with
similar conditional request handling, at least for the last-update
bit if not the etag.
Fixes #771, fixes #770
Batch ordering in stagings is important in order to correctly
reconstitute the full project history.
In the old mergebot, since batches are created on the fly during
staging, this information is reified by the batch ids. But since batch
ids are now persistent and there is no relationship between the
creation of a batch and its merging (especially not relative to other
batches), it's an issue, as reconstituting sub-staging git history
would be impossible.
Which is not the worst, but is not great.
The solution is to reify the join table between stagings and batches
in order for *that* to keep history (simply via the sequential PK),
and in converting to the new system carefully generate the new links
in an order matching the old batch ids.
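That is, something in the vein of (names illustrative):
```python
from odoo import fields, models

class StagingBatch(models.Model):
    # reified join table: the sequential PK of these rows preserves the
    # order in which batches were attached to stagings
    _name = 'runbot_merge.staging.batch'

    staging_id = fields.Many2one('runbot_merge.stagings', required=True)
    batch_id = fields.Many2one('runbot_merge.batch', required=True)
```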
It's a bit weird and inconsistent to have a PR being staged while
unreviewed or unapproved or w/e. If we compute the state based on
skipchecks then everything is consistent.
Also remove the implicit override of all statuses when explicitly
marking the PR as `ready`: it risks creating difficult-to-understand
states, and it's unnecessary since `skipchecks` gets set.
Also, as with setting skipchecks, this sets the current user as
reviewer on all PRs of the batch without a reviewer.
Because `alone` (formerly `p != 2`) is selected before split PRs, if a
prioritised PR gets split (or a split PR gets prioritised) it will be
staged once as prioritised, and again as a split.
Improve the selection of ready batches to exclude split batches
upstream, such that they don't have to be rechecked over and over, and
their priorities don't cause us issues.
Simplifies the `ready_prs` query a bit and allows it to be converted
to an ORM search by moving the priority check outside. This also
removes most of the post-processing the caller previously had to
perform on the records list.
`ready_prs` now returns *either* the "alone" batches *or* the non-alone
batches, rather than mixing both into a single sequence. This requires
correctly applying the search filters so as not to retrieve the
priorities of batches in error or targeting other branches.
Staging readiness is a batch-level concern; many of the markers are
already there, though a few need to be aggregated from the PRs. As
such, staging has no reason to be performed in terms of PRs anymore;
it should be performed via batches directly.
There is a bit of a mess in order not to completely fuck up when
retargeting PRs (implicitly via the freeze wizard, or explicitly), as
for now we're moving PRs between batches in order to keep the
batches *mostly* target-bound.
Some of the side-effects of managing the coherence of the targeting
and moving PRs between batches are... not great. This might need to be
revisited and cleaned up, with those scenarios better considered.
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to a hard
error (ignoring the entire thing) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line, some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now been split along all the axes, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
and `alone`,
- `default` means PRs are picked by an unspecified order when
creating a staging, if nothing better is available
- `priority` means PRs are picked first when staging, however if
`priority` PRs don't fill the staging the rest will be filled with
`default`, this mode did not previously exist
- `alone` means the PRs are picked first, before splits, and only
`alone` PRs can be part of the staging (which usually matches the
mode name)
- `skipchecks` overrides both statuses and approval checks, for the
batch, something previously implied in `p=0`, but now
independent. Setting `skipchecks` basically makes the entire batch
`ready`.
For consistency this also sets the reviewer implicitly: since
skipchecks overrides both statuses *and approval*, whoever enables
this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
ready again, previously this was also implied (in a more restricted
form) by setting `p=0`
FWBot removal
=============
While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot; the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply change bots
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
the forward-port policy,
- `fw=default` is the old `ci` and resets the policy to default,
that is wait for the PR to be merged to create forward ports, and
for the required statuses on each forward port to be received
before creating the next
- `fw=skipci` is the old `skipci`, it waits for the merge of the
base PR but then creates all the forward ports immediately (unless
it gets a conflict)
- `fw=skipmerge` immediately creates all the forward ports, without
even waiting for the PR to be merged
This is a completely new mode, and may be rather broken as until
now the 'bot has always assumed the source PR had been merged.
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parents with different access rights".
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the
reference data gets updated naturally, flowing into the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
*that* becomes the validation state
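Roughly, the compute ends up shaped like this (a sketch from the
signals above, not the exact implementation):
```python
from odoo import api, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    @api.depends('closed', 'error', 'merge_date', 'reviewed_by', 'status')
    def _compute_state(self):
        for pr in self:
            if pr.merge_date:
                pr.state = 'merged'
            elif pr.closed:
                pr.state = 'closed'
            elif pr.error:
                pr.state = 'error'
            elif pr.reviewed_by and pr.status == 'success':
                pr.state = 'ready'
            elif pr.reviewed_by:
                pr.state = 'approved'
            elif pr.status == 'success':
                pr.state = 'validated'
            else:
                pr.state = 'opened'
```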
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
One notable change is that *forcing* the state to `ready` now does
that but also sets the reviewer, `skipchecks`, and overrides, to ensure
the API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
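The shape of the hack, approximately (field names illustrative):
```python
from odoo import models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    def modified(self, fnames, create=False, before=False):
        # the ORM can't express "a PR's effective overrides depend on its
        # ancestors'", so propagate invalidation down the chain by hand
        if 'overrides' in fnames:  # illustrative field name
            children = self.search([('parent_id', 'in', self.ids)])
            if children:
                children.modified(['overrides'], create=create, before=before)
        return super().modified(fnames, create=create, before=before)
```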
unconditional followup scheduling
---------------------------------
Previously, scheduling the forward-port followup was contingent on the
FW policy, but that's not actually correct if the new PR is
*immediately* validated (which can happen now that the field is
computed, if there are no required statuses *or* all of the required
statuses are overridden by an ancestor), as nothing will trigger the
state change and thus the scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports. Although it's a bit more
expensive, and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
so on retry, not just when initially set.
This was not handled at all previously, so a PR in P0 going into
error due to e.g. a non-deterministic bug would be retried and still
be p=0, but a current staging would not get cancelled. Same when a PR
in p=0 goes into error because something failed, then is updated
with a fix.
- Add tracking to a bunch of relevant PR fields.
Post-mortem analysis currently generally requires going through the
text logs to see what happened, which is annoying.
There is a nondeterminism / inconsistency in the tracking which
sometimes leads the admin user to trigger tracking before the bot
does, leading to the staging tracking being attributed to them
during tests; shoved under the carpet by ignoring the user to whom
that tracking is attributed.
When multiple users update tracked fields in the same transaction,
all the changes are attributed to the first one to have triggered
tracking (?); I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and never ran (because
I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
fixes #673, fixes #309, fixes #792, fixes #846 (probably)
- move all commands parsing to runbot_merge as part of the long-term
unification effort (#789)
- set up an actual parser-ish structure to parse the commands into
something approaching a sum type (fixes #507); see the sketch after
this list
- this is mostly prep for reworking the commands set (#673), although
*strict command parsing* has been implemented (cf update to
`test_unknown_commands`)
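The "sum type" being, approximately, a set of dataclasses (the
commands shown are illustrative):
```python
from dataclasses import dataclass

@dataclass
class Approve:
    ids: list[int] | None = None  # optional PR numbers for a targeted r+

@dataclass
class Retry:
    pass

def parse(line: str):
    for token in line.split():
        if token == 'r+':
            yield Approve()
        elif token == 'retry':
            yield Retry()
        else:
            # strict parsing: any unknown term fails the whole command set
            raise ValueError(f"unknown command {token!r}")
```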
If an untracked PR is closed, especially on an inactive or untracked
branch, the closer (or author) almost certainly doesn't care to receive
3 different notifications on the subject.
The fix requires a schema change in order to track that we're fetching
the PR due to a `closed` event, as in other cases we may still want to
notify the user that we received the request (and it just happened to
resolve to a closed PR).
Fixes #857
Add intermediate forks to a pair of tests, because github now (?)
requires being able to write on a branch to create a PR from it, so
the non-collaborator reviewers were not able to create a PR from a
branch created by the user.
Github delivery delays keep getting worse. Depending on what comes
before `to_pr`, this leads it to fail more often, as it runs before the
PR it's looking for has been signaled to the mergebot.
In order to mitigate this issue, add a wait loop in `to_pr`, waiting
up to 4 seconds for the PR it's looking for before aborting.
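The wait loop is essentially (parameters and domain approximate):
```python
import time

def to_pr(env, pr, attempts=40):
    # poll for up to ~4 seconds for the webhook to reach the mergebot
    for _ in range(attempts):
        pr_id = env['runbot_merge.pull_requests'].search([
            ('repository.name', '=', pr.repo.name),
            ('number', '=', pr.number),
        ])
        if pr_id:
            return pr_id
        time.sleep(0.1)
    raise TimeoutError(f"PR {pr.number} never showed up in the mergebot")
```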
Also replace manual lookups with `to_pr` in every method of
`TestPRUpdate` while at it, since it hit a few of the issues. And
remove the xfail test case, since it seems unlikely github will change
tack (maybe? could be worth testing to be sure).
Reverts commit 85a7890023 which
untrimmed the commits: while it's *probably* true that git and
github's APIs differ in their treatment of whitespace (in that git
pretty much always terminates the commit message with a newline while
github does not, as far as I understand, though I didn't really
validate it), the issue was that github also trims on *output* when
fetching over the API, something the fake did not do.
So rather than update the test data I should have fixed the fake, but
I failed to realise that at the time. I only realised when I decided
to re-run against github actual (something I rarely do anymore as it's
painfully slow) and it went on to choke on every message I'd updated.
The github API has gotten a lot more constraining (with rate
restrictions being newly enforced or added somewhat out of nowhere),
and as importantly a lot less reliable. So move the staging process
off of github and perform it locally, similar to the forward porting
process (whose repo cache is being reused for this).
Fixes #247
It has been a consideration for a while, but the pain of subtly
interacting with git via the ignominious CLI kept it back. Then ~~the
fire nation attacked~~ github got more and more tight-fisted (and in
some ways less reliable) with their API.
Staging pretty much just interacts with the git database, so it's both
a facultative github operator (it can just interact with git directly)
and a big consumer of API requests (because the git database endpoints
are very low level so it takes quite a bit of work to do anything
especially when high-level operations like rebase have to be
replicated by hand).
Furthermore, an issue has also been noticed which can be attributed to
using the github API (and that API's reliability getting worse): in
some cases github will fail to propagate a ref update / reset, so when
staging 2 PRs it's possible that the second one is merged on top of
the temporary branch of the first one, yielding a kinda broken commit
(in that it's a merge commit with a broken commit message) instead of
the rebase / squash commit we expected.
As it turns out it's a very old issue, but it only happened very early
on, so it was misattributed and not (sufficiently) guarded against:
- 41bd82244bb976bbd4d4be5e7bd792417c7dae6b (October 8th 2018) was
spotted but thought to be a mergebot issue (might have been one of
the opportunities where ref-checks were added though I can't find
any reference to the commit in the runbot repo).
- 2be25052e147b151d1d8a5bc73cceb351586ce03 (October 15th, 2019) was
missed (or ignored).
- 5a9fe7a7d05a9df7186072a7bffd60c6b428fd0e (July 31st, 2023) was
spotted, but happened at a moment where everything kinda broke
because of github rate-limiting ref updates, so the forensics were
difficult and it was attributed to rate limiting issues.
- f10d03bf0f2e8f88f62a5d8356b84f714196130f (August 24th, 2023) broke
the camel's back (and the head block): the logs were not too
interspersed with other garbage and pretty clear that github ack'd a
ref update, returned the correct oid when checking the ref, then
returned the wrong oid when fetching it later on.
No Working Copy
===============
The working copy turns out to not be necessary, the plumbing commands
we *need* work just fine on a bare repository.
Working without a WC means we had to reimplement the high-level
operations (rebase) by hand, much as we'd done previously, *but* we
needed to do that anyway as git doesn't seem to provide any way to
retrieve the mapping when rebasing/cherrypicking, and cherrypicking by
commit doesn't work well as it can't really find the *merge base* it
needs.
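For instance, a cherry-pick can be reconstructed from plumbing alone,
keeping control of the old-to-new mapping; a sketch, assuming a modern
git with `merge-tree --write-tree --merge-base` (2.40+):
```python
import subprocess

def git(repo, *args):
    return subprocess.run(
        ['git', '-C', repo, *args],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

def cherrypick(repo, head, commit):
    # three-way merge of `commit` onto `head` without a working copy;
    # on conflicts merge-tree exits non-zero and check=True raises
    tree = git(repo, 'merge-tree', '--write-tree',
               f'--merge-base={commit}^', head, commit)
    message = git(repo, 'log', '-1', '--format=%B', commit)
    # create the new commit directly from the merged tree
    return git(repo, 'commit-tree', tree, '-p', head, '-m', message)
```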
Forward-porting can almost certainly be implemented similarly (with
some overhead), issue #803 has been opened to keep track of the idea.
No TMP
======
The `tmp.` branches are no more; the process of creating stagings is
based entirely around oids. If staging something fails we can just
abandon the oids (they'll be collected by the weekly GC); we only
need to update the staging branches at the very end of the process.
This simplifies things a fair bit.
For now we have stopped checking for visibility / backoff as we're
pushing via git, hopefully it is a more reliable reference than the
API.
Commit Message Formatting
==========================
There's some unfortunate churn in the test, as the handling of
trailing newlines differs between github's APIs and git itself.
Fixes #247
PS: It might be a good idea to use pygit2 instead of the CLI
eventually, the library is typed which is nice, and it avoids
shelling out although that's really unlikely to be a major cost.
When I updated the status storage (including `previous_failure`), for
some reason I didn't just migrate from the old format to the new one,
and added bridge functions instead.
This is not really necessary (or useful), so convert all the legacy
data and remove the conversion helpers.
Relates to #802
The current system makes it hard to iterate on feedback messages and
make them clearer; this should improve things a touch.
Use a bespoke model to avoid concerns with qweb rendering
complexity (we just want GFM output and should not need logic).
Also update the fwbot test setup to always configure an fwbot name, in
order to avoid ping messages closing the PRs they're talking
about; that took a while to debug, and given the old message I assume
I'd already hit it and just been too lazy to fix it. This requires
updating a bunch of tests, as fwbot pings are sent *to*
`fp_github_name` but sent *from* the reference user (because that's
the key we set).
Note: noupdate on CSV files doesn't seem to work anymore, which isn't
great. Instead, set tracking on the template's templates; it's not
quite as good but should be sufficient.
Fixes #769