Implemented rebase (-ish?) by hand, following the model of the one
built on the github API, so we have all the information we need *and*
still don't need a local working copy.
Had to update *several* tests due to formatting changes from using the
CLI git (maybe that could be avoided by feeding the commit message
through stdin but w/e).
Also added a dupe of an existing helper for the formatting-changes bit;
it will have to be removed once all's done.
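For illustration, a minimal sketch of that kind of hand-rolled rebase,
using git plumbing against a bare clone so no working copy is needed
(names and layout are mine, not the actual implementation):

```python
import subprocess

def git(repo, *args, input=None):
    # Run a git command against a bare repository (no working copy needed).
    return subprocess.run(
        ['git', '-C', repo, *args],
        input=input, capture_output=True, text=True, check=True,
    ).stdout.strip()

def rebase(repo, commits, onto):
    """Recreate `commits` (oldest first) on top of `onto`, returning the new
    head. Like the github-API version, this reparents the commits with their
    original trees rather than replaying diffs.
    """
    head = onto
    for sha in commits:
        tree = git(repo, 'show', '--no-patch', '--format=%T', sha)
        message = git(repo, 'show', '--no-patch', '--format=%B', sha)
        # Feeding the message through stdin (rather than -m) may be what
        # avoids the CLI reformatting it, as mentioned above.
        head = git(repo, 'commit-tree', tree, '-p', head, input=message)
    return head
```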
Move *almost* all the staging code to free functions, in a separate
module, extensively typed.
The only bits which didn't move are:
- the entry point (the cron hook), because it has to be a model method
in order to be called
- the `_build_merge_message` method, because it needs to be
overridable
There's also a bit of an import mess, because the cron &
`_build_merge_message` need to call into the new module, but the new
module wants the types they belong to, so it's a bit circular.
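One way to keep that mostly tolerable (a sketch of the pattern, not
necessarily what the module does) is to make the model imports
type-checking-only on the new module's side:

```python
# staging_utils.py (hypothetical name): free functions called from the model
# methods, only needing the model classes for their type annotations.
from __future__ import annotations
import typing

if typing.TYPE_CHECKING:
    # type-checking-only import, so there is no cycle at runtime
    from .pull_requests import Branch, Stagings  # class names assumed

def ready_prs(for_branch: Branch) -> list:
    ...

def check_staging(staging: Stagings) -> bool:
    ...
```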
Currently the heads of a staging (both staging heads and merged heads)
are just JSON data on the staging itself. Historically this was
convenient as the heads were mostly of use to the staging process, and
thus essentially only accessed directly through the staging.
However this makes finding stagings from merged commits e.g. for
forensic research almost impossible, because querying based on
the *values* of a JSON map is expensive, and indexing it is difficult.
To make this use case more feasible, split the `heads` field into two
join tables, one for the staging heads and one for the merged heads;
this makes looking for stagings by commits much more
efficient (although the queries may not be trivial). Also add two
utility RPC methods, so that it's possible to query stagings
reasonably easily and efficiently based on a set of commits (branch
heads).
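A hedged sketch of what the split can look like on the Odoo side (model
and field names here are approximations, not necessarily the real ones):

```python
from odoo import fields, models

class StagingHeads(models.Model):
    # Join table: which commit is the staging head for each repository of a
    # staging; a second, near-identical model holds the merged heads.
    _name = 'runbot_merge.stagings.heads'  # name assumed

    staging_id = fields.Many2one('runbot_merge.stagings', required=True, index=True)
    repository_id = fields.Many2one('runbot_merge.repository', required=True)
    commit_id = fields.Many2one('runbot_merge.commit', required=True, index=True)

class Stagings(models.Model):
    _inherit = 'runbot_merge.stagings'

    head_ids = fields.One2many('runbot_merge.stagings.heads', 'staging_id')

    def for_heads(self, *shas):
        # Utility RPC: stagings whose staging heads include the given commit
        # shas; the index on commit_id keeps the lookup cheap.
        return self.search([('head_ids.commit_id.sha', 'in', list(shas))]).ids
```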
related to #768
A few cases of conflict were missing from the provisioning
handler.
They can't really be auto-fixed, so just output a warning and ignore
the entry; that way the rest of the provisioning succeeds.
The current system makes it hard to iterate on feedback messages and
make them clearer; this should improve things a touch.
Use a bespoke model to avoid concerns with qweb rendering
complexity (we just want GFM output and should not need logic).
Also update the fwbot test setup to always configure an fwbot name, in
order to avoid ping messages closing the PRs they're talking
about. That took a while to debug, and given the old message I assume
I'd already hit it and just been too lazy to fix. This requires
updating a bunch of tests, as fwbot pings are sent *to*
`fp_github_name` but sent *from* the reference user (because that's
the key we set).
Note: noupdate on CSV files doesn't seem to work anymore, which isn't
great. Instead, set tracking on the template's templates; it's not
quite as good but should be sufficient.
Fixes#769
- currently disabling staging only works globally, allow disabling on
a single branch
- use a toggle
- remove a pair of tests which work specifically with `fp_target`,
can't work with `active` (probably)
- cleanup search of possible and active stagings, add relevant
indexes and use direct search of relevant branches instead of
looking up from the project
- also use toggle button for `active` on branches
- shitty workaround for upgrading DB: apparently mail really wants to
have a `user_id` to do some weird thing, so need to re-add it after
resetting everything
Fixes#727
- github logins are case-insensitive: while the db field is CI, the dict
  in which partners are stored for matching is not, and the caller may
  not preserve casing.
  Thus it's necessary to check the casefolded database values against
  casefolded parameters, rather than comparing exactly (see the sketch
  after this list).
- users may get disabled by mistake or when they leave the project;
  they may also get switched from internal to portal. Therefore it is
  necessary to re-enable and re-enroll them if they come back.
- while at it, remove the user's email when they depart, as they likely
  use an organisational email which they don't have access to anymore
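A minimal sketch of the casefolded matching from the first point (the
surrounding data structures are assumed):

```python
def match_partners(partners_by_login, logins):
    # `partners_by_login` maps stored github logins to partner records,
    # `logins` comes from the caller in whatever casing it preserved.
    folded = {login.casefold(): partner for login, partner in partners_by_login.items()}
    matched, missing = {}, []
    for login in logins:
        partner = folded.get(login.casefold())
        if partner is None:
            missing.append(login)
        else:
            matched[login] = partner
    return matched, missing
```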
Side-note, but remove the limit on the number of users / partners
being created at once: because there are almost no modules in the
mergebot's instance, creating partners goes quite fast (compared to a
full instance), thus the limitation is almost certainly unnecessary
(creating ~300 users seems to take ~450ms).
Fixes #776
1cea247e6c tried to improve staging
checks to avoid staging PRs in the wrong state, however it had two
issues:
PR state
--------
The process would reset the PR's state to open, but unless the head
was being resync'd it wouldn't re-apply the statuses on the state,
leading to a PR with all-valid statuses, but a missing CI.
Message
-------
The message check didn't compose the PR message the same way PR
creation / update did (it did not trim the title and description
individually, only after concatenation), resulting in a
not-actually-existing divergence getting signaled in the case where
the PR title ends or the description starts with whitespace.
Expand relevant test, add a utility function to compose a PR message
and use it everywhere for coherence.
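The utility function amounts to something like this (a sketch, not the
exact helper):

```python
def make_message(title, body):
    # Compose a PR message the way PR creation / update does: trim the title
    # and the body *individually* before concatenating, so a title with
    # trailing whitespace or a body with leading whitespace doesn't get
    # signaled as a divergence.
    title = (title or '').strip()
    body = (body or '').strip()
    return f"{title}\n\n{body}" if body else title
```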
Also update the logging and reporting to show a diff of all the
updated items (hidden behind a `details` element).
Previously the mergebot would only sync the head commit, but synching
more is useful.
Also update the final sanity check on staging:
- as with check, update the message & target branch
- reset PR state and post a message when updating message instead of
doing so silently
Note: maybe only fail the staging if the message is updated *and*
relevant to staging (aka there's a merge method and it's not
`rebase`)?
Fixes#680
If commits have different authors (/ committers), the mergebot would
ask github to create a commit with an author (/ committer) of `None` /
`null`.
Apparently github really does not like that, and complains that
> nil is not an object
So remove the key entirely. Also fix the collision between `author`
and the `Co-Authored-By` list, which could lead to trying to set an
`author` of `[name, email]` instead of an object, which is also not
accepted by github.
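Roughly what the fix amounts to when building the payload for github's
create-commit endpoint (a sketch, the real code differs):

```python
def commit_payload(message, tree, parents, author=None, committer=None):
    # POST /repos/{owner}/{repo}/git/commits: `author` / `committer` must be
    # objects ({"name": ..., "email": ...}) or absent entirely; an explicit
    # null gets rejected ("nil is not an object"), and so does a bare
    # [name, email] pair.
    payload = {'message': message, 'tree': tree, 'parents': parents}
    if author:
        payload['author'] = {'name': author[0], 'email': author[1]}
    if committer:
        payload['committer'] = {'name': committer[0], 'email': committer[1]}
    return payload
```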
In the case where the last branch (before the branch being frozen) is
disabled, the forwardport inserter screws up and fails to correctly
create the intermediate forwardports from the new branch.
Also, when disabling a branch, if there are FW PRs which target that
branch and have not been forward-ported further, automatically
forward-port them as if the branch had been disabled when they were
created; this should limit data loss and confusion.
Also change the message set on PRs when disabling a branch: because of
user conflicts in test setup, the message about a branch being
disabled would close the PRs, which would then orphan the followup,
leading to unexpected / inconsistent behaviour.
Fixes#665
The `statuses` field of a staging is always "live" because it's a
computed non-stored field. This is an issue when a staging finishes in
whatever state, then someone gets new statuses sent on one of the head
commits, either by rebuilding (part of) the staging or by just using
the same commit for one of their branches.
This makes the reporting of the main dashboard confusing, as one might
look at a failed staging and see all the required statuses
successful. It also makes post-mortem analysis more complicated as the
logs have to be trawled for what the statuses used to be (and they
don't always tell).
Solve this by storing a snapshot of the statuses the first time a
staging moves away from `pending`, whether it's to success or failure.
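A hedged sketch of the snapshotting, assuming a stored text field next
to the live computed one and a JSON-serializable statuses value (names
illustrative):

```python
import json

from odoo import fields, models

class Stagings(models.Model):
    _inherit = 'runbot_merge.stagings'

    # frozen copy of the live, computed `statuses` field
    statuses_cache = fields.Text()

    def write(self, vals):
        if vals.get('state') not in (None, 'pending'):
            # snapshot the live statuses before recording the final state,
            # but only for stagings which have not been snapshotted yet
            for staging in self.filtered(lambda s: not s.statuses_cache):
                staging.statuses_cache = json.dumps(staging.statuses)
        return super().write(vals)
```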
Fixes#667
Partially revert 0c882fc0df
This turns out to be more bane than boon, as it breaks forward-port
chains and confuses people (despite the message). Update notification
message and don't close the PR anymore.
While at it, disable any pending staging on the branch being deactivated.
Fixes#654
af016f4239 did a half-assed job and
didn't fix the one test which actually checks the dashboard.
TBF I was in a bit of a hurry trying to make the mergebot work and be
presentable again, but still...
Test seems to fail from time to time with one of the PRs getting
lost. Tried to move code around while investigating, but can't repro
anymore. Possibly a race condition because the `to_pr` call was
performed too early, before the webhook had run (and thus before the
PR object had been created on the odoo side).
By moving the `to_pr` calls to after the cron run, we really ensure
the webhooks will have run.
Also update `to_pr` to ensure exactly one PR was retrieved, as
currently nothing is checked so we might have gotten none (yet), which
should be noticed early and clearly. In theory this also guards
against multiple PRs, but PRs should be unique on (repo, number).
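The updated helper boils down to something like this (sketch; the
actual signature may differ):

```python
def to_pr(env, pr):
    # Resolve a github-side PR from the test fixtures to its mergebot record,
    # asserting exactly one exists: zero means the webhook hasn't been
    # processed yet, which we want to notice early and clearly.
    prs = env['runbot_merge.pull_requests'].search([
        ('repository.name', '=', pr.repo.name),
        ('number', '=', pr.number),
    ])
    assert len(prs) == 1, f"expected exactly one PR, found {len(prs)}"
    return prs
```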
Currently deactivating a branch kinda leaves users in the dark, with
little way to know what has happened aside from inferring it from the
branch having disappeared from the main dashboard.
- surface the state of the branch in the PR dashboard (also surface
  the target branch itself, so users can see whether their PR is
  targeted as they expect as far as the mergebot is concerned)
- close & notify every PR to a branch being deactivated
- cancel any current staging to the branch (as a consequence of the
above)
Closes#632
The previous version of the code assumed `pr['body']` is always a
string, which is not correct: when the PR body is emptied, the body
itself is removed (its value is `None`).
Add a case for this in the PR edition test, and avoid blowing up (or
adding empty newlines) when the PR body is empty. For PR creation this
issue was fixed in c2db5659d8 but
apparently I missed that the exact same issue occurs just a few lines
above.
Also it turns out github does *not* send change information when the
body is updated from (or to?) `None`, so don't even bother with that;
just check every time whether the overall message has been updated.
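In the `edited` handler that means ignoring the `changes` payload and
just recomposing and comparing, roughly:

```python
def handle_pr_edited(pr_record, pr):
    # `changes` is unreliable when the body goes to / comes from None, so
    # always recompose the full message and compare it to what's stored,
    # writing only on an actual difference.
    message = make_message(pr['title'], pr['body'])  # helper sketched earlier
    if message != pr_record.message:
        pr_record.message = message
```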
Fixes#629
Stop *staging* release PRs: they are normally fairly simple and should
not fail their staging outside of unreliable tests (or possibly a few
edge cases e.g. forgot one version change thing), however staging them
creates the possibility of a "version hole" on the release branch
which is undesirable.
Instead, immediately and unconditionally push the release commits onto
the newly created branches, if there are things which don't work they
can be fixed afterwards (and the process refined, maybe).
Also add the same feature for *bump* PRs, with the difference that the
bump PRs are not created / requested by default (they have to be opted
in individually).
For convenience, add a feature which automatically finds the PRs via
inputting the label (not really tested yet).
Closes#603
Because the search for the PR occurs *right* after the PR was
created on the server, despite the additional operations (status,
approval) it's apparently possible for the lookup of the new PR to
occur at about the same time the PR is being created, kinda, maybe?
On DS it triggers very reliably for every PR but the first. By moving
the retrieval to after the repo timeout and after the crons have run,
we near-guarantee the PRs are visible (it's possible for things to fail on
grounds of github reliability, but then they'd have failed *even more*
before).
- add a logging entry for PR updates
- change the generic log entry to log the sender (of the event) rather
than the PR author
- fix the post-facto PR loader to more systematically and reliably set
a `sender`
Old messages were quite inconsistent in their pinging of the PR author
and reviewer.
Reviewed the messages (probably missed some, but...) and tried to more
consistently ping when the feedback requires some sort of action in
order to proceed.
Fixes#592
A few fixes and improvements after testing the feature:
- ensure the provisioned users are created as internal (not portal)
- assume oauth is installed and just crash if it's not
- handle a user not having an email (ignore)
- return a value from the json handler, otherwise JsonRequest sends no
  payload, which is *weird*
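The last point just means actually returning something from the
controller (hedged sketch; route and payload are illustrative):

```python
from odoo import http

class Provisioning(http.Controller):
    @http.route('/runbot_merge/provision', type='json', auth='user')  # route assumed
    def provision(self, users):
        created = updated = 0  # actual provisioning logic elided
        # Return an actual value: with `type='json'`, returning None makes
        # JsonRequest send back no payload at all, which confuses the caller.
        return {'created': created, 'updated': updated}
```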
Stagings would be cancelled automatically if the PR's commits were
updated, but not if the target (base) was changed, even though that
has a drastic impact on staging.
Add hooks to unstage PRs if their base is updated, or if their message
is updated and relevant to staging (merge or rebase-merge methods).
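Conceptually the hooks look something like this (a sketch; `_unstage`
is a hypothetical helper name):

```python
from odoo import models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    def write(self, vals):
        for pr in self:
            if 'target' in vals and vals['target'] != pr.target.id:
                # a base change invalidates whatever was staged
                pr._unstage("target (base) branch was changed")
            elif 'message' in vals and vals['message'] != pr.message \
                    and pr.merge_method in ('merge', 'rebase-merge'):
                # the message ends up in the merge commit for these methods,
                # so a message change invalidates the staged result too
                pr._unstage("merge message was updated")
        return super().write(vals)
```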
Fixes#604
When a staging's fast-forward (to the target branch) fails, the
mergebot would provide no useful information on the staging or the
dashboard.
This is because the reason was set to the HTTP status, which in case
of a fast-forward error is just "422 client error: unprocessable
entity".
Improve this by trying to parse github's response in that case, and
using the JSON error message as failure reason. This provides more
useful failure information like "update is not a fast forward",
"reference does not exist", or a branch protection failure.
Closes#591
New accounts endpoint such that the SSO can push new pre-configured
users / employees directly. This lowers maintenance burden.
Also remove one of the source partners from the merge test, as
ordering seems wonky for unclear reasons leading to random failures of
that test.
On #509, the rebasing process was changed to forcefully update the
commit date of the commits, in order to force trigger builds.
However when squashing was re-enabled for #539 for some fool reason it
implemented its own bespoke rebasing (despite that not actually saving
any API call that I can see), meaning it did *not* update the commit
date. As such, an old commit being squashed would not get picked up by
the runbot, which is what happened to odoo/documentation#1226 (which
ultimately had to be hand-rebased after some confusion as to why it
did not work).
Update `_stage_squash` to go through `rebase` the normal way, also
update `rebase` to pop the commit date entirely instead of setting it
manually, and update the squashing test to check that the commit date
gets properly updated.
Fixes#579, closes#582
Currently limited to release/freeze PRs: it can be difficult to be
sure the right PR was selected then, and a mistake there seems more
impactful than in the PRs being waited for?
Note: adds a test to make sure I don't break the check that all
release PRs must have the same label (be linked). This was
already safe, and in a way this PR adds convenience but not
really safety, but better sure than sorry.
- add flag to not select repos for freezing
- allow removing more repositories from the wizard
- when performing the freeze, only create branches for the selected
repos
The freeze wizard was implemented using a single action to open and
validate the dialog.
This was a mistake, as it means if there are no errors left (e.g. all
the PRs being waited for are now validated) trying to view the freeze
wizard will immediately validate it and commit the freeze, which is
unexpected, surprising, and unsafe e.g.
- open wizard
- add freeze prs
- add a required pr or two
- close and go do something else
- be told that more PRs need to be waited for
- reopen wizard
- oops freeze is done
So split the "open action" part of `action_freeze` into opening the
action and performing the freeze. The "freeze" / "view freeze" button
on the project only triggers the former, and the actual freeze
operation is only triggered from the wizard's "Freeze" button.
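In effect the split looks something like this (hedged sketch; model and
method names are approximations):

```python
from odoo import fields, models

class Project(models.Model):
    _inherit = 'runbot_merge.project'

    def action_prepare_freeze(self):
        # "freeze" / "view freeze" button: only opens (or re-opens) the
        # wizard, never commits the freeze itself.
        Freeze = self.env['runbot_merge.project.freeze']  # name assumed
        wizard = Freeze.search([('project_id', '=', self.id)], limit=1) \
            or Freeze.create({'project_id': self.id})
        return {
            'type': 'ir.actions.act_window',
            'res_model': wizard._name,
            'res_id': wizard.id,
            'view_mode': 'form',
            'target': 'new',
        }

class FreezeWizard(models.Model):
    _name = 'runbot_merge.project.freeze'

    project_id = fields.Many2one('runbot_merge.project', required=True)

    def action_freeze(self):
        # only reachable from the wizard's own "Freeze" button
        ...
```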
Part of #559.
Provides a less manual interface for creating the freeze:
* takes the name of the branch to create
* takes any number of PRs which must be part of the freeze
* takes PRs representing the HEADs of the new branches
Then essentially takes care of the rest.
Implementation of the actual wizard is not trivial but fairly
straightforward and linear; the biggest issue is not being able to use
`project_id.branch_ids[1]` to get the new branch. Not sure why, but it
seems to ignore the ordering, and clearing the cache doesn't fix it.
When creating the branches, add a sleep after each one for secondary
rate limiting purposes. Same when deleting branches.
Also the forwardbot has been updated to disable the forwardport cron
while a freeze is ongoing; this simplifies the freezing process.
Note: after recommendation of @aab-odoo, tried using `_applyChanges`
in `_checkState` but it simply did not work: the two relational fields
got completely frozen and were impossible to update, which is less
than ideal. Oh well, hopefully it works well enough like this for now.
To prep for the addition of the freeze wizard:
* move projects out of `pull_requests.py`
* then realize half the methods there have no relation to projects and
move them to more relevant places in `pull_requests.py`
* update corresponding crons (and tests using those crons) as the
methods have changed model, and the cron definitions thus need to be
updated
* split update to labels out of sending feedback comments while at it:
labels are not used much during tests so their manipulation can be
avoided; and labels are not as urgent as feedback so the crons can
be quite a bit slower
* move the project view out of `mergebot.xml` as well
When a commit is lacking the purpose (?) tag e.g. `[FIX]`, `[IMP]`,
..., a normal commit message of the form `<module>: <info>` matches
the looks of a git pseudo-header.
This results in a commit rewrite rejiggering the entire thing and
breaking the message by moving the title to the pseudo-headers and
mis-promoting either the `closes` line or body content to "title",
resulting in a really crappy commit message
e.g. odoo/odoo@d4aa9ad031.
Update the commit rewriting procedure to specifically skip the title
line, and re-inject it without processing in the output.
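The rewrite then roughly does (a sketch of the idea, not the actual
parser):

```python
def rewrite_message(message, extra_headers):
    # The first line is always the title: pass it through untouched so a
    # `<module>: <info>` title can't be mistaken for a pseudo-header and
    # shuffled into the wrong place.
    title, _, body = message.partition('\n')
    tail = '\n'.join(extra_headers)  # e.g. ["closes odoo/odoo#12345"]
    return '\n\n'.join(part for part in (title.strip(), body.strip(), tail) if part)
```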
Fixes#540
Re-introduce a "squash" mode solely for the purpose of fixing up
commit messages without having to go and edit them: for now "squash"
only works for single-commit PRs, acts as a normal
integration (`rebase-ff`) *but* replaces the message of the commit
itself by that of the PR, similar to the `merge` modes.
This means maintainers can update commit messages to standards by
editing the PR description (though this is obviously susceptible to
editing races with the original author).
Fixes#539
If a reviewer doesn't have an email set, the Signed-Off-By is an
`@users.noreply.github.com` address which just looks weird in the
final result.
Initially the thinking was that emails would be required for users to
*be* reviewers or self-reviewers, but since those are now o2ms / m2ms
it's a bit of a pain in the ass.
Instead, provide an action to easily try and fetch the public email of
a user from github.
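The action can be as simple as hitting the public users endpoint
(sketch, using `requests` directly):

```python
import requests

def fetch_github_email(login, token=None):
    # GET /users/{login} exposes the user's *public* email (None if they
    # haven't set one), which is good enough for a nicer Signed-Off-By.
    headers = {'Authorization': f'token {token}'} if token else {}
    r = requests.get(f'https://api.github.com/users/{login}', headers=headers)
    r.raise_for_status()
    return r.json().get('email')
```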
Fixes#531
Since we have the model fields loaded up, we can just check into that
and assume anything that's not a field is a method.
That avoids having to go through `_call`, making things way less awkward.
After internal discussions it was concluded that this didn't extend
much more trust than allowing authors to accept their single-PR
commits without additional supervision, and it would avoid some
inconveniences and PR-blocking.
Fixes#69 (nice)
Because sometimes github updates are missed (usually because github
never triggers it), it's possible for the mergebot's view of a PR
description to be incorrect. In that case, the PR may get merged with
the wrong merge message entirely, through no fault of the user.
Since we already fetch the PR info when staging it, there's very
little overhead to checking that the PR message we store is correct
then, and updating it if it's not. This means the forward-port's
description should also be correct.
While at it, clean the forward port PR's creation a bit:
- there should always be a message since the title is required on
PRs (only the body can be missing), therefore no need to check that
- as we're adding a bunch of pseudo-headers, there always is a body,
no need for the condition
- inline the `pr_data` and `URL`: they were extracted to support
  draft PRs; since that's been removed it's now unnecessary
Fixes#530
The page of PRs in "error" is currently kinda broken: it does not show
any feedback aside from the PR being in error which is not very
useful.
The intent was always to show an explanation, but when adding the page
I just deref'd `staging_id`, which always fails, though in two different
ways:
* when the PR can not be staged at all (because of a conflict) there
is no staging at all with a reason to show, so there should be
a fallback that the PR could not even be staged
* `staging_id` is a related field which deref's to the staging_ids
of the first *active* batch, except when a staging completes
(successfully or not) both staging and batch are disabled.
Plus the first batch will be the one for the first staging so if the
PR is retried and fails again the wrong reason may be displayed.
So update the section to show what we want: the reason of the
staging of the *last* batch attached to the PR.
NOTE: there's one failure mode remaining, namely if a staging fails
then on retry the PR conflicts with the new state of the
repository (so it can't be staged at all), the "reason" will
remain that of the staging. This could be mitigated by attaching
a "nonsense" batch on failure to stage (similar to the
forwardport stuff), that batch would have no staging, therefore
no staging reason, therefore fallback.
Closes#525
On staging failure, the 'bot would point to the first error or failure
status it found on the commit. This turns out not to be correct as
we (now) have various statuses which are optional, and may fail
without blocking stagings (either because they're solely informational
or because they're blocking & overridable on PRs).
Fix this so the 'bot points to the first *required* failure.
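The selection becomes something along these lines (sketch; the real
data structures differ):

```python
def first_required_failure(statuses, required_contexts):
    # Only point the user to the first *required* context which reported an
    # error or failure; informational / overridable statuses are skipped.
    for context in required_contexts:
        state = statuses.get(context, {}).get('state')
        if state in ('error', 'failure'):
            return context, statuses[context]
    return None
```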
Fixes#517
* Remove the forwardport creating PRs in draft; that was mostly to
  avoid codeowners triggering, but we've removed the github one and
  hand-rolled it, so it's not a concern anymore.
* Prevent merging `draft` PRs: the mergebot rejects approval on draft
  PRs and insults people.
TBD (maybe): try to create *conflicting* forward-port PRs in draft so
it's clearer they need to be *fixed*? Issue of not being able to do
that on all private repositories remains so~~
Fixes#500
"Uniquifier" commits were introduced to ensure branches of a staging
on which nothing had been staged would still be rebuilt properly.
This means technically the branches on which something had been
staged never *needed* a uniquifier, strictly speaking. And those lead
to extra building, because once the actually staged PRs get pushed
from staging to their final destination it's an unknown commit to the
runbot, which needs to rebuild it instead of being able to just use
the staging it already has.
Thus only add the uniquifier where it *might* be necessary:
technically the runbot should now manage this use case much better,
however there are still issues like an ancillary build working with
the same branch tip (e.g. the "current master") and sending a failure
result which would fail the entire staging. The uniquifier guards
against this issue.
Also update rebase semantics to always update the *commit date* of the
rebased commits: this ensures the tip commit is always "recent" in the
case of a rebase-ff (which is common as that's what single-commit PRs
do), as the runbot may skip commits it considers "old".
Also update some of the utility methods around repos / commits to be
simpler, and avoid assuming the result is JSON-decodable (sometimes it
is not).
Also update the handling of commit statuses using postgres' ON
CONFLICT and jsonb support, hopefully this improves (or even fixes)
the serialization errors. Should be compatible with 9.5 onwards which
is *ancient* at this point.
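The statuses handling is essentially a single-statement upsert merging
the new status into the stored jsonb map, along the lines of (sketch;
table and column layout are illustrative, and a unique index on `sha`
is assumed):

```python
import json

def add_status(cr, sha, context, status):
    # ON CONFLICT and jsonb_build_object / || are all available from
    # postgres 9.5 onwards.
    cr.execute("""
        INSERT INTO runbot_merge_commit (sha, statuses)
        VALUES (%s, jsonb_build_object(%s::varchar, %s::jsonb))
        ON CONFLICT (sha) DO UPDATE
           SET statuses = runbot_merge_commit.statuses || EXCLUDED.statuses
    """, [sha, context, json.dumps(status)])
```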
Fixes#509
Although it's possible to find what PR a commit was part of with a bit
of `git log` magic (e.g. `--ancestry-path COMMIT.. --reverse`) it's
not the most convenient, and many people don't know about it, leading
them to various debatable decisions to try and mitigate the issue,
such as tagging every commit in a PR with the PR's identity, which
then leads github to spam the PR itself with pingbacks from its own
commits. Which is great.
Add this information to the commits when rebasing them (and *only*
when rebasing them), using a `Part-of:` pseudo-header.
Fixes#482
If a PR is closed on github and unknown by the mergebot, when fetched
it should be properly sync'd as "closed" in the backend, otherwise the
PR can get in a weird state and cause issues.
Also move the "I fetched the thing" comment before the actual creation
of the PR for workflow clarity, otherwise the reader has the
impression that the 'bot knew about the PR then fetched it anyway.
And improve savepoint management around the fetching: savepoints
should be released in all cases.
Closes#488.
If two PRs have the same label *in different projects entirely*, the
mergebot should not consider them to be linked, but it did as shown by
the warning message on odoo-dev/odoo#905 (two PRs created from the
same branch in different projects were seen as linked by the status
checker).
3b417b16a1 fixed the actual staging
selection; it's only the warning which did not properly segregate PRs.
Only group PRs which target the same branch (and therefore are within
the same project).
Fixes#487
Previously, a PR's status page would only show the linked / related
PRs when `open`.
Since the relations between PRs remain useful, also make this
information available during staging and after merging.
Fixes#463
If a PR got merged to master (or whatever the current development
branch is), there's no easy way to know what maintenance branch it
ended up landing in, except by asking git which branches contain the
commit (which can be rather slow).
Add a special case on merge which labels the PR with a pseudo-branch
patterned after the second-to-last branch of the project:
* if the branch ends with a number, increment the number by one
e.g. 2.0 -> 2.1, 5 -> 5.1
* otherwise, just prefix with `post-` e.g. "maint" ->
"post-maint" (that one doesn't sound very helpful, but I guess it's
nice for the weirdoes who call their branches "natty narwhal" and
shit)
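The pseudo-branch computation is basically (sketch):

```python
def merged_pseudo_branch(branch_name):
    # `branch_name` is the second-to-last branch of the project.
    head, sep, tail = branch_name.rpartition('.')
    if sep and tail.isdigit():
        # ends with a dotted number: bump it, e.g. 2.0 -> 2.1
        return f"{head}.{int(tail) + 1}"
    if branch_name.isdigit():
        # bare number: start a point release, e.g. 5 -> 5.1
        return f"{branch_name}.1"
    # otherwise just prefix, e.g. maint -> post-maint
    return f"post-{branch_name}"
```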
Fixes#450