- add formatting for a bunch of backend objects
- add cross-links in order to use toplevel navigation between objects
e.g. project -> branch -> staging -> PR with breadcrumbs instead of
shitty dialog boxes
Relates to #802
When I updated the status storage (including `previous_failure`),
for some reason I didn't just migrate from the old format to the new
one, but added bridge functions instead.
This is not really necessary (or useful), so convert all the legacy
data and remove the conversion helpers.
Relates to #802
Mostly a temporary safety feature after the events of 07-31: it's
still not clear whether that was a one-off issue or a change in
policy (I was not able to reproduce locally even when doing several
set_refs per second), and GH support is not super talkative, but it
probably doesn't hurt to commit the workaround until #247 gets
implemented.
On 2023-07-31, around 08:30 UTC, `set_ref` started failing, a lot
(although oddly enough not continuously), with the unhelpful message
that
> 422: Reference cannot be updated
This basically broke all stagings until a workaround was implemented
by adding a 1s sleep before `set_ref`, ensuring no more than one
`set_ref` per second, which kinda sorta has been the github
recommendation forever but had never been an issue before.
Contributing to the suspicion that this is rate limiting: in late
2022, the documentation of error 422 on `PATCH git/refs/{ref}` was
updated to:
> Validation failed, or the endpoint has been spammed.
Still, it would be nice if GH were clear about it and sent a 429
instead.
Technically the recommendation is:
> If you're making a large number of POST, PATCH, PUT, or DELETE
> requests for a single user or client ID, wait at least one second
> between each request.
So... actually implement that, on a per-worker basis, as these are
for the most part serial processes (e.g. crons). We can still exceed
the rate limit due to concurrent crons, but it should be less likely.
Also take `Retry-After` into account, it can't hurt, though we're
supposed to retry just the request rather than abort the entire
thing. Maybe a future update can improve this handling.
It would also be nice to take `X-RateLimit` into account, although
that's supposed to apply to *all* requests so we'd need a second,
separate timestamp to track it. Technically that's probably also the
case for `Retry-After`. And fixing #247 should drastically cut down
on API call traffic, as staging is a very API-intensive process,
especially with the sanity checks we had to add: these days we might
be at 4 calls per commit per PR, and up to 80 PRs/staging (5
repositories and 16 batches per staging), with 13 live branches
(though realistically only 6-7 have significant traffic, and only 1~2
get close to filling their staging slots).
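Roughly the intended shape, as a sketch only (the helper, the
module-level timestamp, and the method set are illustrative, not the
actual implementation):

    import time

    import requests

    MUTATING = {'POST', 'PATCH', 'PUT', 'DELETE'}  # methods github asks to space out
    _last_write = 0.0  # per-worker timestamp of the last mutating request

    def gh_request(method, url, **kw):
        global _last_write
        if method.upper() in MUTATING:
            # at least one second between mutating requests, per worker
            wait = 1.0 - (time.monotonic() - _last_write)
            if wait > 0:
                time.sleep(wait)
            _last_write = time.monotonic()
        r = requests.request(method, url, **kw)
        # honor Retry-After when github sends one; ideally we'd retry just
        # this request instead of sleeping and moving on
        retry_after = r.headers.get('Retry-After')
        if retry_after:
            time.sleep(int(retry_after))
        return r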
`/runbot_merge/stagings`
========================
This endpoint is a reverse lookup from any number of commits to a
(number of) staging(s):
- it takes a list of commit hashes as either the `commits` or the
`heads` keyword parameter
- it then returns the stagings which have *all* these commits as
  commits or heads respectively; if providing all the commits for a
  project, the result should always be unique (if any)
- `commits` are the merged commits, aka the stuff which ends up in the
actual branches
- `heads` are the staging heads, aka the commits at the tip of the
`staging.$name` branches, those may be the same as the corresponding
commit, or might be deduplicator commits which get discarded on
success
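For illustration only (host, transport and authentication are
assumptions, the hashes are made up):

    import requests

    # stagings whose *merged* commits include both hashes; pass `heads`
    # instead to match on the staging heads
    r = requests.get(
        'https://mergebot.example.com/runbot_merge/stagings',
        params={'commits': ['0123abcd', '4567ef01']},
    )
    print(r.json())  # -> id(s) of the matching staging(s)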
`/runbot_merge/stagings/:id`
============================
Returns a list of all PRs in the staging, grouped by batch (aka PRs
which have the same label and must be merged together).
For each PR, the `repository` name, `number`, and `name` in the form
`$repository#$number` get returned.
`/runbot_merge/stagings/:id1/:id2`
==================================
Returns a list of all the *successfully merged* stagings between `id1`
and `id2`, from oldest to most recent. Individual records have the
form:
- `staging` is the id of the staging
- `prs` is the contents of the previous endpoint (a list of PRs
grouped by batch)
`id1` *must* be lower than `id2`.
By default, this endpoint is inclusive on both ends; the
`include_from` and / or `include_to` parameters can be set to `False`
to exclude the corresponding bound from the result.
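For illustration only (host and ids are made up):

    import requests

    # successfully merged stagings from 1000 up to, but excluding, 1050
    r = requests.get(
        'https://mergebot.example.com/runbot_merge/stagings/1000/1050',
        params={'include_to': False},
    )
    for record in r.json():
        print(record['staging'], record['prs'])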
Related to #768
`auto_session_tracking` causes issues when not specified on the super
old version of the client which is available on ubuntu.
Also disable tracing as it seems less useful than hoped for, and I've
not been using what's been collected so far.
Currently the heads of a staging (both staging heads and merged heads)
are just JSON data on the staging itself. Historically this was
convenient as the heads were mostly of use to the staging process, and
thus accessed directly through the staging essentially exclusively.
However this makes finding stagings from merged commits e.g. for
forensic research almost impossible, because querying based on
the *values* of a JSON map is expensive, and indexing it is difficult.
To make this use case more feasible, split the `heads` field into two
join tables, one for the staging heads and one for the merged heads;
this makes looking up stagings by commit much more efficient
(although the queries may not be trivial). Also add two utility RPC
methods, so that it's possible to query stagings reasonably easily
and efficiently based on a set of commits (branch heads).
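In outline (a sketch; the model and field names here are
illustrative, not necessarily the ones that were added):

    from odoo import fields, models

    class StagingCommits(models.Model):
        _name = 'runbot_merge.stagings.commits'  # one row per merged head
        staging_id = fields.Many2one('runbot_merge.stagings', required=True, index=True)
        commit_id = fields.Many2one('runbot_merge.commit', required=True, index=True)

    def stagings_for_commits(env, shas):
        # plain indexed search instead of scanning JSON values
        links = env['runbot_merge.stagings.commits'].search(
            [('commit_id.sha', 'in', shas)])
        # keep only stagings containing *all* the requested commits
        # (assumes an inverse one2many `commit_ids` on the staging)
        return links.staging_id.filtered(
            lambda s: set(shas) <= set(s.commit_ids.commit_id.mapped('sha')))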
related to #768
Allow filtering stagings by state (success or failure), and provide a
control to explicitly update the staging date limit.
Should make it easier to drill through stagings when looking for
specific information.
Related to #751
Fix the outstanding query to use a positive `state` filter instead of
a negative one, matching 3b52b1aace8674259812a76b1566260937dbcacb.
Also manually create a map of stagings (grouped by branch) sharing a
single prefetch set.
For odoo, the mergebot home page has 12 branches in the odoo project
and 8 in spreadsheet, with 6 stagings each. This means 120 queries to
retrieve all the heads (Odoo stagings have 5 heads and spreadsheet
stagings have 1, but that seems immaterial).
By fixing `_compute_statuses` and creating a single prefetch set for
all stagings of all branches we can fetch all the commits in a single
query instead of 120.
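The gist of it (a sketch; the actual change lives in
`_compute_statuses` and the view):

    # per-branch map of the 6 most recent stagings, but a *single* prefetch
    # set over all of them so their heads are fetched in one query
    by_branch = {
        branch: env['runbot_merge.stagings'].search(
            [('target', '=', branch.id)], order='staged_at desc', limit=6)
        for branch in branches
    }
    all_ids = [s.id for stagings in by_branch.values() for s in stagings]
    by_branch = {
        branch: stagings.with_prefetch(all_ids)
        for branch, stagings in by_branch.items()
    }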
- add support for authorship (not just approval)
- display counts directly
- fix `state` filter: postgres can't do negative index lookups
- add indexes for author and reviewed_by as we look them up
- ensure we handle the entire source filtering via a single subquery
Closes#778
A few cases of conflict were missing from the provisioning
handler.
They can't really be auto-fixed, so just output a warning and ignore
the entry; that way the rest of the provisioning succeeds.
During the 16.3 freeze an issue was noticed with the concurrency
safety of the freeze wizard (because it blew up, which caused a few
issues): it is possible for the cancellation of an active staging on
the master branch to fail, which causes the mergebot side of the
freeze to fail while the github side is completed, putting the entire
thing in a less than ideal state.
This is compounded by the branch inserter having its own concurrency
issue (which maybe I should fix): if there are PRs *being*
forward-ported across the new branch, it is unable to see them, and
thus can not create the now-missing PRs.
Try to make the freeze wizard more resilient:
1. Take a lock on the master staging (if any) early on, this means if
we can acquire it we should be able to cancel it, and it won't
suffer a concurrency error.
2. Add the `process_updated_commits` cron to the set of locked crons:
   reading the log timeline, it looks like the issue was commits on
   that staging being impacted while the action had already started.
   REPEATABLE READ meant the freeze's transaction was unable to see
   the update from the commit statuses, therefore creating a diverging
   update when it cancelled the staging, which postgres then reported
   as a serialization error.
I'd like to relax the locking of the cron (to just FOR SHARE), but I
don't think it would work, per postgres:
> SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as
> SELECT in terms of searching for target rows: they will only find
> target rows that were committed as of the transaction start
> time. However, such a target row might have already been updated (or
> deleted or locked) by another concurrent transaction by the time it
> is found. In this case, the repeatable read transaction will wait
> for the first updating transaction to commit or roll back (if it is
> still in progress). If the first updater rolls back, then its
> effects are negated and the repeatable read transaction can proceed
> with updating the originally found row. But if the first updater
> commits (and actually updated or deleted the row, not just locked
> it) then the repeatable read transaction will be rolled back with
> the message
This means it would be possible to lock the cron, and then get a
transaction error because the cron modified one of the records we're
going to hit while it was running: as far as the above is concerned,
the cron's worker had "just locked" the row, so it's fine to
continue. However this makes it more and more likely that an error
will be hit when trying to freeze (to no real harm, but still). We'll
have to see how that ends up.
Fixes#766 maybe
Currently sentry is only hooked from the outside, which doesn't
necessarily provide sufficiently actionable information.
Add a few hooks to (try and) report odoo / mergebot metadata (a
minimal sketch follows the list):
- add the user to WSGI transactions
- add a transaction (with users) around crons
- add the webhook event info to webhook requests
- add a few spans to the long-running crons, when they cover multiple
units per iteration (e.g. a span per branch being staged)
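Roughly, for the cron part (`sentry_sdk` calls only; the hook points
and `stage_branch` are placeholders):

    import sentry_sdk

    def run_staging_cron(env):
        # one transaction per cron run, tagged with the acting user
        with sentry_sdk.start_transaction(op='cron', name='runbot_merge.staging'):
            sentry_sdk.set_user({'id': env.uid})
            for branch in env['runbot_merge.branch'].search([('active', '=', True)]):
                # one span per unit of work, here a branch being staged
                with sentry_sdk.start_span(op='staging', description=branch.name):
                    stage_branch(branch)  # placeholder for the actual staging logic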
Closes#544
- move sentry configuration and add exception-based filtering
- clarify and reclassify (e.g. from warning to info) a few messages
- convert assertions in rebase to MergeError so they can be correctly
  logged & reported, and ignored by sentry; also clarify them
  (especially the consistency one)
Related to #544
Largely informed by sentry,
- Fix an inconsistency in staging ref verification, `set_ref`
internally waits for the observed head to match the requested head,
but then `try_staging` would re-check that and immediately fail if
it didn't.
Because github is *eventually* consistent (hopefully) this second
check can fail (and is also an extra API call), breaking staging
unnecessarily, especially as we're still going to wait for the
update to be visible to git.
Remove this redundant check entirely, as github provides no way to
ensure we have a consistent view of anything, it doesn't have much
value and can do much harm.
- Add github request id to one of the sanity check warnings as that
could be a useful thing to send upstream, missing github request ids
in the future should be noted and added.
- Reworked the GH object's calls to be clearer and more coherent:
consistently log the same thing on all GH errors (if `check`),
rather than just on the one without a `check` entry.
Also remove `raise_for_status` and raise `HTTPError` by hand every
time we hit a status >= 400, so we always forward the response body
no matter what its type is.
- Try again to log the request body (in full, as it should be pretty
  small). Also remove the stripping: we specifically wanted to add a
  newline at the start, and I've no idea what I was thinking.
Fixes#735, #764, #544
Current system makes it hard to iterate feedback messages and make
them clearer, this should improve things a touch.
Use a bespoke model to avoid concerns with qweb rendering
complexity (we just want GFM output and should not need logic).
Also update the fwbot test setup to always configure an fwbot name,
in order to avoid ping messages closing the PRs they're talking
about. That took a while to debug, and given the old message I assume
I'd already hit it and just been too lazy to fix it. This requires
updating a bunch of tests, as fwbot pings are sent *to*
`fp_github_name` but *from* the reference user (because that's the
key we set).
Note: noupdate on CSV files doesn't seem to work anymore, which isn't
great. Instead, set tracking on the template's templates; it's not
quite as good but should be sufficient.
Fixes#769
- currently disabling staging only works globally, allow disabling on
a single branch
- use a toggle
- remove a pair of tests which work specifically with `fp_target`,
can't work with `active` (probably)
- cleanup search of possible and active stagings, add relevant
indexes and use direct search of relevant branches instead of
looking up from the project
- also use toggle button for `active` on branches
- shitty workaround for upgrading DB: apparently mail really wants to
have a `user_id` to do some weird thing, so need to re-add it after
resetting everything
Fixes#727
- github logins are case-insensitive: while the db field is CI, the
  dict in which partners are stored for matching is not, and the
  caller may not preserve casing.
  Thus it's necessary to check the casefolded database values against
  casefolded parameters, rather than match exactly.
- users may get disabled by mistake or when one leaves the project,
they may also get switched from internal to portal, therefore it is
necessary to re-enable and re-enroll them if they come back.
- while at it remove the user's email when they depart, as they likely
use an organisational email which they don't have access to anymore
Side-note, but remove the limit on the number of users / partners
being created at once: because there are almost no modules in the
mergebot's instance, creating partners goes quite fast (compared to a
full instance), thus the limitation is almost certainly unnecessary
(creating ~300 users seems to take ~450ms).
Fixes#776
652b1ff9ae wanted to check if a request
was available, however it deref'd the `request` object without
checking it which is not correct: a `request` normally has an
`httprequest`, but the `request` itself might be missing if the
handler is called from e.g. a cron.
Fixes#739
The mismatch diff attribute contains values from the in-db object and
the github PR structure, some of which are explicitly *not*
strings (e.g. the squash flag, possibly the commits # in the future).
As a result, when the squash flag of a PR differs from the actual
value, the formatting for diffing blows up, because difflib can't
handle non-strings.
Stringify values before passing them to `format_items`; this way the
string operations on names and values should work correctly.
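Roughly (a sketch; the real formatting helper differs in detail):

    from difflib import unified_diff

    def mismatch_diff(name, db_value, gh_value):
        # difflib only accepts sequences of strings, so stringify first:
        # e.g. the `squash` flag is a bool on both sides
        old, new = str(db_value), str(gh_value)
        return '\n'.join(unified_diff(
            old.splitlines(), new.splitlines(),
            fromfile=f'db/{name}', tofile=f'github/{name}', lineterm=''))

    print(mismatch_diff('squash', True, False))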
The mergebot page became a bit slow over the years, it is time to
make small optimisations to speed things up a little.
Note: all changes were applied by modifying the views or adding
indexes by hand. There is still room for improvement, but it would
need more in-depth refactoring, mainly adding specialized computed
fields to enable better batching.
The first issue was using branch.staging_ids
    branch.staging_ids.sorted(lambda s: s.staged_at, reverse=True)[:6]
The number of staging_ids keeps increasing, and prefetching + sorting
all of them is slow.
The proposed solution is to replace it with a search. Not ideal, a
specialized computed field may be a good idea, but this is a quick
fix that can be done by editing a view.
    branch.env['runbot_merge.stagings'].search([('target', '=', branch.id)], order='staged_at desc', limit=6)
Other changes are just indexes on critical columns.
Before changes, /runbot_merge page takes ~5s to load
After changes, /runbot_merge page takes ~1s to load
Small note: not 100% sure that the index on runbot_merge.batch.target was useful
The loggers would only print the "tail" of the path, not including the
repo name, or the `/repos` prefix.
While this made logs shorter, it was not intentional and made
debugging some issues on endpoints harder than necessary as the calls
had to be adjusted mentally, which is completely unnecessary.
1cea247e6c tried to improve staging
checks to avoid staging PRs in the wrong state, however it had two
issues:
PR state
--------
The process would reset the PR's state to open, but unless the head
was being resync'd it wouldn't re-apply the statuses on the state,
leading to a PR with all-valid statuses, but a missing CI.
Message
-------
The message check didn't compose the PR message the same way PR
creation / update did (it trimmed the title and description only
after concatenation, not individually), resulting in a
not-actually-existing divergence getting signaled when the PR title
ends, or the description starts, with whitespace.
Expand relevant test, add a utility function to compose a PR message
and use it everywhere for coherence.
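Something along these lines (a sketch of the helper, not its exact
final form):

    def make_message(title, body):
        # trim each part on its own so a trailing-whitespace title or a
        # leading-whitespace body can't produce a spurious divergence
        title = (title or '').strip()
        body = (body or '').strip()
        return f'{title}\n\n{body}' if body else title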
Also update the logging and reporting to show a diff of all the
updated items (hidden behind a `details` element).
If there are bump PRs anyway: the bump commits will cause the
forward-port of the staging to fail, so might as well clearly notify
everybody of the issue if there is a pending staging, and not waste
too much time waiting for a staging which can not succeed.
We could also cancel stagings when there's no bump PR, but it's not
clear that there's any reason to do so: if we didn't touch any master
branch, there's no reason for the staging to fail, or to otherwise
cancel it.
And obviously we can't have staged anything on the new branch so
there's nothing to cancel.
Part-Of: #718
I DECLARE BANKRUPTCY!!!
The previous implementation of labels lookup was really not
intuitive (it was just a char field, and matched labels by equality
including the owner tag), and was also full of broken edge
cases (e.g. traceback if a label matched multiple PRs in the same repo
because people reuse branch names).
Tried messing about with contextual `display_name` and `name_search`
on PRs but the client goes wonky in that case, and there is no clean
autocomplete for non-relational fields.
So created a view which reifies labels, and that can be used as the
basis for our search. It doesn't have to be maintained by hand, can be
searched somewhat flexibly, we can add new view fields in the future
if desirable, and it seems to work fine providing a nice
understandable UX, with the reliability of using a normal Odoo model
the normal way.
Also fixed the handling of bump PRs: clearly, clearing the entire
field before trying to update existing records (even with a link_to
in between) is not to the web client's fancy, and re-selecting the
current label would just empty the thing entirely.
So use a two-step process slightly closer to the release PRs instead:
- first update or delete the existing bump PRs
- then add the new ones
The second part is because bump PRs are somewhat less critical than
release, so it can be a bit more DWIM compared to the more deliberate
process of release PRs where first the list of repositories involved
has to be set up just so, then the PRs can be filled in each of them.
Fixes#697
In order to support partial freezing, we need the ability to remove
some of the release lines for the repos we don't want to
freeze (e.g. because they don't use per-version branches).
This subsequently means we need the ability to *create* new lines if
we fucked up and removed one we should not have. Alternatively the
freeze meat-bot could cancel the entire thing and redo the wizard but
that seems harsh and mean, so don't do that.
Fixes 0f3647b7c7, which specifically
mentioned partial freezes then proceeded to make them entirely
impossible anyway.
Part of #718
Previously the mergebot would only sync the head commit, but synching
more is useful.
Also update the final sanity check on staging:
- as with check, update the message & target branch
- reset PR state and post a message when updating message instead of
doing so silently
Note: maybe only fail the staging if the message is updated *and*
relevant to staging (aka there's a merge method and it's not
`rebase`)?
Fixes#680
Was missing a logging message in the case where the current and sync'd
head are identical, which seems to occur from time to time but can
only be inferred (by seeing a sync event then nothing happening).
Add a logging warning (because it's a strange situation) in order to
explicitly note the issue.
Also make the sync logging messages more regular for clarity.
And add the delivery information (delivery id and user-agent) to event
log, so it's more possible to report issues to github.
After review, there doesn't seem to be a single integer field created
by the mergebot or forwardbot modules for which a `group_operator`
makes sense, let alone the default of `sum`.
So just disable them all.
Fixes#674
If commits have different authors (/ committers), the mergebot would
ask github to create a commit with an author (/ committer) of `None` /
`null`.
Apparently github really does not like that, and complains that
> nil is not an object
So remove the key entirely. Also fix the collision between `author`
and the `Co-Authored-By` list, which could lead to trying to set an
`author` of `[name, email]` instead of an object, which is also not
accepted by github.
Fixes to the new bits which didn't really work:
- Fix borked view layout
- Add some help to the label fields
- Improve the resolution of label -> pr, and fix
- Also make the feature actually work for bump PRs
- Also make pr -> label work more reliably, now allows setting one PR
and getting the other PRs of the same batch (with the same label)
even without setting the label by hand
An autocomplete for the label has been considered but there is no
autocomplete field for char/selection fields, and it seems way too
much work for the utility:
- either create a brand new widget for 15.0 which will have to be
entirely rewritten in 16
- or create a transient model composed entirely of fake records to
provide an m2o to records which don't actually exist as label
bearers, which is also a lot of unnecessary work
NOTE: we want to support partial freezing (aka not freeze all the
branches because some of them have different release models
than others), so some project repos *not* having a release
PR is fine and normal, such a validation should not be added.
Fixes#664
In case where the last branch (before the branch being frozen) is
disabled, the forwardport inserter screws up, and fails to correctly
create the intermediate forwardports from the new branch.
Also when disabling a branch, if there are FW PRs which target that
branch and have not been forward-ported further, automatically
forward-port them as if the branch had been disabled when they were
created, this should limit data loss and confusion.
Also change the message set on PRs when disabling a branch: because of
user conflicts in test setup, the message about a branch being
disabled would close the PRs, which would then orphan the followup,
leading to unexpected / inconsistent behaviour.
Fixes#665
The `statuses` field of a staging is always "live" because it's a
computed non-stored field. This is an issue when a staging finishes in
whatever state, then someone gets new statuses sent on one of the head
commits, either by rebuilding (part of) the staging or by just using
the same commit for one of their branches.
This makes the reporting of the main dashboard confusing, as one might
look at a failed staging and see all the required statuses
successful. It also makes post-mortem analysis more complicated as the
logs have to be trawled for what the statuses used to be (and they
don't always tell).
Solve this by storing a snapshot of the statuses the first time a
staging moves away from `pending`, whether it's to success or failure.
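In outline (a sketch; the field name and hook point are assumptions):

    import json
    from odoo import fields, models

    class Stagings(models.Model):
        _inherit = 'runbot_merge.stagings'

        # frozen copy of the (otherwise live, computed) statuses, taken the
        # first time the staging leaves `pending` and used for display
        statuses_snapshot = fields.Text()

        def _transition(self, new_state):  # placeholder for the actual hook
            for staging in self:
                if staging.state == 'pending' and new_state != 'pending' \
                        and not staging.statuses_snapshot:
                    staging.statuses_snapshot = json.dumps(staging.statuses)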
Fixes#667
In the branch lists of stagings, the timestamps in the left column and
the labels in the data cells can not be selected, because they're
buttons and anyway bootstrap explicitly sets
.btn {
...
user-select: none;
}
This can be frustrating, as timestamps and labels are useful
information to cross-reference, so the ability to copy them is
convenient.
Custom-set the reverse via our own CSS.
Fixes#668
Partially revert 0c882fc0df
This turns out to be more bane than boon, as it breaks forward-port
chains and confuses people (despite the message). Update notification
message and don't close the PR anymore.
While at it, disable any pending staging on the branch being deactivated.
Fixes#654
af016f4239 did a half-assed job and
didn't fix the one test which actually checks the dashboard.
TBF I was in a bit of a hurry trying to make the mergebot work and be
presentable again, but still...
15.0 (or 14.0) dropped some of the BS3 (?) compatibility stuff, which
the mergebot was (apparently) relying on. This led to a visual
degradation as well as the frontend dropdown looking absolutely awful.
Fix that, on both style and templates.
15.0 (or 14.0) also dropped the bespoke responsive utility classes,
switch to bootstrap's.
Turns out I was running "15.0" except just on the runbot, enterprise
and community were still the 14.0 repos, so some of the changes were
missing.
While at it, bundle fixes for 3.10, as that's what Jammy needs, and
the mergebot/15.0 will be running on that.
Test seems to fail from time to time with one of the PRs getting
lost. Tried to move code around trying to investigate, can't repro
anymore. Possibly a race condition because the `to_pr` call was
performed too early, before the webhook had run (and thus before the
PR object had been created on the odoo side).
By moving the `to_pr` calls to after the cron run, we really ensure
the webhooks will have run.
Also update `to_pr` to ensure exactly one PR was retrieved, as
currently nothing is checked so we might have gotten none (yet), which
should be noticed early and clearly. In theory this also guards
against multiple PRs, but PRs should be unique on (repo, number).
- Some batches in a few stagings are apparently empty (e.g. batch
71771 from staging 32511), the listing was not resilient to such
issues.
Update the code to suppress the display of empty batches.
- The possibility of accessing `/runbot_merge/<id>` with a
non-existent branch had not been considered and triggered a
rendering error in the template.
Fixes#630, fixes#631
- override the staging's name_get to provide a slightly more useful
display_name (though still not great as the staging object remains
quite technical and inimical to human interaction)
- show individual PRs in a batch (as m2m tags) for readability
- update PR views to show the author and reviewer, except in the list
of delegations of users where it's a lot less useful
/cc #632
Currently deactivating a branch kinda leaves users in the dark, with
little way to know what has happened aside from inferring it from the
branch having disappeared from the main dashboard.
- surface the state of the branch in the PR dashboard (also surface
the target branch at all so users can see if their PR is targeted
as they expect as far as the mergebot is concerned)
- close & notify every PR to a branch being deactivated
- cancel any current staging to the branch (as a consequence of the
above)
Closes#632
Previously if a branch name could not be line-broken (because it was
full of underscores) it would break the layout by making "its"
staging's column much wider than the expected 1/2 / 1/4 /
1/6 (depending on window width), compressing the width of its
sibling's columns.
By disabling content-based width (only taking in account flex-width)
and setting overflow to hidden, the overlong branch names get cut off
instead.
- if stderr has been rerouted or explicitly redirected to STDOUT,
  `e.stderr` is `None` and the error reporting blows up (which is
  inconvenient). Handle this case.
- handle the case where `fp_github_name` has not been configured (it's
not a super useful handling but meh, apparently git/hub doesn't
really care if there's a username when using API tokens)
- minor improvement to the refline parser: per-spec, the trailing
newline is optional, so don't fail if it's missing
The previous version of the code assumed `pr['body']` is always a
string, which is not correct, when the PR body is emptied the body
itself is removed (its value is `None`).
Add a case for this in the PR edition test, and avoid blowing up (or
adding empty newlines) when the PR body is empty. For PR creation this
issue was fixed in c2db5659d8 but
apparently I missed that the exact same issue occurs just a few lines
above.
Also turns out github does *not* send change information when the body
is updated from (or to?) `None`, so don't even bother with that, just
check every time if the overall message has been updated.
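Roughly the intended behaviour (a sketch; the field paths are
assumptions):

    def handle_pr_edited(env, repo_name, pr_data):
        # github does not reliably send `changes` when the body goes to or
        # from None, so just recompose the message and compare every time
        pr = env['runbot_merge.pull_requests'].search([
            ('repository.name', '=', repo_name),
            ('number', '=', pr_data['number']),
        ])
        body = (pr_data['body'] or '').strip()
        message = pr_data['title'].strip() + (f'\n\n{body}' if body else '')
        if pr and pr.message != message:
            pr.message = message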
Fixes#629
Stop *staging* release PRs: they are normally fairly simple and should
not fail their staging outside of unreliable tests (or possibly a few
edge cases e.g. forgot one version change thing), however staging them
creates the possibility of a "version hole" on the release branch
which is undesirable.
Instead, immediately and unconditionally push the release commits onto
the newly created branches, if there are things which don't work they
can be fixed afterwards (and the process refined, maybe).
Also add the same feature for *bump* PRs, with the difference that the
bump PRs are not created / requested by default (they have to be opted
in individually).
For convenience, add a feature which automatically finds the PRs via
inputting the label (not really tested yet).
Closes#603
Because the searching of the PR occurs *right* after the PR was
created on the server, despite the additional operations (status,
approval) it's apparently possible for the lookup of the new PR to
occur about the same time the PR is being created, kinda, maybe?
On DS it triggers very reliably for every PR but the first. By moving
the retrieval to after the repo timeout & the cron runs, we all but
guarantee the PRs are visible (it's possible for things to fail on
grounds of github reliability, but then they'd have failed *even
more* before).
- add a logging entry for PR updates
- change the generic log entry to log the sender (of the event) rather
than the PR author
- fix the post-facto PR loader to more systematically and reliably set
a `sender`
It's likely that the PR pages are seen more commonly than the
dashboard by most users, so add alerts there in case users wonder
what's happening.
Fixes#580
The log message only indicated whether the PR was squashed or not, but
that's not actually useful.
Improve the message to log the actual merge method, for
information. The old "squash" (aka squash flag set and no merge
method, since an actual squash merge method was reintroduced a while
ago) has been renamed to "single" for the purpose of this display.
Before this, until the first status arrived for a required context,
the status would appear as pending but would behave oddly (e.g. not
clickable).
Update the style of such statuses for clarity:
- use a light background to show them as inactive
- use the `wait` cursor to show their status as oddball (and not clickable)
Setting this styling on the link (or even `li`) doesn't seem to work,
so set it on the `ul`, the actual active links will set the relevant
"active" cursor instead, which seems to work fine.
While at it, extract the status menu to its own template and unify the
disparate bits, mainly in that both the main dashboard and the
per-branch list display the staging instant in UTC on hover: before,
the main dashboard would display a relative delta and provide the
UTC-formatted instant on hover, but the branch would only show a zoned
ISO-8601 instant.
While adjusting is easy, it's unnecessary: we can just as easily
provide the UTC staging instant there.
Old messages were quite inconsistent in their pinging of the PR author
and reviewer.
Reviewed messages (probably missed some but...) and try to more
consistently ping when the feedback requires some sort of action in
order to proceed.
Fixes#592
A few fixes and improvements after testing the feature:
- ensure the provisioned users are created as internal (not portal)
- assume oauth is installed and just crash if it's not
- handle a user not having an email (ignore)
- return value from json handler, otherwise JsonRequest sends no
payload which is *weird*
Stagings would be cancelled automatically if the PR's commits were
updated, but not if the target (base) was changed, even though that
has a drastic impact on staging.
Add hooks to unstage PRs if their base is updated, or if their message
is updated and relevant to staging (merge or rebase-merge methods).
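In outline (a sketch; `unstage` exists but the exact hook is
simplified here):

    from odoo import models

    class PullRequests(models.Model):
        _inherit = 'runbot_merge.pull_requests'

        def write(self, vals):
            for pr in self:
                if 'target' in vals and vals['target'] != pr.target.id:
                    pr.unstage("target (base) branch was changed")
                elif vals.get('message') not in (None, pr.message) \
                        and pr.merge_method in ('merge', 'rebase-merge'):
                    # the message ends up in the merge commit for these methods
                    pr.unstage("merge message was updated")
            return super().write(vals)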
Fixes#604
Extract the creation of a PR link (to github) as a dedicated
template for easier updates, and to use `display_name`
everywhere (instead of reimplementing it by name).
Also implement support for repo-level groups setting for information
hiding.
Fixes#590
When a staging's fast-forward (to the target branch) fails, the
mergebot would provide no useful information on the staging or the
dashboard.
This is because the reason was set to the HTTP status, which in case
of a fast-forward error is just "422 client error: unprocessable
entity".
Improve this by trying to parse github's response in that case, and
using the JSON error message as failure reason. This provides more
useful failure information like "update is not a fast forward",
"reference does not exist", or a branch protection failure.
Closes#591
- code in the various menus added over time through the UI (queues,
configuration, ...)
- update / improve PR layout a tick
- fix "outstanding forward ports" count on the dashboard
- improve hover title / help on dashboard
- add date of last modification (usually date of success / failure)
- make casing more coherent (everything lowercase)
- add an explicit note that the UTC date on the "staged at" label is
  the staged-at datetime
- rediscover yet again that the staging information shows when
  hovering on the staging *except on the "staged at" label*
- improve `PullRequest.unstage` to always insert the PR at the start of the
reason when cancelling the staging, for clarity / traceability
Closes#560, closes#609
New accounts endpoint such that the SSO can push new pre-configured
users / employees directly. This lowers maintenance burden.
Also remove one of the source partners from the merge test, as
ordering seems wonky for unclear reasons leading to random failures of
that test.
On #509, the rebasing process was changed to forcefully update the
commit date of the commits, in order to force trigger builds.
However when squashing was re-enabled for #539 for some fool reason it
implemented its own bespoke rebasing (despite that not actually saving
any API call that I can see), meaning it did *not* update the commit
date. As such, an old commit being squashed would not get picked up by
the runbot, which is what happened to odoo/documentation#1226 (which
ultimately had to be hand-rebased after some confusion as to why it
did not work).
Update `_stage_squash` to go through `rebase` the normal way, also
update `rebase` to pop the commit date entirely instead of setting it
manually, and update the squashing test to check that the commit date
gets properly updated.
Fixes#579, closes#582
Multiple fixes to various issues of the freeze wizard.
Some of the less fix-oriented sub-tasks were moved to #586 for discussion instead.
Closes#559, #587
Currently limited to release/freeze PRs: it can be difficult to be
sure the right PR was selected then, and a mistake there seems more
impactful than in the PRs being waited for?
Note: adds a test to make sure I don't break the check that all
release PRs must have the same label (be linked). This was
already safe, and in a way this PR adds convenience but not
really safety, but better sure than sorry.
- add flag to not select repos for freezing
- allow removing more repositories from the wizard
- when performing the freeze, only create branches for the selected
repos
The freeze wizard was implemented using a single action to open and
validate the dialog.
This was a mistake, as it means if there are no errors left (e.g. all
the PRs being waited for are now validated) trying to view the freeze
wizard will immediately validate it and commit the freeze, which is
unexpected, surprising, and unsafe e.g.
- open wizard
- add freeze prs
- add a required pr or two
- close and go do something else
- be told that more PRs need to be waited for
- reopen wizard
- oops freeze is done
So split the "open action" part of `action_freeze` into opening the
action and performing the freeze. The "freeze" / "view freeze" button
on the project only activates the latter, and the actual freeze
operation is only triggered from the wizard's "Freeze" button.
Part of #559.
Provides a less manual interface for creating the freeze:
* takes the name of the branch to create
* takes any number of PRs which must be part of the freeze
* takes PRs representing the HEADs of the new branches
Then essentially takes care of the test.
Implementation of the actual wizard is not trivial but fairly
straightforward and linear; the biggest issue is not being able to
use `project_id.branch_ids[1]` to get the new branch, not sure why
but it seems to ignore the ordering, and clearing the cache doesn't
fix it.
When creating the branches, add a sleep after each one for secondary
rate limiting purposes. Same when deleting branches.
Also the forwardbot has been updated to disable the forwardport cron
while a freeze is ongoing, this simplifies the freezing process.
Note: after recommendation of @aab-odoo, tried using `_applyChanges`
in `_checkState` but it simply did not work: the two relational fields
got completely frozen and were impossible to update, which is less
than ideal. Oh well, hopefully it works well enough like this for now.
Could not reproduce it in a shell, but in the original version
`self.env.cr.rowcount` would always be 0 after the `modified`.
Turns out the check is really completely dumb, because if we got any
match in `select for update` we're going to find the same one in the
update, and thus the conditional is unnecessary. I've no idea why I
did that.
Anyway remove the conditional and just always try to unstage the PR.
To prep for the addition of the freeze wizard:
* move projects out of `pull_requests.py`
* then realize half the methods there have no relation to projects and
move them to more relevant places in `pull_requests.py`
* update corresponding crons (and tests using those crons) as the
methods have changed model, and the cron definitions thus need to be
updated
* split update to labels out of sending feedback comments while at it:
labels are not used much during tests so their manipulation can be
avoided; and labels are not as urgent as feedback so the crons can
be quite a bit slower
* move the project view out of `mergebot.xml` as well
Following #531, reviews from reviewers without an email set are
rejected.
For delegates this isn't very helpful, however for specifically
configured reviewers we can warn the configurer that they need to set
an email for things to work out.
* Adds a changelog page, linked from the main, with content
automatically loaded from the source. To avoid conflicts, each entry
is its own file and entries are grouped by the month during which
the update will (probably) be deployed
* The last group (most likely "last update") doesn't have a title, the
rest do.
* Add changelog entries from the last update so it's not too empty.
* Also update the layout for the alerts a bit: remove bottom margin to
reduce loss of whitespace.
When a commit is lacking the purpose (?) tag e.g. `[FIX]`, `[IMP]`,
..., a normal commit message of the form `<module>: <info>` matches
the looks of a git pseudo-header.
This results in the commit rewrite rejiggering the entire thing and
breaking the message by moving the title to the pseudo-headers and
mis-promoting either the `closes` line or body content to "title",
resulting in a really crappy commit message
e.g. odoo/odoo@d4aa9ad031.
Update the commit rewriting procedure to specifically skip the title
line, and re-inject it without processing in the output.
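In outline (a sketch; `split_trailers` is a hypothetical helper
standing in for the existing pseudo-header handling):

    def rewrite_message(message, extra_trailers):
        # never treat the first line as a pseudo-header, even when it looks
        # like one (e.g. "web: fix layout" with the [TAG] missing)
        title, _, rest = message.partition('\n')
        body, trailers = split_trailers(rest)  # hypothetical helper
        trailers += extra_trailers             # e.g. the closes / signed-off lines
        return '\n\n'.join(filter(None, [
            title.strip(), body.strip(), '\n'.join(trailers)]))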
Fixes#540
Re-introduce a "squash" mode solely for the purpose of fixing up
commit messages without having to go and edit them: for now "squash"
only works for single-commit PRs, acts as a normal
integration (`rebase-ff`) *but* replaces the message of the commit
itself by that of the PR, similar to the `merge` modes.
This means maintainers can update commit messages to standards by
editing the PR description (though this is obviously sensitive to
editing races with the original author).
Fixes#539
The list of outstanding forwardports was pretty messy as the ordering
was unclear and there was little way to really drill into the thing.
* Shows outstanding forward ports sorted by merged date ascending, the
oldest-merged PRs are the ones most in need of fixing while PRs
which were only just merged can safely be ignored.
* List reviewers with outstanding forward-ports, allow filtering by
  clicking on their name, and allow deselecting through the subtitle
  of the page.
* Don't display reviewer in list when page is already filtered by
reviewer.
Also improve PR page a bit:
* Add reviewer.
* Add direct link to backend (closes#524).
Closes#529
Because only the first staging failure is considered "hard" and will
put the PR in error, when looking at staging logs it's possible to see
the same PR get staged over and over and over again, which is quite
confusing.
To make the logs less weird, always log a staging failure even when it
doesn't put the PR in error. Sadly this can't be tested as `capsys` is
not able to intercept a stderr inherited by a child process (capfd
doesn't work either).
Fixes#527
If a reviewer doesn't have an email set, the Signed-Off-By is an
`@users.noreply.github.com` address which just looks weird in the
final result.
Initially the thinking was that emails would be required for users to
*be* reviewers or self-reviewers, but since those are now o2ms / m2ms
it's a bit of a pain in the ass.
Instead, provide an action to easily try and fetch the public email of
a user from github.
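Roughly (a sketch; token management and error feedback are elided):

    import requests

    def fetch_public_email(login, token):
        # GET /users/{login} exposes the user's *public* email, if any
        r = requests.get(
            f'https://api.github.com/users/{login}',
            headers={'Authorization': f'token {token}'},
        )
        r.raise_for_status()
        return r.json().get('email')  # None if no public email is set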
Fixes#531
Since we have the model fields loaded up, we can just check into that
and assume anything that's not a field is a method.
That avoids having to go through `_call`, making things way less awkward.
After internal discussions it was concluded that this didn't extend
much more trust than allowing authors to accept their single-PR
commits without additional supervisions, and it would avoid some
inconveniences and PR-blocking.
Fixes#69 (nice)
Because sometimes github updates are missed (usually because github
never triggers it), it's possible for the mergebot's view of a PR
description to be incorrect. In that case, the PR may get merged with
the wrong merge message entirely, through no fault of the user.
Since we already fetch the PR info when staging it, there's very
little overhead to checking that the PR message we store is correct
then, and update it if it's not. This means the forward-port's
description should also be correct.
While at it, clean the forward port PR's creation a bit:
- there should always be a message since the title is required on
PRs (only the body can be missing), therefore no need to check that
- as we're adding a bunch of pseudo-headers, there always is a body,
no need for the condition
- inline the `pr_data` and `URL`: they were extracted for the support
of draft PRs, since that's been removed it's now unnecessary
Fixes#530
The page of PRs in "error" is currently kinda broken: it does not show
any feedback aside from the PR being in error which is not very
useful.
The intent was always to show an explanation, but when adding the
page I just deref'd `staging_id`, which always fails, though in two
different ways:
* when the PR can not be staged at all (because of a conflict) there
is no staging at all with a reason to show, so there should be
a fallback that the PR could not even be staged
* `staging_id` is a related field which deref's to the staging_ids
of the first *active* batch, except when a staging completes
(successfully or not) both staging and batch are disabled.
Plus the first batch will be the one for the first staging so if the
PR is retried and fails again the wrong reason may be displayed.
So update the section to show what we want: the reason of the
staging of the *last* batch attached to the PR.
NOTE: there's one failure mode remaining, namely if a staging fails
then on retry the PR conflicts with the new state of the
repository (so it can't be staged at all), the "reason" will
remain that of the staging. This could be mitigated by attaching
a "nonsense" batch on failure to stage (similar to the
forwardport stuff), that batch would have no staging, therefore
no staging reason, therefore fallback.
Closes#525
Draft was added in 82174ae66e but turns
out the v13 ORM is not able to create a required column (even when
given a default value), at least for booleans.
So create it by hand.
On staging failure, the 'bot would point to the first error or failure
status it found on the commit. This turns out not to be correct as
we (now) have various statuses which are optional, and may fail
without blocking stagings (either because they're solely informational
or because they're blocking & overridable on PRs).
Fix this so the 'bot points to the first *required* failure.
Fixes#517
* Remove the forwardport creating PRs in draft, that was mostly to
avoid codeowners triggering but we've removed the github one and
hand-rolled it, so not a concern anymore.
* Prevent merging `draft` PRs, the mergebot rejects approval on draft
PRs and insults people.
TBD (maybe): try to create *conflicting* forward-port PRs in draft so
it's clearer they need to be *fixed*? Issue of not being able to do
that on all private repositories remains so~~
Fixes#500