When posting a reminder that there are open / waiting forward ports on
a source PR, also post *which* PRs those are.
While at it, move the cron code into a proper Python file (so we can
use stuff from odoo.tools), and fix display_name so it can be used
directly as a github ref ({owner}/{repo}#{number}). This impacts
log-grepping but it seems like an improvement nonetheless.
Closes odoo/runbot#228
* shorten the postfix, forwardbot is now a bigram!
* shorten the uniquifier: go from 5 to 3 bytes and use urlsafe base64,
that way we only have a 4-char uniquifier instead of 8 (sketched below,
after this list)
* while at it, fix deprecated calls to logging.warn (should be
logging.warning)
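
A minimal sketch of the shortened uniquifier (the exact derivation from
random bytes is an assumption; only the sizes come from this change):
3 bytes are 24 bits, which encode to exactly 4 urlsafe-base64
characters with no padding, where 5 bytes needed 8 characters.

    import base64
    import os

    def uniquifier():
        # 3 bytes -> 24 bits -> exactly 4 base64 chars, no padding
        return base64.urlsafe_b64encode(os.urandom(3)).decode()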
Fixes #226
Attempt to avoid some of the comment spam by dedup-ing input (only
signaling when the status actually changes and ignoring identity
transformations), and in case of failing CI, keeping the last failed
status and not signaling on the next update if it's the same failure.
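
A rough sketch of the dedup-ing idea (the status fields compared here
are an assumption, not the actual mergebot schema):

    def should_notify(previous, new):
        # identity transformation: nothing changed, say nothing
        if new == previous:
            return False
        # already-failed CI failing again on the same context: keep the
        # last failed status but don't re-signal
        if previous and previous.get('state') == 'failure' \
                and new.get('state') == 'failure' \
                and previous.get('context') == new.get('context'):
            return False
        return True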
Closes #225
The staging validation routine would ignore stagings which were
cancelled or ff_failed, but it should also have ignored failed and
successful stagings, i.e. all terminal states.
Simplify the condition for that: just ignore a staging's validation if
the staging is not pending.
Closes #211
Turns out we don't want to close the cursor on success, we just want to
commit, but that's not what the default context manager does.
So don't use said context manager.
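
Roughly, the intent (a sketch, not the actual cron code):

    cr = self.env.cr
    try:
        process(cr)   # hypothetical unit of work
        # commit explicitly: unlike the default context manager this
        # keeps the cursor open for further use
        cr.commit()
    except Exception:
        cr.rollback()
        raise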
If a _validate call blows up, the entire Commit._notify cron gets
stuck, which is an issue because not only does it stop creating
forward ports, it also stops "progressing" stagings.
If the CI is greatly backed up (either insufficient capacity or a jobs
spike), a timeout which is normally perfectly fine might be
insufficient: e.g. given a 2h timeout, if a job normally takes 80mn but
the staging's job starts 40mn after the staging was actually created,
we're sunk. And cancelling the staging once the job has finally gotten
started is not going to improve load on the CI, it just wastes a CI
slot.
Therefore assume a `pending` event denotes the actual start of the job
on the CI, and reset the timeout to start from that moment so
ci_timeout is the timeout of the CI job itself, not of the staging
having been created.
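
A sketch of the intent (names are assumptions, not the actual model):

    import datetime

    def reset_timeout(staging, ci_timeout):
        # a `pending` status marks the actual start of the CI job:
        # restart the clock so ci_timeout (in minutes) bounds the job
        # itself rather than the age of the staging
        staging.timeout_limit = datetime.datetime.utcnow() \
            + datetime.timedelta(minutes=ci_timeout)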
Closes #202
Having all the feedback be sent by the mergebot user (github_token) is
confusing. Add a way to specify which field of the project should be
used to source the token used when sending feedback.
Fixes #190
* Cherrypicking is handrolled because there seems to be no easy way to
programmatically edit commit messages during the cherrypicking
sequence: `-n` basically squashes all commits and `-e` invokes a
subprocess. `-e` with `VISUAL=false` kinda sorta works (in that it
interrupts the process before each commit), however there doesn't
seem to be clean status codes so it's difficult to know if the
cherrypick failed or if it's just waiting for a commit of this step.
Instead, cherrypick commits individually then edit / rewrite their
commit messages (see the sketch after this list):
* add a reference to the original commit
* convert signed-off-by to something else as the original commit was
signed off but not necessarily this one
* Can't assign users when creating PRs: only repository collaborators
or people who commented on the issue / PR (which we're in the
process of creating) can be assigned.
PR authors are as likely to be collaborators as not, and we can have
non-collaborator reviewers. So pinging via a regular comment seems
less fraught as a way to notify users.
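
As referenced above, a rough sketch of the hand-rolled cherry-pick
loop (git invocations and message markers are illustrative, not lifted
from the forwardbot):

    import subprocess

    def cherrypick(working_copy, commits):
        # commits: ordered list of {'sha': ..., 'message': ...}
        for c in commits:
            # apply the change without committing so we control the message
            subprocess.run(['git', 'cherry-pick', '--no-commit', c['sha']],
                           cwd=working_copy, check=True)
            message = c['message']
            # the original commit was signed off, this one not necessarily
            message = message.replace('Signed-off-by:', 'Original-signed-off-by:')
            # keep a reference to the originally integrated commit
            message += '\n\nX-original-commit: %s' % c['sha']
            subprocess.run(['git', 'commit', '-m', message],
                           cwd=working_copy, check=True)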
Prepares for more complex editing operations on the forwardbot side
* split out the pseudo-headers from the message body (sketched below)
* don't separate the co-authored-by headers from the others, seems
unnecessary, we just need to ensure they're at the end so github
doesn't miss them (/it)
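
A sketch of that split, assuming pseudo-headers are the trailing
"Key: value" lines of the message:

    import re

    HEADER = re.compile(r'^[\w-]+: ')

    def split_message(message):
        lines = message.rstrip().splitlines()
        headers = []
        # peel trailing "Key: value" lines off the body, keeping order
        while lines and HEADER.match(lines[-1]):
            headers.insert(0, lines.pop())
        return '\n'.join(lines).rstrip(), headers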
* split action_cancel (UI button) from cancel (internal): since the
xhr mapping is weird, if there are available args the mapper thinks
it should pass the call context as reason, which is unexpected (see
the sketch after this list)
* make cancel a no-op when called on already inactive stagings
* make cancel work when called on multiple stagings
* make computing the active staging work properly in an
active_test=False context (e.g. when it's interacted with from the
form view because that comes from the list view which is
active_test=False, probably so we can see not just the stagings but
recursively see deactivated batches in deactivated stagings)
* don't show the cancel button on inactive stagings
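
A simplified sketch of the action_cancel / cancel split referenced in
the list above (actual signatures and state handling may differ):

    def action_cancel(self):
        # UI entry point: takes no argument, so the xhr mapper cannot
        # accidentally pass the call context as `reason`
        return self.cancel()

    def cancel(self, reason=None, *args):
        for staging in self:
            if not staging.active:
                # no-op on already-inactive stagings
                continue
            staging.state = 'cancelled'
            staging.active = False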
Stagings have a "statuses" field which was shown but useless (as it's
a binary); they also have a "heads" field which only provides a
mapping of repository names to commits.
This change provides the staging heads as a commits m2m.
Fixes #178
* extract method to create a PR object from a github result (from the
PR endpoint)
* move some of the remote's fixtures to a global conftest (so they can
be reused in the forwardbot)
In case of error while fast-forwarding a staging to its source, we'd
log the target to which we couldn't FF. Sadly this relied on a
`repo_name` variable which (likely since the introduction of the
"safety dance" fast forwarding) can not actually be set in case of
failure.
So stash the relevant bit (the repo name) inside the FF error exception
and use that to compose our logging message instead of a variable which
can only be None.
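
A sketch of the fix (the exception name is an assumption):

    class FastForwardError(Exception):
        def __init__(self, repo_name, *args):
            super().__init__(*args)
            # stash the repo we failed to fast-forward, for logging
            self.repo_name = repo_name

    # at the logging site:
    #     except FastForwardError as e:
    #         _logger.warning("Could not fast-forward %s", e.repo_name)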
Github enforces a single issue (/PR) number sequence per repository, so
having different targets does not allow two PRs to share a number.
Doesn't fix but should mitigate #169 slightly.
Before this change the mergebot assumes github's tags are in sync with
its "previous" state, but because the tags update was highly non-atomic
(one call per removal plus one for additions) and the state can further
change between a failure and an update retry (especially as the labels
endpoint fails *a lot*), it's possible for the tags set on github to be
completely desync'd from the mergebot state, leading to very misleading
indications on the PR.
This first fetches the current tag state from github (to not lose non-
mergebot tags) then (hopefully atomically) resets all tags based on
the current mergebot state. This should avoid desyncs, and eventually
resync PRs (if they change state).
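
Roughly (github's labels endpoints are real, the mergebot-managed label
set is illustrative): a PUT on the issue's labels replaces the entire
set in a single call, so non-mergebot labels are fetched first and
merged back in.

    MERGEBOT_LABELS = {'merging', 'merged', 'error'}  # illustrative subset

    def reset_tags(session, repo, number, wanted):
        # session: a pre-authenticated requests.Session
        url = 'https://api.github.com/repos/%s/issues/%s/labels' % (repo, number)
        current = {l['name'] for l in session.get(url).json()}
        keep = current - MERGEBOT_LABELS    # don't lose non-mergebot tags
        # PUT replaces all labels at once, hopefully atomically
        session.put(url, json={'labels': sorted(keep | wanted)})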
Fixes #170
On a PR being updated, closed or unreviewed, if it is part of an
active staging that staging would get cancelled (yay). However, if the
PR was part of a pending *split*, then the split would *not* get
cancelled / updated (to remove the PR from it), and the PR could go on
to get staged as if everything were right in the world which is an
issue.
It doesn't look like it actually happened (at least I got no echo of
it), but it almost did at least once.
fixes #160
Also add test for it & feedback of an approved PR failing CI, and fix
corner case with it (might not send a warning immediately on CI failure
depending on status requirement ordering).
Fixes #158
* when rebasing, store a map of rebased to source, that way it'll be
possible to link cherry-picked forward ports to the originally
integrated commit rather than just the one from the PR (which was
likely not itself integrated as the straight merge mode is somewhat
rare: as of 5600 PRs merged so far only 100 were straight merged)
* while at it, store the "merge head" of the PR (whether squashed,
merged or rebased) and put *that* in the commit message
fixes #161
Sometimes people add co-authored-by lines in the middle of their
message, where github ignores them.
Since we previously added properly handling existing (correct) C-A-B
lines in the case where we're adding fixes and signed-off-by, we might
as well fix up existing but misplaced co-authored-by lines.
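
A minimal sketch of the fixup (the regex and re-attachment point are
assumptions):

    import re

    CAB = re.compile(r'^co-authored-by:.*$', re.IGNORECASE | re.MULTILINE)

    def fixup_coauthors(message):
        coauthors = CAB.findall(message)
        if not coauthors:
            return message
        # strip the lines from wherever they were written and re-append
        # them at the very end, where github actually looks for them
        body = CAB.sub('', message).rstrip()
        return body + '\n\n' + '\n'.join(coauthors)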
Fixes #107
The race condition which prompted STAGING_SLEEP rears its ugly head
again: when pushing a base repo and its dependents, it's possible for
the update to the base repo's new head to take much longer to be visible
than the dependents (or so it seems?).
In this case, CI might pick up the correct dependent but pick an older /
incorrect revision of the base, leading to a staging failing for no good
reason.
This change uses info/refs to check for the updated staging head to be
visible at the repo level after it's been set / updated via the API. It
assumes repos are in topological order.
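
A sketch of that visibility check (URL layout and polling policy are
assumptions):

    import time

    import requests

    def wait_visible(repo_url, sha, retries=30):
        # poll the repo's advertised refs until the freshly pushed
        # staging head shows up, then move on to the dependent repos
        for _ in range(retries):
            refs = requests.get(repo_url + '/info/refs',
                                params={'service': 'git-upload-pack'}).text
            if sha in refs:
                return True
            time.sleep(1)
        return False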
Use the proper / actual "is there any stageable PR" query to check if
a PR is blocked as well, that way they shouldn't be diverging all the
time even if it might make PR.blocked a bit more expensive.
fixes #111
Will comment any time a statuses update folds to a CI failure on a
reviewed pull request. Might be somewhat spammy, we'll see.
No notification if the PR is not reviewed yet.
fixes #87
Before this, impacting a commit's statuses on the relevant PR or
staging would be performed immediately / inline with its
consumption. This, however, is problematic if we want to implement
additional processing like #87 (and possibly though probably not #52):
webhook handlers should be kept short and fast, feeding back into
github would not be acceptable.
- flag commits as needing processing instead of processing them
immediately, this uses a partial index (sketched below) as it looks
like the recommended / proper way to index a boolean column in which
one of the values is searched much more than the other (todo:
eventually check if that actually does anything)
- add a new cron for commits processing
- alter tests so they use this new cron (mostly by migrating them to
`run_crons` though not solely as some still need more detailed
management to properly check intermediate steps)
Fix an issue with closing a staged PR while at it (the "merging" tag
would potentially never be removed).
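
The partial index mentioned above would look something like this (table
and column names are assumptions):

    def init(self):
        # only index the rows the new cron actually searches for,
        # since to_check = true is much rarer than false
        self.env.cr.execute("""
            CREATE INDEX IF NOT EXISTS runbot_merge_commit_to_check
            ON runbot_merge_commit (to_check)
            WHERE to_check
        """)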
Proper RFC5322 makes for much noisier messages, and seems completely
unnecessary as examples of sign-off on the internet don't quote spaces
/ names.
closes #102
* split out truly awaiting PRs from those waiting on an event of some
sort
* if a staging is active but doesn't have a state yet, it should be
considered pending not cancelled
closes #74
If a PR gets sync'd to a known-valid commit, it should be marked as
valid rather than get in this weird state where it's merely open but
github knows it passes CI.
Fixes #72
This is somewhat less useful with runbot's fail-fast as a runbot
failure (false positive or not) will now very quickly trigger an end
to the current staging.
Still, could be of use.
closes #89
Turns out skipping locks is not very useful when there are no locks
being held because we only touch the PRs *after* the merge has been
applied.
So finally do that, lock all of a staging's PRs before we try to
fast-forward the relevant repositories, so a close command coming back
from github (from having seen the closes #xxx annotation) doesn't
screw us over.
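
A sketch of the locking (table name and field path are assumptions):

    def _lock_prs(self):
        # take row locks on every PR of the staging before the
        # fast-forward, so a concurrent "closed" webhook transaction
        # has to wait for our commit instead of racing us
        self.env.cr.execute(
            'SELECT id FROM runbot_merge_pull_requests WHERE id IN %s FOR UPDATE',
            [tuple(self.mapped('batch_ids.prs.id'))],
        )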
No test because I don't understand how / why it's triggered, it's just
that some PRs don't have a label. I assumed the issue occurred when
the source branch or even repo (cross-repo PR) was deleted, but that
doesn't seem to trigger the issue (or in any case not in as short a
time as a test; maybe GH eventually does some vacuuming which causes
the issue?).
Anyway we may eventually want to reclaim these PRs (allowing a lack of
label and treating them like the patch-\d labels: with no semantic
value); however, the simplest thing to do for now is to just ignore the
corresponding PR.
closes #101
Github is subject to a fair amount of transient failures, which are
currently ill-logged: an exception is raised and the caller /
responsible party might eventually log something, but it's not really
formalised or centralised, which makes it inconvenient to post-mortem
issues with github's support.
Change this such that *almost* all github API calls get extensively
logged (status, reason, all headers, body) on failure.
Also automatically sets debug logging for odoo in local tests, and
alters the fake response constructor so it doesn't set a json
mimetype when the body is not valid json.
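
A sketch of the centralised failure logging (the helper name is an
assumption):

    import logging
    import pprint

    _logger = logging.getLogger(__name__)

    def check(response):
        # log status, reason, headers and body of any failed API call
        if not response.ok:
            _logger.error(
                "%s %s -> %s %s\nheaders: %s\nbody: %s",
                response.request.method, response.url,
                response.status_code, response.reason,
                pprint.pformat(dict(response.headers)),
                response.text,
            )
        return response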
Closes #98
When a user changes the merge method via github messages, no feedback is
sent. This could lead to strange behavior, for example when a user tries
to joke with the mergebot like this:
> robodoo are you goin ti merge my PR rogntudju !
This sets the merge method to "merge" and the user is not aware of it.
Before this change, the order of PRs to list in an "unready" feedback
message was whatever the DB returned, which could vary. This change
fixes the order by applying model order.
If a transient github failure makes the integration fail but also
makes the following reset fail the entire staging process would be
cancelled (and operations so far rolled back) except for the failure
comment, which would still go through, as in odoo/odoo#26380.
By pushing the comment to the feedback queue, if the reset fails the
comment is rolled back and "unqueued".
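
A sketch of the queuing (model and field names are assumptions): the
comment only exists as a row in the current transaction, so rolling
back the staging also unqueues it before anything hits github.

    def post_comment(env, repository, pr_number, message):
        # picked up and actually sent by the feedback-sending cron
        env['runbot_merge.pull_requests.feedback'].create({
            'repository': repository.id,
            'pull_request': pr_number,
            'message': message,
        })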
Hopefully this finally fixes the double commenting
issue (e.g. odoo/odoo#28160). This seems (according to reading the
logs and also logic) like a failure (concurrency of some sort) leading
to a transaction rollback *after* the GH API call, so the cron's DB
actions are rolled back (then performed again on the next run) *but*
the GH API interactions are not rolled back, and are thus performed
twice.
Since the entire purpose of the feedback table is to send comments,
send both "staging failed" and "Linked pull requests" comments via
that.
Once more unto the breach, with the issue of pushing stagings (with
"closes" annotations) to the target branch making GH close the PR &
send the hook, which makes runbot_merge consider the PR closed and the
staging cancelled.
This probably still doesn't fix the issue, but it reduces the
problematic window: before this, the process first updates the
branches, then marks the PRs, then comments & closes the PRs, and
finally commits the PR update.
This means as runbot_merge is sending a comment & a status update to
each PR in a staging, GH has some time to send the "closed" webhook
behind its back, making the controller immediately cancel the current
staging, especially if the v3 endpoint is a bit slow.
By moving the commenting & closing out of the critical path (to the
feedback queue), this window should be significantly shortened.