As the odds of having more projects, or more repos with different
requirements within the same project, increase, so does the need for
different sets of reviewers for different repositories.
As a result, rather than being trivial boolean flags, the review info
should probably depend on the user / partner and the repo. Turns out
the permission checks had already been extracted into their own
function, so most of the mess comes from testing utilities which went
and configured their review rights as needed.
Incidentally it might be that the test suite could just use something
like a sequence of commoditized accounts which get configured as
needed and not even looked at unless they're used.
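A minimal sketch of that shape, assuming odoo-style models (the model
and field names here are illustrative, not necessarily the actual
schema):
```python
from odoo import fields, models

class ReviewRights(models.Model):
    # hypothetical link between a partner and their rights on one repo
    _name = 'res.partner.review'

    partner_id = fields.Many2one('res.partner', required=True)
    repository_id = fields.Many2one('runbot_merge.repository', required=True)
    review = fields.Boolean(default=False)       # may r+ other people's PRs
    self_review = fields.Boolean(default=False)  # may r+ their own PRs
```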
Before this change, `r-` on a pr[p=0] does essentially nothing. At
most it will unstage if the PR had been (somewhat unnecessarily) r+'d
in the past, but then the PR will get re-staged immediately.
To avoid this odd behaviour, when r- is sent to a p=0 PR, not only is
the PR unreviewed (if it was reviewed), it always gets unstaged, and
its priority gets reset to 1 (high priority, but doesn't bypass CI and
review). Also send a comment on that subject so followers of the PR
are notified.
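Roughly, as a sketch (`unreview`, `unstage` and `comment` are
hypothetical helpers, not the actual command implementation):
```python
def handle_r_minus(pr):
    if pr.reviewed:
        pr.unreview()    # drop the r+ if there was one
    if pr.priority == 0:
        pr.unstage()     # always, a no-op if the PR wasn't staged
        pr.priority = 1  # high priority, but no CI / review bypass
        pr.comment("PR priority reset to 1, as p=0 was dropped via r-")
```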
Fixes #313
The staging cron was already essentially split between "check if one
of the stagings is successful (and merge it)" and "check if we should
create a staging" as these were two separate loops in the cron.
But it might be useful to disable these two operations separately,
e.g. we might want to stop the creation of new stagings but let the
existing stagings complete.
The actual splitting is easy but it turns out a bunch of tests were
"optimised" to only run the merge cron. Most of them didn't blow up
but it seems more prudent to fix them all.
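As a sketch, the split could look like this (the method names are made
up, not the actual cron entry points):
```python
def _check_stagings(env):
    # "check if one of the stagings is successful (and merge it)"
    for staging in env['runbot_merge.stagings'].search([('active', '=', True)]):
        staging.check_status()

def _create_stagings(env):
    # "check if we should create a staging"
    for branch in env['runbot_merge.branch'].search([('active', '=', True)]):
        branch.try_staging()
```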
fixes odoo/runbot#310
The PR creation had been fixed to always validate even without a
commit found (in case there was no need for a commit), but the update
of a PR in such a situation was not tested, and thus naturally did not
work because why would it work if it wasn't tested?
Also remove the conditional skip on updating a PR to a new head.
The test was checking that things would work properly with
required_statuses being an empty string, because I'd also forgotten
that an empty field gets stored as `False` in the database; so when
trying things out live, neither the PRs nor the staging would work, as
their assumption that they could straight split the required_statuses
always failed.
Update the test to better match expectations, and hopefully this is
the end of that saga.
PRs transitioning to 'ready' had been checked and tested, but it turns
out I had completely forgotten to test that stagings would validate
properly, and therefore of course they didn't.
The issue here was I'd forgotten `''.split(',')` returns `['']` rather
than `[]`, so on an empty required_statuses the staging validator
would keep looking for a status matching the context `''` and would
never find it, keeping the staging pending until timeout. So most
likely the problem could have been resolved by just adding a condition
to
[r.strip() for r in repomap[c.sha].required_statuses.split(',')]
but I'd already done all the rest of the reorganisation by that point,
the tests pass, and I think it's somewhat better logic. Therefore I'll
go with that for now.
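For reference, the gotcha, and the kind of guard that would have
papered over it (a sketch, not the actual fix; `required_contexts` is
a made-up name):
```python
# splitting an empty string does not yield an empty list
assert ''.split(',') == ['']
assert 'ci/runbot,legal/cla'.split(',') == ['ci/runbot', 'legal/cla']

# a guarded variant of the comprehension quoted above, also covering
# the field being stored as False
def required_contexts(required_statuses):
    if not required_statuses:
        return []
    return [r.strip() for r in required_statuses.split(',')]
```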
* properly handle empty required_statuses during staging validation
* remove the final postcondition: if we're missing commits which don't
require any statuses we should not care
* expand test to include up to merging PRs
* automatically create dummy commits when creating stagings, that way
the relevant commits are in the database (can't hurt)
PS: another alternative would have been to filter out or skip ahead
on commits which don't require any statuses, aka cmap &
required_statuses / cmap would not even have that entry
Refactor the selection thingie, hopefully in a way which doesn't
absolutely crater performance, so that it's possible to explain the
reason why a PR is considered blocked.
Despite the existing dedup', sometimes the "xxx failed on this
forward-port PR" message would still get multiplied due to split
builds, e.g. in odoo/odoo#43935 4 such messages appear within ~5
minutes, then one more 10 minutes later.
This is despite all of them having the same "build" (target_url) and
status (failure). Since the description is the only thing that's not
logged I assume that's the field which varies and makes the dedup'
fail. Therefore:
* add the description to the logging (when getting a status ping)
* exclude the description when checking if a new status should be
  taken into account or ignored: the build (and thus url) should change
  on rebuild
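In other words the dedup key becomes something like this (a sketch
over github's status payload fields):
```python
def status_key(status):
    # context, state and build url identify a status; the description
    # is excluded as it can vary between pings for the same build
    return (status['context'], status['state'], status.get('target_url'))

def should_ignore(previous, incoming):
    return previous is not None and status_key(previous) == status_key(incoming)
```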
Hopefully fixes #281
A while back I implemented name_get/display_name to print PRs using
the canonical github format (owner/repo#number), however it looks like
some of the logging calls were still using bespoke formatting.
Interaction of CacheMiss and BaseModel is fucked, leading to an
infinite loop when trying to provide useful __str__ on a model (by
accessing model fields).
Using `modified` seems safer than just blowing the cache with respect
to stored computed fields depending on PR state (not sure there are
any but it's likely).
Previous version incorrectly browsed the PR *number* (rather than ID)
so at best it would do nothing and at worst it might go and notify the
wrong PR entirely.
Discussing #238 with @odony, the main concern was the difficulty of
understanding whether things merged in one repo were related to things
merged in another repo: currently, knowing this requires going to the
merged PR, getting its label, and checking the PRs with the same HEAD
in the other repository to see if there's a correlation (e.g. PRs
merged around the same time).
The current structure of the mergebot makes it reasonably easy to add
the other PRs of the batch in the pseudo-headers, such that we get
links to all "related" PRs in the head commit (and links back from the
commits which is probably less useful but...)
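In the head commit's message that could look something like this (the
header name and PR numbers are illustrative):
```text
Related: odoo/enterprise#1234
Related: odoo/design-themes#567
```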
Fixes #238
1. if we try to stage a PR and realize we'd stored / checked the wrong
head, cancel the staging and notify the PR
2. provide a command to forcefully update pr heads (or at least check
that a PR's head is up to date)
Closes #241
When closing a PR, github completely separates the events "close the
PR" and "comment on the PR" (even when using "comment and close" in
the UI, a feature which isn't even available in the API). It doesn't
aggregate the notifications either, so users following the PR for
one reason or another get 2 notifications / mails every time a PR
gets merged, which is a lot of traffic, even more so with
forward-ported PRs multiplying the amount of PRs users are involved
in.
The comment on top of the closure itself is useful though: it allows
tracking exactly where and how the PR was merged from the PR itself;
this information should not be lost.
While more involved than a simple comment, *deployments* seem like
a suitable solution: they allow providing links as permanent
information / metadata on the PRs, and apparently don't trigger
notifications to users.
Therefore, modify the "close" method so it doesn't do
"comment-and-close", and provide a way to close PRs with non-comment
feedback: when the feedback's message is structured (parsable as
json), assume it's intended as a deployment-bound notification.
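A rough sketch of that dispatch (`create_deployment` and
`post_comment` are hypothetical helpers, not the actual feedback
code):
```python
import json

def deliver_feedback(pr, message):
    try:
        payload = json.loads(message)
    except (TypeError, ValueError):
        payload = None

    if isinstance(payload, dict):
        # structured feedback: attach it as a (non-notifying) deployment
        create_deployment(pr, environment=payload.get('environment', 'merge'),
                          target_url=payload.get('target_url'))
    else:
        post_comment(pr, message)
```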
TODO: maybe add more keys to the feedback event payload, though in my
tests (odoo/runbot#222) none of the deployment metadata
outside of "environment" and "target_url" is listed on the PR
UI
Fixes #224
It's a waste to lose the entire staging if it's only a short blip /
delay thing, so retry multiple times. Add a utility function to make
backoff functions easier (though the UI is not great ATM).
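The helper could be as simple as this sketch (the actual signature and
delay schedule may differ):
```python
import time

def backoff(fn, delays=(1, 2, 4, 8)):
    """Call ``fn`` until it succeeds, sleeping between attempts."""
    for delay in delays:
        try:
            return fn()
        except Exception:
            time.sleep(delay)
    return fn()  # final attempt, let the exception propagate
```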
Also log the "left" parent of a merge commit (which should be the
"base") when creating it, for additional post-mortem information.
Turns out not only can that operation fail, it can also succeed but
have its effect delayed. To try and guard against that, immediately
check that we get the correct ref' after having reset it.
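A minimal version of that guard (`set_ref` and `head` mirror the calls
visible in the log below):
```python
def set_ref_checked(repo, branch, sha):
    repo.set_ref(branch, sha)
    head = repo.head(branch)
    if head != sha:
        raise AssertionError(
            "reset of %s to %s did not take effect, head is %s"
            % (branch, sha, head))
```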
This is the cause of the November 6 mess: when preparing a staging,
the mergebot does the following:
1. get the head of <branch>
2. hard-reset tmp.<branch> to that
3. start merging PRs, which requires getting the current state of
tmp.<branch> back
On the 6th, these steps looked like this:
```text
2019-11-06 10:03:21,588 head(odoo/odoo, master) -> ab6d0c38512e4944458b0b6f80f38d6c26b6b597
2019-11-06 10:03:22,375 set_ref(update, odoo/odoo, tmp.master, ab6d0c38512e4944458b0b6f80f38d6c26b6b597 -> 200 (OK)
2019-11-06 10:03:28,674 head(odoo/odoo, tmp.master) -> de2a852e7cc1f390e50190cfc497bc253687fba8
2019-11-06 10:03:30,292 head(odoo/odoo, tmp.master) -> de2a852e7cc1f390e50190cfc497bc253687fba8
```
So the 'bot fetched the commit at the head of master (ab6d0c), reset
tmp.master to that... and then got a different commit when it fetched
the tmp head to stage a PR on it.
That different head was, of course, a previously rejected staging.
When the new staging succeeded, it brought the entire thing in and
made a mess.
This was compounded by an issue I still have to investigate: the
staging of the new PR took the wrong base commit *but the right base
tree*; as a result, the first thing it did was *reverse the entire
previous commit* (without that we could probably have left it as-is
rather than needing to force-push master -- twice).
When posting a reminder that there are open / waiting forward ports on
a source PR, also post *which* PRs those are.
While at it, move the cron code into a proper python file (so we can
use stuff from odoo.tools), and fix display_name so we can use
display_name directly as a github ref' ({owner}/{repo}#{number}). This
impacts log-grepping but it seems like an improvement nonetheless.
Closes odoo/runbot#228
* shorten the postfix, forwardbot is now a bigram!
* shorten the uniquifier: go from 5 to 3 bytes, and use urlsafe base64;
  that way we only have a 4-char uniquifier instead of 8 (see below)
* while at it, fix deprecated calls to logging.warn (should be
logging.warning)
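Concretely (using os.urandom as the byte source, which is an
assumption):
```python
import base64
import os

# 3 random bytes encode to exactly 4 urlsafe base64 characters (no
# padding), where 5 bytes would have needed 8
uniquifier = base64.urlsafe_b64encode(os.urandom(3)).decode()
assert len(uniquifier) == 4
```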
Fixes #226
Attempt to avoid some of the comment spam by dedup-ing input (only
signaling when the status actually changes and ignoring identity
transformations) and, in case of failing CI, keeping the last failed
status and not signaling on the next update if it's the same failure.
Closes #225
The staging validation routine would ignore stagings which were
cancelled or ff_failed, but it should also have ignored failed and
successful stagings, aka all terminal states.
Simplify the condition for that: just ignore a staging's validation if
the staging is not pending.
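Which boils down to a guard like this (a sketch):
```python
def _validate(staging):
    # cancelled, ff_failed, failure and success are all terminal: only
    # pending stagings have anything left to validate
    if staging.state != 'pending':
        return
    # ... actual validation ...
```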
Closes #211
Turns out we don't want to close the cursor on success, we just want to
commit, but that's not what the default context manager does.
So don't use said context manager.
If a _validate call blows up, the entire Commit._notify cron gets
stuck, which is an issue because not only does it stop creating
forward ports, it also stops "progressing" stagings.
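The shape of the fix, roughly (illustrative, not the actual cron
code):
```python
import logging
_logger = logging.getLogger(__name__)

def notify_all(cr, commits):
    # commit after each unit of work, but keep the cursor alive so a
    # single failing _validate doesn't wedge the entire cron
    for commit in commits:
        try:
            commit._validate()
            cr.commit()
        except Exception:
            cr.rollback()
            _logger.exception("failed to validate %s, skipping", commit)
```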
If the CI is greatly backed up (either insufficient capacity or jobs
spike) a timeout which is normally perfectly fine might be
insufficient e.g. given a 2h timeout, if a job normally takes 80mn but
the staging's job starts 40mn after the staging was actually created
we're sunk. And cancelling the staging once the job has finally gotten
started is not going to improve load on the CI, it just wastes a CI
slot.
Therefore assume a `pending` event denotes the actual start of the job
on the CI, and reset the timeout to start from that moment so
ci_timeout is the timeout of the CI job itself, not of the staging
having been created.
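Schematically (the field names are assumptions):
```python
from datetime import datetime, timedelta

def on_pending(staging, ci_timeout):
    # the CI job just started: restart the countdown so ci_timeout
    # measures the job itself, not the age of the staging
    staging.timeout_limit = datetime.now() + timedelta(minutes=ci_timeout)
```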
Closes #202
Having all the feedback be sent by the mergebot user (github_token) is
confusing. Add a way to specify which field of the project should be
used to source the token used when sending feedback.
Fixes #190
* Cherrypicking is handrolled because there seems to be no easy way to
programmatically edit commit messages during the cherrypicking
sequence: `-n` basically squashes all commits and `-e` invokes a
subprocess. `-e` with `VISUAL=false` kinda sorta works (in that it
interrupts the process before each commit), however there don't
seem to be clean status codes so it's difficult to know whether the
cherrypick failed or if it's just waiting for a commit of this step.
Instead, cherrypick commits individually then edit / rewrite their
commit messages (see the sketch after this list):
* add a reference to the original commit
* convert signed-off-by to something else as the original commit was
signed off but not necessarily this one
* Can't assign users when creating PRs: only repository collaborators
or people who commented on the issue / PR (which we're in the
process of creating) can be assigned.
PR authors are as likely to be collaborators as not, and we can have
non-collaborator reviewers. So pinging via a regular comment seems
less fraught as a way to notify users.
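The handrolled sequence, sketched (the `git` wrapper and
`rewrite_message` are hypothetical):
```python
def cherrypick(repo, commits):
    for sha in commits:
        repo.git('cherry-pick', sha)
        message = repo.git('log', '-1', '--format=%B')
        # reference the original commit and neuter its signed-off-by,
        # since this new commit wasn't necessarily signed off
        message = rewrite_message(message, original=sha)
        repo.git('commit', '--amend', '-m', message)
```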
Prepares for more complex editing operations on the forwardbot side
* split out the pseudo-headers from the message body
* don't separate the co-authored-by headers from the others, seems
unnecessary, we just need to ensure they're at the end so github
doesn't miss them (/it)
* split action_cancel (UI button) from cancel (internal): since the
xhr mapping is weird, if there are available args the mapper thinks
it should pass the call context as reason which is unexpected
* make cancel a no-op when called on already inactive stagings
* make cancel work when called on multiple stagings
* make computing the active staging work properly in an
active_test=False context (e.g. when it's interacted with from the
form view because that comes from the list view which is
active_test=False, probably so we can see not just the stagings but
recursively see deactivated batches in deactivated stagings)
* don't show the cancel button on inactive stagings
Stagings have a "statuses" field which was shown but useless (as it's
a binary); they also have a "heads" field which only provides a
mapping of repository names to commits.
This change provides the staging heads as a commits m2m.
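A sketch of what that could look like, assuming heads is stored as a
JSON mapping (the compute and field names are illustrative):
```python
import json
from odoo import api, fields, models

class Stagings(models.Model):
    _inherit = 'runbot_merge.stagings'

    head_ids = fields.Many2many('runbot_merge.commit', compute='_compute_head_ids')

    @api.depends('heads')
    def _compute_head_ids(self):
        for st in self:
            shas = list(json.loads(st.heads or '{}').values())
            st.head_ids = self.env['runbot_merge.commit'].search([('sha', 'in', shas)])
```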
Fixes #178
* extract method to create a PR object from a github result (from the
PR endpoint)
* move some of the remote's fixtures to a global conftest (so they can
be reused in the forwardbot)
In case of error while fast-forwarding a staging to its source, we'd
log the target to which we couldn't FF. Sadly this relied on a
`repo_name` variable which (likely since the introduction of the
"safety dance" fast forwarding) cannot actually be set in case of
failure.
So stash the relevant bit (the repo name) inside the FF error exception
and use that to compose our logging message instead of a variable which
can only be None.
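That is, roughly (the exception and routine names are illustrative):
```python
import logging
_logger = logging.getLogger(__name__)

class FastForwardError(Exception):
    """Carries the name of the repository which failed to fast-forward."""
    def __init__(self, repo_name):
        super().__init__(repo_name)
        self.repo_name = repo_name

def stage(staging):
    try:
        fast_forward_all(staging)  # stand-in for the actual FF routine
    except FastForwardError as e:
        _logger.warning("failed to fast-forward %s", e.repo_name)
```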
Github constrains a single issue (/PR) number per repository; having
different targets does not allow two PRs to share a number.
Doesn't fix but should mitigate #169 slightly.
Before this change, the mergebot assumed github's tags were in sync
with its "previous" state, but because the tags update was highly
non-atomic (one call per removal plus one for additions) and state can
further change between a failure and an update retry (especially as
the labels endpoint fails *a lot*), it's possible for the set of tags
(in github) to be completely desync'd from the mergebot state, leading
to very misleading on-PR indications.
This first fetches the current tag state from github (to not lose non-
mergebot tags), then (hopefully atomically) resets all tags based on
the current mergebot state. This should avoid desyncs, and eventually
resync PRs (if they change state).
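Schematically (a sketch over github's replace-all labels call;
`MERGEBOT_TAGS` and the helpers are assumptions):
```python
def reset_tags(gh, pr, state_tags):
    current = set(gh.labels(pr))
    # keep everything the mergebot doesn't manage, then add the tags
    # matching the PR's current mergebot state
    desired = (current - MERGEBOT_TAGS) | state_tags
    # a single replace-all call rather than one call per add / remove
    gh.set_labels(pr, sorted(desired))
```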
Fixes #170
On a PR being updated, closed or unreviewed, if it is part of an
active staging, that staging would get cancelled (yay). However, if
the PR was part of a pending *split*, the split would *not* get
cancelled / updated (to remove the PR from it), and the PR could go on
to get staged as if everything were right in the world, which is an
issue.
It doesn't look like it actually happened (at least I got no echo of
it), but it almost did at least once.
fixes #160