If a staging covered multiple repositories and there was a
fast-forward issue on any but the first repo/target, runbot_merge
attempted to revert the commits it had fast-forwarded on the previous
repos.
This doesn't work when branch-protection is active, unless
runbot_merge is a repository administrator (and branch protection is
not configured to apply to administrators): reverting is done by
force-pushing the original head back onto the ref, which branch
protection unconditionally precludes.
This commit does not entirely fix the race condition (it does not
look like github provides any way to do that), but it should
significantly reduce the window, as it performs a semi-wet run of the
fast-forward process on the tmp branches before actually updating the
targets. That way the only remaining breakage should be when somebody
pushes to one of the repositories after the first, between the test
fast-forward on the tmp branches and the actual fast-forward.
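
A minimal sketch of the two-phase shape, using github's update-a-ref
endpoint (with force disabled it only accepts fast-forward updates);
the staging tuples, error type and exact branch layout here are
illustrative, not runbot_merge's actual internals:

    import requests

    class FastForwardError(Exception):
        pass

    def git_ff(session, repo, branch, sha):
        # with force disabled, github rejects any non-fast-forward
        # update of the ref, which is exactly the check we want
        r = session.patch(
            'https://api.github.com/repos/%s/git/refs/heads/%s'
                % (repo, branch),
            json={'sha': sha, 'force': False})
        if r.status_code != 200:
            raise FastForwardError(repo, branch, r.json())

    def stage(session, stagings):
        # stagings: list of (repo, tmp_branch, target_branch, head)
        # phase 1: semi-wet run on the tmp branches; a failure here
        # aborts the staging before the targets are ever touched
        for repo, tmp, target, head in stagings:
            git_ff(session, repo, tmp, head)
        # phase 2: the actual fast-forwards; the remaining race
        # window is just the gap between the two loops
        for repo, tmp, target, head in stagings:
            git_ff(session, repo, target, head)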
While at it, updated the github API thing to *always* dump the JSON body
on an error response, if the content-type is json.
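
In the vein of this sketch (helper name assumed):

    import logging

    _logger = logging.getLogger(__name__)

    def check_response(response):
        # on any error response whose content-type is json, dump
        # the body before raising: it is usually the useful part
        if not response.ok and response.headers.get(
                'content-type', '').startswith('application/json'):
            _logger.error(
                "%s %s -> %s %s",
                response.request.method, response.request.url,
                response.status_code, response.json())
        response.raise_for_status()
        return response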
Previously, runbot_merge assumed github would return commits in
topological order (from base to head of the PR). However, as in the
UI, github sorts commits by the author's date field, so depending on
rebases/cherrypicks/... it's possible for the head commit of the PR
to be "younger" than the rest. In that case robodoo will try to merge
it *first*, then attempt to merge the rest on top of it (-ish, it'd
probably make a hash of it if that worked), at which point github
replies with a 204 (nothing to merge) because the PR head already
includes everything which topologically precedes it.
Fix: re-sort commits topologically when fetching the PR's log. That
way they're rebased in the proper order and correctly linked to one
another.
Example problematic PR: odoo/enterprise#2794, the commits are
773aef03a59d50b33221d7cdcdf54cd0cbe0c914
author.date: 2018-10-01T14:58:38Z
879547c8dd37e7f413a97393a82f92377785b50b (parent: 773aef03)
author.date: 2018-10-01T12:02:08Z
Because 879547c8 is "older" than 773aef03, github returns it first,
both in the UI and via the API.
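
A sketch of the topological re-sort over the commit objects github
returns (each has a sha and a parents list), assuming the PR contains
no merge commits so the chain is linear:

    def topo_sort(commits):
        by_sha = {c['sha']: c for c in commits}
        # the root is the one commit whose parent lies outside the
        # PR (e.g. 773aef03 above, whose parent is in the target)
        [root] = [
            c for c in commits
            if not any(p['sha'] in by_sha for p in c['parents'])
        ]
        # map each sha to its child within the PR, walk the chain
        children = {
            p['sha']: c
            for c in commits
            for p in c['parents']
            if p['sha'] in by_sha
        }
        ordered = [root]
        while len(ordered) < len(commits):
            ordered.append(children[ordered[-1]['sha']])
        return ordered

On the example above this yields 773aef03 then 879547c8, regardless
of author dates.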
Also fixed up support for committer & author metadata in fake_github
so the local tests would both expose the issue properly and allow
fixing it.
Rather than blow up with a JSON decode error and take down the cron,
convert it to a MergeError in order to put the PR in error, and try
to dump however much data we can.
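
Roughly (a sketch; the helper name is made up, and MergeError stands
in for runbot_merge's existing exception):

    class MergeError(Exception):
        pass

    def json_or_error(response):
        # a decode error would otherwise escape and take down the
        # cron; MergeError flags just this PR and carries whatever
        # data we managed to get out of the response
        try:
            return response.json()
        except ValueError:
            raise MergeError(
                "got non-JSON response from github: %s %s (%s)" % (
                    response.status_code,
                    response.reason,
                    response.text,
                ))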
It should be unnecessary: creating commits directly does not update
the ref (hence 2b1cd83b07) and we're
forcefully setting the ref afterwards, either resetting it to the
original head (for rebase) or updating it to the commits we've just
created (for squash).
A limitation to 50-commit PRs was put in place to avoid rebasing huge
PRs (as a rebase means 1 merge + 1 commit *per source commit*);
however, the way it was done would also limit regular merges, and the
implementation of the limitation was unclear.
* explicitly check that limit in the rebase case
* make error message on PR sizes (rebase 50 or merge 250) clearer
* remove limit from commits-fetching (check it beforehand)
* add a test to merge >50 commits PRs
* fix the local implementation of pulls/:number/commits to properly
  paginate (sketched below)
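
For that last point, the pagination can look something like this
sketch (page size and Link-header shape modeled on github's, names
assumed):

    import math

    PER_PAGE = 100  # github's page size for this endpoint

    def paginate(commits, page=1):
        # return one page of commits plus the Link header github
        # would send, so client-side page-walking can be tested
        last = max(1, math.ceil(len(commits) / PER_PAGE))
        page = min(max(page, 1), last)
        links = []
        if page < last:
            links.append('<?page=%d>; rel="next"' % (page + 1))
        if page > 1:
            links.append('<?page=%d>; rel="prev"' % (page - 1))
        headers = {'Link': ', '.join(links)} if links else {}
        return commits[(page - 1) * PER_PAGE:page * PER_PAGE], headers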
a0063f9df0 slightly improved the error message on non-PR ci failures
(e.g. a community PR breaking enterprise) by adding the failed
commit, but that's still not exactly clear, even for technical users
(plus it requires access to all the repos, which not everyone has).
This commit further improves the situation by storing the target_url
and description fields of the commit statuses, and printing out the
target_url on failure if it's present.
That way the PR comment denoting build failure should now have a link to
the relevant failed build on runbot, as that's the target_url it
provides.
The change is nontrivial as it tries to be compatible with both the
old and new statuses storage formats, such that there is no migration
to perform.
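
A sketch of the compatibility shim, assuming the old format stored a
bare state string per context and the new one a dict:

    def to_status(value):
        # old: {context: state}; new: {context: {'state': ...,
        # 'target_url': ..., 'description': ...}}; normalise to
        # the new shape so readers never have to care
        if isinstance(value, dict):
            return value
        return {'state': value, 'target_url': None, 'description': None}

    def failed_statuses(statuses, required_contexts):
        # yield (context, target_url) for each failed required
        # status; the url, when github provided one, links to the
        # failed build (runbot sets it to the build page)
        for context in required_contexts:
            st = to_status(statuses.get(context, 'pending'))
            if st['state'] in ('failure', 'error'):
                yield context, st.get('target_url')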
After discussion with al & rco, the conclusion was that the default
PR merge method should be rebase-and-merge, for a cleaner history.
Add a test for that scenario (including a check of the final DAG) and
implement the change.
The old "sync pr" thing is turning out to be a bust, while it
originally worked fine these days it's a catastrophe as the v4 API
performances seem to have significantly degraded, to the point that
fetching all 15k PRs by pages of 100 simply blows up after a few
hundreds/thousands.
Instead, add a table of PRs to sync: if we get notified of a
"compatible" PR (enabled repo & target) which we don't know of,
create an entry in a "fetch jobs" table; a cron then handles fetching
the PR and fetching/applying all the relevant metadata (statuses,
review-comments and reviews).
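
As a rough Odoo-model sketch (field and method names illustrative,
not the actual schema):

    from odoo import fields, models

    class FetchJob(models.Model):
        # webhook handlers only enqueue unknown-but-relevant PRs
        # here; the cron performs the actual (slow) API calls
        _name = 'runbot_merge.fetch_job'

        active = fields.Boolean(default=True)
        repository = fields.Many2one('runbot_merge.repository', required=True)
        number = fields.Integer(required=True)

        def _check(self):
            # cron entry point: fetch the PR then its statuses,
            # review comments and reviews (error handling elided)
            for job in self.search([('active', '=', True)]):
                job.repository._load_pr(job.number)  # hypothetical helper
                job.active = False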
Also change the indexes on Commit(sha) and PR(head) to hash, as
btree indexes are not really sensible for such content (the ordering
is unhelpful and the index locality is awful by design/definition).
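
In PostgreSQL terms that boils down to something like the following,
here wrapped in Odoo's init hook (index/table names are assumptions
derived from the model names):

    from odoo import models

    class Commit(models.Model):
        _inherit = 'runbot_merge.commit'

        def init(self):
            # shas are only ever looked up by equality, and btree
            # ordering/locality buys nothing on hash-like content
            self.env.cr.execute("""
                CREATE INDEX IF NOT EXISTS runbot_merge_commit_sha_idx
                    ON runbot_merge_commit USING hash (sha)
            """)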
Turns out PATCH /git/refs/:ref returns a 422 when the ref does not
exist, rather than the 404 I'd expected.
Also improve the error message by including the JSON body, which
tends to be more descriptive/helpful than Github's HTTP reason
phrase.
Maybe I should replace all raise_for_status() calls by printing the
JSON body instead...
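
i.e. something along these lines (a sketch; the error types are
illustrative):

    import requests

    def update_ref(session, repo, branch, sha, force=False):
        r = session.patch(
            'https://api.github.com/repos/%s/git/refs/heads/%s'
                % (repo, branch),
            json={'sha': sha, 'force': force})
        if r.status_code == 200:
            return r.json()
        if r.status_code == 422:
            # the ref likely does not exist: github answers 422
            # here, not 404, and the body explains it far better
            # than the reason phrase does
            raise LookupError('%s: %s' % (branch, r.json()))
        if r.headers.get('content-type', '').startswith(
                'application/json'):
            raise requests.HTTPError(r.json(), response=r)
        r.raise_for_status()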