Requirement for odoo/runbot#376: one can't expect there to be someone
around to override CI checks on stagings, so overriding only makes
sense for checks on PRs, which in turn requires that there can be
checks required only on PRs.
This could also be useful for features like incremental linting /
formatting: we may want to apply checks on PRs which filter on the
lines modified, without requiring the entire software to be
reformatted at once.
Having the required statuses be a mere list of contexts has become a
bit too limiting for our needs as it doesn't allow e.g. adding new
required statuses on only some branches of a repository (e.g. only
master), nor does it allow putting checks on only PRs, or only
stagings, which would be useful for overridable checks and the like,
or for checks which only make sense linked to a specific revision
range (e.g. "incremental" linting which would only check whatever's
been modified in a PR).
Split the required statuses into a separate set of objects, any of
which can be separately marked as applying only to some branches (no
branch = all branches).
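For illustration, a rough sketch of what such status objects could
look like (the model and field names below are guesses made for this
sketch, not necessarily the actual schema):

    from odoo import fields, models

    class RequiredStatus(models.Model):
        # hypothetical model name, for illustration only
        _name = 'runbot_merge.required_status'

        repo_id = fields.Many2one('runbot_merge.repository', required=True)
        context = fields.Char(required=True, help="status context, e.g. ci/runbot")
        # no branch selected = the status is required on every branch
        branch_ids = fields.Many2many('runbot_merge.branch')
        # where the status is enforced: PRs, stagings, or both
        prs = fields.Boolean(default=True)
        stagings = fields.Boolean(default=True)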
Fixes #382
While the head gets updated (properly), the squash flag did not, which
could lead to odd results. Since a PR can only be reopened if it was
regular-pushed to (not after a force push), there are two scenarios:
* the PR is updated to have 0 commits, closed, pushed to with one
commit, then reopened; after reopening the PR would be marked as
!squash and would ask for a merge method (that's what happened with
odoo/odoo#51763)
* the PR has a single commit, is closed, pushed to, then reopened;
after reopening the PR would still be marked as a squash and
potentially straight rebased without asking for a merge method
Nothing would break per se, but both scenarios are undesirable.
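The gist of the fix, as a sketch (handler and field names here are
illustrative, not the actual code): on reopen, resync the squash flag
from the payload's commit count the same way the head is resynced:

    def handle_reopened(pr, payload):
        pull = payload['pull_request']
        pr.write({
            'state': 'opened',
            'head': pull['head']['sha'],
            # squash must track the current commit count, not whatever was
            # recorded before the PR was closed and pushed to
            'squash': pull['commits'] == 1,
        })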
Close #373
The logic of the partner merge wizard is to collect all relevant data
from source partners, write them to a destination partner, then remove
the sources.
This... doesn't work when the field in question has a UNIQUE
constraint (like github_login): it will copy the value from a source
onto the dest, which blows the constraint, and so the copy fails. In
that case the user first has to *move over* the unique field's value,
then they can use the wizard.
Just fix this for the github login: take all sources, remove (and
store) their github logins, then write the login onto the dst.
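Roughly, as a sketch (the wizard model and hook are from memory and
may differ from the actual override):

    from odoo import models

    class MergePartnerAutomatic(models.TransientModel):
        _inherit = 'base.partner.merge.automatic.wizard'

        def _merge(self, partner_ids, dst_partner=None, extra_checks=True):
            partners = self.env['res.partner'].browse(partner_ids)
            sources = partners - dst_partner if dst_partner else partners
            # stash then clear the unique logins before the generic copy runs,
            # so writing onto dst can't blow the UNIQUE index
            login = next((p.github_login for p in sources if p.github_login), False)
            sources.write({'github_login': False})
            res = super()._merge(partner_ids, dst_partner=dst_partner,
                                 extra_checks=extra_checks)
            if dst_partner and login and not dst_partner.github_login:
                dst_partner.github_login = login
            return res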
An alternative would have been to *defer* the constraint, however:
* it only works on unique constraints, not unique indexes
* it requires the constraint to be declared DEFERRABLE
Closes #301
Approving a PR which failed CI should trigger a feedback message since
6cb58a322d (#158); the code has not been
removed and the tests still pass.
However fwbot r+ would go through its own process for r+, which would
explain why that feedback is sometimes gone / lost (cf #327 and #336).
* make fwbot r+ delegate to mergebot r+
* add dedicated logging for this operation to better analyze
post-mortem
* automatically ping the reviewer to specifically tell them they're idiots
* move the feedback item out of the state change bit, send it even if
it's a useless r+ (because it's already r+'d)
* add a test for forward-ports
Closes #327, closes #336
Rather than try to fix up the various places where we search and
wonder what index we should be using, make the column a CIText.
For mergebot the main use case would be properly handling
delegate=XXX: currently if XXX is not a case-sensitive match we're
going to create a new partner with the new github login and
give *them* the delegation, and the delegation to the intended target
isn't going to work correctly.
Also try to install the citext extension if it's not in the database,
and run the database-creation process with `check=True` so if that
fails we properly bubble up the error and don't try to run tests on a
corrupted / broken DB.
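Something along these lines (sketch only; creating the extension needs
sufficient database rights, and the exact hooks used may differ):

    from odoo import models

    class Partner(models.Model):
        _inherit = 'res.partner'

        def init(self):
            # best effort: make github_login case-insensitive at the SQL level
            self.env.cr.execute("CREATE EXTENSION IF NOT EXISTS citext")
            self.env.cr.execute(
                "ALTER TABLE res_partner"
                " ALTER COLUMN github_login TYPE citext"
            )

    # and in the test harness, bubble up createdb failures instead of running
    # tests against a broken database (dbname is a placeholder):
    #     subprocess.run(['createdb', dbname], check=True)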
Fixes #318
As the odds of having more projects, or more repos with different
requirements in the same project, increase, so does the need to have
different sets of reviewers for different repositories.
As a result, rather than being trivial boolean flags, the review info
should probably depend on the user / partner and the repo. It turns
out the permission checks had already been extracted into their own
function, so most of the mess comes from testing utilities which went
and configured their review rights as needed.
Incidentally it might be that the test suite could just use something
like a sequence of commoditized accounts which get configured as
needed and not even looked at unless they're used.
Before this change, `r-` on a pr[p=0] does essentially nothing. At
most it will unstage if the PR had been (somewhat unnecessarily) r+'d
in the past, but then the PR will get re-staged immediately.
To avoid this odd behaviour, if r- is sent to a p=0 PR, not only is
the PR unreviewed (if it was reviewed), it always gets unstaged, and
its priority gets reset to 1 (high priority but doesn't bypass CI and
review). Also send a comment on that subject so followers of the PR
are notified.
Fixes #313
The staging cron was already essentially split between "check if one
of the stagings is successful (and merge it)" and "check if we should
create a staging" as these were two separate loops in the cron.
But it might be useful to disable these two operations separately,
e.g. we might want to stop the creation of new stagings but let the
existing stagings complete.
The actual splitting is easy but it turns out a bunch of tests were
"optimised" to only run the merge cron. Most of them didn't blow up
but it seems more prudent to fix them all.
fixes odoo/runbot#310
The PR creation had been fixed to always validate even without a
commit found (in case there was no need for a commit), but the update
of a PR in such a situation was not tested, and thus naturally did not
work because why would it work if it wasn't tested?
Also remove the conditional skip on updating a PR to a new head.
The test was checking that things would work properly with
required_statuses being an empty string, because I'd also forgotten
that an empty field gets stored as `False` in the database, so when
trying things out live neither the PRs nor the stagings would work, as
their assumption that they could straight split the required_statuses
would always fail.
Update the test to better match expectations, and hopefully this is
the end of that saga.
PRs transitioning to 'ready' had been checked and tested, but it turns
out I had completely forgotten to test that stagings would validate
properly, therefore of course they didn't.
The issue here was that I'd forgotten `''.split(',')` returns `['']`
rather than `[]`, so on an empty required_statuses the staging
validator would keep looking for a status matching the context `''`
and would never find it, keeping the staging pending until timeout. So
most likely the problem could have been resolved by just adding a
condition to
[r.strip() for r in repomap[c.sha].required_statuses.split(',')]
but I'd already done all the rest of the reorganisation by that point,
the tests pass and I think the logic is somewhat better. Therefore
I'll go with that for now.
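For reference, the gotcha and the guarded version of the parse:

    >>> ''.split(',')
    ['']   # not [], so an empty required_statuses still yields one context
    >>> [r.strip() for r in ''.split(',') if r.strip()]
    []     # guarded: empty input really means "no required statuses"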
* properly handle empty required_statuses during staging validation
* remove the final postcondition: if we're missing commits which don't
require any statuses we should not care
* expand test to include up to merging PRs
* automatically create dummy commits when creating stagings, that way
the relevant commits are in the database (can't hurt)
PS: another alternative would have been to filter out or skip ahead on
commits which don't require any statuses aka cmap &
required_statuses / cmap would not even have that entry
The pytest suite had been partially unified between mergebot and
forwardport but because of session-scoped modules it could not run
across those.
Make the db cache lazy and able to cache multiple databases, and move
the "current required module" to function scope; this way things
should (and seem to) work properly on runs involving mergebot & fwbot.
Next step: xdist! (need to randomise repo names for that, probably).
Discussing #238 with @odony, the main concern was the difficulty of
understanding if things merged in one repo were related to things
merged in an other repo: currently, knowing this requires going to the
merged PR, getting its label, and checking the PRs with the same HEAD
in the other repository to see if there's a correlation (e.g. PRs
merged around the same time).
The current structure of the mergebot makes it reasonably easy to add
the other PRs of the batch in the pseudo-headers, such that we get
links to all "related" PRs in the head commit (and links back from the
commits which is probably less useful but...)
Fixes #238
1. if we try to stage a PR and realize we'd stored / checked the wrong
head, cancel the staging and notify the PR
2. provide a command to forcefully update pr heads (or at least check
that a PR's head is up to date)
Closes #241
It should already have been working; added an additional check for
update-then-retarget just in case, but that worked out of the box. So
not sure why odoo/odoo#40106 failed.
Closes #256
* add a sorted method on fake models
* fix recordset equality to ignore ids order
* when creating commits on a ref, add a param to only *update* the ref
(forcefully): when simulating a force-push we don't want to *create*
a ref as that might silently be done in the wrong repository entirely
* fix pytest.skip call at the module level, not sure where it came
from and why I missed it until now
The fw-bot testing API should improve the performance of mergebot
tests somewhat (less waiting around, for instance).
The code has been updated to the bare minimum (context-managing repos,
change to PRs and replacing rolenames by explicit token provisions)
but extra facilities were used to avoid changing *everything*
e.g. make_commit (singular), automatic generation of PR refs, ...
The tests should eventually be updated to remove these.
Also remove the local fake / mock. Being so much faster is a huge
draw, but I don't really want to spend more time updating it,
especially when fwbot doesn't get to take advantage. A local /
lightweight fake github (as an external service over http) might
eventually be a good idea though, and more applicable (including to
third-parties).
If the CI is greatly backed up (either insufficient capacity or a
spike of jobs) a timeout which is normally perfectly fine might be
insufficient, e.g. given a 2h timeout, if a job normally takes 80mn
but the staging's job starts 40mn after the staging was actually
created, we're sunk. And cancelling the staging once the job has
finally gotten started is not going to improve load on the CI, it just
wastes a CI slot.
Therefore assume a `pending` event denotes the actual start of the job
on the CI, and reset the timeout to start from that moment so
ci_timeout is the timeout of the CI job itself, not of the staging
having been created.
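In sketch form (field names are assumptions): when a pending status
arrives for a staging head, push the deadline back:

    from datetime import datetime, timedelta

    def status_received(staging, state):
        if state == 'pending':
            # the job just started on the CI: restart the clock so ci_timeout
            # measures the job itself rather than time-since-staging-creation
            staging.timeout_limit = datetime.now() + timedelta(
                minutes=staging.ci_timeout)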
Closes #202
Converge the pytest setups of runbot_merge and forwardport a bit
more (the goal is obviously to eventually share the infrastructure so
they run the same way).
* Cherrypicking is hand-rolled because there seems to be no easy way
to programmatically edit commit messages during the cherrypicking
sequence: `-n` basically squashes all commits and `-e` invokes an
editor subprocess. `-e` with `VISUAL=false` kinda sorta works (in that
it interrupts the process before each commit), however there don't
seem to be clean status codes so it's difficult to know if the
cherrypick failed or if it's just waiting for a commit of this step.
Instead, cherrypick commits individually, then edit / rewrite their
commit messages (see the sketch after this list):
  * add a reference to the original commit
  * convert signed-off-by to something else, as the original commit
    was signed off but not necessarily this one
* Can't assign users when creating PRs: only repository collaborators
or people who commented on the issue / PR (which we're in the
process of creating) can be assigned.
PR authors are as likely to be collaborators as not, and we can have
non-collaborator reviewers. So pinging via a regular comment seems
less fraught as a way to notify users.
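The hand-rolled cherrypick mentioned above, in sketch form (helper
names and the exact message markers are illustrative):

    import subprocess

    def cherrypick(commits, working_dir):
        for c in commits:
            subprocess.run(['git', 'cherry-pick', c['sha']],
                           cwd=working_dir, check=True)
            message = c['commit']['message']
            # neutralise the original sign-off (this new commit wasn't signed
            # off) and reference the commit it was picked from
            message = message.replace('Signed-off-by:', 'Original-signed-off-by:')
            message += '\n\nX-original-commit: %s' % c['sha']
            subprocess.run(['git', 'commit', '--amend', '-m', message],
                           cwd=working_dir, check=True)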
Prepares for more complex editing operations on the forwardbot side
* split out the pseudo-headers from the message body
* don't separate the co-authored-by headers from the others: it seems
unnecessary, we just need to ensure they're at the end so github
doesn't miss them (/it)
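A sketch of the splitting (not the actual implementation; blank-line
handling and the header pattern are simplified):

    import re

    PSEUDO_HEADER = re.compile(r'^[\w-]+: .+$')

    def split_message(message):
        # split a commit message into (body, trailing pseudo-headers) so the
        # headers (co-authored-by, signed-off-by, fixes, ...) can be edited
        # and re-appended last, where github expects them
        lines = message.rstrip().splitlines()
        headers = []
        while lines and PSEUDO_HEADER.match(lines[-1]):
            headers.append(lines.pop())
        return '\n'.join(lines).rstrip(), list(reversed(headers))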
A deactivated branch is generally treated as unmanaged, which is mostly
correct except for the case of retargeting an existing PR.
When a branch is deactivated the corresponding PRs are not removed, so
it's possible to have live PRs associated with ~unmanaged
branches. When retargeting those PRs to active branches, the mergebot
would assume there was no existing PR and would create a duplicate,
then either get completely lost (before
a84595ea04) or blow up (after the same).
Properly search amongst deactivated branches for retargeting sources
so we update the relevant PR instead of trying to create duplicates.
Fixes #169
* extract method to create a PR object from a github result (from the
PR endpoint)
* move some of the remote's fixtures to a global conftest (so they can
be reused in the forwardbot)
Before this change the mergebot assumes github's tags are in sync with
its "previous" state, but because the tags update was highly
non-atomic (one call per removal plus one for additions) and the state
can further change between a failure and an update retry (especially
as the labels endpoint fails *a lot*), it's possible for the tags set
in github to be completely desync'd from the mergebot state, leading
to very misleading on-PR indications.
This change first fetches the current tag state from github (so as not
to lose non-mergebot tags), then (hopefully atomically) resets all
tags based on the current mergebot state. This should avoid desyncs,
and eventually resync PRs (if they change state).
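Schematically (the `gh` helper here is hypothetical), using github's
"replace all labels" endpoint so the update is a single call:

    def resync_tags(gh, repo, number, wanted_tags, all_mergebot_tags):
        # labels currently on the PR, straight from github
        current = {l['name'] for l in gh('GET', 'repos/%s/issues/%s/labels' % (repo, number))}
        # keep labels we don't manage, replace ours wholesale
        keep = current - all_mergebot_tags
        gh('PUT', 'repos/%s/issues/%s/labels' % (repo, number),
           json={'labels': sorted(keep | wanted_tags)})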
Fixes #170
On a PR being updated, closed or unreviewed, if it is part of an
active staging that staging would get cancelled (yay). However, if the
PR was part of a pending *split*, then the split would *not* get
cancelled / updated (to remove the PR from it), and the PR could go on
to get staged as if everything were right in the world which is an
issue.
It doesn't look like it actually happened (at least I got no echo of
it), but it almost did at least once.
fixes #160
Also add a test for it & for the feedback on an approved PR failing
CI, and fix a corner case with it (might not send a warning
immediately on CI failure, depending on status requirement ordering).
Fixes #158
* when rebasing, store a map of rebased to source, that way it'll be
possible to link cherry-picked forward ports to the originally
integrated commit rather than just the one from the PR (which was
likely not itself integrated as the straight merge mode is somewhat
rare: as of 5600 PRs merged so far only 100 were straight merged)
* while at it, store the "merge head" of the PR (whether squashed,
merged or rebased) and put *that* in the commit message
fixes #161
Sometimes people add co-authored-by lines in the middle of their
message, where github ignores them.
Since we previously added proper handling of existing (correct) C-A-B
lines in the case where we're adding fixes and signed-off-by, we might
as well fix up existing but misplaced co-authored-by lines.
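The fix-up amounts to something like this (sketch, simplified):

    import re

    CAB = re.compile(r'^co-authored-by:', re.IGNORECASE)

    def fixup_coauthors(message):
        # pull co-authored-by lines out of the body and re-append them at the
        # end, which is the only place github actually picks them up
        body, coauthors = [], []
        for line in message.splitlines():
            (coauthors if CAB.match(line.strip()) else body).append(line)
        if not coauthors:
            return message
        return '\n'.join(body).rstrip() + '\n\n' + '\n'.join(coauthors)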
Fixes #107
Previously, creating a PR would validate the head (in case it had
already passed CI) but reopening it would not, which is inconvenient
as the CI would not automatically run on a reopened PR.
Update both the state and the head of the PR on reopen to force a
revalidation, that way if the head has already passed CI the PR will
be reopened validated and there won't be an unclear need to perform an
explicit CI run.
Fixes #119
Use the proper / actual "is there any stageable PR" query to check if
a PR is blocked as well, that way they shouldn't be diverging all the
time, even if it might make PR.blocked a bit more expensive.
fixes #111
Will comment any time a statuses update folds to a CI failure on a
reviewed pull request. Might be somewhat spammy, we'll see.
No notification if the PR is not reviewed yet.
fixes #87
Before this, impacting a commit's statuses on the relevant PR or
staging would be performed immediately / inline with its
consumption. This, however, is problematic if we want to implement
additional processing like #87 (and possibly though probably not #52):
webhook handlers should be kept short and fast, and feeding back into
github would not be acceptable.
- flag commits as needing processing instead of processing them
immediately; this uses a partial index as it looks like the
recommended / proper way to index a boolean column in which one of
the values is searched much more than the other (todo: eventually
check if that actually does anything; see the sketch after this list)
- add a new cron for commits processing
- alter tests so they use this new cron (mostly by migrating them to
`run_crons`, though not solely, as some still need more detailed
management to properly check intermediate steps)
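The partial index mentioned in the first item, roughly (table and
column names are assumptions for this sketch):

    from odoo import models

    class Commit(models.Model):
        _inherit = 'runbot_merge.commit'  # model name assumed

        def init(self):
            # only rows still flagged for processing get indexed: the usual
            # way to index a boolean where one value is searched far more
            self.env.cr.execute("""
                CREATE INDEX IF NOT EXISTS runbot_merge_commit_to_process
                    ON runbot_merge_commit (to_check)
                 WHERE to_check = true
            """)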
Fix an issue with closing a staged PR while at it (the "merging" tag
would potentially never be removed).
If a PR gets sync'd to a known-valid commit, it should be marked as
valid rather than get in this weird state where it's merely open but
github knows it passes CI.
Fixes #72
The choice to keep sync'd PRs in error means it's possible to update
the code and re-run the PR directly without it going through review &
CI again, which is a bit odd. Remove the special case and always reset
a sync'd PR to opened for clarity and simplicity.
closes #71, closes #83
In remote tests, if the deletion of a test repository fails (because
of a gh glitch) or the repo creation succeeded but reported a
failure (for some reason), the entire run is hosed because every test
trying to create a similarly named repository will explode.
Alter repomaker to just try to delete the repo, unless in --no-delete
mode, in which case just skip any further test trying to use the same
repository (not deleting the repo is the entire point of --no-delete,
as its purpose is the ability to do post-mortem debugging on
repository state).
closes #99
Github is subject to a fair amount of transient failures, which are
currently ill-logged: an exception is raised and the caller /
responsible might eventually log something, but it's not really
formalised and centralised, and is thus inconvenient to try and
post-mortem issues with github's support.
Change this such that *almost* all github API calls get extensively
logged (status, reason, all headers, body) on failure.
Also automatically set debug logging for odoo in local tests, and
alter the fake response constructor thing so it doesn't set a json
mimetype when the body is not valid json.
Closes #98
When a user changes the merge method via github messages, no feedback
is sent. This could lead to strange behavior, for example when a user
tries to joke with the mergebot like this:
> robodoo are you goin ti merge my PR rogntudju !
This sets the merge method to "merge" and the user is not aware of it.
If a transient github failure makes the integration fail but also
makes the following reset fail, the entire staging process would be
cancelled (and the operations so far rolled back) except for the
failure comment, which would still be effected, as in odoo/odoo#26380.
By pushing the comment to the feedback queue, if the reset fails the
comment is rolled back and "unqueued".
Tagging of closed PRs has been broken since the raw-SQL alterations of
the close hook: because it's raw SQL, the write() method doesn't get
invoked anymore, and as a result the tagging feedback record is not
created and never executed.
Add a test to check for the PR's proper tagging, and fix this issue by
explicitly creating a tagging record.
Closes #49
Hopefully this finally fixes the double commenting
issue (e.g. odoo/odoo#28160). This seems (according to reading the
logs and also logic) like a failure (concurrency of some sort) leading
to a transaction rollback *after* the GH API call, so the cron's DB
actions are rolled back (then performed again on the next run) *but*
the GH API interactions are not rolled back, and are thus performed
twice.
Since the entire purpose of the feedback table is to send comments,
send both "staging failed" and "Linked pull requests" comments via
that.
Once more unto the breach, with the issue of pushing stagings (with
"closes" annotations) to the target branch making GH close the PR &
send the hook, which makes runbot_merge consider the PR closed and the
staging cancelled.
This probably still doesn't fix the issue, but it reduces the
problematic window: before this, the process first updates the
branches, then marks the PRs, then comments & closes the PRs, and
finally commits the PR update.
This means as runbot_merge is sending a comment & a status update to
each PR in a staging, GH has some time to send the "closed" webhook
behind its back, making the controller immediately cancel the current
staging, especially if the v3 endpoint is a bit slow.
By moving the commenting & closing out of the critical path (to the
feedback queue), this window should be significantly shortened.
Runbot is supposed to run on v11, yet because the runbot_merge test
suite was built on v12 due to an API change in
Registry.enter_test_mode the (local) test suite would only run on v12.
Add a conditional pick such that the test suite can run transparently
on both v11 and v12.
Normally, two PRs from the same owner with the same branch
name (source) are batched together, and that's represented by batching
them by label e.g. two PRs labelled
`odoo-dev:12.0-snailmail-address-format-anp` means they probably
should be merged together somehow.
*However* this is an issue when editing via the web interface: if the
editor doesn't have write access to the repository, github
automatically creates a branch called `patch-<n>` under their
ownership, where `<n>` is a sequence (of sorts?) *within the user's
fork*.
This means it's possible (and indeed easy) to create <foo>:patch-1 on
different (non-fork) but related repositories without them actually
being co-dependent, at which point they get blocked on one
another, which can lead to them being blocked (period) due to runbot
not currently handling co-dependencies between PRs.
Currently, if a staging is ongoing or failed, one has to hunt for the
staging branches on the runbot dashboard in order to find out what is
happening.
This adds a dropdown to the staging box/block providing direct status
and access to all the CI information whether the CI is ongoing or done,
successful or not.
rebase() can fail after merge(), during set_ref(), having already
updated the target.
Under the pre-rebase model, stage() assumed that on a staging failure
for a given repo it only had to roll back the stagings which had
succeeded. This assumption fails in a post-rebase model, as even a
failed staging can have modified the target, leading to the next
staging (if multiple batches are ready) containing the failed one.
Things can get really strange if the set_ref failure was (as it
probably is) some sort of transient failure, as the following staging
will likely succeed (under the assumption that most PRs/batches pass
staging) as PR1's content gets merged as part of PR2, and PR2 is
merged empty of content later on.
Send response comments when users mis-interact with robodoo, e.g. send
comments they don't have the right to, or commands which don't make
sense in the PR's state, or tentative interactions with robodoo from
unmanaged PRs.
People get surprised/worried that their ready PR never gets picked up,
but it's because there is a non-ready (either unreviewed or failing CI)
pull request linked to it. This aims to at least warn them of the issue.
Taking 3 statements to create a branch before working with it is a bit
much, a simple helper reduces that to a single function call with 4
params (from 3 function calls with 1/4/1 params).
They're completely independent (or should be), and there's no reason
for the tagging cron to extend the "lifetime" of the main cron's
transaction (and thus increase the odds of racy behaviour).
If a staging covers multiple repositories and there's a fast-forward
issue on any but the first repo/target, runbot_merge attempted to
revert the commits it had fast-forwarded on the previous repos.
This doesn't work when branch-protection is active, unless runbot_merge
is a repository administrator (and branch protection is not configured
to apply to those): reverting is done by push-forcing the original head
back onto the ref, which branch-protection unconditionally precludes.
This commit does not entirely fix the race condition (it does not look
like github provides any way to do that), but it should significantly
reduce the race-condition window as it performs a semi-wet run of the
fast-forward process on the tmp branches before actually updating the
targets. That way the only remaining breakage should be when somebody
pushes on repositories 1.. between the test-FF on tmp branches and the
actual fast forward.
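Schematically (helper names are made up for this sketch):

    def fast_forward_all(repos):
        # repos: list of (gh_client, staging_head, target_branch)
        for gh, head, target in repos:
            # rehearsal: fast-forward a throwaway tmp ref first, so most
            # failures surface before any real target has been touched
            gh.fast_forward('tmp.%s' % target, head)
        for gh, head, target in repos:
            gh.fast_forward(target, head)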
While at it, updated the github API thing to *always* dump the JSON body
on an error response, if the content-type is json.
Previously, runbot_merge assumed github would return commits in
topological order (from base to head of the PR). However, as in the
UI, github sorts commits using the author's date field, so depending
on rebasing / cherrypicking / ... it's possible for the head of the PR
to be "older" than the rest. In that case robodoo will try to merge
it *first*, then attempt to merge the rest on top of it (-ish, it'd
probably make a hash of it if that worked), at which point github
replies with a 204 (nothing to merge) because the PR head has already
included everything which topologically precedes it.
Fix: re-sort commits topologically when fetching the PR's log. That
way they're rebased in the proper order and correctly linked to one
another.
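A minimal sketch of such a re-sort over the objects returned by the
commits endpoint (each has a `sha` and a list of `parents`):

    def topological(commits):
        # order commits parent-before-child, ignoring author dates entirely
        by_sha = {c['sha']: c for c in commits}
        ordered, seen = [], set()

        def visit(c):
            if c['sha'] in seen:
                return
            seen.add(c['sha'])
            for p in c['parents']:
                if p['sha'] in by_sha:  # skip parents outside the PR
                    visit(by_sha[p['sha']])
            ordered.append(c)

        for c in commits:
            visit(c)
        return ordered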
Example problematic PR: odoo/enterprise#2794, the commits are
773aef03a59d50b33221d7cdcdf54cd0cbe0c914
author.date: 2018-10-01T14:58:38Z
879547c8dd37e7f413a97393a82f92377785b50b (parent: 773aef03)
author.date: 2018-10-01T12:02:08Z
Because 879547c8 is "older" than 773aef03, github returns it first,
both in the UI and via the API.
Also fixed up support for committer & author metadata in fake_github
so the local tests would both expose the issue properly and allow
fixing it.
* [ADD] runbot_merge: more informative states to stagings on error
Currently, when a staging fails for other reasons than a CI failure:
* the staging having been cancelled is known implicitly, because the
staging will be deactivated but will never get a status beyond
pending (because it's not found when looking for it since it's not
`active`)
* the fast-forward having failed is completely silent (logging aside),
it looks for all the world like the staging succeeded
Timeout already fails the PR, but split-on-timeout did not, so fix
that one bit.
* [FIX] odoo/odoo#cb2862ad2a60ff4ce66c14e7af2548fdf6fc5961
Closes #41
The webhook used the "sender" of the event as comment author, however
if the comment is edited by a maintainer github sends a
"issue_comment" event with that maintainer as sender.
This means a random user could create a comment with a robodoo
command, and if a registered reviewer happened to edit the comment,
the command would suddenly be taken into account. This was not the
intention.
I just spent 10mn trying to find out why staging 28 was cancelled
(a p=0 comment). Add a common prefix to all staging cancels to make
them easier to find.
The staging delay was mistakenly commented out in
bb664455ec
Also modified testing fixtures so the staging delay is not enabled when
running tests locally: on my box it increases the local runtime from
~70s to ~1500s (20s/staging, ~1 staging/test, 73 tests)
Before this, the bot would only acknowledge commands of the form
<botname>: <commands>
but since the bot is an actual user, people regularly use `@<botname>`
as it seems like it should work *and* provides for autocompletion.
Support that, as well as the octothorpe in case users want to pound
robodoo.
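The accepted prefixes boil down to something like this (sketch, not
the exact pattern used):

    import re

    def commands(botname, comment):
        # accept "<botname>: ...", "@<botname> ..." and "#<botname> ..."
        pattern = re.compile(
            r'^\s*(?:[@#]%s\b|%s:)\s*(.*)$' % (re.escape(botname), re.escape(botname)),
            re.IGNORECASE | re.MULTILINE)
        for m in pattern.finditer(comment):
            yield from m.group(1).split()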
Related to odoo/runbot#38
A limitation to 50-commit PRs was put in place to avoid rebasing huge
PRs (as a rebase means 1 merge + 1 commit *per source commit*),
however the way it was done would also limit regular merges, and the
way the limitation was implemented was not clear.
* explicitly check that limit in the rebase case
* make error message on PR sizes (rebase 50 or merge 250) clearer
* remove limit from commits-fetching (check it beforehand)
* add a test to merge >50 commits PRs
* fix the local implementation of pulls/:number/commits to properly
paginate
2b1cd83b07 fixed a bug in PR
squashing (introduced when it was mis-rebuilt on top of rebase) which
was immediately committed & pushed so we could fix the running
mergebot.
This adds a test for that issue, it was checked to fail for
2b1cd83b075a99da7ed905b9e62d7e5acb48b253~1 and work as of the current
head.
Turns out the previous tests checked all the new / complex features to
see if they worked correctly, but I completely forgot that the
previously working squash had been rebuilt.
Staging 13 tried merging 3 PRs (27085, 27083 and 27071) and supposedly
succeeded *but* only merged one of the 3 PRs despite marking all three
as merged. I tried building a few tests constructing multi-PR graphs
and checking them, but the only thing they exposed was the local
github implementation not correctly updating merge targets.
So fixed that, which is good.
Doesn't tell me why the staging didn't work right though.
a0063f9df0 slightly improved the error
message on non-PR ci failure (e.g. a community PR makes enterprise
break) by adding the failed commit, but that's still not exactly clear,
even for technical users (plus it requires having access to all the
repos which is not the case for everyone).
This commit further improves the situation by storing the target_url
and description fields of the commit statuses, and printing out the
target_url on failure if it's present.
That way the PR comment denoting build failure should now have a link to
the relevant failed build on runbot, as that's the target_url it
provides.
The change is nontrivial as it tries to be compatible with both old and
new statuses storage format, such that there is no migration to perform.
If CI fails on a non-PR'd branch of a staging (e.g. given repos A and B
and a PR to A, CI fails on the staging branch to B), the error message
(log and comment on the PR) is unhelpful as it states that the staging
failed for "unknown reason".
Improve this by providing the failed CI context and the commit, which
should allow finding out the branch & CI logs, and understanding the
why of the failure.
Fixes #36
Before this change, when staging batches only affecting one repo (of n)
the unaffected repositories would get a staging branch exactly matching
the target.
As a result, either runbot_merge or runbot would simply return the
result of an unrelated build, potentially providing incorrect
information and either failing a staging which should have succeeded
(e.g. change in repo A broke B, PR is making a change in repo A which
fixes B, but B's state is reported as the previous broken build) or
succeeding a staging which should have failed (change in repo A breaking
B except a previous build of the exact same B succeeded with a different
A and is returned).
To fix this issue, create a dummy commit at the head of each staging
branch. Because commit dates are included in the hash and have a second
precision it's pretty unlikely that we can get built duplicates, but
just to be completely sure some random bits are added to the commit
message as well.
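The dummy-commit creation looks roughly like this (the `gh.commit`
helper is hypothetical):

    import uuid

    def make_staging_head(gh, tree_sha, parent_sha):
        # dummy commit so every staging branch has a unique, never-built head;
        # the uuid makes message collisions effectively impossible even within
        # the same second
        return gh.commit(
            tree=tree_sha,
            parents=[parent_sha],
            message='force rebuild\n\nuniquifier: %s' % uuid.uuid4(),
        )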
Various tests fixed to correctly handle the extra dummy commit on
staging branches.
fixes #35
After discussion with mat, rco and moc, if a PR is updated it should
be unapproved for safety reasons: if a reviewer approves a PR, that's
what should be merged, if there are things to fix/change a reviewer
should at least rubberstamp the changes to avoid mistakes.
This is a bit more noisy/constraining, but can be changed or tuned
afterwards if it's considered too constraining.
rebase-and-merge (or squash-merge if pr.commits == 1) remains the
default, but there are use cases like forward ports (merge branch X
into branch X+1 so that fixes to X are available in X+1) where we
really really don't want to rebase the source.
This commit implements two alternative merge methods:
If the PR and its target are ~disjoint, perform a straight merge (same
as old default mode).
However if the head of the PR has two parents *and* one of these
parents is a commit of the target, assume this is a merge commit to
fix a conflict (common during forward ports as X+1 will have changed
independently from and incompatibly with X in some ways).
In that case, merge by copying the PR's head atop the
target (basically rebase just that commit, only updating the link to
the parent which is part of target so that it points to the head of
target instead of whatever it was previously).
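In pseudo-code (all the `gh.*` helpers are stand-ins, not the actual
API):

    def stage_merge(gh, pr, target):
        head = gh.commit(pr.head)
        if len(head['parents']) == 2 and any(
                gh.is_ancestor(p['sha'], target) for p in head['parents']):
            # conflict-fixing merge commit: copy the PR head onto the target,
            # re-pointing its target-side parent at the target's current head
            parents = [
                gh.head(target) if gh.is_ancestor(p['sha'], target) else p['sha']
                for p in head['parents']
            ]
            return gh.copy_commit(head, parents=parents)
        # otherwise the PR and target are ~disjoint: straight merge
        return gh.merge(pr.head, into=target)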
After discussion with al & rco, conclusion was default PR merging method
should be rebase-and-merge for cleaner history.
Add test for that scenario (w/ test for final DAG) and implement this
change.
* Add ids accessor to the remote Model fake
* Explicitly ignore order when unnecessary; a test fails since the
ordering of PRs has been changed for UI purposes. This is only an
issue for Remote, though it's unclear why (as the local Issue/PR
objects should still have a per-repo sequence)
The old "sync pr" thing is turning out to be a bust, while it
originally worked fine these days it's a catastrophe as the v4 API
performances seem to have significantly degraded, to the point that
fetching all 15k PRs by pages of 100 simply blows up after a few
hundreds/thousands.
Instead, add a table of PRs to sync: if we get notified of a
"compatible" PR (enabled repo & target) which we don't know of, create
an entry in a "fetch jobs" table, then a cron will handle fetching the
PR then fetching/applying all relevant metadata (statuses,
review-comments and reviews).
Also change indexation of Commit(sha) and PR(head) to hash, as btree
indexes are not really sensible for such content (the ordering is
unhelpful and the index locality is awful by design/definition).
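The index change itself is a one-liner per column, e.g. (table name
assumed, inside the model's init hook):

    from odoo import models

    class Commit(models.Model):
        _inherit = 'runbot_merge.commit'  # name assumed for the sketch

        def init(self):
            # hash index: shas are only ever looked up by equality, so a
            # btree's ordering and locality buy nothing here
            self.env.cr.execute(
                "CREATE INDEX IF NOT EXISTS runbot_merge_commit_sha_idx "
                "ON runbot_merge_commit USING hash (sha)"
            )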
Previously when splitting a staging we'd create two never-staged
stagings. In a system where the stagings get deleted once done
with (succeeded or failed) that's not really important, but now that
we want to keep stagings around inactive, things get problematic as
this method gunks up the stagings table, plus the post-split stagings
would "steal" the original's batches, losing information (the relation
between stagings and batches).
Replace these empty stagings with dedicated *split* objects. A batch
can belong to both a staging and a split, the split is deleted once a
new staging has been created from it.
Eventually we may want to make batches shared between stagings (so we
can track the entire history of a batch) but currently that's only
PR-level.
If we want a dashboard with a history of stagings, maybe not deleting
them would be a good idea.
A replacement for the headless stagings would probably be a good idea:
currently they're created when splitting a failed staging containing
more than one batch, but their only purpose is as splits of existing
batches to be deactivated/deleted to be re-staged (new batches &
stagings are created then as e.g. some of the batches may not be
merge-able anymore) and that's a bit weird.
Github apparently doesn't sync merged / closed PRs (which makes sense
but isn't really documented), so strip out the test and assume that
never happens (with a log error in case it ever does).
Remote's labels are not entirely under our control as the part before
":" is the *owner* of the source repo => introduce additional "owned"
fixture to handle this case, as it may diverge from the "user" role if
running the tests against an organisation.
Can't really assume we can get the github logins "user" or "reviewer"
to run the test suite remotely, so add an indirection and backronym
those to *roles* instead. The local test suite has identical roles &
logins, but the remote version does not.
Also use the "other" role for any random user, and don't create its
partner up-front.
Also renamed the self-reviewer user to self_reviewer, that's a bit
less weird when dealing with e.g. ini files.
This is the preparation of an attempt to make these tests work with
both a local github mock (in-memory) and a remote actual github.
Move a bunch of fixtures relying on the specific github
implementation (and odoo-as-library access) to the "local" plugin,
including splitting the "repo" fixture.
The specific fixtures will likely have to be adjusted as the
remote endpoint is fleshed out.
AL thinks it's not useful and it's better to always squash/rebase a
single commit & merge multiple. Mark tests as xfail'd instead of
removing them.
Also mark test_edit_retarget_managed as skipped explicitly
Reviews are interpreted like comments and can contain any number of
commands, with the difference that APPROVED and REQUEST_CHANGES are
interpreted as (respectively) r+ and r- prefixes.
* p0 cancel existing stagings in order to be staged as soon as
possible
* p0 PRs should be picked over split batches
* p0 bypass PR-level CI and review requirements
* p0 can be set on any of a batch's PRs; matched PRs will be staged
alongside even if their priority is the default