rebase-and-merge (or squash-merge if pr.commits == 1) remains the
default, but there are use cases like forward ports (merging branch X
into branch X+1 so that fixes to X are available in X+1) where we
really don't want to rebase the source.
This commit implements two alternative merge methods:
If the PR and its target are ~disjoint, perform a straight merge (same
as the old default mode).
However, if the head of the PR has two parents *and* one of those
parents is a commit of the target, assume this is a merge commit
created to fix a conflict (common during forward ports, as X+1 will
have changed independently from, and incompatibly with, X in some
ways).
In that case, merge by copying the PR's head atop the target
(basically rebase just that one commit, updating only the parent link
which points into the target so that it points to the head of the
target instead of whatever it was previously).
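A minimal sketch of the selection logic described above; `repo.commit`
and `repo.is_ancestor` are hypothetical helpers, not the actual API:

    def choose_merge_method(repo, pr, target_head):
        head = repo.commit(pr.head)
        if len(head.parents) == 2 and any(
            repo.is_ancestor(p, of=target_head) for p in head.parents
        ):
            # the head is a conflict-fixing merge commit: re-create
            # just that commit atop the target, swapping the
            # target-side parent for the current target head
            return 'copy-head'
        # PR and target are ~disjoint: straight merge, as in the old
        # default mode
        return 'merge'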
After discussion with al & rco, the conclusion was that the default PR
merging method should be rebase-and-merge, for a cleaner history.
Add a test for that scenario (including a check of the final DAG) and
implement this change.
* Add ids accessor to the remote Model fake
* Explicitly ignore ordering where it doesn't matter: a test fails
since the ordering of prs was changed for UI purposes. This is only an
issue for Remote, though it's unclear why (the local Issue/PR objects
should still have a per-repo sequence)
* avoid fetching PRs for unmanaged branches when we know up-front
* avoid processing comments with no commands, which avoids fetching
the corresponding PR (one we know nothing about yet and which may or
may not be for a managed branch); see the sketch below
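A minimal sketch of that early-out, assuming a hypothetical
`parse_commands` helper which extracts bot commands from a comment
body; the model and method names are illustrative:

    def handle_comment(env, repo, issue_number, body):
        commands = parse_commands(body)  # hypothetical parser
        if not commands:
            # no commands in this comment: bail out immediately so we
            # never fetch the corresponding PR (which may not even
            # target a managed branch)
            return
        pr = env['runbot_merge.pull_requests'].search([
            ('repository.name', '=', repo),
            ('number', '=', issue_number),
        ])
        pr._apply_commands(commands)  # hypothetical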
The old "sync pr" thing is turning out to be a bust, while it
originally worked fine these days it's a catastrophe as the v4 API
performances seem to have significantly degraded, to the point that
fetching all 15k PRs by pages of 100 simply blows up after a few
hundreds/thousands.
Instead, add a table of PRs to sync: if we get notified of a
"compatible" PR (enabled repo & target) which we don't know of, create
an entry in a "fetch jobs" table; a cron then handles fetching the PR
and fetching/applying all the relevant metadata (statuses,
review-comments and reviews).
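A minimal sketch of what such a fetch-jobs model and its cron handler
could look like in Odoo; the model and helper names are illustrative,
not the actual schema:

    from odoo import api, fields, models

    class FetchJob(models.Model):
        _name = 'runbot_merge.fetch_job'
        _description = 'PR to fetch from github'

        repository = fields.Many2one('runbot_merge.repository', required=True)
        number = fields.Integer(required=True)

        @api.model
        def _check(self):
            # called from a cron: fetch each still-unknown PR, apply
            # its metadata (statuses, review-comments, reviews), then
            # discard the job
            for job in self.search([]):
                job.repository._load_pr(job.number)  # hypothetical helper
                job.unlink()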
Also change the indexation of Commit(sha) and PR(head) to hash
indexes, as btree indexes are not really sensible for such content
(the ordering is unhelpful and the index locality is awful by
design/definition).
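A minimal sketch of setting up such an index from an Odoo model's
init hook; the table and column names are illustrative:

    from odoo import fields, models

    class Commit(models.Model):
        _name = 'runbot_merge.commit'

        sha = fields.Char(required=True)

        def init(self):
            # shas are essentially random, so a btree's ordering buys
            # us nothing; a hash index only supports equality lookups,
            # which is all we ever do on this column
            self.env.cr.execute(
                "CREATE INDEX IF NOT EXISTS runbot_merge_commit_sha_index "
                "ON runbot_merge_commit USING hash (sha)"
            )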
Previously, splitting a staging created two never-staged stagings. In
a system where stagings get deleted once done with (succeeded or
failed) that's not really important, but now that we want to keep
inactive stagings around this gets problematic: the method gunks up
the stagings table, and the post-split stagings "steal" the original's
batches, losing information (the relation between stagings and
batches).
Replace these empty stagings with dedicated *split* objects. A batch
can belong to both a staging and a split; the split is deleted once a
new staging has been created from it.
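A minimal sketch of the split object and the dual relation on batches;
the names are illustrative rather than the actual schema:

    from odoo import fields, models

    class Split(models.Model):
        _name = 'runbot_merge.split'

        target = fields.Many2one('runbot_merge.branch', required=True)
        batch_ids = fields.One2many('runbot_merge.batch', 'split_id')

    class Batch(models.Model):
        _inherit = 'runbot_merge.batch'

        # a batch may belong to both a staging and a split; the split
        # is removed once a new staging is created from it
        split_id = fields.Many2one('runbot_merge.split')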
Eventually we may want to make batches shared between stagings (so we
can track the entire history of a batch) but currently that's only
PR-level.
If we want a dashboard with a history of stagings, maybe not deleting
them would be a good idea.
A replacement for the headless stagings would probably be a good idea:
currently they're created when splitting a failed staging containing
more than one batch, but their only purpose is to hold splits of
existing batches until those are deactivated/deleted and re-staged
(new batches & stagings are created at that point, as e.g. some of the
batches may not be mergeable anymore), which is a bit weird.
Github apparently doesn't sync merged/closed PRs (which makes sense
but isn't really documented), so strip out the test and assume that
never happens (logging an error in case it ever does).
Remote's labels are not entirely under our control, as the part before
":" is the *owner* of the source repo => introduce an additional
"owned" fixture to handle this case, as it may diverge from the "user"
role when running the tests against an organisation.
Can't really assume we can get the github logins "user" or "reviewer"
to run the test suite remotely, so add an indirection and backronym
those into *roles* instead. The local test suite has identical roles &
logins, but the remote version does not.
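A minimal sketch of the indirection as a pytest fixture; the `config`
source and its section layout are illustrative:

    import pytest

    @pytest.fixture(scope='session')
    def users(config):
        # locally, roles and logins are identical; remotely, the
        # logins are read from e.g. an ini file with [role_reviewer]
        # sections providing the actual github login for each role
        return {
            role: config.get('role_' + role, {}).get('user', role)
            for role in ['user', 'reviewer', 'self_reviewer', 'other']
        }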
Also use the "other" role for any random user, and don't create its
partner up-front.
Also rename the self-reviewer user to self_reviewer; that's a bit less
weird when dealing with e.g. ini files.
Turns out PATCH /git/refs/:ref returns a 422 when the ref does not
exist, rather than the 404 I'd expected.
Also improve the error message by including the JSON body, which tends
to be more descriptive/helpful than the HTTP reason in Github's API.
Maybe I should replace all raise_for_status() calls by printing the
JSON body instead...
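A minimal sketch of surfacing the JSON body on error, assuming
`session` is a `requests.Session`; the endpoint wiring is
illustrative:

    def set_ref(session, repo, ref, sha):
        r = session.patch(
            'https://api.github.com/repos/{}/git/refs/{}'.format(repo, ref),
            json={'sha': sha, 'force': True},
        )
        # github yields a 422 (not the expected 404) for a missing
        # ref, and the JSON body is usually more descriptive than the
        # HTTP reason raise_for_status() would show
        assert r.status_code == 200, '{}: {}'.format(r.status_code, r.json())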
This is in preparation of an attempt to make these tests work with
both a local github mock (in-memory) and a remote, actual github.
Move a bunch of fixtures relying on the specific github
implementation (and odoo-as-library access) to the "local" plugin,
including splitting the "repo" fixture.
The specific fixtures will likely have to be adjusted as the
remote endpoint is fleshed out.
AL thinks it's not useful and it's better to always squash/rebase a
single commit & merge multiple ones. Mark the tests as xfail'd instead
of removing them.
Also explicitly mark test_edit_retarget_managed as skipped.
Reviews are interpreted like comments and can contain any number of
commands, with the difference that APPROVED and REQUEST_CHANGES
reviews are interpreted as (respectively) implicit r+ and r- prefixes.
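A minimal sketch of that mapping; `_parse_commands` stands in for the
regular comment-handling path and is illustrative:

    def handle_review(pr, review):
        prefix = {
            'APPROVED': 'r+',
            'REQUEST_CHANGES': 'r-',
        }.get(review['state'], '')
        body = '{} {}'.format(prefix, review['body'] or '').strip()
        # from here on, same path as a regular comment
        pr._parse_commands(body)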
* p0 cancels existing stagings in order to be staged as soon as
possible
* p0 PRs should be picked over split batches
* p0 bypasses PR-level CI and review requirements
* p0 can be set on any of a batch's PRs; the matched PRs will be
staged alongside it even if their priority is the default (see the
sketch below)
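A minimal sketch of staging selection with p0 in play; the attribute
and method names are illustrative:

    URGENT = 0

    def next_batches(branch):
        urgent = [
            b for b in branch.pending_batches
            if any(pr.priority == URGENT for pr in b.prs)
        ]
        if urgent:
            # p0 jumps the queue: cancel the active staging so the
            # batch is staged as soon as possible, bypassing PR-level
            # CI & review checks and taking precedence over splits
            branch.active_staging.cancel('staging p0 PR')
            return urgent
        return branch.split_batches or branch.pending_batches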
In the near future, Odoo will use Chrome Headless instead of
phantomjs. Chrome needs a port to listen on, and it was decided that
it will be http_port + 2.
With this commit, we ensure that this port is not used by another
build.
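A minimal sketch of the availability check, using only the standard
library; how the result feeds back into port assignment is
illustrative:

    import socket

    def port_in_use(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            # connect_ex returns 0 when something already listens here
            return s.connect_ex(('127.0.0.1', port)) == 0
        finally:
            s.close()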
Closes #30
Avoids an overly generic vhost declaration, and makes the config
easier to understand.
We only want to allow hosts of the form:
<build_dest>.runbotX.odoo.com
<build_dest>-<db_name_extension>.runbotX.odoo.com
where <db_name_extension> is usually "all" or "base".
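Expressed as a (hypothetical) Python regex rather than the actual
nginx declaration, the accepted hosts would look roughly like:

    import re

    ALLOWED_HOST = re.compile(
        r'^[\w-]+?'            # <build_dest>
        r'(-[a-z0-9_]+)?'      # optional -<db_name_extension>, e.g. -all
        r'\.runbot\d+\.odoo\.com$'
    )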
The unicode icon added in the build subject is not clear for users.
As it stands, it's not easy to add a title on the icon or the subject.
With this commit, a build type field is added to differentiate the
builds and pick the appropriate icon and title.
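A minimal sketch of such a field on the build model; the selection
values are illustrative:

    from odoo import fields, models

    class RunbotBuild(models.Model):
        _inherit = 'runbot.build'

        build_type = fields.Selection(
            [('normal', 'Normal'),
             ('rebuild', 'Rebuild'),
             ('indirect', 'Indirect'),
             ('scheduled', 'Scheduled')],
            default='normal',
            help="Used to pick the icon and title shown in the frontend",
        )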
Closes: #24
When a build with coverage is killed during the tests, the coverage
result is set to 100%.
With this commit, the coverage result is verbosely skipped if the
coverage file does not exist.
Also, the coverage module is no longer used to retrieve the coverage
value, as it was already calculated; it's faster to grep the HTML
report than to recompute the value.
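A minimal sketch of extracting the already-computed percentage from
coverage.py's HTML report; the path helper and markup assumptions are
illustrative:

    import os
    import re

    def coverage_result(build):
        path = build.path('coverage/index.html')  # hypothetical helper
        if not os.path.exists(path):
            build._log('coverage', 'coverage file not found, skipping')
            return None
        with open(path) as f:
            match = re.search(r'(\d+)%', f.read())
        return int(match.group(1)) if match else None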
As coverage builds take longer than normal builds, the timeout is
increased for those kinds of builds (as was already done for the CPU
limit).
Finally, the omitted patterns were wrong and are now fixed with this
commit.
Closes #25
When Odoo instances are spawned for tests, they have a time limit set
on their CPU usage. This limit is hard-coded in the _spawn method
calls.
As the number of tests increases, their duration increases too.
As this limit is inherited by subprocesses, if a phantomjs test lasts
too long, that test alone gets killed.
Finally, when coverage is enabled, the tests take approximately 1.5
times longer.
With this commit, the cpu_limit of the two main test jobs is
increased, and it is increased further when coverage is used.
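A minimal sketch of the scaling; the base value and the 1.5 factor are
illustrative of the idea, not the exact numbers used:

    def job_cpu_limit(build, base=2400):
        # coverage runs take roughly 1.5x as long, so scale the limit
        return int(base * 1.5) if build.coverage else base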
Closes: #23
When a new commit is found and a new build is created, a user pushing
more commits to the same branch a few minutes later causes more builds
in the testing phase, resulting in wasted CPU.
With this commit, previous builds still in the testing phase on
non-sticky branches are killed when a new build is created.
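A minimal sketch of the kill, using old-runbot-style names which are
illustrative rather than exact:

    def _kill_obsolete_testing_builds(self, branch, new_build):
        if branch.sticky:
            return  # sticky branches keep building every push
        self.search([
            ('branch_id', '=', branch.id),
            ('state', '=', 'testing'),
            ('sequence', '<', new_build.sequence),
        ])._kill(result='skipped')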
Closes: #20