* only provide fields which make sense for the mergebot
* provide formatting & searchability for review rights records so
they're visible from the list directly
Since cb05f2b9d8, template1 is used when creating a database; this
allows customizing the template and installing some needed PostgreSQL
extensions.
Unfortunately, it's also a source of build failures. For example, if the
pg_activity util is used on a runbot host, database creation may fail
with a message like this:
source database "template1" is being accessed by other users
That's because pg_activity needs a database to connect to, and it uses
template1.
With this commit, template1 is still used by default, but the template
can be changed with a system parameter. That way, a custom template can
be created on runbot hosts and used when creating DBs in builds.
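A minimal sketch of what that could look like, assuming psycopg2 and an
illustrative `runbot.db_template` parameter name (this is not the
actual runbot code):

    import psycopg2

    def create_build_db(env, db_name):
        # read the template from a system parameter, defaulting to
        # template1 ('runbot.db_template' is an illustrative name)
        template = env['ir.config_parameter'].sudo().get_param(
            'runbot.db_template', 'template1')
        conn = psycopg2.connect(dbname='postgres')
        conn.autocommit = True  # CREATE DATABASE can't run in a transaction
        with conn.cursor() as cr:
            cr.execute('CREATE DATABASE "%s" TEMPLATE "%s"'
                       % (db_name, template))
        conn.close()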
When a PR branch target is changed on Github, the change is not applied
in the runbot DB.
With this commit, the GitHub hook payload is taken into account to
detect such a change, and the branch info is recomputed accordingly.
Also, a button is now available on the branch form in order to manually
recompute those changes.
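Roughly, the detection relies on the fields GitHub puts in
`pull_request` "edited" payloads (the helper below is illustrative):

    def target_changed(payload):
        # GitHub reports a retarget as an "edited" action carrying a
        # "changes" dict with a "base" entry in the webhook payload
        return (
            payload.get('action') == 'edited'
            and 'base' in payload.get('changes', {})
        )

    # when it returns True, recompute the branch infos from
    # payload['pull_request']['base']['ref'] (the new target)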
This is more of a sanity check as it normally should not be a factor:
labels generally contain the target name, and staging checks are
performed per-target so we're not mixing multiple targets anyway.
But let's say a third party creates a fix-foo branch for A and a
fix-foo branch for B; we want to ensure they're not considered batched
together.
Rather than try to fix up the various bits where we search on it and
wonder what index we should be using, make the column a CIText.
For the mergebot, the main use case would be properly handling
delegate=XXX: currently if XXX is not a case-sensitive match we're
going to create a new partner with the new github login and
give *them* delegation, and the delegation won't work for its intended
target.
Also try to install the citext extension if it's not in the database,
and run the database-creation process with `check=True` so if that
fails we properly bubble up the error and don't try to run tests on a
corrupted / broken DB.
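A sketch of both parts, assuming a plain `createdb` call and a psycopg2
connection (names are illustrative):

    import subprocess
    import psycopg2

    def create_test_db(dbname):
        # check=True: a failed createdb raises CalledProcessError instead
        # of letting the tests run against a corrupted / broken database
        subprocess.run(['createdb', dbname], check=True)
        conn = psycopg2.connect(dbname=dbname)
        with conn, conn.cursor() as cr:
            # no-op if the extension is already installed
            cr.execute('CREATE EXTENSION IF NOT EXISTS citext')
        conn.close()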
Fixes #318
As the odds of having more projects, or more repos with different
requirements within the same project, increase, the need to have
different sets of reviewers for different repositories increases as
well.
As a result, rather than being trivial boolean flags, the review info
should probably depend on the user / partner and the repo. Turns out
the permission checks had already been extracted into their own
function, so most of the mess comes from testing utilities which went
and configured their review rights as needed.
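The shape is something like the following relational record instead of
flags on the partner (model and field names are a guess, not
necessarily the merged code):

    from odoo import fields, models

    class PartnerReview(models.Model):
        _name = 'res.partner.review'

        partner_id = fields.Many2one('res.partner', required=True)
        repository_id = fields.Many2one(
            'runbot_merge.repository', required=True)
        review = fields.Boolean(default=False)       # can r+ others' PRs
        self_review = fields.Boolean(default=False)  # can r+ their own PRs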
Incidentally it might be that the test suite could just use something
like a sequence of commoditized accounts which get configured as
needed and not even looked at unless they're used.
Before this change, `r-` on a pr[p=0] does essentially nothing. At
most it will unstage the PR if it had been (somewhat unnecessarily)
r+'d in the past, but then the PR will get re-staged immediately.
To avoid this odd behaviour, if r- is sent to a p=0 PR, not only is
the PR unreviewed (if it was reviewed), it always gets unstaged and
its priority gets reset to 1 (high priority, but doesn't bypass CI and
review). Also send a comment on that subject so followers of the PR
are notified.
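A sketch of the new behaviour (method names are illustrative):

    def cmd_unreview(pr):
        pr.reviewed_by = False   # drop the r+, if any
        if pr.priority == 0:
            pr.priority = 1      # still high priority, but CI & review apply
            pr.unstage()         # always kick the PR out of the current staging
            pr.post_comment(
                "priority reset to 1; CI and review are required again")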
Fixes #313
During freezes it can be useful to notify viewers that nothing is
going to forward port or merge for a while, and that this is
intentional (not something that's broken).
Fixes #307
The staging cron was already essentially split between "check if one
of the stagings is successful (and merge it)" and "check if we should
create a staging" as these were two separate loops in the cron.
But it might be useful to disable these two operations separately,
e.g. we might want to stop the creation of new stagings but let the
existing stagings complete.
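Roughly, the single cron body becomes two independently schedulable
ones (model and method names are illustrative):

    def _check_stagings(env):
        # first half: merge successful stagings, cancel failed ones
        for staging in env['runbot_merge.stagings'].search(
                [('active', '=', True)]):
            staging.check_status()

    def _create_stagings(env):
        # second half: stage ready PRs on branches with no active staging
        for branch in env['runbot_merge.branch'].search(
                [('active', '=', True)]):
            branch.try_staging()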
The actual splitting is easy but it turns out a bunch of tests were
"optimised" to only run the merge cron. Most of them didn't blow up
but it seems more prudent to fix them all.
Fixes odoo/runbot#310
The PR creation had been fixed to always validate even without a
commit found (in case there was no need for a commit), but the update
of a PR in such a situation was not tested, and thus naturally did not
work because why would it work if it wasn't tested?
Also remove the conditional skip on updating a PR to a new head.
The test was checking that things would work properly with
required_statuses being an empty string, because I'd also forgotten
that an empty field gets stored as `False` in the database; so when
trying things out live, neither the PRs nor the stagings would work,
as their assumption that they could straight split required_statuses
would always fail.
Update the test to better match expectations, and hopefully this is
the end of that saga.
PRs transitioning to 'ready' had been checked and tested, but it turns
out I had completely forgotten to test that stagings would validate
properly, therefore of course they didn't.
The issue here was I'd forgotten `''.split(',')` returns `['']` rather
than `[]`, so on an empty required_statuses the staging validator
would keep looking for a status matching the context `''` and would
never find it, keeping the staging pending until timeout. So most
likely the problem could have been resolved by just adding a condition
to
[r.strip() for r in repomap[c.sha].required_statuses.split(',')]
but I'd already done all the rest of the reorganisation by that point,
the tests pass, and I think the logic is somewhat better. Therefore
I'll go with that for now.
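For reference, the gotcha itself:

    # splitting an empty string does not yield an empty list
    assert ''.split(',') == ['']   # not []
    # so an empty required_statuses made the validator look for a
    # status with context '' forever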
* properly handle empty required_statuses during staging validation
* remove the final postcondition: if we're missing commits which don't
  require any statuses, we should not care
* expand test to include up to merging PRs
* automatically create dummy commits when creating stagings, that way
the relevant commits are in the database (can't hurt)
PS: another alternative would have been to filter out or skip ahead on
commits which don't require any statuses, i.e. the cmap /
required_statuses mapping would not even have an entry for them
Refactor the selection thingie, hopefully in a way which doesn't
absolutely crater performance, so that it's possible to explain why a
PR is considered blocked.
Despite the existing dedup', sometimes the "xxx failed on this
forward-port PR" message would still get multiplied due to split
builds, e.g. in odoo/odoo#43935 4 such messages appear within ~5
minutes, then one more 10 minutes later.
This is despite all of them having the same "build" (target_url) and
status (failure). Since the description is the only thing that's not
logged I assume that's the field which varies and makes the dedup'
fail. Therefore:
* add the description to the logging (when getting a status ping)
* exclude the description when checking whether a new status should be
  taken into account or ignored: the build (and thus url) should
  change on rebuild
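A sketch of the adjusted dedup' check (field names follow GitHub's
status payload, the function itself is illustrative):

    def is_duplicate_status(previous, new):
        # compare everything except the description: a rebuild changes
        # the target_url, so ignoring the varying description is safe
        keys = ('sha', 'context', 'state', 'target_url')
        return all(previous.get(k) == new.get(k) for k in keys)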
Hopefully fixes #281
As pdfminer is going to be used in the Odoo attachment_indexation
module and there is no python3-pdfminer package in Ubuntu Bionic, it
was decided to make it an optional dependency.
In order to run the attachment_indexation tests, the pip package is
added to the runbot Docker image.
* Add some more information as to what the user *should* do on the PR
  the message is printed on; the previous message left that to their
  imagination
* The PR selection was *completely* wrong as it would select the old
  PRs, which really isn't what we want. And turns out there's no good
  reason to create & send the feedback in the loop creating the
  forward-port PRs; that can be moved to a followup loop, by which
  point we have hopefully created all the forward-port PRs.
  Also technically we could do even better than currently and remap
  the PRs mapped to conflict data to the new PRs, and know exactly
  which of the forward-ported PRs is faulty, but that seems overkill
  for now.
If a new branch is added to a project, there's an issue with *ongoing*
forward ports (forward ports which were not merged before the branch
was forked from an existing one): the new branch gets "skipped" and
might be missing some fixes until those are noticed and backported.
This commit hooks into updating projects to try and see if the update
consists of adding a branch inside the sequence, in which case it
tries to find the FP sequences to update and queues up new
"intermediate" forward ports to insert into the existing sequences.
Note: had to increase the cron socket limit to 2 minutes, as 1 minute
blew up the big staging cron in the test (after all the forward-port
PRs are approved).
Fixes #262
At some point pytest added support for dataclass & attrs introspection
by looking up some specific meta-fields when trying to format
objects (after an assertion fails).
It tries to access the attributes and falls back to something else if
it gets an AttributeError, but apparently falls over if it gets
anything else, which is what would happen here: read() would generate
a ValueError, which got re-raised as-is on the client side. However
pytest doesn't really make the issue clear, and the logging from RPC
likely got lost in the noise from the github logging.
The fix is to simply convert errors from read() into proper
AttributeErrors, and blacklist fields we know make no sense, to avoid
confusing tracebacks in the log.
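A sketch of the conversion on the test-side proxy (names are
illustrative):

    BLACKLIST = {'__dataclass_fields__', '__attrs_attrs__'}  # pytest probes

    class RecordProxy:
        def __init__(self, rpc, record_id):
            self._rpc = rpc
            self._id = record_id

        def __getattr__(self, field):
            if field in BLACKLIST:
                # cut these off before hitting RPC at all
                raise AttributeError(field)
            try:
                # hypothetical RPC call to read() on the odoo side
                return self._rpc.read([self._id], [field])[0][field]
            except ValueError as e:
                # read() raises ValueError for unknown fields; pytest
                # only recovers gracefully from AttributeError
                raise AttributeError(field) from e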
Mergebot & forwardbot have ultra-verbose logging of all github
interactions in order to better understand what happens exactly when
there are issues with gh integration (and/or provide to GH support).
However in most cases this is a pain in the ass when reviewing test
logs. So suppress these github_requests logs by default when testing.
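e.g. something along these lines in the test harness (the logger name
is taken from the message above):

    import logging

    # raise the github_requests logger above INFO while tests run, so
    # the request/response dumps don't drown everything else
    logging.getLogger('github_requests').setLevel(logging.WARNING)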
A while back I implemented name_get/display_name to print PRs using
the canonical github format (owner/repo#number), however it looks like
some of the logging calls were still using bespoke formatting.
Before this change, if multiple co-dependent PRs get forward-ported
and one of them has a conflict the notice on the others is very
limited: they're tagged as `conflict` but there is no other
information provided in the PR description or in the subsequent
message.
Add a small warning to these other PRs, for clarity.
Closes #302
Currently if the creation of a forward-port pull request fails:
* the branches are left un-cleaned
* preceding PRs are left open
* the PR whose creation failed may or may not have actually failed,
and may or may not still be open
We need to delete the forward port branches anyway, and IIRC
that *should* automatically close the PR. Sadly making it so github
predictably / reliably blows up when trying to create a PR via the API
is difficult so this is essentially untestable.
Closes #296
The forwardbot's command parsing was missing feedback when someone
tried to use commands without the proper ACL. This made situations
where comments seemingly got lost hard to diagnose.
Closes #300
Moving statuses from project to repo was originally developed on 11,
but since the PR was only merged after the 13.0 update, the migration
script should be moved to match.
Add handling of branch filtering to the forwardport module:
* don't forward port (and trigger an error) when trying to port
PRs to different next targets
* otherwise port normally
e.g. given a project with repos A and B and branches a, b and c, with
branch b being excluded from repo B:
* a PR merged into A.a will be forward-ported to A.b and A.c
* a PR merged into B.a will be forward-ported to B.c (skipping the
excluded B.b)
* a PR set merged into (A.a, B.a) will *not* be forward-ported, and a
message will be posted to each PR denoting the incompatibility
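A sketch of the per-repo target selection this implies (helper names
are illustrative):

    def next_target(repo, source_branch, branches):
        """Return the next branch to port to for this repo, skipping
        branches excluded by the repo's branch filter; None if there
        is nothing left to port to."""
        idx = branches.index(source_branch)
        for candidate in branches[idx + 1:]:
            if repo.branch_matches_filter(candidate):  # hypothetical check
                return candidate
        return None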
The pytest suite had been partially unified between mergebot and
forwardport but because of session-scoped modules it could not run
across those.
Make the db cache lazy and able to cache multiple databases, and move
the "current required module" to function scope; this way things
should (and seem to) work properly on runs involving mergebot & fwbot.
Next step: xdist! (need to randomise repo names for that, probably).
Randomise the names of the repositories created so they don't collide
and lead to odd results when running concurrent test cases which
specify the same repo name (a common property).
As a result, ignore the "no delete" flag for creation: there should be
no way to land on a pre-existing repo name even if we didn't clean
them up.
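For instance, with a fixture along these lines (names are
illustrative):

    import uuid
    import pytest

    @pytest.fixture
    def repo_name(request):
        # suffix the requested name with a random token so concurrent
        # xdist workers can't collide on a common name like "repo"
        return '%s-%s' % (request.node.name, uuid.uuid4().hex[:8])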
Also stagger the check of a running ngrok process: when pytest starts
its worker processes, all workers will run the tunnel fixture, and
since the ngrok process takes some time to get into a stable run state
chances are multiple workers will fail to connect and try to start
ngrok concurrently, which blows up as ngrok just kills the extra
processes instead of merging / proxying into an existing session. A
proper lockfile would probably be better but...
Fixes #297
When an employee sadly leaves Odoo, the Odoo production database
(odoo.com) will call these routes in order to remove their reviewer
rights automatically, so a user who no longer works for Odoo can't
"r+" GitHub PRs.
This is related to odoo/internal#617
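A sketch of what such a route could look like (path, auth and field
names are guesses, see odoo/internal#617 for the actual contract):

    from odoo import http

    class ReviewerRemoval(http.Controller):
        @http.route('/runbot_merge/remove_reviewers',
                     type='json', auth='user')
        def remove_reviewers(self, github_logins):
            partners = http.request.env['res.partner'].search(
                [('github_login', 'in', github_logins)])
            # drop every review-rights record so these partners can no
            # longer r+ anything
            partners.mapped('review_rights').unlink()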