Old messages were quite inconsistent in their pinging of the PR author
and reviewer.
Reviewed the messages (probably missed some, but...) and tried to ping
more consistently when the feedback requires some sort of action in
order to proceed.
Fixes #592
A few fixes and improvements after testing the feature:
- ensure the provisioned users are created as internal (not portal)
- assume oauth is installed and just crash if it's not
- handle a user not having an email (ignore)
- return value from json handler, otherwise JsonRequest sends no
payload which is *weird*
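Put together, the handler plausibly ends up shaped like this sketch (route,
auth mode and payload fields are assumptions, not the actual endpoint):

```python
# Hypothetical sketch only: route, auth and field names are guesses.
from odoo import http

class Provisioning(http.Controller):
    @http.route('/runbot_merge/provision', type='json', auth='user')
    def provision(self, users):
        env = http.request.env
        internal = env.ref('base.group_user')  # internal users, not portal
        created = 0
        for new in users:
            if not new.get('email'):
                continue  # a user without an email is ignored
            env['res.users'].create({
                'name': new['name'],
                'login': new['email'],
                'email': new['email'],
                'groups_id': [(4, internal.id)],  # provision as internal
            })
            created += 1
        # return a value, otherwise JsonRequest sends no payload
        return created
```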
New accounts endpoint so that the SSO can push new pre-configured
users / employees directly. This lowers the maintenance burden.
Also remove one of the source partners from the merge test, as
ordering seems wonky for unclear reasons leading to random failures of
that test.
The Commit test object now allows a tree of `None` (or an empty dict,
same diff) in which case it will create an empty commit (a commit
which uses the same tree as its parent).
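As a sketch of what that means against the github API (session handling
simplified, helper name made up):

```python
# A commit reusing its parent's tree introduces no changes at all.
import requests

def make_empty_commit(session, repo, parent_sha, message):
    # look up the parent to get its tree
    parent = session.get(
        f'https://api.github.com/repos/{repo}/git/commits/{parent_sha}').json()
    r = session.post(f'https://api.github.com/repos/{repo}/git/commits', json={
        'message': message,
        'tree': parent['tree']['sha'],  # same tree as the parent
        'parents': [parent_sha],
    })
    r.raise_for_status()
    return r.json()['sha']
```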
* the repository apparently takes a lot more time to
propagate (randomly) now, so wait until we can *see* it
* also add sleep after modification operations on the new repository
Ideally this should be done a lot more ubiquitously, but for now this
seems to more or less suffice.
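A sketch of the "wait until we can *see* it" part, with assumed helper name
and timeout:

```python
import time

def wait_for_repo(session, fullname, timeout=60):
    # poll the repository endpoint until propagation is actually visible
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if session.get(f'https://api.github.com/repos/{fullname}').ok:
            return  # the repository has propagated
        time.sleep(1)
    raise TimeoutError(f'repository {fullname} did not become visible')
```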
If a reviewer doesn't have an email set, the Signed-Off-By is an
`@users.noreply.github.com` address which just looks weird in the
final result.
Initially the thinking was that emails would be required for users to
*be* reviewers or self-reviewers, but since those are now o2ms / m2ms
it's a bit of a pain in the ass.
Instead, provide an action to easily try and fetch the public email of
a user from github.
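The action's core presumably boils down to something like this (helper name
hypothetical); the public profile endpoint returns the public email, or null
if the user has not set one:

```python
import requests

def fetch_public_email(github_login):
    r = requests.get(f'https://api.github.com/users/{github_login}')
    r.raise_for_status()
    return r.json().get('email')  # may be None if no public email is set
```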
Fixes #531
Since we have the model fields loaded up, we can just check against
those and assume anything that's not a field is a method.
That avoids having to go through `_call`, making things way less awkward.
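A minimal sketch of the idea, with assumed `read` / `call` helpers on the
test environment:

```python
class Model:
    def __init__(self, env, model, ids, fields):
        self._env, self._model, self._ids = env, model, ids
        self._fields = fields  # the model's fields, already loaded

    def __getattr__(self, name):
        if name in self._fields:
            # a known field: read it
            return self._env.read(self._model, self._ids, name)
        # not a field: assume it's a method and call it directly
        return lambda *args, **kw: self._env.call(
            self._model, name, self._ids, *args, **kw)
```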
Before, the repo setup calls would be checked using `raise_for_status`,
but that turns out to be less than great as github mostly returns
information via the body (as JSON), even if that information is often
useless.
Add a `check` utility which checks if the response is invalid and
prints the body.
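A plausible shape for the helper (the real signature may differ):

```python
def check(response):
    # fail with the body github sent back, since that's where the
    # information lives
    assert response.ok, f'{response.status_code}: {response.text}'
    return response
```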
Avoid logging below warning during the creation of the template db,
and don't emit `odoo.modules.loading` during tests.
That reduces log-spam a lot and makes test results way more
readable (in case of failure, where the logs of the subprocess get
printed out).
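The actual wiring presumably goes through odoo's logging configuration; as a
bare `logging` sketch of the intent:

```python
import logging

# no output below WARNING while the template db is created
logging.getLogger().setLevel(logging.WARNING)
# and mute the module-loading logger during tests
logging.getLogger('odoo.modules.loading').setLevel(logging.CRITICAL)
```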
Currently any access to a PR's information triggers an unconditional
fetch.
Cache the result and the conditional request data so we can perform
conditional requests on subsequent accesses. This could also lead to
somewhat lower API usage since, according to github, conditional
requests should not count towards rate limit use.
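A sketch of the caching scheme (names are assumptions): keep the ETag next
to the decoded payload, send If-None-Match on later accesses, and reuse the
cached data on a 304:

```python
_cache = {}  # url -> (etag, decoded payload)

def conditional_get(session, url):
    headers = {}
    if url in _cache:
        headers['If-None-Match'] = _cache[url][0]
    r = session.get(url, headers=headers)
    if r.status_code == 304:
        return _cache[url][1]  # unchanged since last fetch
    r.raise_for_status()
    _cache[url] = (r.headers.get('ETag'), r.json())
    return _cache[url][1]
```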
Make the startup of ngrok more reliable: in some cases where the
machine is heavily loaded a 2s sleep after Popen-ing ngrok is not
sufficient, and the following POST still fails.
Add a small loop with a more explicit availability check (and lower
the initial wait to 1s).
Also:
- make the comments clearer as I'd forgotten half the things
- extract the ngrok API base URL (well it should not include /api
but...) to its own variable
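Putting it together, a sketch of the startup sequence (command line and
attempt count are assumptions):

```python
import subprocess
import time
import requests

NGROK_BASE = 'http://localhost:4040/api'  # should not include /api but...

proc = subprocess.Popen(['ngrok', 'start', '--none'])
time.sleep(1)  # initial wait lowered from 2s
for _ in range(10):
    try:
        # explicit availability check: poll the agent API instead of
        # trusting a fixed sleep
        requests.get(NGROK_BASE + '/tunnels').raise_for_status()
        break  # ngrok is up and answering
    except requests.RequestException:
        time.sleep(1)
else:
    proc.terminate()
    raise TimeoutError('ngrok did not come up')
```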
* Remove the forwardport creating PRs in draft, that was mostly to
avoid codeowners triggering but we've removed the github one and
hand-rolled it, so not a concern anymore.
* Prevent merging `draft` PRs, the mergebot rejects approval on draft
PRs and insults people.
TBD (maybe): try to create *conflicting* forward-port PRs in draft so
it's clearer they need to be *fixed*? Issue of not being able to do
that on all private repositories remains so~~
Fixes #500
"Uniquifier" commits were introduced to ensure branches of a staging
on which nothing had been staged would still be rebuilt properly.
This means the branches on which something had been staged technically
never *needed* a uniquifier, strictly speaking. And those led to extra
building, because once the actually staged PRs get pushed from staging
to their final destination the tip is an unknown commit to the runbot,
which needs to rebuild it instead of being able to just use the
staging it already has.
Thus only add the uniquifier where it *might* be necessary:
technically the runbot should now manage this use case much better,
however there are still issues like an ancillary build working with
the same branch tip (e.g. the "current master") and sending a failure
result which would fail the entire staging. The uniquifier guards
against this issue.
Also update rebase semantics to always update the *commit date* of the
rebased commits: this ensures the tip commit is always "recent" in the
case of a rebase-ff (which is common as that's what single-commit PRs
do), as the runbot may skip commits it considers "old".
Also update some of the utility methods around repos / commits to be
simpler, and avoid assuming the result is JSON-decodable (sometimes it
is not).
Also update the handling of commit statuses using postgres' ON
CONFLICT and jsonb support, hopefully this improves (or even fixes)
the serialization errors. Should be compatible with 9.5 onwards which
is *ancient* at this point.
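A guess at the shape of the upsert (table and column names are made up),
merging the new status atomically instead of the read-modify-write cycle
that triggered the serialization errors:

```python
import json

def update_status(cr, sha, context, status):
    # `||` on jsonb and ON CONFLICT both require postgres 9.5+
    cr.execute("""
        INSERT INTO commit_statuses (sha, statuses)
        VALUES (%s, %s::jsonb)
        ON CONFLICT (sha) DO UPDATE
            SET statuses = commit_statuses.statuses || EXCLUDED.statuses
    """, [sha, json.dumps({context: status})])
```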
Fixes #509
a45f7260fa had intended to use the
original authorship information for conflict commits even if there were
multiple commits, as long as there was only one author (/ committer)
for the entire sequence.
Sadly the deduplication was buggy, as it took the *authorship date*
into account, which basically ensured commits would never be considered
as having the same authorship outside of tests (where it was possible
for commits to be created in the same second).
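A sketch of the corrected deduplication, comparing only name and email so
the dates can no longer break the grouping:

```python
def single_authorship(commits):
    # deliberately ignore the authorship date
    authors = {(c['author']['name'], c['author']['email']) for c in commits}
    return authors.pop() if len(authors) == 1 else None
```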
Related to #505
* The repo would only be registered at the very end of the creation,
meaning an error *during* the repo creation (e.g. while uploading the
first blob or setting up webhooks) would leave the repository
undeleted. Register the repository as soon as we know it was
created, in order to correctly dispose of it afterwards.
* Migrate logging.warning call to warnings.warn on repository deletion
failure: pytest will print warnings during its reporting, not so for
log warnings (?)
Apparently I wrote this when I was a dumbshit and did not think that
Python sets are implemented completely separately from dicts
and *remain unordered*.
As a result, the simplistic set union would not properly conserve
record order, which it should.
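One order-preserving way to do the union is to route it through a dict,
whose keys do preserve insertion order (3.7+):

```python
def union(a, b):
    # dict keys keep insertion order, set elements don't
    return list(dict.fromkeys([*a, *b]))
```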
Apparently the "reason phrase" has been removed from HTTP/2 and github
has enabled HTTP/2, meaning responses may or may not contain a reason
phrase depending on whether they're HTTP/1.1 or HTTP/2. Now I don't
understand how this triggers, because requests isn't supposed to
support HTTP/2, and in a shell I do get a reason, but w/e.
Anyway the reason is used by a few tests to check more specifically
for the response they're getting, as that's used as the assertion
message by `get_ref`.
I guess I could replace the AssertionError with an HTTPError
but... w/e. Seems simpler to just fall back to the generic reason based
on the status code. It fixes both failing
tests (test_straightforward_flow and test_delete_normal) with very
little change.
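The fallback can be as small as (sketch):

```python
import http.client

def reason(response):
    # derive a generic phrase from the status code when HTTP/2
    # responses come back without one
    return response.reason or http.client.responses.get(response.status_code, '')
```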
Because github materialises every label change in the
timeline (interspersed with comments), the increasing label churn
contributes to PRs being difficult to read and review.
This change removes the update of labels on PRs; instead the mergebot
will automatically send a comment to newly created PRs, serving as a
notification that the PR was noticed & providing a link to the
mergebot's dashboard for that PR, where users should be able to see the
PR state in detail in case they wonder what's what.
Lots of tests had to be edited to:
- remove any check on the labels of the PR
- add checks on the PR dashboard (to ensure that they're at least on
the correct "view")
- add a helper to handle the comment now added to every PR by the 'bot
- since that helper is needed by both mergebot and forwardbot, the
utils modules were unified and moved out of the odoo modules
Probably relevant note: no test was added for the dashboard
ACL, though since I had to explicitly unset the group on the repo used
for tests for things to work it looks to me like it at least excludes
people just fine.
Fixes #419
Convert overridable CI to an m2m from partners: it's significantly
more convenient to manipulate, as multiple users can (and likely will)
have access to the same overrides. Add a name_search so the override
is easy to find from a partner, and provide a view for the
overrides (with partners as tags).
Also make the repository optional on CI overrides.
Fixes #420
Adds an `override` mergebot command. The ability to override is set on
an individual per-context per-repository basis, similar to but
independent from review rights. That is, a given individual may be
able to override the status X on repository A and unable to do so on
repository B.
Overrides are stored in the same format as regular statuses, but
independent from them in order to persist them across builds.
Only PR statuses can be overridden, statuses which are overridable on
PRs would simply not be required on stagings.
An alternative to implementing this feature in the mergebot would be
to add it to individual status-generating tools on a per-need
basis.
Pros of that alternative:
* display the correct status on PRs, currently the PR will be failing
status-wise (on github) but correct as far as the mergebot is
concerned
* remove complexity from the mergebot
Cons of that alternative:
* each status-generating tool would have to implement some sort of ACL
system
* each status-generating tool would have to receive & parse PR
comments
* each status-generating tool would have to maintain per-pr state in
order to track overrides
Some sort of helper library / framework ought to make that rather easy
though. It could also be linked into the central provisioning system
thing.
Closes #376
Having the required statuses be a mere list of contexts has become a
bit too limiting for our needs as it doesn't allow e.g. adding new
required statuses on only some branches of a repository (e.g. only
master), nor does it allow putting checks on only branches, or only
stagings, which would be useful for overridable checks and the like,
or for checks which only make sense linked to a specific revision
range (e.g. "incremental" linting which would only check whatever's
been modified in a PR).
Split the required statuses into a separate set of objects, any of
which can be separately marked as applying only to some branches (no
branch = all branches).
Fixes #382
With a concurrency of 3 or more, it ends up being pretty easy to hit
github's rate limit (5000 requests/h), at which point the run hits a
cascading failure where every test from thereon blows up due to the
rate limiting.
Add handling for that case in some of the early github-querying
fixtures, so they can wait for the rate limit to reset.
This increases the need for a proper fake github thingie I could run
on a per-test basis.
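A sketch of that handling (helper name is made up), using the rate-limit
headers github sends back:

```python
import time

def wait_for_ratelimit(response):
    # github signals exhaustion with a 403 and X-RateLimit-Remaining: 0
    if response.status_code == 403 \
            and response.headers.get('X-RateLimit-Remaining') == '0':
        reset = int(response.headers['X-RateLimit-Reset'])  # epoch seconds
        time.sleep(max(0, reset - time.time()) + 1)
        return True  # the caller should retry the request
    return False
```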
The logic of the partner merge wizard is to collect all relevant data
from source partners, write them to a destination partner, then remove
the sources.
This... doesn't work when the field in question has a UNIQUE
constraint (like github_login), because it's going to copy the value
from a source onto a dest, which will blow the constraint, and so the
copy fails. In that case the user first has to *move over* the unique
field's value before they can use the wizard.
Just fix for the github login: take all sources, remove (and store)
their github logins, then write the login onto the dst.
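As a sketch (method name hypothetical, `github_login` per the description):

```python
def _merge_github_login(src_partners, dst_partner):
    logins = src_partners.mapped('github_login')
    src_partners.write({'github_login': False})  # free the unique value(s)
    # then write a (the) stored login onto the destination
    login = next((l for l in logins if l), False)
    dst_partner.write({'github_login': login})
```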
An alternative would have been to *defer* the constraint, however:
* it only works on unique constraints, not unique indexes
* it requires the constraint to be declared DEFERRABLE
Closes #301
This is useful as the author of the original PR doesn't necessarily
have (write) access to the repository where the forward-port PR was
created. As a result, while they can r+ the PR they're unable to close
it (via github's interface).
Since the forwardport bot created the PR, it can also close it, which
seems like a useful feature.
Closes #341
Rather than trying to fix up the various bits where we search and
wondering what index we should be using, make the column a CIText.
For the mergebot the main use case is properly handling
delegate=XXX: currently if XXX is not a case-sensitive match we're
going to create a new partner with the new github login and
give *them* delegation, so the delegation won't work correctly for its
intended target.
Also try to install the citext extension if it's not in the database,
and run the database-creation process with `check=True` so if that
fails we properly bubble up the error and don't try to run tests on a
corrupted / broken DB.
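A sketch of the extension bootstrap (table name per odoo's res_partner, the
rest assumed):

```python
def setup_citext(cr):
    cr.execute("CREATE EXTENSION IF NOT EXISTS citext")
    # case-insensitive matching for e.g. delegate=XXX lookups
    cr.execute("ALTER TABLE res_partner ALTER COLUMN github_login TYPE citext")
```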
Fixes #318
As the odds of having more projects, or more repos with different
requirements within the same project, increase, so does the need for
different sets of reviewers for different repositories.
As a result, rather than be trivial boolean flags the review info
should probably depend on the user / partner and the repo. Turns out
the permission checks had already been extracted into their own
function so most of the mess comes from testing utilities which went
and configured their review rights as needed.
Incidentally it might be that the test suite could just use something
like a sequence of commoditized accounts which get configured as
needed and not even looked at unless they're used.
If a new branch is added to a project, there's an issue with *ongoing*
forward ports (forward ports which were not merged before the branch
was forked from an existing one): the new branch gets "skipped" and
might be missing some fixes until those are noticed and backported.
This commit hooks into updating projects to try and see if the update
consists of adding a branch inside the sequence, in which case it
tries to find the FP sequences to update and queues up new
"intermediate" forward ports to insert into the existing sequences.
Note: had to increase the cron socket limit to 2 minutes, as 1 minute
blew up the big staging cron in the test (after all the forward-port
PRs are approved).
Fixes #262
[FIX]
At some point pytest added support for dataclass & attrs introspection
by looking up some specific meta-fields when trying to format
objects (after an assertion fails).
It tries to access the attributes and falls back to something else if
it gets an AttributeError, but apparently falls over if it gets
something else, which is what would happen here as read() would
generate a ValueError which got re-raised as-is on the client
side. However pytest doesn't really make the issue clear, and the
logging from RPC likely got lost in the noise from the github logging.
The fix is to simply convert errors from read() into proper
AttributeErrors, and blacklist fields we know make no sense, to avoid
confusing tracebacks in the log.
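A sketch of that conversion on the proxy object (the blacklist contents here
are the meta-fields pytest probes; the rest is assumed):

```python
BLACKLIST = {'__dataclass_fields__', '__attrs_attrs__'}

def __getattr__(self, name):
    if name in BLACKLIST:
        raise AttributeError(name)  # known to make no sense here
    try:
        return self.read([name])[0][name]
    except ValueError as e:
        # pytest's introspection expects AttributeError, not ValueError
        raise AttributeError(name) from e
```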
Mergebot & forwardbot have ultra-verbose logging of all github
interactions in order to better understand what happens exactly when
there are issues with gh integration (and/or provide to GH support).
However in most cases this is a pain in the ass when reviewing test
logs. So suppress these github_requests logs by default when testing.
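A minimal sketch of the default suppression (the real hook-up presumably
lives in the pytest setup):

```python
import logging

# silence the ultra-verbose github interaction logs unless explicitly
# re-enabled for debugging
logging.getLogger('github_requests').setLevel(logging.WARNING)
```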
The pytest suite had been partially unified between mergebot and
forwardport but because of session-scoped modules it could not run
across those.
Make the db cache lazy and able to cache multiple databases, and move
the "current required module" to function scope; this way things
should (and seem to) work properly on runs involving mergebot & fwbot.
Next step: xdist! (need to randomise repo names for that, probably).
Randomise the names of the created repositories so they don't collide
and lead to odd results when running concurrent test cases which
specify the same repo name (a common property).
As a result, ignore the "no delete" flag for creation: there should be
no way to land on a pre-existing repo name even if we didn't clean
them up.
Also stagger the check of a running ngrok process: when pytest starts
its worker processes, all workers will run the tunnel fixture, and
since the ngrok process takes some time to get into a stable run state
chances are multiple workers will fail to connect and try to start
ngrok concurrently, which blows up as ngrok just kills the extra
processes instead of merging / proxying into an existing session. A
proper lockfile would probably be better but...
Fixes #297
If a PR is *merged*, enqueue it for deletion (with a 2-week delay).
Mainly to avoid FW branches staying around long after they've been
merged (and possibly eventually closed?), this will also clean up
regular merged branches, including historical merges forgotten by
their author.
Fixes #230
* add a sorted method on fake models
* fix recordset equality to ignore ids order
* when creating commits on a ref, add a param to only *update* the ref
(forcefully): when simulating a force-push we don't want to *create*
a ref as that might silently be done in the wrong repository entirely
* fix pytest.skip call at the module level, not sure where it came
from and why I missed it until now
When posting a reminder that there are open / waiting forward ports on
a source PR, also post *which* PRs those are.
While at it, move the cron code into a proper python file (so we can
use stuff from odoo.tools), and fix display_name so we can use
display_name directly as a github ref ({owner}/{repo}#{number}). This
impacts log-grepping but it seems like an improvement nonetheless.
Closes odoo/runbot#228
The fw-bot testing API should improve the perfs of mergebot tests
somewhat (less waiting around for instance).
The code has been updated to the bare minimum (context-managing repos,
change to PRs and replacing rolenames by explicit token provisions)
but extra facilities were used to avoid changing *everything*
e.g. make_commit (singular), automatic generation of PR refs, ...
The tests should eventually be updated to remove these.
Also remove the local fake / mock. Being so much faster is a huge
draw, but I don't really want to spend more time updating it,
especially when fwbot doesn't get to take advantage. A local /
lightweight fake github (as an external service over http) might
eventually be a good idea though, and more applicable (including to
third-parties).
Converge the pytest setups of runbot_merge and forwardport a bit
more (the goal is obviously to eventually share the infrastructure so
they run the same way).
Running multiple ngrok instances concurrently (OOTB and without
shenanigans) is only allowed from Pro and up. However multiple tunnels
through a single ngrok instance are allowed
-> when tunneling through ngrok, start the process without any tunnel,
use the API to create then remove the local tunnel, and shut down ngrok
IFF there's no tunnel left.
There are plenty of race conditions, but given how slow the tests are
when they involve github, that's probably not an issue.
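A sketch of the tunnel lifecycle against ngrok's local agent API (port and
names assumed):

```python
import requests

NGROK_API = 'http://localhost:4040/api'

def open_tunnel(name, port):
    r = requests.post(f'{NGROK_API}/tunnels', json={
        'name': name, 'proto': 'http', 'addr': str(port),
    })
    r.raise_for_status()
    return r.json()['public_url']

def close_tunnel(name):
    requests.delete(f'{NGROK_API}/tunnels/{name}').raise_for_status()
    # tell the caller whether ngrok itself can now be shut down
    return not requests.get(f'{NGROK_API}/tunnels').json()['tunnels']
```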
* extract method to create a PR object from a github result (from the
PR endpoint)
* move some of the remote's fixtures to a global conftest (so they can
be reused in the forwardbot)