Before this, if creating the DB failed the next worker would find
themselves with an empty `template-` file, so they would take the path
to *create* a template database, which would fail when trying to
`mkdir` the `shared-` directory as the directory would already exist.
The problem with this is that the module would likely fail immediately
and take down its worker, triggering a test suite failure for that
worker and likely hiding the *true* cause of the failure (not sure why
the worker which failed originally isn't the first one to trigger a
failure, but there you go).
By skipping the tests instead, we provide a lot more opportunity for
the "true" failure to be revealed.
`read_tracking_value` likely never worked correctly in both branches,
but it worked in 15.0 because its failures to do anything useful
happened to end up in the right case?
The mergebot tests have always been pretty gentle on system load, which
is nice; however, it's only by looking at the list of longest tests
that I realised / remembered the hook wait duration is 10 seconds for
the benefit of github, which doesn't really matter locally. This means
on interaction / cron-heavy tests the test might only be using on the
order of 10% CPU, which is a waste of time.
TBF this is easily compensated by increasing the concurrency of the
test suite (e.g. from 16 to 32 when I switched machines), but it seems
just as, if not more, sensible to lower the webhook wait delay to
something more reasonable. 1s seems to be a good fit here: on my new
computer at n=16 it leads to the test suite running in 15mn at 600% CPU
(which is pretty good on a 6/12 CPU as it loads the system heavily but
doesn't completely bog it down).
Reducing it to 0.5s, the test suite takes the same duration but CPU
load increases to 770%, and errors creep up, likely a mix of
concurrency issues in the DB and dummy-central sending webhooks too
slowly as we compete with it for CPU resources (could actually make
sense to restrict the number of threads tokio can use). Reducing
concurrency could make this work better, but I think at this point
we're in a pretty good state; it's even somewhat reasonable to run the
test suite sequentially (taking about 1h10 but being functionally
invisible in terms of load).
Unstaged changes can be useful or necessary for some tasks,
e.g. an absolute emergency (where even faking the state of a staging is
not really desirable, if that's even possible anymore), or changes
which are so broad they're difficult to stage (e.g. t10s updates).
Add a new object which serves as a queue of patches to apply directly,
with support for either text patches (udiff-style, out of git show or
format-patch) or commits to cherry-pick. In the former case, the part
of the show / format-patch output before the diff itself is used for
the commit metadata (author, committer, dates, message), whereas for
the commit version the commit itself is reused as-is.
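As a rough illustration of the text-patch case (the function name and
the exact splitting rule are assumptions, not the actual
implementation), the metadata / diff split amounts to:

    def split_patch(patch_text: str) -> tuple[str, str]:
        """Split a `git show` / `format-patch` style patch into the
        metadata header (author, dates, message) and the diff to apply,
        assuming the diff starts at the first `diff --git` line.
        """
        idx = patch_text.index('diff --git ')
        return patch_text[:idx], patch_text[idx:]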
Applied patches are simply disabled for traceability.
Fixes#926
Forgot to declare `defaultstatuses` when it was introduced in
f367a64481. pytest righteously warns
when markers are not declared, so do that (turns out running pytest
with `-We` can be useful, though it also requires `-W
ignore::DeprecationWarning::importlib._bootstrap` because reportlab
does weird shit).
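For reference, a marker can be declared programmatically from a
conftest; a minimal sketch (the description string is made up):

    def pytest_configure(config):
        # register the marker so pytest stops warning about it being unknown
        config.addinivalue_line(
            "markers",
            "defaultstatuses: placeholder description for the marker",
        )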
It's not entirely clear, but I assume that if / when a test fails or is
aborted (via C-c) pytest kills the stdout somehow. Either way this
causes a bunch of `OSError`s to be logged as we try to write to
closed pipes.
If that occurs, assume the test is gone and just bail out of the
thread.
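A minimal sketch of the idea (the function and stream names are
illustrative, not the actual fixture code):

    import sys

    def forward_output(stream):
        """Copy the subprocess' output to our own stdout; if pytest has
        closed the pipe under us the test is gone, so just bail out of
        the thread instead of logging `OSError`s.
        """
        for line in stream:
            try:
                sys.stdout.write(line)
            except (OSError, ValueError):  # ValueError: I/O on closed file
                return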
A few crons (e.g. database maintenance) remain on timers, but most of
the work crons should be migrated to triggers.
The purpose of this change is mostly to reduce log spam, as crons get
triggered very frequently (most of them every minute) but with little
to do. Secondary purposes are getting experience with cron triggers,
and better sussing out dependencies between actions and crons /
queues.
Fixes#797
Since every test runs on a fresh database, on the first `run_crons`
every single cron in the db will run even though almost none of them
are relevant.
Aside from the slight inefficiency, this creates unnecessary extra
garbage in the test logs.
By setting the `nextcall` of all crons to infinity in the template we
avoid this issue: only triggered crons (or crons whose nextcall we set
ourselves) will run during `run_crons` calls.
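A sketch of the template tweak, assuming an `env` handle on the
template database (the date standing in for "infinity" is arbitrary):

    # push every cron far into the future so only explicitly triggered
    # crons (or crons whose nextcall the tests set) run during tests
    env['ir.cron'].search([]).write({'nextcall': '4000-01-01 00:00:00'})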
This requires adjusting the branch cleanup cron slightly: it didn't
correctly handle the initial run (`lastcall` being false).
With the trigger-ification pretty much complete, the only cron that's
still routinely triggered explicitly is the cross-PR check, and it's
the same one in all modules, so there's no reason to keep an
overridable fixture.
Mergebot / forwardport crons need to run in a specific order to flow
into one another correctly. The default ordering being
unspecified, it was not possible to use the normal cron
runner (instead of the external driver running crons in sequence one
at a time). This can be fixed by setting *sequences* on crons, as the
cron runner (`_process_jobs`) will use that order to acquire and run
crons.
Also override `_process_jobs`, however: the built-in cron runner
fetches a static list of ready crons, then runs through it.
This is fine for the normal situation, where the cron runner runs in a
loop anyway, but it's an issue for the tests, as we expect that cron A
can trigger cron B, and we want cron B to run *right now* even if it
hadn't been triggered before cron A ran.
We can replace `_process_jobs` with a cut-down version which does
that (cut down because we don't need most of the error handling /
resilience: there are no concurrent workers, there's no module being
installed, versions must match, ...). This allows e.g. the cron
propagating commit statuses to trigger the staging cron, and both will
run within the same `run_crons` session.
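Very roughly, the cut-down runner boils down to something like this
sketch (heavily simplified: it ignores the trigger mechanism and error
handling, and the helper name is not the real implementation):

    from odoo import fields

    def run_ready_crons(env):
        """Keep picking the next ready cron (in sequence order) until
        nothing is ready anymore, so a cron triggered by a previous one
        still runs within the same pass.
        """
        while True:
            ready = env['ir.cron'].search(
                [('nextcall', '<=', fields.Datetime.now())],
            ).sorted('sequence')  # `sequence` assumed readable on crons
            if not ready:
                return
            cron = ready[0]
            cron.method_direct_trigger()
            # push the cron back out so it isn't picked up again right away
            cron.nextcall = fields.Datetime.add(fields.Datetime.now(), years=100)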
Something I didn't touch is that `_process_jobs` internally creates
completely new environments, so there is no way to pass context into
the cron jobs anymore (whereas it works with `method_direct_trigger`);
this means the context values have to be shunted elsewhere for that
purpose, which is gross. But even though I'm replacing `_process_jobs`,
that seemed a bit too much of a change in cron execution semantics, so
I left it out.
While at it though, silence the spammy `py.warnings` stuff I can't do
much about.
- add context, in case that allows tracking overlap between tests
eventually
- use `-m` to run odoo, requires backporting `odoo/__main__.py`
but I'm not quite sure how I was running it previously, possibly
via odoo-bin? It certainly doesn't seem to work out with a global
`odoo` helper
Finally went to fix the issue that web.base.url is not correctly set
up by the test suite and thus things like `PullRequest.url` don't
return the right URL when running tests.
The answer I re-discovered yet again is that one only has to log in as
an admin (`base.group_system`) while providing a `base_location` env
key.
This is done automatically when logging in via the frontend, under the
assumption that that is the "correct" URL, but when logging in via RPC
it has to be done manually, as that's more internal and we might
e.g. use scripts on the same server to manipulate the instance.
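Over XML-RPC that amounts to passing a `user_agent_env` dict when
authenticating; a sketch (URL, database and credentials below are
placeholders):

    import xmlrpc.client

    common = xmlrpc.client.ServerProxy('http://localhost:8069/xmlrpc/2/common')
    # if the user belongs to base.group_system, Odoo picks up
    # `base_location` and uses it to set web.base.url
    uid = common.authenticate(
        'testdb', 'admin', 'admin',
        {'base_location': 'http://localhost:8069'},
    )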
Previously we would copy *only* the database itself, which is mostly
not an issue, but when trying to debug *pages* using a browser, that
browser tries to load various assets, which are looked up in the
database, then in the filestore, where they are missing.
This is not an issue in and of itself, but it triggers ugly tracebacks
in the logs which is not convenient, and since
2ab06ca96b it fails the test even if the
test would otherwise work.
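A sketch of the extra copy step, using `odoo.tools.config.filestore` to
resolve the paths (the database names are placeholders):

    import shutil
    from odoo.tools import config

    template_db, test_db = 'template-runbot_merge', 'test-runbot_merge-1'
    # copy the template's filestore next to the duplicated database so
    # asset lookups find their attachments
    shutil.copytree(config.filestore(template_db), config.filestore(test_db))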
d4fa1fd353 added tracking to changes
from *comments* (as well as a few hacks around authorship transfer),
however it missed two things:
First, it set the `change-author` during comments handling only, so
changes from the `PullRequest` hook (e.g. open, synchronise, close,
edit) don't get attributed to their actual source, and instead just
fall back to uid(1). This is easy enough to fix, as the `sender` is
always provided; that can be resolved to a partner, which is then set
as the author of whatever changes happen.
Second, I actually missed one of the message hooks: there's both
`_message_log` and `_message_log_batch` and they don't call one
another, so both have to be overridden in order for tracking to be
consistent. In this case, specifically, the *creation* of a tracked
object goes through `_message_log_batch` (since that's a very generic
message and so works on every tracked object created during the
transaction... even though batch has a message per record anyway...)
while *updates* go through `_message_log`.
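Schematically (the model name is assumed and the signatures are
simplified, this is not the actual code), both hooks end up routed
through the resolved author:

    from odoo import models

    class PullRequests(models.Model):
        _inherit = 'runbot_merge.pull_requests'  # model name assumed

        def _message_log(self, **kwargs):
            if author := self.env.context.get('change-author'):
                kwargs.setdefault('author_id', author)
            return super()._message_log(**kwargs)

        def _message_log_batch(self, bodies, **kwargs):
            if author := self.env.context.get('change-author'):
                kwargs.setdefault('author_id', author)
            return super()._message_log_batch(bodies, **kwargs)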
Fixes#895
Seems like a good idea to better keep track of the logs of an Odoo used
for testing, and avoid silently ignoring logged errors.
- intercept odoo's stderr via a pipe; that way we can still write it
  back out and pytest is able to read & buffer it (see the sketch after
  this list). pytest's capfd would not work correctly: it breaks output
  capturing (and printing on failure), and because of the way it hooks
  in it's unable to capture from subprocesses inheriting the standard
  stream, cf pytest-dev/pytest#4428
- update the env fixture to check that the odoo log doesn't have any
exception on failure
- make that check conditional on the `expect_log_errors` marker; this
  way we can mark tests for which we expect errors to be logged, and
  assert that that does happen
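A sketch of the stderr interception (the command line is a placeholder,
and this is a simplification of what the fixture actually does):

    import subprocess
    import sys
    import threading

    odoo_cmd = ['python', '-m', 'odoo']  # placeholder command line
    proc = subprocess.Popen(odoo_cmd, stderr=subprocess.PIPE, text=True)
    captured = []

    def tee():
        # write the child's stderr back out (so pytest can capture it)
        # while keeping a copy to check for logged errors at teardown
        for line in proc.stderr:
            captured.append(line)
            sys.stderr.write(line)

    threading.Thread(target=tee, daemon=True).start()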
Currently webhook secrets are configured per *project*, which is an
issue both because different repositories may have different
administrators (which creates safety concerns), and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary for multiple projects to
share the same secret as well, which in turn exposes that secret to
even more repositories, compounding the first concern.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories; it's cleaner and easier to manage
and set up progressively.
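Functionally the secret is still used the same way, to check github's
`X-Hub-Signature-256` header against the payload, it's just resolved
per repository; roughly (helper name illustrative):

    import hashlib
    import hmac

    def signature_valid(secret: str, payload: bytes, header: str) -> bool:
        """Check github's signature header against the *repository*'s
        own secret rather than a project-wide one.
        """
        expected = 'sha256=' + hmac.new(
            secret.encode(), payload, hashlib.sha256,
        ).hexdigest()
        return hmac.compare_digest(expected, header)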
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as in `forwardport` where it's all ad hoc, the events-sources setup was
just appended as-is. This should probably be cleaned up at some point,
with the various requirements collected and organised into a small set
of fixtures doing the job more uniformly.
Fixes#887
Move staging cancellation to the batch, remove its (complicated)
handling from the PRs.
This loses some precision in the cancellation message, but that could
likely be recovered (in part) by adding more precise checks &
diagnostic extractions in the compute.
When an assertion on a response status fails, it's possible that this
is a 400- or 500-type error which does not yield JSON data at
all (e.g. forgot to start the proxy or dummy), in which case trying to
dump the JSON data is actively harmful as it triggers cascading
failures. And it's not like the output message is formatted, so a
re-structured assertion message is not helpful.
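The gist, illustrated with `requests` (not the actual helper, and the
URL / payload are placeholders):

    import requests

    session = requests.Session()
    r = session.post('http://localhost:8080/hook', json={})  # placeholder
    # only decode the body once we know the request went through; on a
    # 4xx / 5xx just show the raw text instead of blowing up on .json()
    assert r.ok, r.text
    data = r.json()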
It's not amazing, because cross-typing between the different conftests
and utility modules doesn't really work smoothly, so I'm not entirely
convinced it's great.
While at it, inline a few `x.json()` local aliases where the JSONs are
used in exclusive locations, e.g. usually an assertion message on one
side and a result / process on the other.
ngrok 3 scrambled some of the tunnel configuration keys. Most notably,
it replaced the ill-named `bind_tls` by an explicit list of http
`schemes`. Since `bind_tls` was outright *removed*, the fix is
necessary for ngrok to work at all (especially as ngrok 2 is reaching
EOL).
While at it, improve the tunnel setup somewhat:
- remove a fixme which we're probably not going to fix after all
- if we spawn ngrok ourselves, keep the handle around so we can kill
  the process we spawned directly instead of looking it up somewhat
  awkwardly
It has been a consideration for a while, but the pain of subtly
interacting with git via the ignominious CLI kept it back. Then ~~the
fire nation attacked~~ github got more and more tight-fisted (and in
some ways less reliable) with their API.
Staging pretty much just interacts with the git database, so it's both
a facultative github operator (it can just interact with git directly)
and a big consumer of API requests (because the git database endpoints
are very low level so it takes quite a bit of work to do anything
especially when high-level operations like rebase have to be
replicated by hand).
Furthermore, an issue has also been noticed which can be attributed to
using the github API (and that API's reliability getting worse): in
some cases github will fail to propagate a ref update / reset, so when
staging 2 PRs it's possible that the second one is merged on top of
the temporary branch of the first one, yielding a kinda broken commit
(in that it's a merge commit with a broken commit message) instead of
the rebase / squash commit we expected.
As it turns out it's a very old issue, but it only happened very rarely
so it was misattributed and not (sufficiently) guarded against:
- 41bd82244bb976bbd4d4be5e7bd792417c7dae6b (October 8th 2018) was
spotted but thought to be a mergebot issue (might have been one of
the opportunities where ref-checks were added though I can't find
any reference to the commit in the runbot repo).
- 2be25052e147b151d1d8a5bc73cceb351586ce03 (October 15th, 2019) was
missed (or ignored).
- 5a9fe7a7d05a9df7186072a7bffd60c6b428fd0e (July 31st, 2023) was
spotted, but happened at a moment where everything kinda broke
because of github rate-limiting ref updates, so the forensics were
difficult and it was attributed to rate limiting issues.
- f10d03bf0f2e8f88f62a5d8356b84f714196130f (August 24th, 2023) broke
  the camel's back (and the head block): the logs were not too
  interspersed with other garbage, and it was pretty clear that github
  ack'd a ref update, returned the correct oid when checking the ref,
  then returned the wrong oid when fetching it later on.
No Working Copy
===============
The working copy turns out not to be necessary: the plumbing commands
we *need* work just fine on a bare repository.
Working without a WC means we had to reimplement the high-level
operations (rebase) by hand, much as we'd done previously, *but* we
needed to do that anyway: git doesn't seem to provide any way to
retrieve the mapping when rebasing/cherrypicking, and cherrypicking by
commit doesn't work well as it can't really find the *merge base* it
needs.
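As an illustration of the sort of plumbing involved (the `git` wrapper
and refs are made up, but the commands are standard git plumbing that
works on a bare clone):

    import subprocess

    def git(*args, repo='repo.git'):
        return subprocess.run(
            ['git', '-C', repo, *args],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    head = git('rev-parse', 'refs/heads/master')
    tree = git('rev-parse', f'{head}^{{tree}}')
    # create a commit reusing that tree, then point the staging ref at
    # it, all without ever materialising a working copy
    new_commit = git('commit-tree', tree, '-p', head, '-m', 'dummy staging')
    git('update-ref', 'refs/heads/staging.master', new_commit)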
Forward-porting can almost certainly be implemented similarly (with
some overhead); issue #803 has been opened to keep track of the idea.
No TMP
======
The `tmp.` branches are no more: the process of creating stagings is
based entirely around oids. If staging something fails we can just
abandon the oids (they'll be collected by the weekly GC); we only
need to update the staging branches at the very end of the process.
This simplifies things a fair bit.
For now we have stopped checking for visibility / backoff as we're
pushing via git; hopefully that is a more reliable reference than the
API.
Commit Message Formatting
=========================
There's some unfortunate churn in the test, as the handling of
trailing newlines differs between github's APIs and git itself.
Fixes#247
PS: It might be a good idea to use pygit2 instead of the CLI
eventually: the library is typed, which is nice, and it avoids
shelling out (although that's really unlikely to be a major cost).
Before this, when testing in parallel (using xdist) each worker would
create its own template database (per module, so 2) then would copy
the database for each test.
This is pretty inefficient, as the init of a db is quite expensive in
CPU, and when increasing the number of workers (as the test suite is
rather IO-bound) this would trigger a stampede as every worker tried
to create a template at the start of the test suite, leading to
extremely high loads and degraded host performance (e.g. 16 workers
would cause a load of 20 on a 4-core / 8-thread machine, which makes
it difficult to use).
Instead we can have a lockfile at a known location of the filesystem:
the first worker to need a template for a module install locks it,
creates the template, then writes the template's name to the
lockfile.
Every other worker can then lock the lockfile and read the name out,
using that db for duplication.
Note: this needs to use `os.open` because the modes of `open`
apparently can't express "open at offset 0 for reading, or create for
writing": `r+` refuses to create the file, `w+` still truncates, and
`a+` is undocumented and might not allow seeking back to the start on
all systems, so it's better to avoid it.
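A sketch of the dance (the lock path and the template-creation helper
are placeholders):

    import fcntl
    import os

    def create_template_db():
        """Hypothetical helper standing in for the actual template setup."""
        return 'template-runbot_merge-12345'

    # O_CREAT | O_RDWR: create the file if missing, otherwise open it
    # for read/write at offset 0 -- which no stdio mode spells out
    fd = os.open('/tmp/template-runbot_merge.lock', os.O_CREAT | os.O_RDWR)
    with os.fdopen(fd, 'r+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # first worker through gets to create
        template = f.read().strip()
        if not template:
            template = create_template_db()
            f.write(template)
            f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)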
The implementation would be simplified by using `lockfile` but that's
an additional dependency plus it's deprecated. It recommends
`fasteners` but that seems to suck (not clear if storing stuff in the
lockfile is supported, it opens the lockfile in append mode). Here the
lockfiles are sufficient to do the entire thing.
Conveniently, this turns out to improve *both* walltime *and* CPU time
compared to the original version, likely because while workers now
have to wait on whoever is creating the template, they're not
competing with it for resources.
- github logins are case-insensitive, and while the db field is CI the
  dict in which partners are stored for matching is not, and the
  caller may not preserve casing.
  Thus it's necessary to check the casefolded database values against
  casefolded parameters, rather than matching exactly (see the sketch
  after this list).
- users may get disabled by mistake or when one leaves the project,
they may also get switched from internal to portal, therefore it is
necessary to re-enable and re-enroll them if they come back.
- while at it remove the user's email when they depart, as they likely
use an organisational email which they don't have access to anymore
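The matching tweak from the first point boils down to something like
this (names illustrative):

    def partner_for(partners, login):
        """Match a github login against known partners, comparing
        casefolded values on both sides.
        """
        by_login = {p.github_login.casefold(): p for p in partners}
        return by_login.get(login.casefold())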
Side-note, but remove the limit on the number of users / partners
being created at once: because there are almost no modules in the
mergebot's instance, creating partners goes quite fast (compared to a
full instance), thus the limitation is almost certainly unnecessary
(creating ~300 users seems to take ~450ms).
Fixes #776
1cea247e6c tried to improve staging
checks to avoid staging PRs in the wrong state; however, it had two
issues:
PR state
--------
The process would reset the PR's state to open, but unless the head
was being resync'd it wouldn't re-apply the statuses on the state,
leading to a PR with all-valid statuses, but a missing CI.
Message
-------
The message check didn't compose the PR message the same way PR
creation / update did (it did not trim the title and description
individually, only after concatenation), resulting in a
not-actually-existing divergence getting signaled in the case where
the PR title ends or the description starts with whitespace.
Expand the relevant test, add a utility function to compose a PR
message, and use it everywhere for coherence.
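The helper is essentially the following (its name and exact
normalisation are assumptions):

    def make_message(title, body):
        """Compose a PR message the way PR creation / update does: trim
        the title and the body individually, not just the concatenation.
        """
        title = (title or '').strip()
        body = (body or '').strip()
        return f'{title}\n\n{body}' if body else title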
Also update the logging and reporting to show a diff of all the
updated items (hidden behind a `details` element).
Seems to be a new github weirdness: when forking a repo, if hitting it
fast enough it's apparently possible to see the repository in an
incomplete state (without its content).
Obviously this causes tests to fail.
Complete the post-fork testing by listing the branches of the fork,
and considering the repo created once we see a non-empty list (the
source repo should always have at least one branch, which should be
copied over when forking).
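A sketch of that check, polling github's REST API directly (the retry
count, repository coordinates and token handling are placeholders):

    import time

    import requests

    owner, repo, token = 'some-owner', 'some-repo', 'ghp_placeholder'
    for _ in range(60):
        r = requests.get(
            f'https://api.github.com/repos/{owner}/{repo}/branches',
            headers={'Authorization': f'token {token}'},
        )
        if r.ok and r.json():
            break  # the fork has content, it's usable
        time.sleep(1)
    else:
        raise TimeoutError('fork never got its content')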
The `statuses` field of a staging is always "live" because it's a
computed non-stored field. This is an issue when a staging finishes in
whatever state, then someone gets new statuses sent on one of the head
commits, either by rebuilding (part of) the staging or by just using
the same commit for one of their branches.
This makes the reporting of the main dashboard confusing, as one might
look at a failed staging and see all the required statuses
successful. It also makes post-mortem analysis more complicated as the
logs have to be trawled for what the statuses used to be (and they
don't always tell).
Solve this by storing a snapshot of the statuses the first time a
staging moves away from `pending`, whether it's to success or failure.
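Schematically (the field names are assumptions and the computed value
is assumed JSON-serialisable, this is not the actual implementation),
the snapshot happens on the state transition:

    import json

    from odoo import fields, models

    class Stagings(models.Model):
        _inherit = 'runbot_merge.stagings'  # model name assumed

        statuses_snapshot = fields.Text()  # assumed storage column

        def write(self, vals):
            if vals.get('state') and vals['state'] != 'pending':
                for staging in self.filtered(lambda s: s.state == 'pending'):
                    # freeze the live computed statuses before they change
                    staging.statuses_snapshot = json.dumps(staging.statuses)
            return super().write(vals)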
Fixes#667
Turns out I was running "15.0" only on the runbot; enterprise and
community were still the 14.0 repos, so some of the changes were
missing.
While at it, bundle fixes for 3.10, as that's what Jammy needs, and
the mergebot/15.0 will be running on that.
Runs the test instances of Odoo using `coverage` in parallel mode (see
the invocation sketch after the list below).
- useful for finding out under-tested parts of the code
- because it only instruments mergebot / forwardport, and the tests do
  so much IO, the wallclock performance impact is minimal (~2%
  increase with branch coverage analysis, for an increase in CPU
  of ~20%, for the entire testsuite)
- for reporting, the scattered coverage reports need to be aggregated
  using `coverage combine`, followed by rendering with `coverage
  html`; these work out of the box, no parameterization is necessary
- coverage does not run on the test suite, only the modules under test
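The invocation is along these lines (db name, addons path and the
`--source` values are placeholders; the flags are standard coverage.py
and odoo options):

    import subprocess

    dbname, addons_path = 'test_db', 'addons'  # placeholders
    subprocess.Popen([
        'coverage', 'run', '--parallel-mode', '--branch',
        '--source=runbot_merge,forwardport',
        '-m', 'odoo',
        '-d', dbname, '--addons-path', addons_path,
    ])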
Not sure how I missed this, but apparently pytest fixtures can't both
return and yield: it's one or the other.
Which makes a lot of sense, but it means the tunnel fixture was broken
when using local webhooks (= no tunnel), as it returned the local url
rather than yielding it.
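i.e. the no-tunnel branch has to yield too; roughly (the option name,
spawn command and public url below are placeholders):

    import subprocess

    import pytest

    @pytest.fixture(scope='session')
    def tunnel(request):
        if not request.config.getoption('--tunnel'):  # option name assumed
            yield 'http://localhost:8069'  # local webhooks, no tunnel
            return
        # spawn the tunnel, yield its public url, then clean up
        proc = subprocess.Popen(['ngrok', 'http', '8069'])
        try:
            yield 'https://example.ngrok.app'  # placeholder public url
        finally:
            proc.terminate()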
With dummy central now kinda sorta working, it makes sense to avoid going
through a tunnel for webhooks, especially as ngrok has lowered the number of
tunnels to 2 per agent for free accounts (or possibly fixed the bug which made
it not be enforced?)
I'm surprised this ever worked; I guess concurrent tests stopped
working long before that? Or I misunderstood some of the historical
failures as transient?
During the cleanup of the forwardport test, I'd empty out
`user_cache_dir('forwardport') / owner`, except the owner is always
the same (more or less), so all the tests check out their repos (and
working copies) in the same directory. If one test is cleaning up
while another is performing a forward port, the second will blow up.
Also move the filestore to a tempdir, especially during creation of
the template db: it gets leaked, so over time that generates gigabytes
of data which never get cleaned up. The template db's filestore is
only "necessary" during the creation of the template; once the
template's been created it's of no use and won't be copied to create
the test dbs (though it could be, I guess).
The previous version of the code assumed `pr['body']` is always a
string, which is not correct: when the PR body is emptied, the body
itself is removed (its value is `None`).
Add a case for this in the PR edition test, and avoid blowing up (or
adding empty newlines) when the PR body is empty. For PR creation this
issue was fixed in c2db5659d8 but
apparently I missed that the exact same issue occurs just a few lines
above.
Also turns out github does *not* send change information when the body
is updated from (or to?) `None`, so don't even bother with that; just
check every time whether the overall message has been updated.
Fixes#629
Old messages were quite inconsistent in their pinging of the PR author
and reviewer.
Reviewed the messages (probably missed some, but...) and tried to more
consistently ping when the feedback requires some sort of action in
order to proceed.
Fixes#592
A few fixes and improvements after testing the feature:
- ensure the provisioned users are created as internal (not portal)
- assume oauth is installed and just crash if it's not
- handle a user not having an email (ignore)
- return a value from the json handler, otherwise JsonRequest sends no
  payload, which is *weird*
New accounts endpoint such that the SSO can push new pre-configured
users / employees directly. This lowers the maintenance burden.
Also remove one of the source partners from the merge test, as the
ordering seems wonky for unclear reasons, leading to random failures
of that test.