If the stagings are going to be created locally (via a git working
copy rather than the github API), the mergebot part needs to have
access to the cache, so move the cache over. Also move the maintenance
cron.
In an extremely minor way, this prefigures the (hopeful) eventual
merging of the ~~planes~~ modules.
- add formatting for a bunch of backend objects
- add cross-links in order to allow top-level navigation between objects,
e.g. project -> branch -> staging -> PR, with breadcrumbs instead of
shitty dialog boxes
Relates to #802
When I updated the status storage (including `previous_failure`), for
some reason I didn't just migrate from the old to the new format, but
added bridge functions instead.
This is not really necessary (or useful), so convert all the legacy
data and remove the conversion helpers.
Relates to #802
In 81ce4ea02b the delta for PRs being
listed on the `/forwardport/outstanding` page was increased from 3
days to 7 (1 week), however the warning box on the home page still
used the old cutoff, leading to an inconsistency between the two and
an effective severe overcounting: the reason the cutoff was increased
to a week is that forward ports can take a while, or the author /
reviewer can be a touch busy at the end of the week, so 3~4 days are
routine when a PR is merged on a Thursday or Friday (and even worse if
there are bank holidays in the mix).
Mostly a temporary safety feature after the events of 07-31: it's
still not clear whether that was a one-off issue or a change in
policy (I was not able to reproduce locally even doing several
set_refs a second) and the gh support is not super talkative, but it
probably doesn't hurt to commit the workaround until #247 gets
implemented.
On 2023-07-31, around 08:30 UTC, `set_ref` started failing, a lot
(although oddly enough not continuously), with the unhelpful message
that
> 422: Reference cannot be updated
This basically broke all stagings, until a workaround was implemented
by adding a 1s sleep before `set_ref` to ensure no more than 1
`set_ref` per second, which kinda sorta has been the github
recommendation forever but had never been an issue
before. Contributing to this suspicion is that in late 2022, the
documentation of error 422 on `PATCH git/refs/{ref}` was updated to:
> Validation failed, or the endpoint has been spammed.
It would still be nice if GH was explicit about it and sent a 429 instead.
Technically the recommendation is:
> If you're making a large number of POST, PATCH, PUT, or DELETE
> requests for a single user or client ID, wait at least one second
> between each request.
So... actually implement that, on a per-worker basis: as these are for
the most part serial processes (e.g. crons), we can still get above
the rate limit due to concurrent crons but it should be less likely.
Also take `Retry-After` into account, can't hurt, though we're
supposed to retry just the request rather than abort the entire
thing. Maybe a future update can improve this handling.
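This isn't the actual mergebot code, just a minimal sketch of what such a
per-worker throttle with crude `Retry-After` handling could look like,
assuming a `requests.Session` subclass (names are illustrative):

```python
import time

import requests


class ThrottledSession(requests.Session):
    """Sketch: at most one mutating request per second per worker, plus
    a crude honoring of Retry-After when the server sends one."""

    MUTATING = {'POST', 'PATCH', 'PUT', 'DELETE'}

    def __init__(self):
        super().__init__()
        self._last_mutation = 0.0

    def request(self, method, url, **kwargs):
        if method.upper() in self.MUTATING:
            # wait until at least 1s has elapsed since the previous mutation
            wait = self._last_mutation + 1.0 - time.monotonic()
            if wait > 0:
                time.sleep(wait)
            self._last_mutation = time.monotonic()

        response = super().request(method, url, **kwargs)

        # sleep off Retry-After; per the spec we should then re-issue the
        # request, which this sketch doesn't do
        retry_after = response.headers.get('Retry-After')
        if retry_after and retry_after.isdigit():
            time.sleep(int(retry_after))
        return response
```

A worker would use one such session for all its GitHub calls, which keeps
the throttle per-process as described above.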
It would also be nice to take `X-RateLimit` into account, although
that's supposed to apply to *all* requests so we'd need a second,
separate timestamp to track it. Technically that's probably also the
case for `Retry-After`. And fixing #247 should cut down drastically on
API traffic, as staging is a very API-intensive process, especially
with the sanity checks we had to add: these days we might be at 4
calls per commit per PR, and up to 80 PRs per staging (5 repositories
and 16 batches per staging), with 13 live branches (though
realistically only 6-7 have significant traffic, and only 1~2 get
close to filling their staging slots).
`/runbot_merge/stagings`
========================
This endpoint is a reverse lookup from any number of commits to a
(number of) staging(s):
- it takes a list of commit hashes as either the `commits` or the
`heads` keyword parameter
- it then returns the stagings which have *all* these commits as
respectively commits or heads; if providing all the commits for a
project the result should always be unique (if any)
- `commits` are the merged commits, aka the stuff which ends up in the
actual branches
- `heads` are the staging heads, aka the commits at the tip of the
`staging.$name` branches, those may be the same as the corresponding
commit, or might be deduplicator commits which get discarded on
success
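As a hedged usage sketch — the host is made up, and whether the route
expects plain query parameters or an Odoo-style JSON-RPC body isn't
specified here, so this assumes the latter:

```python
import requests

# hypothetical host and truncated hashes, for illustration only
resp = requests.post(
    'https://mergebot.example.com/runbot_merge/stagings',
    json={'jsonrpc': '2.0', 'method': 'call', 'params': {
        # merged commits; use 'heads' instead to match staging heads
        'commits': ['0123abc', '4567def'],
    }},
)
print(resp.json())  # stagings containing *all* of the given commits
```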
`/runbot_merge/stagings/:id`
============================
Returns a list of all PRs in the staging, grouped by batch (aka PRs
which have the same label and must be merged together).
For each PR, the `repository` name, the `number`, and the `name` (in
the form `$repository#$number`) are returned.
`/runbot_merge/stagings/:id1/:id2`
==================================
Returns a list of all the *successfully merged* stagings between `id1`
and `id2`, from oldest to most recent. Individual records have the
form:
- `staging` is the id of the staging
- `prs` is the contents of the previous endpoint (a list of PRs
grouped by batch)
`id1` *must* be lower than `id2`.
By default, this endpoint is inclusive on both ends; the
`include_from` and / or `include_to` parameters can be passed as
`False` to exclude the corresponding bound from the result.
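And a similar hedged sketch for the two id-based endpoints (same
hypothetical host as above, made-up staging ids):

```python
import requests

BASE = 'https://mergebot.example.com/runbot_merge/stagings'  # hypothetical

def call(url, **params):
    # same assumption as above: an Odoo-style JSON-RPC body
    r = requests.post(url, json={
        'jsonrpc': '2.0', 'method': 'call', 'params': params})
    return r.json().get('result')

# all PRs of one staging, grouped by batch
print(call(f'{BASE}/1234'))

# successfully merged stagings between two ids, excluding the lower bound
print(call(f'{BASE}/1234/1250', include_from=False))
```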
Related to #768
`auto_session_tracking` causes issues when not specified on the super
old version of the client which is available on Ubuntu.
Also disable tracing as it seems less useful than hoped for, and I've
not been using what's been collected so far.
Currently the heads of a staging (both staging heads and merged heads)
are just JSON data on the staging itself. Historically this was
convenient as the heads were mostly of use to the staging process, and
thus accessed directly through the staging essentially exclusively.
However this makes finding stagings from merged commits e.g. for
forensic research almost impossible, because querying based on
the *values* of a JSON map is expensive, and indexing it is difficult.
To make this use case more feasible, split the `heads` field into two
join tables, one for the staging heads and one for the merged heads;
this makes looking for stagings by commits much more
efficient (although the queries may not be trivial). Also add two
utility RPC methods, so that it's possible to query stagings
reasonably easily and efficiently based on a set of commits (branch
heads).
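A minimal sketch of what that split could look like as Odoo fields (the
model, relation table, and field names here are assumptions for
illustration, not necessarily the real ones):

```python
from odoo import fields, models


class Stagings(models.Model):
    _inherit = 'runbot_merge.stagings'  # assumed model name

    # staging heads: the tips of the staging.$name branches
    head_ids = fields.Many2many(
        'runbot_merge.commit',
        relation='runbot_merge_stagings_heads',
        column1='staging_id', column2='commit_id',
    )
    # merged heads: the commits which actually end up on the branches
    commit_ids = fields.Many2many(
        'runbot_merge.commit',
        relation='runbot_merge_stagings_commits',
        column1='staging_id', column2='commit_id',
    )
```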
related to #768
Allow filtering stagings by state (success or failure), and provide a
control to explicitly update the staging date limit.
Should make it easier to drill through stagings when looking for
specific information.
Related to #751
Fix the outstanding query to do a positive `state` filtering instead
of a negative one, matching 3b52b1aace8674259812a76b1566260937dbcacb.
Also manually create a map of stagings (grouped by branch) sharing a
single prefetch set.
For Odoo, the mergebot home page has 12 branches in the odoo project
and 8 in spreadsheet, with 6 stagings each. This means 120 queries to
retrieve all the heads (Odoo stagings have 5 heads and spreadsheet
stagings have 1, but that seems immaterial).
By fixing `_compute_statuses` and creating a single prefetch set for
all stagings of all branches we can fetch all the commits in a single
query instead of 120.
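A hedged sketch of that prefetch trick (model and field names are
assumptions): fetch the stagings of all branches in one go, then split
them per branch while keeping a single shared prefetch set, so reading
the heads of any one staging prefetches the commits of all of them.

```python
# Sketch only: assumes a `target` field linking stagings to their branch.
Stagings = env['runbot_merge.stagings']
all_stagings = Stagings.search(
    [('target', 'in', branches.ids)], order='id desc')

# group by branch, but share one prefetch set across all groups so the
# heads / commits of every staging can be read in a single query
stagings_by_branch = {
    branch: all_stagings
        .filtered(lambda s: s.target == branch)
        .with_prefetch(all_stagings._prefetch_ids)
    for branch in branches
}
```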
- add support for authorship (not just approval)
- make display counts directly
- fix `state` filter: postgres can't do negative index lookups
- add indexes for author and reviewed_by as we look them up
- ensure we handle the entire source filtering via a single subquery
Closes #778
Before this, when testing in parallel (using xdist) each worker would
create its own template database (per module, so 2) then would copy
the database for each test.
This is pretty inefficient as the init of a db is quite expensive in
CPU, and when increasing the number of workers (as the test suite is
rather IO bound) this would trigger a stampede as every worker would
try to create a template at the start of the test suite, leading to
extremely high loads and degraded host performance (e.g. 16 workers
would cause a load of 20 on a 4-core / 8-thread machine, which makes
it difficult to use).
Instead we can have a lockfile at a known location on the filesystem:
the first worker to need a template for a module install locks it,
creates the template, then writes the template's name to the
lockfile.
Every other worker can then lock the lockfile and read the name out,
using the db for duplication.
Note: this needs to use `os.open` because the modes of `open` apparently
can't express "open at offset 0 for reading or create for writing":
`r+` refuses to create the file, `w+` still truncates, and `a+` is
underspecified and might not allow seeking back to the start on all
systems, so it's better avoided.
The implementation would be simplified by using `lockfile` but that's
an additional dependency, plus it's deprecated. It recommends
`fasteners` but that seems to suck (it's not clear whether storing
stuff in the lockfile is supported, and it opens the lockfile in
append mode). Here plain lockfiles are sufficient to do the entire
thing.
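A hedged sketch of that scheme (the path and the template-creation
callback are placeholders, not the actual test fixture code):

```python
import fcntl
import os


def get_template(module, create_template):
    """First caller creates the template db and records its name in the
    lockfile; every other worker blocks on the lock, then reads the name."""
    path = f'/tmp/template-{module}.lock'  # placeholder "known location"
    # os.open: read/write at offset 0, create if missing, never truncate
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until the creator is done
        name = os.read(fd, 4096).decode().strip()
        if not name:
            name = create_template(module)  # placeholder: builds template db
            os.write(fd, name.encode())
        return name
    finally:
        os.close(fd)  # also releases the lock
```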
Conveniently, this turns out to improve *both* walltime *and* CPU time
compared to the original version, likely because while workers now
have to wait on whoever is creating the template, they're not
competing for resources with it.
This reverts commit 9e7441e098.
This doesn't work as expected because of the db filter.
Will be changed later, reverting for now.
The token field is kept, it could still be used later.
When investigating kernel logs, e.g. for finding OOM-killed containers,
the kernel does not log the name of the offending container but only
its id. With this commit the runbot will also log the container short
id, which is enough to correlate the logs.
There are two wizards for the runbot build errors:
- One to close an error with a reason
- One to update the team/user or PR
With this commit, the two wizards are merged into one wizard that helps
to update errors in bulk.
Also, a button is added in the list view that allows saving a mouse
click.
The `NEW` button is removed from the tree view as it should not be of
any use.
Pausing a host can be useful in some cases, mainly when testing new code.
The loop will then have no effect, which avoids breaking builds waiting
for testing.
Profiling will help to identify potential performance flaws during the
loop.
One of the most common custom triggers is to restore a build before
starting some test, either to create a multibuild or to make the
execution and debugging of some test faster.
It is sometimes tedious to use because we need to give the URL of a
build to restore. This build must correspond to the right commits,
must still exist, ... this means that the dump url must be adapted
every time a branch is rebased.
The way the dump_url is defined is by going to the last batch, following
the link to the `base_reference_batch_id`, finding a slot corresponding
to the right repo set (ex: Custom enterprise -> enterprise), and
copying the dump_url from this build.
The `base_reference_batch_id` is easy to automate but we have to find
the right trigger; this is now a parameter of the custom trigger wizard.
There are now actually two strategies to define how to download the dump:
- `url`, using `restore_dump_url`
- `auto`, using `restore_trigger_id` and `restore_database_suffix`
To ease the setup, a `restore_trigger_id` is added on a trigger, so that
when selecting a trigger, let's say `Custom enterprise`, the defined
`trigger.restore_trigger_id` is automatically chosen for the
`custom_trigger.restore_trigger_id` and the `restore_mode` is set to
auto.
Two actions are also added to the header of a bundle: a shortcut to
set up a multibuild (restore in children) or a restore and test build
(restore in parent).
In list views, widgets are not always instantiated and a formatter is
used instead. This means that the t-esc will try to output a jsonb
field without knowing how to render it, making the page crash.
This is quickly fixed by forcing the widget on the field in the tree
view.
The python steps can be long and interesting to track, but the change
is actually hard to see since a block of code is logged instead of the
diff.
Also, the whitespace is not preserved since we are not in a <pre>
block, making it hard to read.
This proposes an alternative to display python code tracking values as
a diff, with options to copy the raw content of the old and new
versions (as well as showing unchanged lines or not).