Allow creating a trigger without a project, which executes for every
project. In this case the repos are all the repos of the project
that were updated.
This is mainly for psxrunbot.
Also allows creating a trigger that crosses categories.
It's almost certainly not useful, save as a minor convenience for
tests: decorrelating the branch sequence and the fp sequence seems
like it would only be extremely confusing, and on the mergebot all the
fp_sequence values are set to the default while the sequence values
are set to something useful and sensible (kinda).
Fixes #584
After review, there doesn't seem to be a single integer field created
by the mergebot or forwardbot modules for which a `group_operator`
makes sense, let alone the default of `sum`.
So just disable them all.
Fixes #674
If commits have different authors (/ committers), the mergebot would
ask github to create a commit with an author (/ committer) of `None` /
`null`.
Apparently github really does not like that, and complains that
> nil is not an object
So remove the key entirely. Also fix the collision between `author`
and the `Co-Authored-By` list, which could lead to trying to set an
`author` of `[name, email]` instead of an object, which is also not
accepted by github.
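The shape of the fix, as a minimal sketch (the helper and its
signature are illustrative, not the mergebot's actual code):

    def make_commit_payload(message, tree, parents,
                            author=None, committer=None):
        # payload for POST /repos/{owner}/{repo}/git/commits
        payload = {'message': message, 'tree': tree, 'parents': parents}
        # github rejects an explicit null author/committer ("nil is
        # not an object"), so omit the keys entirely when there is no
        # single consistent value
        if author:
            # must be an object, not a [name, email] pair
            payload['author'] = {'name': author[0], 'email': author[1]}
        if committer:
            payload['committer'] = {
                'name': committer[0], 'email': committer[1]}
        return payload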
Fixes to the new bits which didn't really work:
- Fix borked view layout
- Add some help to the label fields
- Improve the resolution of label -> pr, and fix it
- Also make the feature actually work for bump PRs
- Also make pr -> label work more reliably, now allows setting one PR
  and getting the other PRs of the same batch (with the same label)
  even without setting the label by hand (see the sketch below)
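A sketch of what the pr -> label resolution amounts to (the field
names match the behaviour described above, but the helper itself is
illustrative):

    def _same_batch(pr):
        # the other PRs of the batch share the label and the target
        # branch, within the same project
        return pr.search([
            ('label', '=', pr.label),
            ('target', '=', pr.target.id),
            ('repository.project_id', '=', pr.repository.project_id.id),
            ('id', '!=', pr.id),
        ])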
An autocomplete for the label has been considered but there is no
autocomplete field for char/selection fields, and it seems way too
much work for the utility:
- either create a brand new widget for 15.0 which will have to be
entirely rewritten in 16
- or create a transient model composed entirely of fake records to
provide an m2o to records which don't actually exist as label
bearers, which is also a lot of unnecessary work
NOTE: we want to support partial freezing (aka not freeze all the
branches because some of them have different release models
than others), so some project repos *not* having a release
PR is fine and normal, such a validation should not be added.
Fixes #664
In the case where the last branch (before the branch being frozen) is
disabled, the forwardport inserter screws up and fails to correctly
create the intermediate forwardports from the new branch.
Also, when disabling a branch, if there are FW PRs which target that
branch and have not been forward-ported further, automatically
forward-port them as if the branch had already been disabled when they
were created; this should limit data loss and confusion.
Also change the message set on PRs when disabling a branch: because of
user conflicts in test setup, the message about a branch being
disabled would close the PRs, which would then orphan the followup,
leading to unexpected / inconsistent behaviour.
Fixes #665
Since the forwardport bot works off of PRs, when it was created
leveraging the magic refs of github (refs/pull/*/head) seemed
sensible: when updating the cache repo, the magic refs would be
updated alongside and the forward-porting would have all the commits
available.
Turns out there are a few issues with this:
- the magic refs have become a bit unreliable, so we need a fallback
(b1c16fce8768080d30430f4e6d3788b60ce13de7)
- the forwardport bot only needs the commits on a transient basis, but
the magic refs live forever and diverge from all other content
(since we rarely `merge` PRs), for a large repo with lots of PRs
this leads to a significant inflation in the size of the repo (both
on-disk and in object count) e.g. odoo/odoo has about 25% more
objects with the pull refs included (3486550 to 4395319) and takes
nearly 50% more space on disk (1.21G to 1.77G)
As a result, we can just stop configuring the pull refs entirely, and
instead fetch the heads (~refs) we need as we need them. This can be a
touch more expensive at times as it requires two `fetch` calls, and
we'll have a few redundant calls as every forward port of a chain will
require a fetch of the root's head, *however* it will avoid retrieving
PR data for e.g. PRs to master: since those don't get forward-ported,
they also won't get parasitic updates from PRs being ignored or
eventually closed.
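A sketch of the on-demand retrieval, assuming the cache repos are
driven through plain `git` subprocess calls (the helper name is
illustrative):

    import subprocess

    def fetch_pr_head(repo_dir, remote, pr_number):
        # fetch only the head of the PR being forward-ported, instead
        # of keeping every refs/pull/*/head configured in the cache
        subprocess.run(
            ['git', '-C', repo_dir, 'fetch', remote,
             'refs/pull/%d/head' % pr_number],
            check=True,
        )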
Eventually this could be optimised further by
- performing an update of the cache repo at the start of the cron iff
there are batches to port
- creating a temp clone for the batch
- fetching the heads of all PRs to forward port in the temp clone in a
single `fetch`
- performing all the ports by cloning the temp clone (and not
`fetch`-ing into those)
- then cleaning up the temp clone after having cleaned up individual
forward port clones
Small followup for #489
The current system makes / lets GC run during fetching. This has a few
issues:
- the autogc consumes resources during the forward-porting
process (not that it's hugely urgent but it seems unnecessary)
- the autogc commonly fails due to the combination of large repository
(odoo/odoo) and low memory limits (hardmem for odoo, which get
translated into soft ulimits)
As a result, the garbage collection of the repository sometimes stops
entirely, leading to an increase in repository size and a decrease in
performance.
To mitigate this issue, disable the automagic gc and maintenance
during normal operation, and instead add a weekly cron which runs an
aggressive GC with memory limits disabled (as far as they can get, if
the limits are imposed externally there's nothing to be done).
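A sketch of the two halves, assuming plain `git` subprocess calls
(the config keys are real git options; the helpers and the cron
wiring are illustrative):

    import subprocess

    def setup_repo(repo_dir):
        # normal operation: don't let fetches trigger gc/maintenance
        subprocess.run(['git', '-C', repo_dir, 'config',
                        'gc.auto', '0'], check=True)
        # maintenance.auto needs a recent-ish git
        subprocess.run(['git', '-C', repo_dir, 'config',
                        'maintenance.auto', 'false'], check=True)

    def weekly_gc(repo_dir):
        # weekly cron: aggressive in-place GC, with git's own pack
        # memory limits lifted (0 = unlimited)
        subprocess.run(
            ['git', '-C', repo_dir,
             '-c', 'pack.windowMemory=0',
             '-c', 'pack.deltaCacheSize=0',
             'gc', '--aggressive', '--prune=now'],
            check=True,
        )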
The maintenance is implemented using a full lockout of the
forward-port cron and an in-place GC rather than a copy/gc/swap, as
doing this maintenance at the small hours of the week-end (sat-sun
night) seems like a non-issue: currently an aggressive GC of odoo/odoo
(using the default aggressive options) takes a total of 2:30 wallclock
(5h user) on a fairly elderly machine (it's closer to 20mn wallclock
and 2h user on my local machine, also turns out the cache repos are
kinda badly configured leading to ~30% more objects than necessary
which doesn't help).
For the record, a fresh checkout of odoo/odoo right now yields:
| Overall repository size |          |
| * Commits               |          |
|   * Count               | 199 k    |
|   * Total size          | 102 MiB  |
| * Trees                 |          |
|   * Count               | 1.60 M   |
|   * Total size          | 2.67 GiB |
|   * Total tree entries  | 74.1 M   |
| * Blobs                 |          |
|   * Count               | 1.69 M   |
|   * Total size          | 72.4 GiB |
If this still proves insufficient, a further option would be to deploy
a "generational repacking" strategy:
https://gitlab.com/gitlab-org/gitaly/-/issues/2861 (though apparently
it's not yet been implemented / deployed on gitlab so...).
But for now we'll see how it shakes out.
Closes #489
The `statuses` field of a staging is always "live" because it's a
computed non-stored field. This is an issue when a staging finishes in
whatever state, then someone gets new statuses sent on one of the head
commits, either by rebuilding (part of) the staging or by just using
the same commit for one of their branches.
This makes the reporting of the main dashboard confusing, as one might
look at a failed staging and see all the required statuses
successful. It also makes post-mortem analysis more complicated as the
logs have to be trawled for what the statuses used to be (and they
don't always tell).
Solve this by storing a snapshot of the statuses the first time a
staging moves away from `pending`, whether it's to success or failure.
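A sketch of the shape of the fix (model and field names are
assumptions, and the serialization of the computed value is elided):

    from odoo import fields, models

    class Stagings(models.Model):
        _inherit = 'runbot_merge.stagings'  # assumed model name

        # frozen copy of the live computed `statuses`, taken the
        # first time the staging moves away from `pending`
        statuses_cache = fields.Text()

        def write(self, vals):
            if vals.get('state') not in (None, 'pending'):
                for staging in self:
                    if staging.state == 'pending' \
                            and not staging.statuses_cache:
                        # exact serialization elided
                        staging.statuses_cache = staging.statuses
            return super().write(vals)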
Fixes #667
In the branch lists of stagings, the timestamps in the left column and
the labels in the data cells cannot be selected, because they're
buttons, and anyway bootstrap explicitly sets
.btn {
...
user-select: none;
}
This can be frustrating, as timestamps and labels are useful
information to cross-reference; the ability to copy them is
convenient.
Custom-set the reverse via our own CSS.
Fixes #668
- avoid pinging the author of the fw PR (which is the forward-bot
itself)
- instead ping the author and reviewer of the source, and possibly the
reviewer of the PR if any
- might also be a good idea to ping reviewers of intermediate PRs?
Partially revert 0c882fc0df
This turns out to be more bane than boon, as it breaks forward-port
chains and confuses people (despite the message). Update notification
message and don't close the PR anymore.
While at it, disable any pending staging on the branch being deactivated.
Fixes #654
Github can fail to create the magic refs on PRs
(`refs/pull/?/head`). Since forwardport relies on these refs to fetch
PR content this is an issue when it occurs, as the forward ports fail
in a loop.
After discussion with Github support, it turns out Github enabled
`allowReachableSHA1InWant` a while back, meaning it's possible to
fetch content by commit (rather than ref) as long as the content is
"in network".
Use this property as fallback when checking if we can see the PR head
before forward porting.
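A sketch of the fallback, again assuming plain `git` subprocess calls
(the helper name is illustrative):

    import subprocess

    def fetch_pr(repo_dir, remote, pr_number, head_sha):
        # try the magic ref first; github sometimes fails to create it
        res = subprocess.run(
            ['git', '-C', repo_dir, 'fetch', remote,
             'refs/pull/%d/head' % pr_number])
        if res.returncode:
            # allowReachableSHA1InWant is enabled on github's end, so
            # the head commit can be fetched directly as long as it
            # is reachable "in network"
            subprocess.run(
                ['git', '-C', repo_dir, 'fetch', remote, head_sha],
                check=True)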
Also:
- remove explicit configuration of GC during fetch, it doesn't disable
the autogc (yet?) but that's likely going to happen anyway
- update logging and logger hierarchy during forward port to make
things clearer and easier to extract, although based on PR id rather
than number
- rate limit failing forward ports to avoid running them on every cron
  (~ every minute), and run them every ~30mn instead; this provides
  higher odds of recovery with less log garbage in case of transient
  github failure, and if the PR is stuck it limits the log pollution
Fixes #658
af016f4239 did a half-assed job and
didn't fix the one test which actually checks the dashboard.
TBF I was in a bit of a hurry trying to make the mergebot work and be
presentable again, but still...
"Force new batch" buttons can sometimes be confusing for users.
Creating a group to show these buttons to advanced users only will
help avoid useless new batches when they are not needed.
A new batch is only needed:
- to create a new slot when a new trigger is added/modified through a
  custom trigger
- to take the latest databases into account for upgrades, mainly when
  backporting a new field or other strange, usually forbidden
  operations
- to avoid having to push again to rebase when an r+ was added on one
  PR but one of the PRs of the batch needs to be rebased or adapted.
Those cases are unusual, but the button is mostly used in the belief
that it is some kind of rebuild, or that it will rebase and push the
branch on the PR. Only users with a basic knowledge of when it is
actually needed should have access to these buttons.
15.0 (or 14.0) dropped some of the BS3 (?) compatibility stuff, which
the mergebot was (apparently) relying on. This led to a visual
degradation as well as the frontend dropdown looking absolutely awful.
Fix that, on both style and templates.
15.0 (or 14.0) also dropped the bespoke responsive utility classes,
switch to bootstrap's.
Before this commit, the build ir.logging records were sent from the
build instance to the main runbot ir.logging table. As the number of
runbot hosts increases, this introduces a lot of concurrency.
e.g.: 80 hosts with 8 builds means 640 instances trying to insert
records in the ir.logging table.
With this commit, a special database is used on the builder host in
order to receive ir.logging's from the build instances.
Regularly, the table is emptied by the builder and the logs are
inserted into the runbot leader's ir.logging table.
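A sketch of the flush step, under the assumption that both sides are
reached directly through psycopg2 (the column list is abridged; the
real ir_logging table has more fields):

    import psycopg2

    def flush_logs(local_dsn, leader_dsn, limit=1000):
        # move accumulated ir.logging rows from the builder-local
        # database to the runbot leader, then delete them locally
        with psycopg2.connect(local_dsn) as local, \
             psycopg2.connect(leader_dsn) as leader:
            lcr, rcr = local.cursor(), leader.cursor()
            lcr.execute(
                "SELECT id, name, level, message, path, func, line"
                " FROM ir_logging ORDER BY id LIMIT %s", (limit,))
            rows = lcr.fetchall()
            if not rows:
                return
            rcr.executemany(
                "INSERT INTO ir_logging"
                " (name, level, message, path, func, line)"
                " VALUES (%s, %s, %s, %s, %s, %s)",
                [row[1:] for row in rows])
            lcr.execute("DELETE FROM ir_logging WHERE id <= %s",
                        (rows[-1][0],))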
When marking multiple build errors as fixed, it's sometimes necessary
to explain why it was decided to close the error. When working with a
few errors this can be done manually ... but most of the time we want
to close a lot of false negatives in batch.
With this commit, a simple wizard is made available that will post a
reason in the chatter of the build_errors.
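A sketch of what such a wizard can look like (model and field names
are assumptions, not necessarily the committed ones):

    from odoo import fields, models

    class BuildErrorCloseWizard(models.TransientModel):
        _name = 'runbot.build.error.close.wizard'  # assumed name
        _description = 'Close build errors with a reason'

        error_ids = fields.Many2many('runbot.build.error')
        reason = fields.Text(required=True)

        def action_close(self):
            for error in self.error_ids:
                # leave a trace of why the error was closed
                error.message_post(body=self.reason)
            # archive the errors, i.e. mark them as fixed
            self.error_ids.write({'active': False})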
Turns out I was running "15.0" except just on the runbot, enterprise
and community were still the 14.0 repos, so some of the changes were
missing.
While at it, bundle fixes for 3.10, as that's what Jammy needs, and
the mergebot/15.0 will be running on that.
The build error view was unstructured and contained too much
information. This commit organizes fields in groups and also validates
some modifications on records, in order to avoid build error managers
disabling test-tags by mistake.
An error cannot be deactivated if it appeared less than 24 hours ago,
to avoid disabling an error from a non-forward-ported PR that will
fail the next nightly, generating another build error.
Test tags can only be added or removed by administrators.
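A sketch of what those guards can look like (the model name and the
admin group are assumptions):

    from datetime import timedelta
    from odoo import fields, models
    from odoo.exceptions import UserError

    class BuildError(models.Model):
        _inherit = 'runbot.build.error'  # assumed model name

        def write(self, vals):
            if vals.get('active') is False:
                day_ago = fields.Datetime.now() - timedelta(days=1)
                if any(e.create_date > day_ago for e in self):
                    raise UserError(
                        "Errors less than 24h old cannot be deactivated")
            if 'test_tags' in vals and not self.env.user.has_group(
                    'base.group_system'):
                raise UserError(
                    "Only administrators may change test tags")
            return super().write(vals)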
Also adds a menu for easier user management.
Also fixes the dname search and display.
Runs the test instances of Odoo using `coverage` in parallel mode.
- useful for finding out under-tested parts of the code
- because it only instruments mergebot/forwardport, and the tests do
so much IO, the wallclock performance impacts are minimal (~2%
increase with branch coverage analysis, for an increase in CPU
of ~20%, for the entire testsuite)
- for reporting, the scattered coverage reports need to be aggregated
  using `coverage combine`, followed by rendering with `coverage
  html`; these work out of the box, no parameterization is necessary
  (an example configuration is sketched below)
- coverage does not run on the test suite, only the modules under test
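An example of the corresponding configuration (a sketch, not the
committed file; the module names under `source` are assumptions, and
`coverage combine` requires `parallel` mode):

    # .coveragerc (illustrative)
    [run]
    parallel = true
    branch = true
    # only instrument the modules under test, not the test suite
    source =
        runbot_merge
        forwardport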
If no stream data was captured (no stderr and no stdout), this would
just log
git call error:
as error, with no further information.
Don't do that if we have neither stderr nor stdout data: since we're
re-raising the exception anyway, it's just confusing.
- if stderr was empty or had been redirected to stdout, no useful
  information would be shown, making debugging more complicated
- the fallback is the error itself, but since it's re-raised odds are
  pretty high the caller will eventually log the error itself, so
  it's redundant
=> fallback to stdout if stderr is empty, and only log if either is
non-empty
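A sketch of the resulting behaviour, assuming the git calls go
through `subprocess` (the wrapper itself is illustrative):

    import logging
    import subprocess

    _logger = logging.getLogger(__name__)

    def git(*args):
        try:
            return subprocess.run(
                ['git', *args], check=True,
                capture_output=True, text=True)
        except subprocess.CalledProcessError as e:
            # prefer stderr, fall back to stdout, and log nothing
            # when both are empty: the re-raised exception will be
            # logged by the caller anyway
            stream = e.stderr or e.stdout
            if stream:
                _logger.error("git call error: %s", stream)
            raise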
Not sure how I missed this, but apparently pytest fixtures can't both
return and yield: it's one or the other.
Which makes a lot of sense, but means the tunnel fixture was broken
when using local webhooks (= no tunnel) as it returned the local url
rather than yielding it.
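A simplified sketch of the bug and the fix (parameters and helpers
are hypothetical):

    import pytest

    # broken: the `yield` further down makes the whole function a
    # generator, so this `return` silently produces no fixture value
    @pytest.fixture
    def tunnel(tunnel_cmd, port):  # hypothetical parameters
        if not tunnel_cmd:  # local webhooks, no tunnel needed
            return 'http://localhost:%d' % port
        proc = start_tunnel(tunnel_cmd, port)  # hypothetical helper
        yield proc.public_url
        proc.terminate()

    # fixed: yield in both branches
    @pytest.fixture
    def tunnel(tunnel_cmd, port):
        if not tunnel_cmd:
            yield 'http://localhost:%d' % port
        else:
            proc = start_tunnel(tunnel_cmd, port)
            yield proc.public_url
            proc.terminate()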