The current dump version doesn't include the filestore. This new
version adds the filestore, trying to match the Odoo backup format
in order to ease restores.
The manifest.json file is not created since it isn't useful,
but an info.json is added with build info.
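A minimal sketch of what building such an archive could look like (the
helper and its arguments are assumptions; the dump.sql / filestore/ layout
follows the standard Odoo backup zip):

    import json
    import zipfile
    from pathlib import Path

    def write_dump(zip_path, sql_path, filestore_dir, build_info):
        # Hypothetical helper mirroring Odoo's backup layout (dump.sql +
        # filestore/), with an info.json holding build info instead of the
        # usual manifest.json.
        filestore_dir = Path(filestore_dir)
        with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as z:
            z.write(sql_path, 'dump.sql')
            for path in filestore_dir.rglob('*'):
                if path.is_file():
                    z.write(path, 'filestore/%s' % path.relative_to(filestore_dir))
            z.writestr('info.json', json.dumps(build_info))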
Creating multi-build configs can be tedious: one must create 2 build
configs and 2 build config steps in the right order.
With this commit, a simple wizard is added that creates those 4
records by filling in just 4 fields.
Also, a new field, group, is added in order to be able to gather
configs and config steps into groups. The group is a Many2one on a
config.
While at it, the runbot menu has been rearranged a bit, with everything
about configs under a parent menu named Configs.
Config and config step tree views have been enhanced to show the
config group, and some filters have been added to the search views.
With this commit, a new boolean field "flamegraph" is added on the
build_config to allow flamegraph generation.
In order to be able to generate a flamegraph during a runbot build, the
flamegraph package is added to the Docker image, as well as the
flamegraph.pl tool.
Dump a db at the end of a build, using a new 'finals' cmd part
added in order to execute the dump even if the build fails.
Add a link in the last step log to download the dump.
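A rough sketch of the idea (only the 'finals' name comes from the commit,
everything else is an assumption): the final parts are chained with ';' so
they run regardless of the main command's exit status.

    class Command:
        # Simplified, hypothetical sketch of a build command with 'finals'
        def __init__(self, cmd, finals=None):
            self.cmd = cmd              # main command, e.g. odoo-bin + options
            self.finals = finals or []  # parts run even if the main command fails

        def build(self):
            # chained with ';' (not '&&') so the dump still runs on failure
            parts = [' '.join(self.cmd)] + [' '.join(f) for f in self.finals]
            return ' ; '.join(parts)

    # Command(['python3', 'odoo-bin', '-d', 'mydb'],
    #         finals=[['pg_dump', 'mydb', '>', '/data/dump.sql']]).build()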
In different situations, a docker container may stay alive even if the
build's global_state is done. This can lead to a build failure when a
build wants to go into the running state and tries to expose the same
ports as the leftover build.
This reverts commit 1207daded1.
Too quick a review: setting a default value is a good idea, but since the
field is a float now, the default value should be time.time.
Currently, some Odoo modules are blacklisted via a set hardcoded in the
runbot code. In some cases, one needs to blacklist custom modules,
preferably in a config_step.
With this commit, the repo.modules, branch.modules and
config_step.install_modules fields are concatenated into a comma-separated
list of fnmatch patterns. The patterns can be prefixed with a dash to
exclude the matching module(s).
Co-authored by @Xavier-Do
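A minimal sketch of how such patterns could be resolved (the helper name
and exact precedence are assumptions):

    from fnmatch import fnmatch

    def modules_to_install(available, patterns):
        # Hypothetical helper: 'patterns' is the comma-separated
        # concatenation of repo.modules, branch.modules and
        # config_step.install_modules; a leading '-' excludes matches.
        selected = set()
        for pattern in (p.strip() for p in patterns.split(',') if p.strip()):
            if pattern.startswith('-'):
                selected -= {m for m in available if fnmatch(m, pattern[1:])}
            else:
                selected |= {m for m in available if fnmatch(m, pattern)}
        return sorted(selected)

    # modules_to_install(['web', 'web_studio', 'sale'], 'web*,-web_studio')
    # -> ['web']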
If the ref we asked for does not exist, github apparently decides to
fall back to prefix-matching. So if we're trying to delete the
already-deleted branch A and someone called their branch A-x, we're
going to get that one as a result.
Thankfully they were apparently smart enough to return a list even if
there's only a single fuzzy match. So if we get a list (instead of a
dict) as the response to git/refs/heads, assume the branch was already
deleted, as if we'd gotten a 404.
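A sketch of the check, with assumed function and variable names; only the
list-vs-dict distinction comes from the commit:

    import requests

    def delete_branch(session: requests.Session, repo: str, branch: str):
        # Hypothetical sketch of the workaround around github's fuzzy match
        r = session.get(
            'https://api.github.com/repos/%s/git/refs/heads/%s' % (repo, branch))
        if r.status_code == 404:
            return  # branch already gone
        data = r.json()
        if isinstance(data, list):
            # github fell back to prefix-matching: the exact ref does not
            # exist anymore, treat it the same as a 404
            return
        session.delete(data['url'])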
When a build is running, a cron, an evil query or something else can
start to fill and bloat the runbot ir_logging table.
With this commit, a log_counter field is added on the build, starting at
100. The SQL trigger decrements this counter after a line is inserted.
When the counter drops to 0, the last log line contains a message
stating that the limit has been reached. Further log lines are dropped
for this build step.
The counter is reset to a default of 100 before each step.
This value is configurable through an optional ir.config_parameter
runbot_maxlogs.
The runbot itself is still able to add log lines through the build _log
method.
Thanks @Xavier-Do for the smart idea.
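A rough sketch of what such a trigger could look like (the log_counter
field comes from the commit; the table names, build_id column and exact
SQL are assumptions):

    def _create_log_limit_trigger(cr):
        # Hypothetical sketch: decrement the build's log_counter on every
        # inserted ir_logging row and silently drop further rows once the
        # limit has been reached.
        cr.execute("""
            CREATE OR REPLACE FUNCTION runbot_check_log_limit() RETURNS trigger AS $$
            BEGIN
                UPDATE runbot_build SET log_counter = log_counter - 1
                WHERE id = NEW.build_id;
                IF (SELECT log_counter FROM runbot_build WHERE id = NEW.build_id) < 0 THEN
                    RETURN NULL;  -- drop the log line
                END IF;
                RETURN NEW;
            END;
            $$ LANGUAGE plpgsql;

            DROP TRIGGER IF EXISTS runbot_log_limit ON ir_logging;
            CREATE TRIGGER runbot_log_limit
                BEFORE INSERT ON ir_logging
                FOR EACH ROW EXECUTE PROCEDURE runbot_check_log_limit();
        """)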
If a PR is *merged*, enqueue it for deletion (with a 2 week delay).
This is mainly to avoid FW branches staying around long after they've been
merged (and possibly eventually closed?), but it will also clean up regular
merged branches, including historical merges forgotten by their author.
Fixes #230
When a build only creates sub-builds, the build_time is very small (a few
seconds), and this information is not relevant. This commit propagates
end_time to parent_build if parent_build is done or running.
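A minimal sketch of the propagation (field and method names beyond those
mentioned above are assumptions):

    def _propagate_end_time(build):
        # Hypothetical sketch: when the parent build is done or running,
        # give it the end time of its finishing child so its build_time
        # is meaningful.
        parent = build.parent_id
        if parent and parent.global_state in ('done', 'running'):
            parent.build_end = build.build_end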
In the case where we have a co-dependent forward port (co-dependent
PRs got forward-ported together, as they should be) where *one* of
the PRs got explicitly updated, the batch would "fall into a hole",
being handled as neither "this is part of a forward-port sequence" nor
"this is a new merge to forward-port" (the latter being the proper
one).
Modify & remove the guards which checked that either no or all PRs in a
batch have parents: it should be either all or not all.
Fixes #231
Turns out tagging PRs requires a pretty significant level of ACLs
which we may not want to give to the forwardbot?
Anyway, use the mergebot ACLs (which already include tagging) for this.
* add a sorted method on fake models
* fix recordset equality to ignore ids order
* when creating commits on a ref, add a param to only *update* the ref
(forcefully): when simulating a force-push we don't want to *create*
a ref as that might silently be done in the wrong repository entirely
* fix pytest.skip call at the module level, not sure where it came
from and why I missed it until now
The closing or reopening of PRs was not logged at all, which can be
inconvenient when trying to find out why PRs are closed (or not) in
the backend.
Also leverage PR display_name improvements from
3ce3dd9569 for more regular PR names in
logs.
Due to the title formatting of FP PRs, we'd get incorrectly formatted
commit messages if the PR was *merged* (rather than squashed /
fast-forwarded) due to either "merge" or "rebase-merge" integration
mode: in that case the PR message would be used as message for the
merge commit and that'd be along the lines of "Forward Port of #xxx to
<somebranch> (failed)", followed by the old PR message (e.g. see this
commit message).
* re-extract and reuse original PR title, just prefix with "[FW]"
* finally add support for tagging, and use that to tag the PRs,
especially for the failed / conflict marker which is quite important
Closes #229
When posting a reminder that there are open / waiting forward ports on
a source PR, also post *which* PRs those are.
While at it, move the cron code into a proper python file (so we can use
stuff from odoo.tools), and fix display_name so we can use display_name
directly as a github ref ({owner}/{repo}#{number}). This impacts
log-grepping but it seems like an improvement nonetheless.
Closes odoo/runbot#228
* shorten the postfix, forwardbot is now a bigram!
* shorten the uniquifier: go from 5 to 3 bytes, and use urlsafe base64;
  that way we only have a 4-char uniquifier instead of 8 (see the sketch
  below)
* while at it, fix deprecated calls to logging.warn (should be
  logging.warning)
Fixes #226
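For reference, a quick illustration of the size math (not the actual
implementation):

    import base64
    import os

    # 3 random bytes encode to exactly 4 urlsafe base64 characters (no
    # padding), whereas 5 bytes encode to 8 characters
    uniquifier = base64.urlsafe_b64encode(os.urandom(3)).decode()
    assert len(uniquifier) == 4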
The fw-bot testing API should improve the performance of mergebot tests
somewhat (less waiting around, for instance).
The code has been updated to the bare minimum (context-managing repos,
changes to PRs and replacing role names by explicit token provisions)
but extra facilities were used to avoid changing *everything*,
e.g. make_commit (singular), automatic generation of PR refs, ...
The tests should eventually be updated to remove these.
Also remove the local fake / mock. Being so much faster is a huge
draw, but I don't really want to spend more time updating it,
especially when the fwbot doesn't get to take advantage of it. A local /
lightweight fake github (as an external service over http) might
eventually be a good idea though, and would be more widely applicable
(including to third parties).
The queue would get items to process one at a time: process, commit,
and go to the next. However, this is an issue if one of the items fails
systematically for some reason (i.e. it's not just a transient
failure): the cron fails, then restarts at the exact same point, and
fails again with the same issue, leading to the following items never
getting processed.
Fix by getting all the queue contents at once, processing them one by
one and "skipping" any item which fails (leaving it in place so it can
get re-processed later).
That way, even if an item causes issues, the rest of the queue gets
processed normally. The interruption was an issue following
odoo/enterprise#5670 not getting properly updated in the
backend (the backend didn't get notified of the last two updates /
force-pushes to the PR, so it was trying to forward-port a commit which
didn't exist, and failing).
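A sketch of the new processing loop (model and method names are
assumptions):

    import logging
    _logger = logging.getLogger(__name__)

    def _process_queue(env):
        # Hypothetical sketch: fetch all pending items up front, commit
        # after each one, and roll back + skip any item that fails so the
        # rest of the queue still gets processed.
        for item in env['forwardport.batches'].search([]):
            try:
                item._process()  # assumed per-item processing method
                env.cr.commit()
            except Exception:
                _logger.exception("Skipping %s, left in the queue for later", item)
                env.cr.rollback()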
Attempt to avoid some of the comment spam by dedup-ing input (only
signaling when the status actually changes and ignoring identity
transformations), and in case of failing CI, by keeping the last failed
status and not signaling on the next update if it's the same failure.
Closes #225
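A small sketch of the dedup rule (the status keys and storage are
assumptions):

    def should_notify(previous, new):
        # Hypothetical sketch: skip identity transformations, and when CI
        # was already failing only notify if the failure itself changed.
        if new == previous:
            return False
        if previous and previous.get('state') == 'failure' == new.get('state'):
            return (previous.get('target_url'), previous.get('description')) \
                != (new.get('target_url'), new.get('description'))
        return True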
* only show 2 stagings on cellphones, as 4 is way too many; moving to a
  vertical layout would probably be a bad idea as stagings can already
  be very tall, and then we'd have multiple branches stacked on one
  another, unless we also made branches foldable
  the more complete list of stagings (per branch) is available on the
  branch's page anyway, so providing a not-completely-broken home looks
  more useful, and at a fundamental level the current / last staging
  is really the one we care about
* remove the size bounds on stagings to avoid smushing all the cells
  together and overlapping text; sadly we can't overflow-scroll the
  stagings element because you can't have an overflow-x: scroll and an
  overflow-y: visible (that becomes auto)
When running tests on some machines, it's apparently possible for the
PR-creation webhook to come back before the PR creation request has,
leading to the creation of the PR from the API call duplicating that of
the webhook, and blowing up.
To fix this, immediately commit the transaction, then check whether we
already have the PR we just created in the system, and only create it
explicitly if not.
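A minimal sketch of the workaround (model and helper names are
assumptions):

    def find_or_create_pr(env, repo_id, number, pr_values):
        # Hypothetical sketch of the race workaround: commit so a record
        # created concurrently by the webhook handler becomes visible, then
        # only create the PR ourselves if it is still missing.
        env.cr.commit()
        PR = env['runbot_merge.pull_requests']
        pr = PR.search([('repository', '=', repo_id), ('number', '=', number)])
        return pr or PR.create(pr_values)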
Users should probably be warned when they try to set the limit ("up
to") in a context where it's going to be ignored:
* on a forward-port PR (it should be set on the source)
* on a merged source (it should be set before the PR is merged)
Closes #213