The initial model was a (build_id, key, value) triplet where the key is in
fact two-part information: `category.key` (where key is usually a module).
This means that for each build we have one entry per
module * category.
With between 200 and 400 modules per build and 4 keys, that is around 1000
entries per build.
The huge amount of total entries leads to a fast overflow of the table
sequence and creates large indexes.
Also, most of the time the js manages the display of stats, meaning
that python has to transform
(build_id, category.key, value)
into
(build_id, {key: value}) for one category.
The new model uses a json field to store the values of the different
modules in a json dict, one entry per category * build (4 entries per build).
The table will be renamed and migrated later.
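A minimal sketch of the reworked storage, assuming an Odoo model; the model, field and helper names here are illustrative, not the actual runbot ones:
```python
import json

from odoo import fields, models


class BuildStatRegrouped(models.Model):
    # Hypothetical regrouped model: one row per (build, category)
    # instead of one row per (build, category, module).
    _name = 'runbot.build.stat.regrouped'
    _description = 'Build statistics grouped by category'

    build_id = fields.Many2one('runbot.build', required=True, index=True)
    category = fields.Char(required=True)  # e.g. 'query_count'
    values = fields.Text(default='{}')     # JSON dict {module: value}

    def _read_values(self):
        self.ensure_one()
        return json.loads(self.values or '{}')

    def _write_values(self, value_dict):
        self.ensure_one()
        self.values = json.dumps(value_dict)
```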
The initial pull_info_failures feature was introduced before the separate
process for the leader. Since it is only there for a rare case, it
hasn't been tested since then.
While testing the new infrastructure without a token, there were many
pull_info_failures, leading to an error on cleanup.
When rendering the git configuration and nginx configuration templates,
the templates are rendered as Markup, while a bytes-like object was
expected when saving them.
With this commit, the Markup is cast into str and the files are saved
as text files.
The `utils.escape` utility was deprecated in werkzeug 2.0.0, so it can
be replaced with our `html_escape`, which is itself `markupsafe.escape`.
Also, with this change, double quotes are now escaped as `&#34;`
instead of `&quot;`, so we fix the test too.
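The behaviour difference can be checked directly against markupsafe (a small illustration, not runbot code):
```python
from markupsafe import escape

# markupsafe.escape (and therefore html_escape) uses the numeric entity
# for double quotes, which is what the updated test now expects.
assert str(escape('say "hi"')) == 'say &#34;hi&#34;'
```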
Some untested code was making running builds available without
an nginx config (using the port and local address instead of a proper
dns/nginx config).
This part is removed to reduce complexity.
Source cleanup checks multiple places for potentially used
commits. This makes the cleanup logic complex and limits the usable
commits in python steps. The current use case is to export the merge base
commits.
The proposed solution is to save the exported commits and clean this list
when the build is done. No commit in this list can be cleaned.
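A rough sketch of the idea, assuming hypothetical field and method names (`exported_commit_ids`, `_export`); the real implementation may differ:
```python
from odoo import fields, models


class Build(models.Model):
    _inherit = 'runbot.build'

    # Commits exported for this build; anything listed here is protected
    # from source cleanup until the build is done.
    exported_commit_ids = fields.Many2many('runbot.commit')

    def _export(self, commit):
        """Export a commit's sources and remember it as in use."""
        self.exported_commit_ids |= commit
        return commit._export_sources()  # assumed existing export helper
```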
The main motivation is to be able to see the existing config_data on params
and edit the configuration on triggers.
A simple version would be to use the `FieldChar` with a simple
`JSON.stringify` formatter, but having some indentation on the field makes it
easier to read and edit.
The need for multiline makes it closer to the `FieldText` than the `FieldChar`,
but the reset makes the browser freeze.
The current way to add a custom multibuild on a dev bundle is:
- use the multibuild wizard to create 2 configs and 2 steps
- add this config on the bundle
- customize this config to make it faster by installing/restoring...
The useful parameters are the number of builds, the test tags and the
modules to install.
Another useful operation is to restore a dump instead of installing,
especially for post-install tests that only break on full databases.
This commit proposes to replace the multibuild wizard with a custom
trigger wizard. The main idea is to have a generic config, parametrized by
config_data. The wizard only helps to generate the corresponding
config_data.
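For illustration, the generated config_data could look like the dict below; the exact keys depend on how the generic config reads them, so these names are assumptions:
```python
# Assumed key names, produced by the custom trigger wizard and read by a
# generic multibuild config.
config_data = {
    'number_builds': 4,                      # how many child builds to spawn
    'test_tags': '/account_reports',         # forwarded as --test-tags
    'modules_to_install': 'account,stock',   # forwarded as -i
    'restore_dump_url': 'https://example.com/dumps/prefilled_db.zip',
}
```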
When a build is garbage collected and the local folders are cleaned, the
log files are kept to allow build error investigations.
With this commit, a full local cleanup is done 365 days after the
garbage collection, meaning that the build dir is completely
removed. The default number of days can be tweaked with the
`runbot.full_gc_days` ir.config_parameter.
Also, the _local_cleanup method can be called with a `full` boolean
parameter to force a full cleanup, e.g. when called in a config step.
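A sketch of such a python step body (assuming `build` is available in the step's evaluation context, as for other runbot python steps):
```python
# Hedged sketch of a python step body forcing the full cleanup right away.
build._local_cleanup(full=True)
```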
While at it, the res_config_settings view is a bit reorganized.
When a git archive fails, the partially exported source tree is left in
place. If another build tries to use the same commit, the tree is not
exported anymore as the directory already exists. This leads to
non-deterministic behavior.
Right now, single-version repos like upgrade are managed using
a regex to limit the name prefix. This avoids grouping branches when
the mergebot won't be able to merge them together, but the ci can be painful
since the branch needs to be renamed (closing the existing pr) or a manual
operation to move the branch into a new bundle must be performed.
This commit proposes to replace the forbidden_regex mechanism with
an explicit single_version mechanism.
In this case, the reference name will be automatically prefixed with
the version. The prefix contains `---` to indicate that some
magic was applied.
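A small illustration of the prefixing idea; the exact resulting format is an assumption based on the `---` marker mentioned above:
```python
def single_version_reference(version_name, branch_name):
    # `---` marks that the prefix was added automatically by runbot.
    return f'{version_name}---{branch_name}'

single_version_reference('15.0', 'fix-upgrade-partner-merge')
# -> '15.0---fix-upgrade-partner-merge'
```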
The runbot layout modifies the website/portal base layout to remove the navbar
and footer and overrides some custom styles. A lot of assets are loaded but not
used. The only really useful elements are the base assets (bootstrap, ...)
and the login button.
Migrating to the next version of odoo is usually painful because some
xpath may break, extra elements may be added, or some style change may break the
page, needing more and more xpath, css rules, ... for very little
benefit.
This cleanup creates a custom base layout for runbot, independent from
the base odoo templates.
Also adds a breadcrumb and navigation arrows, and improves batch links.
- searching on a number will search for both pr and branch names
- hooks now use the payload to define the repo when it is not given in the url
- fixes .git cleaning in repo
(remove rstrip since it can fail for repos starting with g, i or t; see the sketch after this list)
- recompute base on prepare if the base was not found
- remove local_result from write values if there is a single record
(instead of raising, makes python steps easier to write).
- avoid stuck builds/loops after removing a step from a config.
- avoid sending ci for linked base_commit
- add a fallback mechanism for base if no master branch is found
- add an option on projects to avoid keeping sticky builds running, useful
when using a lot of projects.
WARNING: this is a change of default behaviour, existing projects need to
be updated.
- always discover new commits for branches matching base patterns.
This is especially useful to discover old versions on projects with a
low merge frequency.
- always create a batch, even if there is no trigger. This helps to
notice that commits are discovered.
- add line-through on dead branches/prs
- manual triggers are now displayed on the main page
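The rstrip issue mentioned in the list above is the classic str.rstrip gotcha, illustrated here (plain Python, not the runbot code):
```python
# str.rstrip('.git') removes any trailing run of the characters
# '.', 'g', 'i', 't'; it does not remove the '.git' suffix, so characters
# belonging to the name itself can be eaten.
'odoo/odoo.git'.rstrip('.git')      # -> 'odoo/odoo'   (happens to work)
'odoo/runbot.git'.rstrip('.git')    # -> 'odoo/runbo'  (name truncated)

# A suffix-aware replacement avoids the problem:
def strip_git_suffix(name):
    return name[:-len('.git')] if name.endswith('.git') else name
```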
As for the builder, this gives the ability to run the discovery of new
commits and all related logic in a separate process.
This will mainly be useful to restart the frontend without waiting for the cron,
or to restart the "leader" without stopping the frontend. This will also be
useful for optimization purposes.
As a custom codeowner system was successfully implemented in a python
step on our runbot instance, it's now time to have a real model for
that.
This commit adds a skeleton Codeowner model intended for basic usage.
This should be improved in the future after some battle testing.
With the increasing usage of runbot to test various things and to take
care of random bugs in tests, the need for a team dashboard arose.
This commit adds a `runbot.team` model. Internal users can be
linked to a team. Module wildcards can be used to automatically assign
build errors to a team at `build.error` creation.
Also, an upgrade exception can be assigned to a team in order to display
it on a dashboard.
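A minimal sketch of the module wildcard assignment mentioned above, using fnmatch; the field name on the team (`module_wildcards`) is an assumption:
```python
from fnmatch import fnmatch

def team_for_module(teams, module_name):
    """Return the first team whose module wildcards match the module."""
    for team in teams:
        patterns = [p.strip() for p in (team.module_wildcards or '').split(',')]
        if any(fnmatch(module_name, pattern) for pattern in patterns if pattern):
            return team
    return None

# e.g. a team owning 'account*,l10n_*' would be assigned errors raised
# in 'account_edi' or 'l10n_be_reports' at build error creation.
```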
A dashboard model is used to create custom dashboards on the team
frontend page. By default, a dashboard is meant to display a list of
failed builds. The failed builds are selected by specifying a project, a
trigger category (e.g. nightly), a config and a domain (which selects
failed builds by default).
The dashboard can be customized by specifying a custom view.
Each created team has a frontend page that displays all the team
dashboards and the errors assigned to the team.
A few other improvements also come with this commit:
* The cleaned error is now in a tab on the build error form
* Known errors are displayed as "known" on the build log page
* The build form shows the config used for the build
When a build error is archived, a linked child error with an assigned fixer
may still appear on the error frontend page.
With this commit, old children no longer show up.
Since the order was changed, the first values are actually the older ones.
This commit swaps the newer_build_stats and older_build_stats
values in order to always have the new keys. Before this commit, the new
keys were not displayed. A future improvement may be to combine the keys
from all builds.
This commit also proposes to use a 0 value if the key did not exist in
the older build. This means that new keys will appear with a big
difference. This is maybe not a good idea and needs some testing. A
better solution would be to search for the first appearance.
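A minimal sketch of the intended comparison: iterate over the newer build's keys so that new keys always show up, and default missing older values to 0 (plain Python, illustrative only):
```python
def stats_delta(newer_build_stats, older_build_stats):
    # New keys always appear; a key absent from the older build counts as 0,
    # so it shows up with a big difference, as described above.
    return {
        key: new_value - older_build_stats.get(key, 0)
        for key, new_value in newer_build_stats.items()
    }

stats_delta({'base': 120, 'new_module': 40}, {'base': 100})
# -> {'base': 20, 'new_module': 40}
```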
Since the frontend_assets are loaded with `defer="defer"`,
the page sometimes fails with the message:
```
stats.js:212 Uncaught ReferenceError: Chart is not defined
at updateChart (stats.js:212)
at stats.js:158
at XMLHttpRequest.xhttp.onreadystatechange (stats.js:53)
```
This commit checks that Chart is available before trying to render the graph.
Thanks to @kebeclibre for the help.
When getting the pull info, the alive state can be determined easily,
meaning that this field can join the "_compute_branch_infos" family.
The hook was catching some changes made on a pr, conditionally updating some fields
and triggering some other operations depending on the action flag.
All of the information needed to update the pull info should always be present in the
payload body, meaning that all fields can be updated at once in case some hook was missed,
and additional operations can be triggered based on field changes.
In some conditions, it appears that a containerized build can eat up
all the memory of the container host. This disturbs other
builds as the kernel OOM killer enters the dance.
With this commit, docker's ability to limit the memory usage of a
container is used. The OOM killer will then choose its victim among the
container's processes.
The container memory limit has to be set in the runbot settings. If not
set, no memory limit is used.
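A hedged sketch of the docker side using docker-py, where the memory limit comes from the (assumed) runbot setting and is simply omitted when not set:
```python
import docker

def run_build_container(image, command, memory_limit=None):
    """Run a build container, limiting its memory only if a limit is set."""
    client = docker.from_env()
    run_kwargs = {'detach': True}
    if memory_limit:                    # e.g. '8g'; falsy means no limit
        run_kwargs['mem_limit'] = memory_limit
    return client.containers.run(image, command, **run_kwargs)
```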
runbot servers run with log-level debug in order to have useful
debug information, but this also causes some noise coming from odoo.
This pr changes most debug logs to info.
- Add draft pr management to avoid triggering code owners on draft prs.
- Add a check on a falsy config on the trigger (avoids a crash, useful to disable a trigger)
- Add extra params on custom triggers to avoid writing a specific config every time.
- Trigger a new batch automatically when updating target/draft
Some projects may use a totally different Dockerfile. In order to prevent
new branches of those projects from automatically building with the generic
default Dockerfile, this commit adds the possibility to configure a
default Dockerfile on a project.
Since Odoo 14.0, the recommended Ubuntu LTS release is Focal, so the
default Dockerfile should be updated accordingly.
A custom template with Bionic has to be manually created on runbot
instances that still build Odoo < 14.0.
* A small change is made in the templates logic that builds the
Dockerfile: a `runbot_pip` dict entry now exists in order to install the
python libs required by the runbot. On the other hand, `additional_pip`
should only be used to install optional python libs for Odoo.
* Upgrade the Chrome version to 90.0.4430.93-1 as this one is currently in
use on our runbot instance. Just keep in mind that the Odoo
screencast feature does not work anymore since Chrome 88.
* gsfont is added because of a bug [0] that affects python-reportlab in
Focal.
* pyCrypto package is removed. It was used in an Odoo addon that
disappeared in odoo/odoo@2738341c21
* dbfread and websocket-client are now installed as deb packages as they exist in Bionic and
Focal.
* pdfminer.six is now removed because a deb package exists in Focal but
not in Bionic. It means that it has to be added in deb_packages_python
in Dockerfiles for odoo > 13.0 and in additional_pip for odoo <= 13.0
[0]: https://bugs.launchpad.net/ubuntu/+source/python-reportlab/+bug/1918107
The test in the original code will never fire because the value searched
for is not in the keys of the dictionary, but in one of the lists which
are in the values. Work around this by maintaining a reverse dictionary
(module name -> commit) and using it for the test.
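An illustrative reconstruction of the problem and the workaround (names are made up):
```python
# Original mapping: commit -> list of module names, so a membership test
# on a module name against the dict keys can never fire.
modules_per_commit = {
    'commit_a': ['account', 'stock'],
    'commit_b': ['website'],
}

# Reverse dictionary: module name -> commit, usable for the test.
commit_per_module = {
    module: commit
    for commit, modules in modules_per_commit.items()
    for module in modules
}

assert 'website' not in modules_per_commit            # the test that never fired
assert commit_per_module.get('website') == 'commit_b'
```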
When a Dockerfile's to_build field is False for any reason and a new
runbot is set up, it's easy to miss the point, and builds that involve
this particular Dockerfile will fail on the new runbot.
With this commit, the Dockerfiles tree view is improved to easily spot
that kind of problem:
- the to_build field is visible and the falsy lines are in yellow (warning)
- the empty Dockerfiles are in red (danger)
- versions are now visible in the tree view too
When a build slot is hidden in the batch tile but is responsible for the
batch failure, the failure reason may not be obvious to the user.
With this commit, a hidden slot appears if the slot build is in
failure.
Sometimes fetching the pull info of a pr can fail.
- Most of the time it is only temporary and it will be successful on the next try.
- In some rare cases the pr will always fail (github inconsistency): the pr exists in git but not on the github api.
For this rare case, we store the pr in memory in order to unstick the other pr/branch updates.
We consider that this error should not remain; in this case github needs to fix the inconsistency.
This is why the runbot model doesn't handle such a case for now.
Another solution would be to create the pr with fake pull info. This idea is not the best one
since we want to avoid having many prs with fake pull_info in case of a temporary failure of github services.
With this solution, the pr will be retried once every cron loop.
We don't expect to have prs with this kind of persistent failure more than once every few months/years.
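A rough sketch of the in-memory bookkeeping (class, attribute and method names are assumptions): failures are remembered for the current loop so the other updates are not blocked, and the structure is reset at the start of the next loop so the pr is retried.
```python
class PullInfoFetcher:
    def __init__(self):
        self.pull_info_failures = set()   # pr references that failed this loop

    def start_loop(self):
        # Clearing the set means every failed pr is retried once per loop.
        self.pull_info_failures.clear()

    def fetch_pull_info(self, pr_ref, github_fetch):
        if pr_ref in self.pull_info_failures:
            return None                   # skip it, don't block other updates
        try:
            return github_fetch(pr_ref)
        except Exception:
            self.pull_info_failures.add(pr_ref)
            return None
```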
When code blocks contained markdown-like text, the inside of the code
block was also formatted.
This commit removes the code blocks before applying other formatting and
places them back at the end.
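A hedged sketch of the protect/restore idea (the real formatter and its regexes may differ): stash fenced code blocks behind placeholders, apply the formatting, then put the original blocks back.
```python
import re

CODE_BLOCK_RE = re.compile(r'```.*?```', re.DOTALL)

def format_with_protected_code(text, apply_formatting):
    blocks = []

    def _stash(match):
        blocks.append(match.group(0))
        return '\x00CODE%d\x00' % (len(blocks) - 1)

    text = CODE_BLOCK_RE.sub(_stash, text)   # remove code blocks first
    text = apply_formatting(text)            # other formatting, code untouched
    for index, block in enumerate(blocks):   # place the blocks back at the end
        text = text.replace('\x00CODE%d\x00' % index, block)
    return text
```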
closes #481
- Adds a complete legend enabling the display of a custom subset of modules.
This is mainly to enable a vertical scroll on the list, since the chart-js default
legend is displayed on multiple columns.
- Adds a "Noisy" order mode to find non-deterministic modules.
- Changes the build selection mode to a centered one to easily center the
build of interest, and adds a forward button.
- Small ui tweaks/fixes to match the new selection logic.
Since 360e31ade4, it's possible to add statistics values to build
results but there was no practical way to analyze them.
With this commit, there is a new button on the bundle page that leads to
a chart page that displays those values.
The default reference build is the last known good build of the bundle.
Values are filtered by key and only the most significant values are
displayed. The user can then refine the chart by changing the reference
build or the key and a few other options.
Co-author: Xavier-Do <xdo@odoo.com>
Before this commit, the `requirements.txt` from a specified odoo branch
(master by default) was always installed in the Dockerfiles.
We now allow disabling this feature to test Odoo with vanilla
distributions.
When choosing to install Chrome in a Dockerfile, the chrome version is
downloaded from the Odoo nightly server. This makes it difficult to test
with different versions of Chrome.
With this commit, we allow installing Chrome from Google in Dockerfiles.
By default, the install still comes from the Odoo nightly server, but if the key
`custom_values['chrome_source']` is set to 'google' in a Dockerfile,
the specified version will be downloaded from Google servers when the
Docker image is built.
If you try to manually create a bundle, Odoo will crash because it will
try to use the `name` value that is not set yet. To fix that, we only start
computing host_id once `name` is entered.
aka: make it clean
The build database_ids field was added a few months ago, waiting for the database to update with
this new scheme before using it.
It is still a little early to use it in cleanup methods, but it can be used to
generate the connect links dynamically.
To follow the specs introduced in the previous commit, the main fa-sign-in button should link to the -all database.
It will almost always be the first one in database_ids in alphabetical order.
Then, in the dropdown, all the other databases are listed.
This will fix the previously broken design_theme connect link (no base nor all).
For this purpose, the nginx regex needs to be adapted to accept database names with underscores.
Using the database name as a subdomain will set the web.base.url correctly when connecting for the first time using this link.
This will unfortunately break the connect all and connect base links if dns and nginx are not set up.
This won't fix the web.base.url when connecting with the Connect (database selector) button either.
When a status cannot be sent to github, the status is created in the
database but the `sent_date` field is not set.
Because of that, the status cannot be sent manually from the frontend.
With this commit, that kind of status can be tried again.
When a build reaches the run_run_odoo step, a database has to be
set. If none is found in the build params, the one from the last step
is chosen. Historically, the last one in the `Split` config was `all`,
but now the last one is `base`.
With this commit, if none is found in the build params, `all` is chosen if
it is found in any install config step. As a default, the one from the first
step is chosen.
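A simplified sketch of the new fallback order (names and data shapes are illustrative): explicit param first, then `all` if any install step created it, then the database of the first step.
```python
def pick_run_database(params_db_names, install_steps_db_names):
    """install_steps_db_names: list of db-name lists, one per install step."""
    if params_db_names:
        return params_db_names[0]
    if any('all' in names for names in install_steps_db_names):
        return 'all'
    if install_steps_db_names and install_steps_db_names[0]:
        return install_steps_db_names[0][0]
    return None

pick_run_database([], [['base'], ['all']])   # -> 'all'
pick_run_database([], [['custom']])          # -> 'custom'
```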
Since 3657a65b20, docker_run is called outside of the step and python
steps have to set the `docker_params` variable. This breaks the computed
`log_list` because the string `docker_run(` is searched for in the python
step code to determine if it's a docker step.
With this commit, the string `docker_params = ` is searched for instead.