To keep the logic coherent, the cpu_limit computation can be done in
one place instead of being defined in all _run* methods.
In Python steps, it can still be overridden when returning
docker_params.
When a build eats all the CPU resources, it may interfere with other
builds and cause collateral damage.
With this commit, a new settings parameter `Containers CPUs` is added in
order to limit the usage of the available CPUs on runbot instances.
If left to 0, no limit is applied.
Otherwise, the cpu_quota docker parameter is computed as Containers
CPUs * (logical CPU count / number of parallel builds) * cpu_period, which defaults to 100000.
e.g.:
- on a host with 16 logical CPUs
- with 8 parallel builds allowed
- with Containers CPUs set to 1.5
- with the default cpu_period
cpu_quota will be:
(16/8) * 1.5 * 100000 = 300000
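For illustration only, a minimal sketch of that computation (the helper name
and the multiprocessing-based CPU count are assumptions, not the actual
runbot code):

import multiprocessing

def compute_cpu_quota(containers_cpus, nb_parallel_builds, cpu_period=100000):
    # Hypothetical helper: split the logical CPUs between the parallel
    # builds and scale the share by the `Containers CPUs` setting.
    if not containers_cpus:
        return None  # 0 means no limit
    logical_cpus = multiprocessing.cpu_count()
    return int(containers_cpus * (logical_cpus / nb_parallel_builds) * cpu_period)

# 1.5 * (16 / 8) * 100000 == 300000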
This system parameter can be overridden by the `container_cpus` field on
steps.
When using a repo as a dependency for another trigger, the default
module filter for a repo is not always ideal.
As an example, when using odoo as a dependency for another repo,
we may only want to install the modules from the new repo.
This is currently done by creating a custom config, but this leads to
duplicated configs and steps only to customize the modules to install.
This commit proposes a new model to store the filters.
Note that this may also be used later as a module blacklist on repos.
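As a rough illustration only (model and field names here are assumptions,
not the actual implementation), such a filter could look like:

from odoo import fields, models

class ModuleFilter(models.Model):
    # Hypothetical model storing which modules to install when a repo
    # is used as a dependency of a trigger.
    _name = 'runbot.module.filter'
    _description = 'Module filter per trigger and repo'

    trigger_id = fields.Many2one('runbot.trigger', required=True)
    repo_id = fields.Many2one('runbot.repo', required=True)
    modules = fields.Char(help='Module patterns to install for this repo, e.g. "-*"')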
Git gc can last a few minutes. That is not a big deal since it is executed
once a day, but the transaction is kept idle during this time, which is
not useful. This commit should help to avoid that.
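The idea, as a minimal sketch (assuming the gc runs in a method with access
to the environment; names are illustrative, not the actual runbot code):

import subprocess

def _gc_repositories(env, repo_paths):
    # Hypothetical helper: release the current transaction before the
    # long-running subprocess so the connection is not kept idle.
    env.cr.commit()
    for path in repo_paths:
        subprocess.run(['git', 'gc'], cwd=path, check=False)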
Because of a bad dependency on the compute, the first seen date and the last
seen date are not always updated.
e.g.: when a new build is scanned and a build is added to a linked error, the
parent error's seen dates are not updated.
A test is added to reproduce the case.
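A sketch of the kind of dependency involved (field and relation names are
assumptions, for illustration only):

from odoo import api, fields, models

class BuildError(models.Model):
    _inherit = 'runbot.build.error'

    first_seen_date = fields.Datetime(compute='_compute_seen_dates', store=True)
    last_seen_date = fields.Datetime(compute='_compute_seen_dates', store=True)

    # Also depending on the children's links makes the parent error
    # recompute when a build is added to a linked (child) error.
    @api.depends('build_error_link_ids.log_date',
                 'child_ids.build_error_link_ids.log_date')
    def _compute_seen_dates(self):
        for error in self:
            links = error.build_error_link_ids | error.child_ids.build_error_link_ids
            dates = [d for d in links.mapped('log_date') if d]
            error.first_seen_date = min(dates) if dates else False
            error.last_seen_date = max(dates) if dates else False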
Remove all attrs in xml views.
To help with that, a script was written, minimal but sufficient:
#!/usr/bin/python3
import glob
import re
from ast import literal_eval


def leaf_to_python(leaf):
    # Convert a single domain leaf (field, operator, value) into a python expression.
    if len(leaf) != 3:
        raise ValueError("This script doesn't support leaf", leaf)
    field, operator, value = leaf
    if operator == '=':
        return f'not {field}' if value is False else field if value is True else f'{field} == {value!r}'
    if operator == '!=':
        return f'not {field}' if value is True else field if value is False else f'{field} != {value!r}'
    if operator == 'in':
        return f'{field} in {value!r}'
    if operator == 'not in':
        return f'{field} not in {value!r}'
    raise ValueError("This script doesn't support operator", operator)


for file in glob.glob('**/*.xml', recursive=True):
    with open(file) as f:
        content = f.read()
    attrs_list = re.findall(r'attrs="{.*}"', content)
    if attrs_list:
        for attrs in attrs_list:
            # Only single-key 'invisible'/'readonly' attrs with a plain domain are handled.
            match = re.match(r'''attrs="{'(invisible|readonly)': ?(\[.*\])}"''', attrs)
            attr = match.groups()[0]
            domain = literal_eval(match.groups()[1])
            condition = ' and '.join([leaf_to_python(leaf) for leaf in domain])
            replace = f'{attr}="{condition}"'
            content = content.replace(attrs, replace)
        with open(file, 'w') as fw:
            fw.write(content)
When build errors are merged together, the builds of the merged errors
should be moved to the only error that will be kept.
It's not the case because the merge method was assigning a computed field,
and the issue was hidden in the tests because the compute was not
triggered.
With this commit, the build_error_link records are updated to point to the new
error. The test is modified to properly check the case and also to add a
case where the link already exists.
The access rights are updated to allow admins to unlink the
build_error_link records; otherwise the action could fail when the link
already exists.
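A simplified sketch of the reassignment idea (field names and the merge
method shape are assumptions, not the actual code):

from odoo import models

class BuildError(models.Model):
    _inherit = 'runbot.build.error'

    def _merge(self):
        # Hypothetical sketch: keep the first error and repoint the links
        # of the other errors to it instead of writing on a computed field.
        kept = self[0]
        for link in (self - kept).build_error_link_ids:
            if link.build_id in kept.build_error_link_ids.build_id:
                link.unlink()  # the kept error already has this build
            else:
                link.build_error_id = kept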
When a build in which the same error appears twice is parsed, the parsing
crashes because of the unique constraint on build error link.
With this commit, only one link per error is created.
Two tests are added for the two cases:
- a new error appearing twice in the same build
- an existing error appearing twice in the same build
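For illustration, deduplicating before creating the links could look like
this (all names here are assumptions, not the actual parsing code):

# Deduplicate the errors found in a single build before creating the links,
# so the unique constraint on build error link cannot be violated.
seen_error_ids = set()
for error in errors_found_in_build:  # hypothetical list of matched errors
    if error.id in seen_error_ids:
        continue
    seen_error_ids.add(error.id)
    env['runbot.build.error.link'].create({
        'build_id': build.id,
        'build_error_id': error.id,
    })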
Since c6f9d1f0c, a new model was added to link build errors and builds.
The _search_version and _search_trigger_ids methods were not adapted to work
with this new model.
In the build error view, a list of builds is displayed with a confusing
create date. The create date in the list is the creation date of the
build, which can be confused with the creation date of the build log
entry.
With this commit, the real log creation date is used in this view.
To achieve that, the many2many relation is extended with a
log_date field which is filled when a build log entry is parsed.
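A minimal sketch of the extended relation (model and field names are
assumptions):

from odoo import fields, models

class BuildErrorLink(models.Model):
    # Hypothetical relation model carrying the extra log_date.
    _name = 'runbot.build.error.link'
    _description = 'Link between a build and a build error'

    build_id = fields.Many2one('runbot.build', required=True, index=True)
    build_error_id = fields.Many2one('runbot.build.error', required=True, index=True)
    log_date = fields.Datetime()  # set when the build log entry is parsed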
It happens that a user edits or annotates a build error form without
noticing that the error is linked to another one. In that case, the
modification or the note is probably useless, so a warning ribbon should
grab their attention.
While at it, the warning ribbon shown when an error has a test-tag is
changed so it cannot be confused with the link ribbon, and also because
removing a test-tag can be a real danger (for the mergebot stagings).
When filtering the build error tree view based on version equality,
the results may not be what you expect.
e.g.: searching for `versions is equal to 16.0` gives the errors that
appeared in `16.0` (hopefully) but also those that appeared in other
versions too.
With this commit, this search will give the errors that appeared in the
specified version only. When the user wants to list errors that appeared
in `16.0` as well as in other versions, they have to use the `contains 16.0`
criterion.
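A rough sketch of the intended semantics (method, field and relation names
are assumptions, not the actual implementation):

def _search_version(self, operator, value):
    # 'contains'-style operators keep the broad behaviour: any error that
    # appeared at least once in the given version matches.
    if operator != '=':
        return [('build_error_link_ids.version_id.name', operator, value)]
    # For strict equality, only keep errors whose whole set of versions
    # is exactly the requested one.
    matching_ids = [
        error.id
        for error in self.search([])
        if set(error.build_error_link_ids.mapped('version_id.name')) == {value}
    ]
    return [('id', 'in', matching_ids)]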