In some circumstances, a commit is created from scratch with only a hash
as its sole information. This is not very convenient and can lead to
issues that are difficult to investigate.
With this commit, when creating this kind of commit object, we try to
get the commit information from the git repo.
When exporting a commit, the commit date is used in the `tar` command to
set the date of the exported folder. On the other hand, it happens that a
commit is not found in the database and must be quickly created on the
fly, e.g. with the `_get` method. In this case, if the commit needs to
be exported later, the method fails and may break a runbot build.
This happened with a custom python step.
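A minimal sketch of the idea, assuming a local repo checkout and an illustrative helper name (the real fields and method differ):

```
import subprocess

def guess_commit_infos(repo_path, sha):
    # Fetch commit metadata from the local git repo so that a commit
    # created on the fly carries more than its hash (notably the date
    # needed later by the tar export).
    fmt = '%x00'.join(['%an', '%ae', '%cd', '%s'])  # NUL-separated fields
    out = subprocess.check_output(
        ['git', '-C', repo_path, 'show', '-s', '--format=' + fmt, sha],
        text=True,
    )
    author, author_email, date, subject = out.strip().split('\x00')
    return {'author': author, 'author_email': author_email,
            'date': date, 'subject': subject}
```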
- clean thread username
- allow writing on params for debug (it was mainly useful to forbid it
at the beginning)
- improve some guidelines about method and action naming/ordering
- move some code for a cleaner organisation.
- remove some useless request.env.user (not useful anymore)
When configuring a custom trigger on a bundle by using the wizard, the
child extra params field is often too small to display all the
parameters.
e.g., specifying two long test-tags, as is often the case for
multi-builds.
With this commit, the field spans over 4 columns.
The runbot settings view is a bit messy, and the 16.0 upgrade added more
mess to the existing one.
This commit is an attempt to make it a bit clearer and cleaner.
This commit replaces the symlink used for upgrade with the upgrade-path.
The symlink was used before because old versions do not support upgrade
paths, but the decision was taken to limit the testing to versions
supporting upgrade paths in order to be able to support utils in another
repository later.
When trying to open a linked error or an error from the history, the
object is opened in a useless modal. With this commit, the object is
opened in a regular form view.
This reverts commit 9e7441e098.
This doesn't work as expected because of the db filter.
Will be changed later; reverting for now.
The token field is kept, as it could still be used later.
When investigating kernel logs, e.g. when looking for OOM-killed
containers, the kernel does not log the name of the incriminated
container but only its id. With this commit, the runbot also logs the
container short id, which is enough to correlate the logs.
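A sketch of the extra log line, assuming the python Docker SDK container object (names are illustrative):

```
import logging

_logger = logging.getLogger(__name__)

def log_container_start(container):
    # `short_id` is the truncated container id; the kernel OOM-killer
    # messages only contain the id, so logging it next to the name is
    # enough to correlate both logs.
    _logger.info('Started container %s (%s)', container.name, container.short_id)
```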
There are two wizards for the runbot build errors:
- One to close an error with a reason
- One to update the team/user or PR
With this commit, the two wizards are merged into one wizard that helps
to update errors in bulk.
Also, a button is added in the list view that saves a mouse click.
The `NEW` button is removed from the tree view as it should not be of
any use.
Pausing a host can be useful in some cases, mainly when testing new code.
The loop will then have no effect, avoiding breaking builds waiting for
testing.
Profiling will help to identify potential performance flaws during the
loop.
One of the most common custom triggers is to restore a build before
starting some tests, either to create a multibuild or to make the
execution and debugging of some tests faster.
It is sometimes tedious to use because we need to give the url of a
build to restore. This build must correspond to the right commits,
must still exist, ... this means that the dump url must be adapted
every time a branch is rebased.
The way the dump_url is defined is by going to the last batch, following
the link to the `base_reference_batch_id`, finding a slot corresponding
to the right repo set (e.g. Custom enterprise -> enterprise), and
copying the dump_url from this build.
The base_reference_batch_id is easy to automate, but we have to find the
right trigger; this is now a parameter of the custom trigger wizard.
There are now 2 strategies to define how to download the dump:
- `url`, using `restore_dump_url`
- `auto`, using `restore_trigger_id` and `restore_database_suffix`
To ease the setup, a `restore_trigger_id` is added on a trigger, so that
when selecting a trigger, let's say `Custom enterprise`, the defined
`trigger.restore_trigger_id` is automatically chosen for the
`custom_trigger.restore_trigger_id` and the `restore_mode` is set to
`auto`.
Two actions are also added to the header of a bundle: a shortcut to
set up a multi build (restore in children) or a restore and test build
(restore in parent).
In list views, widgets are not always instantiated and a formatter is
used instead. This means that the t-esc will try to output a jsonb field
without knowing how to render it, making the page crash.
This is quickly fixed by forcing the widget on the field in the tree
view.
The python steps can be long and interesting to track, but the change is
actually hard to see since a block of code is logged instead of the diff.
Also, the whitespace is not preserved since we are not in a <pre>
block, making it hard to read.
This proposes an alternative that displays python code tracking values
as a diff, with options to copy the raw content of the old and new
versions (as well as showing unchanged lines or not).
A first step to get more independent from web and website was done in
15.0, but some files were moved and it looks like the bootstrap version
changed (4.3 -> 5.1), breaking the frontend again.
This commit simply copies the libs from odoo/15.0 to avoid losing time
fixing the frontend look and feel for the new bootstrap version for now.
A future refactoring could change the vendored version to adapt to 5.1
while modernizing the frontend style.
Firefox blocks downloads from http links if you are on an https page.
Allow deactivating this via an ICP in case the runbot is configured over
HTTP (you really shouldn't).
When the runbot tries to drop a local database and that raises an
exception, it goes into a failure loop. It may happen, for example, if
someone forgot to close a psql session during an investigation :-)
With this commit, the exceptions are caught and at least the database
name is logged.
Since all versions will have a defined dockerfile, the project one
will always be ignored. The idea here is that for a project, we may
define a default dockerfile_id so that we don't have to set it on all
bundles to make it work.
The _kill method was called in multiple cases, usually when something
wrong happens:
- exception initiating pending
- kill requested manually
- testing time exceeded
- exception running a job
- ...
But it will also be called when killing a running build.
It was usually not an issue since the status remains the same, but this
is not true if the same commit is used in two builds: the new one is
green, the old one is red (the enterprise commit remaining the same but
the community commit changing, as an example).
In this situation, the enterprise commit may receive the red ci from the
old build while the last one is green.
Since, with the last version, the github status responsibility is left
to the write method, this github status is not useful anymore; updating
the state and result is enough.
This commit also removes the explicit commit since it is not always a
good idea. Most of the time, the transaction will be committed quite
quickly after that with the new scheduler.
Note that checking, in the github status logic, whether a more recent
build exists may be a good idea: only the most recent build using a
commit would send a status? This would not always be helpful. Imagine a
commit used in 2 branches by mistake: the last build is not always the
one we want (usually fixed by rebuilding a subbuild of the good build).
In some cases, a build can add a lot of info in a log; there
is already a limit to the number of entries but not to the size of an
entry. This will limit database usage in case of mistake/abuse.
When filtering bundles in the frontend, the user is not able to search
for their final trigram because of the `like` search.
With this commit, if the search contains a `%` symbol, the `=like`
operator is used, permitting more accurate searches.
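A minimal sketch of the operator selection (helper name is illustrative):

```
def bundle_name_domain(search):
    # `like` wraps the term in %...%, which makes it impossible to anchor
    # a pattern on the final trigram; `=like` takes the pattern verbatim,
    # so '%-abc' matches only names ending in '-abc'.
    operator = '=like' if '%' in search else 'like'
    return [('name', operator, search)]
```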
With this commit, a custom widget is added to go to the runbot frontend
from a Char field. This allows going from the bundle backend page to the
bundle frontend page, which is more useful in some situations,
e.g. when creating a custom trigger with the wizard, this allows testing
the trigger with 2 clicks.
* show only the all builds tab
* hide the linked errors tab when there are no linked errors
* hide the error history tab when there is no history
* add some readonly
* add a link to the fixing PR on github
* add a warning ribbon on test-tagged errors
* show different colors in the tree view to spot fixed PRs
* add some search filters
The initial idea to have a wakeup for public users stopped being viable
due to some abuse of the system, maybe unintentional (crawling of
some build pages), but still, this feature will now be limited to
internal users only.
The current post_install build mechanism is using extra params to give
test-tags. Unfortunately this disables the support for auto tags,
which then have to be handled manually. This means that auto tags end up
in the build extra-params and are not dynamic at rebuild of a
post_install.
Also, using extra params in the post install creation was removing the
extra_params coming from the custom trigger.
With this commit, the test-tags can be given inside config_data
and will be combined with the config step test-tags and auto-tags.
This was an opportunity to simplify the logic.
This commit also fixes test_install_tags, which was broken.
Since the previous version, the build end is written when going into any
done state. This means that when a build is skipped, it has an end
but no start.
Adapt the build time to manage this use case.
The force buttons were hidden because they were unfortunately misused as
a rebuild in some cases. The position of the button was too obvious
and it was used as a "magic fix" when the intended behavior was only for
really specific cases.
Unfortunately, the routes were known and still used manually. This
commit blocks the access, giving a message to ask for the group if
needed.
Those features would benefit from some documentation.
When a build is created, it will first check for another
build having the same params. It is usually a good idea to avoid
too much load. In some cases, a build can be found, but a killed one.
This is not what we want.
The first scenario is two consecutive force pushes:
commit1 -> commit2 -> commit1
The build of commit1 may be killed because of commit2; then, when
force-pushing commit1 again, it will be linked to a killed build.
An even more problematic case was discovered because of a delay in the
odoo/odoo repo hook. An odoo-dev/odoo 16.0-... branch was discovered
first using this commit, and a build was created.
Then, the branch was force-pushed and the build was killed.
Finally, the 16.0 commit was discovered and was linked to the killed
build. This was mainly an issue because the build was a template.
With these changes, the 16.0 branch would have created a new build, not
linked to a killed one.
Note that linking to a red build is not an error, only a killed one is.
The assigned builds are counted together with the pending builds. This
can sometimes create a false queue: you can have 1000 pending builds
on one host, but this doesn't mean that a new standard build cannot be
immediately taken by another host. This is mainly to hide the false
queue created by the full charge zfs builds currently running and
creating ~400 assigned builds.
The main motivation is to allow creating params from data;
the "new" method was called with a value list instead of a dict.
This also makes it possible to update params when the registry is not
loaded.
The initial motivation is to remove the flush when a log_counter is
written. This flush was initially useful when the limit was in a
psql trigger, but it ended up having a side effect: flushing everything
before starting the docker. This was limiting concurrent updates after
starting the docker, but we still had no guarantee that the transaction
was committed after starting the docker. The use case where the docker
is started but the transaction is not committed was not handled well and
was leading to an infinite loop of trying to start a docker (while the
docker was already started).
This refactoring returns the docker to the scheduler so that the
scheduler can commit before starting the docker.
To achieve this, it is ideal to have only one method that can return
a callable in the _scheduler loop. This is done by removing the run_job
from the init_pending method. All satellite methods, like make_result,
are also modified and adapted to make direct writes: the old way was
technical debt, a useless optimization from pre-v13.
Other pieces of code are moved around to prepare for future changes,
mainly to make the last commit easier to revert if needed.
[FIX] runbot: adapt tests to previous refactoring
Trying to log when the transaction is in error is useless and creates
noise in the logs.
Flushing is also useless there now that we have the local logs,
and it makes the error confusing since the error does not come from the
log_counter update but from the update of the global state on the
parent's global_results.
When parenting a build error, if a test_tag is set on it, the tag is
transferred to the parent and cleared to an empty string.
In that case, a single `-` appears in the disabling tags and leads to an
apocalyptic situation ... the runbot builds don't run any tests.
With this commit, the test_tags is set to `False` instead.
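A sketch of the intended behavior when setting a parent (field names are illustrative):

```
if error.test_tags:
    error.parent_id.test_tags = error.test_tags
    # False, not '': an empty string would still be joined into the
    # disabling tags and produce a lone `-`, disabling all tests.
    error.test_tags = False
```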
Since removeprefix was not available in ubuntu 20.04, an easier
alternative using replace was used.
The initial assumption was that the prefix would look like `odoo/addons/`
and wouldn't appear in the filename.
When the repo is enterprise, the prefix is `enterprise/`, meaning that
module names ending with `enterprise` will be truncated:
`repo._get_module('enterprise/mail_enterprise/static/src/widgets/form_renderer/form_renderer.js')`
will output
`mail_static`
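A safe backport only strips the prefix at the start of the string, unlike `str.replace`, which removes every occurrence:

```
def remove_prefix(text, prefix):
    # Equivalent of str.removeprefix (Python 3.9+): strip the prefix only
    # when it starts the string, never substrings further in the path.
    return text[len(prefix):] if text.startswith(prefix) else text

path = 'enterprise/mail_enterprise/static/src/widgets/form_renderer/form_renderer.js'
path.replace('enterprise/', '')     # 'mail_static/src/...': broken
remove_prefix(path, 'enterprise/')  # 'mail_enterprise/static/...': correct
```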
When adding a new project, if no branch matches a base name,
the created bundles won't have a version and it will fail.
A simple fix is to add a master bundle for all projects.
When an error is linked to another one, we don't expect it to appear on
the team and user dashboards. When adding a parent, this will transfer
the responsible from the child to the parent when applicable.
The XML-RPC implementation does not allow for receiving or sending
`None` values (both as query parameters and response).
Since the `write` method of `runbot.bundle` was overridden without
returning a value, an exception is raised when the method is called
through the external API.
This makes the `write` method return the value from its call
to `super()` which should be equal to `True` if all went well.
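The shape of the fix, as a minimal sketch:

```
from odoo import models

class Bundle(models.Model):
    _inherit = 'runbot.bundle'

    def write(self, values):
        # ... custom logic ...
        # Propagate super()'s return value (True on success): the implicit
        # None of a bare override cannot be marshalled over XML-RPC.
        return super().write(values)
```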
The auto disable host feature is mainly useful when there are a lot of
hosts for well configured repositories.
If for any reason a repo is corrupted on one host, this host will be
disabled until a manual intervention cleans the repo.
For other cases, where there are many repositories with not so many
hosts, it is most likely that a fetch will fail because of an invalid
repository configuration. Disabling the host in this case is not a good
idea.
With this commit, a setting allows enabling or disabling this feature.
Right now, a branch with a numerical name will be added to the database,
but it can conflict with a pr since the name of a pr is a number.
This means that the unique (name, repo_id) constraint can be broken.
We could use 'is_pr' in the unicity constraint to avoid this issue,
but searching on branch names would give confusing results if some of
them can be numerical.
Moreover, a runbot branch name should start with the version name,
meaning that a numerical branch name was a bad idea from the beginning.
The main idea is to allow some builds to use an extra slot on all hosts
if a bundle is in priority mode.
This is mainly for quick step debugging, mainly when modifying python
steps while the runbot is fully loaded.
This will also be useful for concurrency testing, by starting a build on
each host at the same time, even when some hosts are already fully
loaded.
Right now, multiple builds are read when managing builds to schedule.
This is not useful since the transaction is committed between each
of them. Moreover, the read builds can be written from another host,
adding another possibility to have a concurrent update.
Removing the prefetch_ids may help a little.
Add some documentation for users, mainly about teams and codeowners.
Improve some views and hide some menu_items to keep the interface easier
to navigate.
Since the custom create was introduced, if a config is added in the
config data of a create step, the config can be dynamic. If the given
config contains a create step, this becomes recursive.
This is fixed in this commit by:
- Checking the parent_path depth in add_child. This will also work for
python configs.
- Consuming the params when adding the child
- Also cleaning up the base custom multi config to use a specific step
Removing log_access has the side effect of adding foreign keys to the
create_uid and write_uid fields. This is quite slow and will slow down
inserts.
Removing the fields is also not a good idea on such a large table.
Putting the value in cache and flushing should do the trick.
This idea was postponed for a while since this was mostly a mergebot
responsibility, but having the github login of the user will help
with some team feature requests.
The main one is to only ping a team if the pr was not opened by a member
of the team. We want to let the team manager manage that as much as
possible, so the team manager group will be able to write the user's
github login (as well as the user himself), and to add a list of
non-user github_logins to consider if not all users have an account on
the runbot.
This commit also improves the views for team managers.
If the db_name does not start with a build id (or at least an int),
the query will fail because of 'local_pg_cursor'.
Since a database can be created with an invalid name from the db manager
but still log into the runbot_logs, we need to manage all formats.
Also add a limit to the catchup if the db is full of logs, to avoid a
MemoryError.
Force new batch buttons can sometimes be confusing for users.
Creating a group to show these buttons for advanced users only will help
avoid useless new batches when they are not needed.
A new batch is only needed:
- to create a new slot when a new trigger is added/modified through a
custom trigger
- to take the last databases into account for upgrades, mainly when
backporting a new field or strange, usually forbidden, operations
- to avoid needing to push again to rebase when an r+ was added on one
pr but one of them needs to be rebased or adapted.
Those cases are unusual, but the button is used most of the time thinking
it is a kind of rebuild, or that maybe it will rebase and push the branch
on the pr. Only users with basic knowledge of when it is needed should
have access to these buttons.
Before this commit, the build ir_logging was sent from the build
instance to the main runbot ir.logging table. As the number of runbot
hosts increases, this introduces a lot of concurrency.
e.g.: 80 hosts with 8 builds means 640 instances trying to insert
records into the ir.logging table.
With this commit, a special database is used on the builder host in
order to receive the ir.loggings from the build instances.
Regularly, the table is emptied by the builder and the logs are inserted
into the runbot leader ir.logging table.
When marking multiple build errors as fixed, it's sometimes necessary to
explain why it was decided to close the error. When working with a few
errors, this can be done manually ... but most of the time we want to
close a lot of false negatives in batch.
With this commit, a simple wizard is made available that will post a
reason in the chatter of the build_errors.
The build error view was unstructured and contained too much information.
This commit organizes the fields in groups and also validates some
modifications on records in order to avoid build error managers
disabling test-tags by mistake.
An error cannot be deactivated if it appeared less than 24 hours ago, to
avoid disabling a non-forward-ported pr that would fail the next
nightly, generating another build error.
Test tags can only be added or removed by administrators.
Also adds a menu for easier user management.
Also fixes the dname search and display.
Traceback (most recent call last):
File "/home/odoo/src/odoo/15.0/odoo/addons/base/models/ir_http.py", line 237, in _dispatch
result = request.dispatch()
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 687, in dispatch
result = self._call_function(**self.params)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 359, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/home/odoo/src/odoo/15.0/odoo/service/model.py", line 94, in wrapper
return f(dbname, *args, **kwargs)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 348, in checked_call
result = self.endpoint(*a, **kw)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 916, in __call__
return self.method(*args, **kw)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 535, in response_wrap
response = f(*args, **kw)
File "/home/odoo/src/odoo/15.0/addons/web/controllers/main.py", line 1347, in call_kw
return self._call_kw(model, method, args, kwargs)
File "/home/odoo/src/odoo/15.0/addons/web/controllers/main.py", line 1339, in _call_kw
return call_kw(request.env[model], method, args, kwargs)
File "/home/odoo/src/odoo/15.0/odoo/api.py", line 464, in call_kw
result = _call_kw_multi(method, model, args, kwargs)
File "/home/odoo/src/odoo/15.0/odoo/api.py", line 451, in _call_kw_multi
result = method(recs, *args, **kwargs)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 6489, in onchange
snapshot1 = Snapshot(record, nametree)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 6271, in __init__
self.fetch(name)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 6281, in fetch
self[name] = record[name]
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 5888, in __getitem__
return self._fields[key].__get__(self, type(self))
File "/home/odoo/src/odoo/15.0/odoo/fields.py", line 1054, in __get__
self.recompute(record)
File "/home/odoo/src/odoo/15.0/odoo/fields.py", line 1243, in recompute
self.compute_value(recs)
File "/home/odoo/src/odoo/15.0/odoo/fields.py", line 1265, in compute_value
records._compute_field_value(self)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 4255, in _compute_field_value
getattr(self, field.compute)()
File "/home/odoo/runbot/extra/runbot/models/version.py", line 36, in _compute_version_number
version.number = '.'.join([elem.zfill(2) for elem in re.sub(r'[^0-9\.]', '', version.name).split('.')])
File "/usr/lib/python3.8/re.py", line 210, in sub
return _compile(pattern, flags).sub(repl, string, count)
Exception
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 643, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 301, in _handle_exception
raise exception.with_traceback(None) from new_cause
TypeError: expected string or bytes-like object
When the builds directory is filled with a lot of build directories
(around 100000) the garbage collection process may take up to 2 minutes.
The root cause is that each build directory is scanned to clean it up
even if it was already cleaned.
With this commit, a stamp file is used to mark directories that were
already garbage collected.
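A minimal sketch of the stamp mechanism, with an assumed marker file name:

```
import os

GC_STAMP = '.gc_done'  # hypothetical marker file name

def gc_build_dir(build_dir):
    stamp = os.path.join(build_dir, GC_STAMP)
    if os.path.exists(stamp):
        return  # already collected: skip the expensive directory scan
    # ... remove logs, sources, datadir ...
    open(stamp, 'a').close()  # mark the directory as collected
```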
The sleep 1 was ok with a few builders, but regarding the number of
requests on the database when no build is running, this becomes
problematic.
An ideal solution would be to detect if:
-> we managed some testing builds
-> there is load (pendings)
In both cases, we don't want to sleep too much.
In other cases, we may want to wait a little longer.
A simple quick fix will just wait longer in all cases.
Before this commit, the build errors page was neither sortable nor
filterable.
This commit adds a way to filter by:
- all
- unassigned
- seen more than once
It also allows sorting by:
- last seen
- nb seen
A typical need is to sort by nb seen and filter out the ones only seen
once, to be able to figure out which ones should be checked in priority.
In some rare cases, a docker container has a status of exited, removed
or in removal, and the `end-` file has been written right after the code
checked the existence of the file.
To mitigate the issue, this commit checks the `end-` file existence
after the container status has been checked. That way, if the file
exists, we can be pretty confident that the build really ended.
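A sketch of the check order, assuming the python Docker SDK (status values are illustrative):

```
import os

def build_really_ended(container, end_file):
    # Read the container state first, then the `end-` file: if the file
    # exists after the container was seen stopped, the build really
    # ended, even if the file appeared between the two checks.
    stopped = container.status in ('exited', 'dead', 'removing')
    return stopped and os.path.exists(end_file)
```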
When exporting a commit, it will be useful to freeze the modification
time of the exported files. The idea behind that is to pre-generate
bundles at install time of the Odoo instances; that way, when running
the post install tests, the bundles do not have to be generated for
each test.
Because the login link redirected to `/`, when logging in the user would have
to re-navigate to their previous page unless they'd remembered this issue and
kept the original page around.
Fix this because it's annoying and dumb: we know our URL when rendering
templates, so we can redirect back there after login.
Consideration: this could also be done on logout, however it seems likely that
in that case the original page is "privileged" and when coming back we'd just
get an access error. So don't do it for now.
In bd4cf76b7 a new argument `--progress-bar off` was added to the pip
command line. Unfortunately, it appears that this argument is not
available in some pip versions, typically the one from Ubuntu Bionic
(9.0.1) [0]. The argument appeared in version 10.0.0 [1].
Also, in the Dockerfile, we allow the user to use `sudo` for
`/usr/bin/pip3`, and the main reason for that was to avoid having `pip`
re-install the packages that are pre-installed by the distribution
package manager.
That said, it appears that since pip 10.0.0 when installing packages
locally for the user, pip now properly detects distribution installed
packages.
So, as the solution to fix the progress bar issue is to upgrade pip to a
newer version, we can take advantage of it to get rid of the `sudo`.
Besides the fact that `--user` is the default on Debian based
distributions [2], we enforce it in case another distribution is used
in the Dockerfile.
[0] https://packages.ubuntu.com/bionic-updates/python3-pip
[1] https://pip.pypa.io/en/stable/news/#id744
[2] https://wiki.debian.org/Python#Deviations_from_upstream
In some conditions, google-chrome crashes and a core dump is written
in the bind-mounted directory.
With this commit, core dumps are disabled in the containers.
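One way to achieve this, assuming the python Docker SDK is used to start the container, is a `core` ulimit of 0 (image and command are placeholders):

```
import docker

client = docker.from_env()
client.containers.run(
    'ubuntu:focal',  # placeholder image
    'sleep 1',       # placeholder build command
    # A soft/hard core size of 0 means crashes no longer write a dump
    # into the bind-mounted directory.
    ulimits=[docker.types.Ulimit(name='core', soft=0, hard=0)],
)
```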
While using the docker cli as a subprocess was KISS and convenient,
the python Docker SDK is mature, easy to use, available as a Debian
package and much more powerful.
It will permit monitoring the containers' memory consumption and will
help to spot memory leaks.
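For instance, a one-shot memory snapshot of the running containers is a few lines with the SDK:

```
import docker

client = docker.from_env()
for container in client.containers.list():
    stats = container.stats(stream=False)      # single stats snapshot
    usage = stats['memory_stats'].get('usage', 0)
    print(container.name, usage)
```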
On the current runbot deployments, the `git gc` command is handled by a
unix cron. From time to time, some repositories get corrupted and we
suspect that some concurrent action may be involved, as stated in the
documentation [0].
For those reasons, with this commit, the `git gc` will be run by the
runbot clients themselves in order to avoid concurrent operations.
By default, the first gc will occur a few minutes after the start of the
client, and the next gcs are scheduled two hours and a few minutes later.
Also, this commit ensures that the git config is written regularly in
case of change.
[0] https://git-scm.com/docs/git-gc
Given the number of refs in the odoo repo (around 18 million at this
time), the parsing of the date was significant when filtering old refs.
Using unix time allows a direct comparison without parsing the date,
and improves performance, going from ~7 seconds to ~1.3 seconds.
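A sketch of the idea (helper name and age threshold are illustrative):

```
import subprocess
import time

def recent_refs(repo_path, max_age_days=30):
    # %(committerdate:unix) lets us filter refs with a plain integer
    # comparison instead of parsing a formatted date for every ref.
    out = subprocess.check_output(
        ['git', '-C', repo_path, 'for-each-ref',
         '--format=%(committerdate:unix) %(refname)'],
        text=True,
    )
    cutoff = time.time() - max_age_days * 24 * 3600
    return [line.split(' ', 1)[1] for line in out.splitlines()
            if int(line.split(' ', 1)[0]) >= cutoff]
```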
The statuses are currently sent by the leader when builds are created
and by workers on post_commit.
If the leader fails during the preparation of a batch (while creating
builds), the transaction is rolled back and the statuses are sent again.
The number of statuses to send makes it quite slow, making the
transaction longer, and the retry even more expensive. This leads to
preparing times being quite long, sometimes ten minutes after many
retries.
This commit proposes to send statuses in another dedicated transaction.
Since statuses are sent in batches, we can also try to use a unique
session, and uniquify commit+context statuses.
This allows removing the postcommit logic.
A further improvement would be to wait before sending statuses in order
to skip the pending status if the build is very fast.
An option is also added on the remote to skip the status: if the remote
is a fork, sending the status on the main remote should be enough.
The initial model was a (build_id, key, value) where the key is in fact
two pieces of information: `category.key` (where key is usually a
module).
This means that for each module, we will have one entry per
module*category.
We have between 200 and 400 modules per build * 4 keys -> around 1000
entries per build.
The huge amount of total entries leads to a fast overflow of the table
sequence, plus it creates large indexes.
Also, most of the time, the js will manage the display of stats, meaning
that python will transform
(build_id, category.key, value)
into
(build_id, {key: value}) for one category.
A new model makes use of a json field to store the values for different
modules in a json dict, one entry per category*build (4 entries per
build).
The table will be renamed and migrated later.
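A sketch of the regrouping from the old rows to the new json values:

```
from collections import defaultdict

def group_stats(rows):
    # rows: (build_id, 'category.key', value) triplets from the old model.
    # Result: one {module: value} dict per (build, category), i.e. ~4
    # entries per build instead of ~1000.
    grouped = defaultdict(dict)
    for build_id, key, value in rows:
        category, _, module = key.partition('.')
        grouped[(build_id, category)][module] = value
    return grouped
```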
The initial pull_info_failures feature was introduced before the
separate process for the leader. Since this is here for a rare case, it
hasn't been tested since then.
When testing the new infrastructure without a token, there were many
pull_info_failures, leading to an error on cleanup.
When rendering templates for the git configuration and the nginx
configuration, the templates are rendered as Markup, while a bytes-like
object was expected for the saving.
With this commit, the Markup is cast into str and the files are saved
as text files.
The `utils.escape` utility was deprecated in werkzeug 2.0.0, so it can
be replaced with our `html_escape`, which is itself `markupsafe.escape`.
Also, with this change, the double quotes are now escaped as `&#34;`
instead of `&quot;`, so we fix the test too.
Some untested code was making running builds available without
an nginx config (using a port and local address instead of a proper
dns/nginx config).
Remove this part to reduce complexity.
Source cleanup will check in multiple places for potentially used
commits. This makes the cleanup logic complex, plus limits the usable
commits in python steps. The current use case is to export the merge
base commits.
The proposed solution is to save the exported commits and clean this
list when the build is done. Commits in this list cannot be cleaned.
The main motivation is to be able to see the existing config_data on
params and edit the configuration on triggers.
A simple version would be to use the `FieldChar` with a simple
`JSON.stringify` formatter, but having some indentation on the field
makes it easier to read and edit.
The multiline need makes it closer to the `FieldText` than the
`FieldChar`, but the reset makes the browser freeze.
The current way to add a custom multibuild on a dev bundle is to:
- use the multibuild wizard to create 2 configs and 2 steps
- add this config on the bundle
- customize this config to make it faster by installing/restoring...
The useful parameters are the number of builds, the test tags and the
modules to install.
Another useful operation is to restore a dump instead of installing,
especially for post-install tests only breaking on full databases.
This commit proposes to replace the multi build wizard with a custom
trigger wizard. The main idea is to have generic configs, parametrized
by config_data. The wizard will only help to generate the corresponding
config_data.
When a build is garbage collected and the local folders are cleaned, the
log files are kept to allow build error investigations.
With this commit, a full local cleanup will be done 365 days after the
garbage collection. This means that the build dir will be completely
removed. The default number of days can be tweaked with the
`runbot.full_gc_days` ir.config_parameter.
Also, the _local_cleanup method can be called with a `full` boolean
parameter to force a full cleanup. e.g.: when called in a config step.
While at it, the res_config_settings view is a bit reorganized.
When a git archive fails, the partially exported source tree is left in
place. If another build tries to use the same commit, the tree is not
exported again as the directory already exists. This leads to
non-deterministic behaviors.
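A plausible mitigation (not necessarily the one implemented here) is to export atomically: extract into a temporary directory and rename only on success, so a failed `git archive` cannot leave a partial tree behind:

```
import os
import shutil
import subprocess
import tempfile

def export_commit(repo_path, sha, export_path):
    tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(export_path))
    try:
        archive = subprocess.run(['git', '-C', repo_path, 'archive', sha],
                                 check=True, stdout=subprocess.PIPE)
        subprocess.run(['tar', '-x', '-C', tmp_dir],
                       input=archive.stdout, check=True)
        os.rename(tmp_dir, export_path)  # atomic on the same filesystem
    except Exception:
        shutil.rmtree(tmp_dir, ignore_errors=True)
        raise
```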
Right now, single-version repos like upgrade are managed using
a regex to limit the name prefix; this avoids grouping branches when the
mergebot won't be able to merge them together, but the ci can be painful
since the branch needs to be renamed (closing the existing pr) or a
manual operation to move the branch into a new bundle must be performed.
This commit proposes to replace the forbidden_regex mechanism with
an explicit single_version mechanism.
In this case, the reference name will be automatically prefixed with
the version. The prefix contains `---` to indicate that some
magic was applied.
The runbot layout modifies the website/portal base layout to remove the
navbar and footer, and overrides some custom styles. A lot of assets are
loaded but not used. The only really useful elements are the base assets
(bootstrap, ...) and the login button.
Migrating to the next version of odoo is usually painful because some
xpaths may break, extra elements may be added, or some style change may
break the page, requiring more and more xpaths, css rules, ... for very
little benefit.
This cleanup creates a custom base layout for the runbot, independent
from the base odoo templates.
Also adds a breadcrumb, navigation arrows, and improves batch links.
- searching on a number will search for both pr and branch names
- hooks now use the payload to define the repo when not given in the url
- fixes .git cleaning in repo
(remove rstrip since it can fail for repos starting with g, i or t)
- recompute base on prepare if the base was not found
- remove local_result from write values if there is a single record
(instead of raising, makes python steps easier to write).
- avoid stuck builds/loops after removing a step from a config.
- avoid sending ci for linked base_commits
- add a fallback mechanism for base if no master branch is found
- add an option on the project to avoid keeping sticky builds running,
useful when using lots of projects.
WARNING: this is a change of default behaviour; existing projects need
to be updated.
- always discover new commits for branches matching base patterns.
This is especially useful to discover old versions on projects with
low merge frequency.
- always create a batch, even if there is no trigger. This helps to
notice that commits are discovered
- add line-through on dead branches/prs
- manual triggers are now displayed on the main page
As for the builder, this gives the ability to run the discovery of new
commits and all related logic in a separate process.
This will mainly be useful to restart the frontend without waiting for
the cron, or to restart the "leader" without stopping the frontend. This
will also be useful for optimisation purposes.
As a custom codeowner system was successfully implemented in a python
step on our runbot instance, it's now time to have a real model for
that.
This commit adds a skeleton Codeowner model in order to be used for a
basic usage.
This should be improved in the future after some battle testing.
With the increasing usage of the runbot to test various things and to
take care of random bugs in tests, the need for a team dashboard arose.
This commit adds a `runbot.team` model. Internal users can be
linked to the team. Module wildcards can be used to automatically assign
build errors to a team at 'build.error` creation.
Also, an upgrade exception can be assigned to a team in order to display
it on a dashboard.
A dashboard model is used to create custom dashboards on the team
frontend page. By default, a dashboard is meant to display a list of
failed builds. The failed builds are selected by specifying a project, a
trigger category (e.g. nightly), a config and a domain (which select
failed builds by default).
The dashboard can be customized by specifying a custom view.
Each created team has a frontend page that displays all the team
dashboards and the errors assigned to the team.
A few other improvements also come with this commit:
* The cleaned error is now in a tab on the build error form
* Known errors are displayed as "known" on the build log page
* The build form shows the config used for the build
When a build error is archived, a linked child with an assigned fixer
may still appear on the error frontend page.
With this commit, old children no longer show up.
Since the order was changed, the first values are actually the older
ones. This commit swaps the newer_build_stats and older_build_stats
values in order to always have the new keys. Before this commit, the new
keys were not displayed. A future improvement may be to combine keys
from all builds.
This commit also proposes to give a 0 value if the key did not exist in
the older build. This means that new keys will appear with a big
difference. This is maybe not a good idea and needs some testing. A
better solution would be to search for the first appearance.
Since the frontend_assets are loaded with `defer="defer"`,
the page sometimes fails with the message:
```
stats.js:212 Uncaught ReferenceError: Chart is not defined
at updateChart (stats.js:212)
at stats.js:158
at XMLHttpRequest.xhttp.onreadystatechange (stats.js:53)
```
This commit checks that Chart is available before trying to render the
graph.
Thanks to @kebeclibre for the help.
When getting pull info, the alive state can be determined easily,
meaning that this field can join the "_compute_branch_infos" family.
The hook was catching some changes made on a pr, conditionally updating
some fields and triggering some other operations depending on the action
flag.
All of the information needed to update the pull info should always be
present in the payload body, meaning that all fields can be updated at
once in case some hook was missed, and additional operations can be
triggered based on field changes.
In some conditions, it appears that a containerized build can eat up
all the memory of the container host. This leads to disturbance of other
builds as the kernel OOM killer enters the dance.
With this commit, the docker ability to limit the memory usage of a
container is used. The OOM killer will choose its victim among the
container processes.
The containers' memory limit has to be set in the runbot settings. If
not set, no memory limit is used.
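With the python Docker SDK, this boils down to passing `mem_limit` only when configured (the settings value shown is illustrative):

```
import docker

client = docker.from_env()
memory_limit = '16g'  # illustrative value read from the runbot settings
run_kwargs = {}
if memory_limit:      # if not configured, no limit is passed at all
    run_kwargs['mem_limit'] = memory_limit
client.containers.run('ubuntu:focal', 'sleep 1', detach=True, **run_kwargs)
```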
runbot servers are running with log-level debug in order to have useful
debug information, but this also causes some noise coming from odoo.
This pr changes most debug logs to info.
- Add draft pr management to avoid triggering codeowners on draft prs.
- Add a check on falsy config on trigger id (avoids a crash, useful to disable a trigger)
- Add extra params on custom triggers to avoid writing a specific config every time.
- Trigger a new batch automatically when updating target/draft
Some projects may use a totally different Dockerfile. In order to avoid
new branches of those projects automatically building with the generic
default Dockerfile, this commit adds the possibility to configure a
default Dockerfile on a project.
Since Odoo 14.0, the recommended Ubuntu LTS release is Focal, so the
default Dockerfile should be updated accordingly.
A custom template with Bionic has to be manually created on runbot
instances that still build Odoo < 14.0.
* A small change is made in the templates logic that builds the
Dockerfile: a `runbot_pip` dict entry now exists in order to install the
python libs required by the runbot. On the other hand, `additional_pip`
should only be used to install optional python libs for Odoo.
* Upgrade Chrome version to 90.0.4430.93-1 as this one is currently in
use on our current runbot instance. Just keep in mind that the Odoo
screencast feature does not work anymore since Chrome 88.
* gsfont is added because of a bug [0] that affects python-reportlab in
Focal.
* pyCrypto package is removed. It was used in an Odoo addon that
disappeared in odoo/odoo@2738341c21
* dbfread and websocket-client are now installed as deb packages as they
exist in Bionic and Focal
* pdfminer.six is now removed because a deb package exists in Focal but
not in Bionic. It means that it has to be added in deb_packages_python
in Dockerfiles for odoo > 13.0 and in additional_pip for odoo <= 13.0
[0]: https://bugs.launchpad.net/ubuntu/+source/python-reportlab/+bug/1918107
The test in the original code will never fire because the value searched
for is not in the keys of the dictionary, but in one of the lists which
are in the values. Work around this by maintaining a reverse dictionary
(module name -> commit) and using this for the test.
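A sketch of the workaround, with illustrative data:

```
# {commit: [module, ...]} as in the original code
modules_by_commit = {'abc123': ['mail', 'web'], 'def456': ['account']}

# Reverse dictionary, module name -> commit, so membership tests hit keys.
commit_by_module = {
    module: commit
    for commit, modules in modules_by_commit.items()
    for module in modules
}

assert 'mail' in commit_by_module       # works
assert 'mail' not in modules_by_commit  # the original test never fired
```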