The "Force new batch" buttons can sometimes be confusing for users.
Creating a group so that these buttons are shown to advanced users only
will help avoid useless new batches when they are not needed.
A new batch is only needed:
- to create a new slot when a new trigger is added/modified through a
custom trigger
- to take the latest databases into account for upgrades, mainly when
backporting a new field or performing unusual, normally forbidden
operations
- to avoid needing to push again to rebase when an r+ was added on one
pr but one of them needs to be rebased or adapted
These cases are unusual, but the button is used most of the time in the
belief that it is a kind of rebuild, or that it will rebase and push the
branch on the pr. Only users with a basic knowledge of when it is needed
should have access to these buttons.
Before this commit, the build ir_logging records were sent from the
build instance to the main runbot ir.logging table. As the number of
runbot hosts increases, this introduces a lot of concurrency.
e.g.: 80 hosts with 8 builds means 640 instances trying to insert
records into the ir.logging table.
With this commit, a special database is used on the builder host in
order to receive ir.logging records from the build instances.
The builder regularly empties this table and inserts the logs into the
runbot leader's ir.logging table.
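A minimal sketch of that flush step, assuming direct psycopg2
connections and a simplified set of ir_logging columns (the real table
has more):

```python
import psycopg2

def flush_local_logs(local_dsn, leader_dsn):
    # Drain the builder-local logging database and bulk-insert the rows
    # into the leader's ir_logging table, then empty the local table.
    with psycopg2.connect(local_dsn) as local, psycopg2.connect(leader_dsn) as leader:
        with local.cursor() as lcr, leader.cursor() as mcr:
            lcr.execute("SELECT build_id, level, message, create_date FROM ir_logging")
            rows = lcr.fetchall()
            if rows:
                mcr.executemany(
                    "INSERT INTO ir_logging (build_id, level, message, create_date)"
                    " VALUES (%s, %s, %s, %s)", rows)
                lcr.execute("DELETE FROM ir_logging")
```

This concentrates the concurrency on one periodic flush per builder
instead of one insert stream per build instance.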
When marking multiple build errors as fixed, it's sometimes necessary
to explain why it was decided to close the error. When working with a
few errors, this can be done manually, but most of the time we want to
close a lot of false negatives in batch.
With this commit, a simple wizard is made available that will post a
reason in the chatter of the build_errors.
The build error view was unstructured and contained too much
information. This commit organizes the fields in groups and also
validates some modifications on records in order to prevent build error
managers from disabling test-tags by mistake.
An error cannot be deactivated if it appeared less than 24 hours ago, to
avoid disabling a non-forwardported pr that would fail the next nightly,
generating another build error.
Test tags can only be added or removed by administrators.
Also adds a menu for easier user management.
Also fixes the dname search and display.
```
Traceback (most recent call last):
File "/home/odoo/src/odoo/15.0/odoo/addons/base/models/ir_http.py", line 237, in _dispatch
result = request.dispatch()
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 687, in dispatch
result = self._call_function(**self.params)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 359, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/home/odoo/src/odoo/15.0/odoo/service/model.py", line 94, in wrapper
return f(dbname, *args, **kwargs)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 348, in checked_call
result = self.endpoint(*a, **kw)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 916, in __call__
return self.method(*args, **kw)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 535, in response_wrap
response = f(*args, **kw)
File "/home/odoo/src/odoo/15.0/addons/web/controllers/main.py", line 1347, in call_kw
return self._call_kw(model, method, args, kwargs)
File "/home/odoo/src/odoo/15.0/addons/web/controllers/main.py", line 1339, in _call_kw
return call_kw(request.env[model], method, args, kwargs)
File "/home/odoo/src/odoo/15.0/odoo/api.py", line 464, in call_kw
result = _call_kw_multi(method, model, args, kwargs)
File "/home/odoo/src/odoo/15.0/odoo/api.py", line 451, in _call_kw_multi
result = method(recs, *args, **kwargs)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 6489, in onchange
snapshot1 = Snapshot(record, nametree)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 6271, in __init__
self.fetch(name)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 6281, in fetch
self[name] = record[name]
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 5888, in __getitem__
return self._fields[key].__get__(self, type(self))
File "/home/odoo/src/odoo/15.0/odoo/fields.py", line 1054, in __get__
self.recompute(record)
File "/home/odoo/src/odoo/15.0/odoo/fields.py", line 1243, in recompute
self.compute_value(recs)
File "/home/odoo/src/odoo/15.0/odoo/fields.py", line 1265, in compute_value
records._compute_field_value(self)
File "/home/odoo/src/odoo/15.0/odoo/models.py", line 4255, in _compute_field_value
getattr(self, field.compute)()
File "/home/odoo/runbot/extra/runbot/models/version.py", line 36, in _compute_version_number
version.number = '.'.join([elem.zfill(2) for elem in re.sub(r'[^0-9\.]', '', version.name).split('.')])
File "/usr/lib/python3.8/re.py", line 210, in sub
return _compile(pattern, flags).sub(repl, string, count)
Exception
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 643, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/home/odoo/src/odoo/15.0/odoo/http.py", line 301, in _handle_exception
raise exception.with_traceback(None) from new_cause
TypeError: expected string or bytes-like object
```
When the builds directory is filled with a lot of build directories
(around 100,000), the garbage collection process may take up to 2
minutes. The root cause is that each build directory is scanned for
cleanup even when it was already cleaned.
With this commit, a stamp file is used to mark directories that were
already garbage collected.
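A minimal sketch of the idea; the stamp file name and the cleanup
helper are hypothetical:

```python
import os

STAMP_FILE = '.runbot_gc_done'  # hypothetical marker name

def _clean_build_dir(build_dir):
    """Placeholder for the actual per-directory cleanup work."""

def gc_build_dir(build_dir):
    stamp = os.path.join(build_dir, STAMP_FILE)
    if os.path.exists(stamp):
        return  # already collected: skip the expensive scan entirely
    _clean_build_dir(build_dir)
    open(stamp, 'w').close()  # mark the directory for the next passes
```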
The sleep 1 was ok with a few builders but, given the number of
requests hitting the database when no build is running, this becomes
problematic.
An ideal solution would be to detect if
-> we managed some testing build
-> there is load (pendings)
In both cases, we don't want to sleep too much.
In other cases, we may want to wait a little longer.
A simple quick fix is to just wait longer in all cases.
Before this commit, the build errors page was neither sortable nor
filterable.
This commit adds a way to filter by:
- all
- unassigned
- seen more than once
It also allows sorting by:
- last seen
- nb seen
The typical need is to sort by nb seen and filter out the errors seen
only once, to figure out which ones should be checked in priority.
In some rare cases, a docker container has a status of exited, removed
or in removal, and the `end-` file has been written right after the code
checked for the existence of the file.
To mitigate the issue, this commit checks the `end-` file existence
after the container status has been checked. That way, if the file
exists, we can be pretty confident that the build really ended.
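A sketch of the check order, with hypothetical names:

```python
import os

def build_really_ended(container_status, end_file_path):
    # The status is read first; only then is the end file tested. If the
    # file was written right after an earlier check, testing it after the
    # status read closes the race window described above.
    if container_status in ('exited', 'removed', 'removing'):
        return os.path.exists(end_file_path)
    return False
```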
When exporting a commit, it can be useful to freeze the modification
time of the exported files. The idea behind that is to pre-generate
bundles at install time of the Odoo instances, so that when running the
post install tests, the bundles do not have to be generated for each
test.
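A sketch of what the freezing could look like, assuming a plain
`os.utime` walk over the export directory (the chosen timestamp is
arbitrary):

```python
import os

FROZEN_TIME = 1577836800  # 2020-01-01 UTC, any fixed value works

def freeze_mtimes(export_path):
    # Give every exported file the same mtime so that mtime-sensitive
    # asset bundle checksums stay identical across exports of a commit.
    for root, _dirs, files in os.walk(export_path):
        for name in files:
            path = os.path.join(root, name)
            os.utime(path, (FROZEN_TIME, FROZEN_TIME))
```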
Because the login link redirected to `/`, when logging in the user would have
to re-navigate to their previous page unless they'd remembered this issue and
kept the original page around.
Fix this because it's annoying and dumb: we know our URL when rendering
templates, so we can redirect back there after login.
Consideration: this could also be done on logout, however it seems likely that
in that case the original page is "privileged" and when coming back we'd just
get an access error. So don't do it for now.
In bd4cf76b7 a new argument `--progress-bar off` was added to the pip
command line. Unfortunately, it appears that this argument is not
available in some pip versions, typically the one from Ubuntu Bionic
(9.0.1) [0]. The argument appeared in version 10.0.0 [1].
Also, in the Dockerfile, we allow the user to use `sudo` for
`/usr/bin/pip3` and the main reason for that was to avoid having `pip`
re-install the packages that are pre-installed by the distribution
package manager.
That said, it appears that since pip 10.0.0 when installing packages
locally for the user, pip now properly detects distribution installed
packages.
So, as the solution to fix the progress bar issue is to upgrade pip to a
newer version, we can take advantage of it to get rid of the `sudo`.
Besides the fact that `--user` is the default on Debian based
distributions [2], we enforce it in case another distribution is used
in the Dockerfile.
[0] https://packages.ubuntu.com/bionic-updates/python3-pip
[1] https://pip.pypa.io/en/stable/news/#id744
[2] https://wiki.debian.org/Python#Deviations_from_upstream
In some conditions, google-chrome crashes and a core dump is written
in the bind-mounted directory.
With this commit, the core dumps are disabled in the containers.
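With the Docker SDK, this can be done by capping the core file size for
the container; a sketch (image and command are illustrative):

```python
import docker
from docker.types import Ulimit

client = docker.from_env()
container = client.containers.run(
    'odoo:focal',                 # hypothetical image tag
    command='odoo-bin --help',    # hypothetical command
    # a hard limit of 0 on core file size prevents the kernel from
    # writing a core dump into the bind-mounted build directory
    ulimits=[Ulimit(name='core', soft=0, hard=0)],
    detach=True,
)
```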
While using the docker cli as a subprocess was KISS and convenient,
the python Docker SDK is mature, easy to use, available as a Debian
package and much more powerful.
It will make it possible to monitor the containers' memory consumption
and will help to spot memory leaks.
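For instance, a memory monitoring pass could look like this sketch:

```python
import docker

client = docker.from_env()
for container in client.containers.list():
    stats = container.stats(stream=False)  # one-shot stats snapshot
    usage = stats['memory_stats'].get('usage', 0)
    print(container.name, '%.1f MiB' % (usage / 1024 ** 2))
```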
On the current runbot deployments, the `git gc` command is handled by a
unix cron. From time to time, some repositories get corrupted and we
suspect that some concurrent action may be involved, as stated in the
documentation [0].
For those reasons, with this commit, the `git gc` will be run by the
runbot clients themselves in order to avoid concurrent operations.
By default, the first gc will occur a few minutes after the start of the
client and the next gc's are scheduled two hours and a few minutes later.
Also, this commit ensures that the git config is written regularly in
case of change.
[0] https://git-scm.com/docs/git-gc
Given the number of refs in the odoo repo (around 18 million at this
time), parsing the date was a significant cost when filtering old refs.
Using unix time allows a direct comparison without parsing the date,
and improves performance, going from ~7 seconds to ~1.3 seconds.
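A sketch of the filtering with unix timestamps, assuming a plain
subprocess call (the one-month cutoff is illustrative):

```python
import subprocess
import time

def recent_refs(repo_path, max_age=30 * 24 * 3600):
    # %(committerdate:unix) yields an integer timestamp, so old refs can
    # be filtered with a direct comparison instead of date parsing.
    out = subprocess.check_output(
        ['git', 'for-each-ref',
         '--format=%(refname)\x00%(committerdate:unix)'],
        cwd=repo_path, text=True)
    cutoff = time.time() - max_age
    refs = []
    for line in out.splitlines():
        name, _, stamp = line.partition('\x00')
        if stamp and int(stamp) > cutoff:
            refs.append(name)
    return refs
```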
The statuses are currently sent by the leader when builds are created
and by the workers on post_commit.
If the leader fails during the preparation of a batch (while creating
builds), the transaction is rolled back and the statuses are sent again.
The number of statuses to send makes this quite slow, making the
transaction longer and the retry even more expensive. This leads to
important preparation times, sometimes ten minutes after many retries.
This commit proposes to send the statuses in a dedicated transaction.
Since statuses are sent in batch, we can also try to use a unique
session and uniquify commit+context statuses.
This allows removing the postcommit logic.
A further improvement would be to wait before sending statuses, in order
to skip the pending status if the build is very fast.
An option is also added on the remote to skip the status: if the remote
is a fork, sending the status on the main remote should be enough.
The initial model was (build_id, key, value), where the key is in fact
two pieces of information: `category.key` (where key is usually a
module). This means that for each module, we have one entry per
module * category.
With between 200 and 400 modules per build and 4 keys, that's around
1000 entries per build.
The huge amount of total entries leads to a fast overflow of the table
sequence, plus it creates large indexes.
Also, most of the time the js manages the display of the stats, meaning
that python transforms
(build_id, category.key, value)
into
(build_id, {key: value}) for one category.
A new model makes use of a json field to store the values of the
different modules in a json dict, one entry per category * build
(4 entries per build).
The table will be renamed and migrated later.
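The shape change, illustrated with plain dicts (key names are
illustrative):

```python
# old model: one row per build * category * module
old_rows = [  # (build_id, key, value) with key = 'category.module'
    (1, 'query_count.base', 42),
    (1, 'query_count.web', 51),
]

# new model: one json dict per build * category
per_category = {}
for build_id, key, value in old_rows:
    category, _, module = key.partition('.')
    per_category.setdefault((build_id, category), {})[module] = value

assert per_category == {(1, 'query_count'): {'base': 42, 'web': 51}}
```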
The initial pull_info_failures feature was introduced before the
separate process for the leader. Since it is here for a rare case, it
hasn't been tested since then.
When testing the new infrastructure without a token, there were many
pull_info_failures, leading to an error on cleanup.
When rendering templates for git configuration and nginx configuration,
the templates are rendered as Markup, while a byte-like object was
expected for saving.
With this commit, the Markup is cast into str and the files are saved
as text files.
The `utils.escape` utility was deprecated in werkzeug 2.0.0, so it can
be replaced with our `html_escape`, which is itself `markupsafe.escape`.
Also, with this change, double quotes are now escaped as `&#34;`
instead of `&quot;`, so we fix the test too.
Some untested code was making running builds available without
an nginx config (using a port and local address instead of a proper
dns/nginx config).
Removing this part to reduce complexity.
Source cleanup checks in multiple places for potentially used
commits. This makes the cleanup logic complex, and limits the usable
commits in python steps. The current use case is to export the mergebase
commits.
The proposed solution is to save the exported commits and clean this
list when the build is done. Commits in this list cannot be cleaned.
The main motivation is to be able to see the existing config_data on
params and to edit the configuration on triggers.
A simple version would be to use the `FieldChar` with a simple
`JSON.stringify` formatter, but having some indentation on the field
makes it easier to read and edit.
The multiline need makes it closer to the `FieldText` than the
`FieldChar`, but the reset makes the browser freeze.
The current way to add a custom multibuild on a dev bundle is:
- use the multibuild wizard to create 2 configs and 2 steps
- add this config on the bundle
- customize this config to make it faster by installing/restoring...
The useful parameters are the number of builds, the test tags and the
modules to install.
Another useful operation is to restore a dump instead of installing,
especially for post-install tests only breaking on full databases.
This commit proposes to replace the multibuild wizard with a custom
trigger wizard. The main idea is to have a generic config, parametrized
by config_data. The wizard will only help to generate the corresponding
config_data.
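As an illustration, the wizard could generate a payload along these
lines (the keys are assumptions, not the real schema):

```python
config_data = {
    'number_builds': 2,                    # how many multibuild children
    'test_tags': '/account,/stock',        # restrict the tests to run
    'modules_to_install': ['account', 'stock'],
    # restore a dump instead of installing, e.g. for post install tests
    # that only break on full databases
    'restore_dump_url': 'https://example.com/dump.zip',
}
```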
When a build is garbage collected and the local folders are cleaned,
the log files are kept to allow build error investigations.
With this commit, a full local cleanup will be done 365 days after the
garbage collect. This means that the build dir will be completely
removed. The default number of days can be tweaked with the
`runbot.full_gc_days` ir.config_parameter.
Also, the _local_cleanup method can be called with a `full` boolean
parameter to force a full cleanup. e.g.: when called in a config step.
While at it, the res_config_settings view is a bit reorganized.
When a git archive fails, the partially exported source tree is left in
place. If another build tries to use the same commit, the tree is not
exported anymore because the directory exists. This leads to
non-deterministic behaviors.
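A common remedy, sketched here under the assumption that exports land
in per-commit directories, is to extract into a temporary directory and
rename it into place, so a partial tree can never sit at the final path:

```python
import os
import shutil
import subprocess
import tempfile

def export_commit(repo_path, sha, dest):
    if os.path.isdir(dest):
        return dest  # safe again: the directory only exists when complete
    tmp = tempfile.mkdtemp(dir=os.path.dirname(dest))
    try:
        archive = subprocess.run(
            ['git', '-C', repo_path, 'archive', sha],
            stdout=subprocess.PIPE, check=True)
        subprocess.run(['tar', '-x', '-C', tmp],
                       input=archive.stdout, check=True)
        os.rename(tmp, dest)  # atomic on the same filesystem
    except Exception:
        shutil.rmtree(tmp, ignore_errors=True)
        raise
    return dest
```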
Right now, single version repos like upgrade are managed using
a regex to limit the name prefix. This avoids grouping branches when the
mergebot won't be able to merge them together, but the ci can be painful
since the branch needs to be renamed (closing the existing pr) or a
manual operation to move the branch into a new bundle must be performed.
This commit proposes to replace the forbidden_regex mechanism with
an explicit single_version mechanism.
In this case the reference name will be automatically prefixed with
the version. The prefix contains `---` to indicate that some
magic was applied.
The runbot layout modifies the website/portal base layout to remove the
navbar and footer, and overrides some custom styles. A lot of assets are
loaded but not used. The only really useful elements are the base assets
(bootstrap, ...) and the login button.
Migrating to the next version of odoo is usually painful because some
xpath may break, extra elements are added, or some style change may
break the page, requiring more and more xpaths, css rules, ... for very
little benefit.
This cleanup creates a custom base layout for runbot, independent from
the base odoo templates.
Also adds a breadcrumb and navigation arrows, and improves batch links.
- searching on a number will search for both pr and branch names
- hooks now use the payload to define the repo when not given in the url
- fixes .git cleaning in repo
(removes rstrip since it can fail for repos starting with g, i or t)
- recompute base on prepare if base was not found
- remove local_result from write values if there is a single record
(instead of raising, makes python steps easier to write)
- avoid stuck build/loop after removing a step from a config
- avoid sending ci for linked base_commit
- add a fallback mechanism for base if no master branch is found
- add an option on the project to avoid keeping sticky running, useful
when using lots of projects
WARNING: this is a change of default behaviour, existing projects
need to be updated.
- always discover new commits for branches matching base patterns.
This is especially useful to discover old versions on projects with a
low merge frequency.
- always create a batch, even if there is no trigger. This helps to
notice that commits are discovered
- add line-through on dead branches/prs
- manual triggers are now displayed on the main page
As for the builder, this gives the ability to run the discovery of new
commits and all the related logic in a separate process.
This will mainly be useful to restart the frontend without waiting for
the cron, or to restart the "leader" without stopping the frontend. This
will also be useful for optimisation purposes.
As a custom codeowner system was successfully implemented in a python
step on our runbot instance, it's now time to have a real model for
that.
This commit adds a skeleton Codeowner model in order to be used for
basic usage.
This should be improved in the future after some battle testing.
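A possible shape for such a skeleton, with illustrative fields only:

```python
from odoo import fields, models

class Codeowner(models.Model):
    # Minimal sketch; the real model may differ.
    _name = 'runbot.codeowner'
    _description = 'Codeowner'

    project_id = fields.Many2one('runbot.project', required=True)
    regex = fields.Char('Regular Expression', required=True,
                        help='Pattern matched against modified file paths')
    github_teams = fields.Char('Github teams',
                               help='Comma-separated teams to add as reviewers')
```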
With the increasing usage of runbot to test various things and to take
care of random bugs in tests, the need for a team dashboard arose.
This commit adds a `runbot.team` model. Internal users can be
linked to the team. Module wildcards can be used to automatically assign
build errors to a team at `build.error` creation.
Also, an upgrade exception can be assigned to a team in order to display
it on a dashboard.
A dashboard model is used to create custom dashboards on the team
frontend page. By default, a dashboard is meant to display a list of
failed builds. The failed builds are selected by specifying a project, a
trigger category (e.g. nightly), a config and a domain (which selects
failed builds by default).
The dashboard can be customized by specifying a custom view.
Each created team has a frontend page that displays all the team
dashboards and the errors assigned to the team.
A few other improvements also come with this commit:
* The cleaned error is now in a tab on the build error form
* Known errors are displayed as "known" on the build log page
* The build form shows the config used for the build
When a build error is archived, a linked child with an assigned fixer
may still appear on the error frontend page.
With this commit, old children no longer show up.
Since the order was changed, the first values are actually the oldest
ones. This commit swaps the newer_build_stats and older_build_stats
values in order to always have the new keys. Before this commit, the new
keys were not displayed. A future improvement may be to combine keys
from all builds.
This commit also proposes to give a value of 0 if the key did not exist
in the older build. This means that new keys will appear with a big
difference. This is maybe not a good idea and needs some testing. A
better solution would be to search for the first appearance.
Since the frontend_assets are loaded with `defer="defer"`,
the page sometimes fails with the message:
```
stats.js:212 Uncaught ReferenceError: Chart is not defined
at updateChart (stats.js:212)
at stats.js:158
at XMLHttpRequest.xhttp.onreadystatechange (stats.js:53)
```
This commit checks that Chart is available before trying to render the graph.
Thanks to @kebeclibre for the help.
When getting pull info, the alive state can be determined easily,
meaning that this field can join the "_compute_branch_infos" family.
The hook was catching some changes made on a pr, conditionally updating
some fields and triggering some other operations depending on the action
flag.
All of the information needed to update the pull info should always be
present in the payload body, meaning that all fields can be updated at
once in case a hook was missed, and additional operations can be
triggered based on field changes.
In some conditions, it appears that a containerized build can eat up
all the memory of the container host. This disturbs other builds as the
kernel OOM killer enters the dance.
With this commit, docker's ability to limit the memory usage of a
container is used. The OOM killer will then choose its victim among the
container's processes.
The containers' memory limit has to be set in the runbot settings. If
not set, no memory limit is used.
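With the Docker SDK, the limit boils down to one parameter; a sketch
with illustrative values:

```python
import docker

client = docker.from_env()
container = client.containers.run(
    'odoo:focal',                # hypothetical image tag
    command='odoo-bin -i all',   # hypothetical command
    mem_limit='16g',             # value would come from the runbot settings
    detach=True,
)
# under memory pressure, the OOM killer now picks a victim among this
# container's processes instead of among the host's
```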
runbot servers are running with log-level debug in order to have useful
debug information, but this also causes some noise coming from odoo.
This pr changes most debug logs to info.
- Add draft pr management to avoid triggering codeowners on draft prs.
- Add a check on a falsy config on the trigger id (avoids a crash, useful to disable a trigger)
- Add extra params on custom triggers to avoid writing a specific config every time.
- Trigger a new batch automatically when updating target/draft
Some projects may use a totally different Dockerfile. In order to avoid
new branches of those projects automatically building with the generic
default Dockerfile, this commit adds the possibility to configure a
default Dockerfile on a project.
Since Odoo 14.0, the recommended Ubuntu LTS release is Focal, so the
default Dockerfile should be updated accordingly.
A custom template with Bionic has to be manually created on runbot
instances that still build Odoo < 14.0.
* A small change is made in the template logic that builds the
Dockerfile: a `runbot_pip` dict entry now exists in order to install the
python libs required by the runbot. On the other hand, `additional_pip`
should only be used to install optional python libs for Odoo.
* Upgrade Chrome version to 90.0.4430.93-1 as this one is currently in
use on our current runbot instance. Just keep in mind that the Odoo
screencast feature does not work anymore since Chrome 88.
* gsfont is added because of a bug [0] that affects python-reportlab in
Focal.
* pyCrypto package is removed. It was used in an Odoo addon that
disappeared in odoo/odoo@2738341c21
* dbfread and websocket-client are now installed as deb packages as
they exist in Bionic and Focal.
* pdfminer.six is now removed because a deb package exists in Focal but
not in Bionic. It means that it has to be added to deb_packages_python
in Dockerfiles for odoo > 13.0 and to additional_pip for odoo <= 13.0.
[0]: https://bugs.launchpad.net/ubuntu/+source/python-reportlab/+bug/1918107
The test in the original code will never fire because the value searched
for is not in the keys of the dictionary, but in one of the lists which
are in the values. Work around this by maintaining a reverse dictionary
module name -> commit and use this for the test.
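A sketch of the workaround with made-up data:

```python
# hypothetical data: commit -> modules it provides
commits_modules = {
    'sha1': ['base', 'web'],
    'sha2': ['account'],
}

# `'web' in commits_modules` is always False: 'web' is in a value list,
# not a key. The reverse map makes the membership test correct.
module_to_commit = {
    module: commit
    for commit, modules in commits_modules.items()
    for module in modules
}
assert 'web' in module_to_commit
assert module_to_commit['web'] == 'sha1'
```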
When a Dockerfile's to_build field is False for any reason and a new
runbot is set up, it's easy to miss the point, and builds that involve
this particular Dockerfile will fail on the new runbot.
With this commit, the Dockerfiles tree view is improved to easily spot
those kinds of problems:
- the to_build field is visible and falsy lines are in yellow (warning)
- empty Dockerfiles are in red (danger)
- versions are now visible in the tree view too
When a build slot is hidden in the batch tile but is responsible for
the batch failure, the failure reason may not be obvious to the user.
With this commit, a hidden slot appears if the slot's build is in
failure.
Sometimes a pr pull info can fail.
- Most of the time it is only temporary and it will be successful on the next try.
- In some rare cases the pr will always fail (github inconsistency): the pr exists in git but not on the github api.
For this rare case, we store the pr in memory in order to unstuck other pr/branch updates.
We consider that this error should not remain; in this case github needs to fix the inconsistency.
This is why the runbot model doesn't handle such a case for now.
Another solution would be to create the pr with fake pull info. This idea is not the best one,
since we want to avoid having many prs with fake pull_info in case of a temporary failure of github services.
With this solution, the pr will be retried once every cron loop.
We don't expect to have prs with this kind of persistent failure more than once every few months/years.
When code blocks contained markdown-like text, the inside of the code
block was also formatted.
This commit removes the code blocks before applying the other formatting
and places them back at the end.
closes #481
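A sketch of the stash-and-restore approach (regex and placeholder
format are assumptions):

```python
import re

CODE_BLOCK_RE = re.compile(r'```.*?```', re.S)

def format_outside_code(text, apply_formatting):
    # Pull code blocks out, replace them with placeholders, format the
    # rest, then put the blocks back untouched at the end.
    blocks = []
    def stash(match):
        blocks.append(match.group(0))
        return '\x00%d\x00' % (len(blocks) - 1)
    text = CODE_BLOCK_RE.sub(stash, text)
    text = apply_formatting(text)
    return re.sub(r'\x00(\d+)\x00',
                  lambda m: blocks[int(m.group(1))], text)
```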
- Adds a complete legend enabling the display of a custom subset of
modules. This is mainly to enable a vertical scroll on the list, since
the chart-js default legend is displayed in multiple columns.
- Adds a "Noisy" order mode to find non-deterministic modules.
- Changes the build selection mode to a centered one, to easily center
the build of interest, and adds a forward button.
- Small ui tweaks/fixes to match the new selection logic.
Since 360e31ade4, it's possible to add statistics values to build
results, but there was no practical way to analyze them.
With this commit, there is a new button on the bundle page that leads to
a chart page that displays those values.
The default reference build is last known good build of the bundle.
Values are filtered by key and only the most significant values are
displayed. The user can then refine the chart by changing the reference
build or the key and a few other options.
Co-author: Xavier-Do <xdo@odoo.com>
Before this commit, the `requirements.txt` from a specified odoo branch
(master by default) was always installed in the Dockerfiles.
We now allow disabling this feature to test Odoo with vanilla
distributions.
When choosing to install Chrome in a Dockerfile, the chrome version is
downloaded from Odoo's nightly server. This makes it difficult to test
with different versions of Chrome.
With this commit, we allow installing from Google in Dockerfiles.
By default, the install still comes from the Odoo nightly server, but if
the key `custom_values['chrome_source']` is set to 'google' in a
Dockerfile, the specified version will be downloaded from Google's
servers when the Docker image is built.
If you try to manually create a bundle, Odoo will crash because it
tries to use the `name` value that is not set yet. For that, we only
start computing host_id once `name` is entered.
aka: make it clean
This build database_ids field was generated a few months ago, waiting
for the database to update to this new scheme before using it.
It is still a little early to use it in cleanup methods, but it can be
used to generate the connect links dynamically.
To follow the specs introduced in the previous commit, the main
fa-sign-in button should link to the -all database.
It will almost always be the first one in database_ids in alphabetical
order. Then, in the dropdown, all the other databases are listed.
This will fix the previously broken design_theme connect link (no base
nor all). For this purpose, the nginx regex needs to be adapted to
accept database names with underscores.
Using the database name as a subdomain will set the web.base.url
correctly when connecting for the first time using this link.
This will unfortunately break connect all and connect base links if dns
and nginx are not set up.
This won't fix the web.base.url when connecting with the Connect
(database selector) button either.
When a status cannot be sent to github, the status is created in
database but the `sent_date` field is not set.
Because of that, the status cannot be sent manually from the frontend.
With this commit, that kind of status can be tried again.
When a build reaches the run_run_odoo step, a database has to be
set. If none is found in the build params, the one from the last step
is chosen. Historically, the last one in the `Split` config was `all`,
but now the last one is `base`.
With this commit, if none is found in the build params, `all` is chosen
if found in any install config step. As a default, the one from the
first step is chosen.
Since 3657a65b20, docker_run is called outside of the step and python
steps have to set the `docker_params` variable. This breaks the computed
`log_list`, because the string `docker_run(` is searched for in python
step code to determine whether it's a docker step.
With this commit, `docker_params = ` is searched for instead.
For now, the sticky flag is used to define the bundles to use as
upgrade targets.
The plan for the future is to continue to test upgrades even if the
bundle is not sticky anymore.
This new flag will allow removing sticky from old branches.
This will also allow disabling upgrade tests for still-sticky branches
when needed.
This commit also filters source bundles based on this flag
(before that, all base bundles were used as sources, even if not
sticky).
The params mechanism is a way to avoid rebuilding with the exact same
params. The commit is the main identifier used to know if two builds are
the same; usually commit_links are not useful to uniquify params, except
when the mergebase is in use, as in the security check (based on diff).
We would also like to avoid having red diff-based builds in sticky/base
branches when the ci is ignored on a pr, but we want to keep the diff
display on new batches (lines and files changed).
We need a way to know if it is an is_base bundle. This was initially a
forbidden behaviour because of duplicate detection.
This commit adds a batch_dependent flag. It enables the use of the batch
in the fingerprint, making it unique per batch and disabling duplicate
detection. In this case we can use the "is_base" information from the
bundle and avoid false duplicate detection if the trigger is based on a
mergebase-commit diff (pr based on another pr now merged).
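A sketch of the fingerprint idea (hashing scheme and field names are
assumptions):

```python
import hashlib
import json

def fingerprint(param_values, batch_id=None, batch_dependent=False):
    # Normally only the params are hashed, so identical params are seen
    # as duplicates. A batch_dependent trigger mixes the batch id in,
    # making the fingerprint unique per batch and effectively disabling
    # duplicate detection for that trigger.
    payload = dict(param_values)
    if batch_dependent:
        payload['batch_id'] = batch_id
    return hashlib.md5(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
```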
On the build error web page, a regular assigned error is not shown to
all users.
With this commit, a regular build error (not only random ones) will be
shown if the error is assigned.
When build logs are parsed using the contextual button, the client
stays on the same page even if a build error is created.
With this commit, the client is now redirected to the created/found
build error(s).
When calling a step from a python step, it is impossible to alter some
parameters; the only solution is to copy all the step code.
With this change, python steps are now able to override the docker_run
parameters of another step by modifying the dict returned by the run_*
methods and assigning the result to "docker_params".
This will mainly be used to set the correct docker image for migration
pre and post tests. It can also be used to alter a command, like
removing the pip install or adding extra pre/post operations to the
command.
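A hypothetical python step body showing the pattern (step names and the
run_* method are illustrative, not the exact runbot API):

```python
# reuse another step's docker_run parameters instead of copying its code
reused_step = self.env['runbot.build.config.step'].search(
    [('name', '=', 'all')], limit=1)
docker_params = reused_step._run_install_odoo(build)  # returns docker_run kwargs
docker_params['image_tag'] = 'odoo:migration-target'  # e.g. swap the image
# the command could also be altered here before the assignment is picked up
```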
Currently, runbot is using a single Dockerfile maintained in a data file
in the source code. This situation is not convenient for testing Odoo in
different environments.
With this commit, a Dockerfile Odoo model is used to allow usage of
multiple Docker containers.
This model comes with a pre-defined Dockerfile that can be used to
build the currently supported Odoo versions (12.0 up to 14.0).