For now, the sticky flag is used to define which bundles to use as upgrade targets.
The plan for the future is to continue testing upgrades even if the bundle is no longer sticky.
This new flag will allow removing sticky from old branches.
It will also allow disabling upgrade tests for still-sticky branches when needed.
This commit also filters source bundles based on this flag
(before that, all base bundles were used as sources, even if not sticky).
The params mechanism is a way to avoid rebuilding with the exact same params.
The commit is the main identifier to know if two builds are the same; usually
commit_link is not useful to make params unique, except when the merge base is
in use, as in the security check (based on diff).
We would also like to avoid red diff-based builds in sticky/base branches when the
CI is ignored on a PR, but we want to keep the diff display on new batches
(lines and files changed).
We need a way to know if it is an is_base bundle. This was initially a forbidden behaviour
because of duplicate detection.
This commit adds a batch_dependent flag. It enables the use of the batch in the
fingerprint, making it unique per batch and disabling duplicate detection. In this
case we can use the "is_base" information from the bundle and avoid false duplicate
detection when the trigger is based on a merge-base/commit diff (PR based on another PR now merged).
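A minimal sketch of the fingerprint idea (names and hashing details are illustrative, not the actual runbot code): when batch_dependent is set, the batch id is mixed into the fingerprint, so the same commits in two batches no longer count as duplicates.

```python
import hashlib
import json


def params_fingerprint(commit_hashes, config_id, batch_id=None, batch_dependent=False):
    """Fingerprint used for duplicate detection of build params.

    With batch_dependent, the batch id is part of the hash, making the
    params unique per batch and disabling duplicate detection across batches.
    """
    payload = {'commits': sorted(commit_hashes), 'config_id': config_id}
    if batch_dependent:
        payload['batch_id'] = batch_id
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


# Same commits, same config: identical fingerprint unless batch_dependent is set.
assert params_fingerprint(['abc123'], 1) == params_fingerprint(['abc123'], 1)
assert params_fingerprint(['abc123'], 1, batch_id=7, batch_dependent=True) != \
       params_fingerprint(['abc123'], 1, batch_id=8, batch_dependent=True)
```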
On the build error web page, a regular assigned error is not shown to
all users.
With this commit, a regular build error (not only random) will be shown
if the error is assigned.
When build logs are parsed using the contextual button, the client
stays on the same page even if a build error is created.
With this commit, the client is now redirected to the created/found
build error(s).
When calling a step from a python step, it is impossible to alter some parameters; the only solution is to copy all the step code.
With this change, python steps are now able to override the docker_run parameters of another step by modifying the dict returned by run_* steps
and assigning the result to "docker_params".
This will mainly be used to set the correct docker_image for the migration pre and post tests. It can also be used to alter a command,
like removing the pip install or adding extra pre/post operations to the command.
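A minimal sketch of the intended usage inside a custom python step (the dict shape and key names below are assumptions for illustration, not the exact runbot API):

```python
# Stand-in for the dict of docker_run keyword arguments a real run_* step
# would return.
def run_install_step():
    return {
        'image_tag': 'odoo:base',
        'cmd': ['python3', '-m', 'pip', 'install', '-r', 'requirements.txt',
                '&&', 'python3', 'odoo-bin', '-i', 'base'],
    }


params = run_install_step()

# Use the correct image for the migration pre/post test.
params['image_tag'] = 'odoo:migration'

# Alter the command: drop the leading pip install part, keep the rest.
cmd = params['cmd']
if 'pip' in cmd:
    cmd = cmd[cmd.index('&&') + 1:]
params['cmd'] = cmd

# In a real python step, assigning the result to "docker_params" lets the
# framework run docker with the overridden parameters.
docker_params = params
print(docker_params)
```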
Currently, runbot is using a single Dockerfile maintained in a data file
in the source code. This situation is not convenient for testing Odoo in
different environments.
With this commit, a Dockerfile Odoo model is used to allow the use of
multiple Docker containers.
This model comes with a pre-defined Dockerfile that can be used to build
the current Odoo supported versions (12.0 up to 14.0).
The old order based on status is not really needed anymore:
- Scheduled builds have a special condition and already have a low priority.
- Indirect builds don't exist anymore
- It is actually questionable to postpone rebuilds. Sometimes they are needed but get stuck.
This new proposal keeps subbuild scheduling close to the parent build. Some builds may take 2 hours
because they are parallelized and their children are stuck. Priority should be defined by the top parent.
The pdf417gen Python library is needed for l10n_cl_edi. As this module
is not mandatory, the library was not added to the requirements.txt
file.
In order to test the feature on the runbot, this commit adds the library
in the Docker container.
Related work:
odoo/odoo#54995, odoo/enterprise#12080, odoo/enterprise@8290bfaf
The initial runbot architecture worked for a single odoo repo and was
adapted to build enterprise. The addition of the upgrade repo and tests began
to make results less intuitive, revealing more weaknesses of the system.
Adding to the oddities of duplicate detection and branch matching,
there was some room for improvement in the runbot models.
This (small) commit introduces runbot v5.0, designed for a closer
match with odoo's development flows, hopefully improving the dev
experience and making runbot configuration more flexible.
**Remotes:** the introduction of remotes helps to detect duplicates between odoo and
odoo-dev repos: a commit is now on a repo, and a repo can have multiple remotes.
If a hash is in odoo-dev, we consider that it is the same in odoo.
Note: github seems to manage commits in a similar way; it is possible
to send a status on a commit on odoo when the commit only exists in
odoo-dev.
This change also allows removing some duplicated configuration
between a repo and its corresponding dev repo
(modules, server files, manifests, ...).
**Trigger:** before v5.0, only one build per repo was created, making it
difficult to tweak which tests to execute in which case. The example use
case was upgrades: we want to test upgrade to master when pushing on
odoo, but we also want to test upgrade the same way when pushing on
upgrade. We introduce a build that should be run when pushing on either
repo, while each repo already has its specific tests.
The trigger allows specifying a build to create with a specific config.
The trigger is executed when any of the trigger's repos is pushed.
The trigger can define dependencies: only build enterprise when pushing
enterprise, but enterprise needs odoo. Test upgrade to master when pushing
either odoo or upgrade.
Triggers will also allow extracting some builds, like cla, that were
executed on both enterprise and odoo and hidden in a subbuild.
**Bundle:** cross-repo branch/PR matching was hidden in build
creation and could be confusing. A build could be detected as a duplicate
of a PR, but not always, if the naming was wrong or the target was invalid or changed.
This was mainly because of how a community ref was found. This made
CI on PRs non-deterministic when duplicate matching failed. It was
also creating two builds, with one pointing to the other when duplicate
detection worked, but the visual result could be confusing.
Associating remotes and bundles fixes this by adding all PRs and
related branches from all repos to a bundle. First of all, this helps to
visualise what the runbot considers as branch matching and what should
be considered part of the same task, giving a place to warn
devs of possible inconsistencies. Associated with a repo/remote, we
can expect branches of the same repo in a bundle to have
the same head. Only one build is created since the trigger considers repos,
not remotes.
**Batch:** a batch is a group of builds; a batch on a bundle can be
compared to a build on a branch in the previous version. When a branch
is pushed, the corresponding bundle creates a new batch and waits for
new commits. Once no new update has been detected in the batch for 60 seconds,
all eligible triggers are executed. The created builds are added
to the batch in a batch_slot. It is also possible that a corresponding
build already exists (duplicate) and is added to the slot instead of creating a
new build.
Co-authored-by: d-fence <moc@odoo.com>
73f720a55c refactored the runbot tests,
and amongst other things created a single patch point for the "mock
root" as a testcase attribute.
One of the tests was missed during that refactoring, likely because
it's skipped by default.
The ocrmypdf suite of tools is needed by task #2238654.
Although this package will be optional for Odoo users, it has to be
usable by devs in order to test and/or fix the feature.
Docker container names are derived from the dest and step name. The dest
is itself derived from the branch name.
In some rare cases, it happens that a character not allowed by Docker
appears in the container name computed by the runbot.
With this commit, a sanitize_container_name function is used to remove
disallowed characters at the container utility level.
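A minimal sketch of what such a sanitize function could look like (the allowed character set follows Docker's documented naming rules; the actual runbot helper may differ):

```python
import re


def sanitize_container_name(name):
    """Keep only characters accepted by Docker container names
    ([a-zA-Z0-9_.-]) and make sure the name starts with an alphanumeric."""
    name = re.sub(r'[^a-zA-Z0-9_.-]', '', name)
    return re.sub(r'^[^a-zA-Z0-9]+', '', name)


print(sanitize_container_name('runbot_13.0-fix~#1-all'))  # runbot_13.0-fix1-all
```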
The postgresql-client in the Dockerfile is the one provided by the
Debian package. When the postgresql server on the host has a higher
version than the client, some builds may fail (for example, when dumping a
database with pg_dump).
With this commit, the postgresql-client 12 from the postgresql repo is
used in the Dockerfile.
Runbot can send a status multiple times for the same hash:
- if a transaction fails in the scheduler and is retried
- if multiple subbuilds are failing
This leads to multiple issues:
- when github receives more than one failure status, the mergebot will
be notified multiple times and send multiple mails (mainly for forward ports)
- github will answer `422 Unprocessable Entity for url...` after
1000 statuses.
This fix proposes to limit the number of statuses:
- by avoiding sending statuses for orphan builds (the parent status will never change)
- by storing the last sent status to avoid notifying multiple times
- by sending statuses post-commit to avoid contacting github in case of failure.
This will also slightly reduce transaction time by removing an http request.
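A minimal sketch of the deduplication idea (the in-memory structures here stand in for the stored last-sent status and the post-commit queue; the real implementation differs):

```python
_last_sent = {}   # {(commit_hash, context): state} - last status sent to github
_pending = []     # statuses scheduled for sending after the commit


def schedule_status(commit_hash, context, state, target_url):
    key = (commit_hash, context)
    if _last_sent.get(key) == state:
        return  # same status already sent, do not notify github again
    _last_sent[key] = state
    _pending.append({'sha': commit_hash, 'context': context,
                     'state': state, 'target_url': target_url})


def send_pending(send):
    """Called post-commit; `send` is whatever performs the http request."""
    while _pending:
        send(_pending.pop(0))


schedule_status('abc123', 'ci/runbot', 'failure', 'https://runbot.example/build/1')
schedule_status('abc123', 'ci/runbot', 'failure', 'https://runbot.example/build/2')  # skipped
send_pending(print)  # only one status goes out
```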
Sometimes, a `git fetch` fails, with an error code 128
for example. When this happens, the runbot host is immediately disabled.
During investigations of such cases, we found that simply retrying the
fetch command works.
With this commit, the fetch command is tried 5 times with an increasing
delay before deciding to disable the runbot host.
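A minimal sketch of the retry loop, assuming 5 attempts and a linearly increasing delay (the actual delays and git arguments used by runbot may differ):

```python
import subprocess
import time


def fetch_with_retries(repo_path, max_tries=5, base_delay=5):
    """Try `git fetch` a few times with an increasing delay; return False if
    every attempt failed, letting the caller disable the host."""
    for attempt in range(1, max_tries + 1):
        try:
            subprocess.check_call(['git', '--git-dir', repo_path, 'fetch', '--all'])
            return True
        except subprocess.CalledProcessError:
            if attempt == max_tries:
                return False
            time.sleep(base_delay * attempt)  # 5s, 10s, 15s, ...
```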
Since we store the target_branch_name, filtering out pull head names
that contain `patch-` is not necessary anymore.
This commit is a first step towards a clean refactoring.
When a build is done, various numerical information can be extracted
from log files, e.g. the global query count or test query counts.
The extraction regular expression could be hard-coded in a custom step
but there is no placeholder where to store the retrieved information.
In order to compare results, we need to store them.
With this commit, a new model `runbot.build.stat` is used to store
key/value pairs linked to a build/config_step. That way, extracted
values can be stored.
Also, another model, `runbot.build.stat.regex`, is used to store regular
expressions that can be used to grep log files and extract values.
The regular expression must contain a named group like this:
`(?P<value>.+)`
The text captured by this group MUST be castable to a float.
Optionally, another named group can be used in the regular expression
like this:
`(?P<key>.+)`
This `key` group will then be used to augment the key name in the
database.
Example:
Consider a log line like this one:
`odoo.addons.website_blog.tests.test_ui tested in 10.35s`
A regular expression like this, named `test_duration`:
`odoo.addons.(?P<key>.+) tested in (?P<value>\d+\.\d+)s`
Should store the following key:value:
`{
'key': 'test_duration.website_blog.tests.test_ui',
'value': 10.35
}`
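A minimal illustration of how the named groups above translate into a stored key/value pair (plain re usage; the surrounding runbot code is not shown):

```python
import re

regex_name = 'test_duration'
pattern = r'odoo.addons.(?P<key>.+) tested in (?P<value>\d+\.\d+)s'
line = 'odoo.addons.website_blog.tests.test_ui tested in 10.35s'

match = re.search(pattern, line)
if match:
    groups = match.groupdict()
    key = regex_name
    if groups.get('key'):
        key = '%s.%s' % (regex_name, groups['key'])
    stat = {'key': key, 'value': float(groups['value'])}
    print(stat)  # {'key': 'test_duration.website_blog.tests.test_ui', 'value': 10.35}
```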
A `generic` boolean field is present on the build.stat.regex object,
meaning that when no regex is linked to a make_stats config step,
all the generic regexes will be applied.
A wizard is added to help with the creation of the regular expressions, making
it possible to test whether they work against a user-provided example.
A _make_stats method is added to the ConfigStep model which is called
during the _schedule of a build, just before calling the next step.
The regex search is only applied in steps that have the `make_stats`
boolean field set to true. Also, the build's branch has to be flagged
`make_stats` too, or the build can have a `make_stats` key in its
config_data field.
The `make_stats` field on the branch is a computed stored field.
That way, sticky branches automatically have `make_stats` set to true.
Finally, an SQL view is used to facilitate the stats visualisation.
When coverage is computed, a post command is used to generate the HTML
report. In order to use the coverage result locally the HTML report is
not enough.
With this commit, an XML report is also generated. It's a single xml
file, downloadable from the build result web page.
The _post_install_command method is renamed to its plural form because
there was no point in returning only one command.
From time to time, a git repo gets corrupted, making builds fail in
chain.
With this commit, the host on which the git fetch fails will be reserved
if no more than half of the hosts are already reserved.
Before the intensive use of python steps, every build except create builds had logs.
This is not true anymore since many python steps now create builds, leading
to 404 links to non-existing log files.
Checking that a step is a docker build looks like a good heuristic since
most of the time the log file will be the one created by docker. In any case we
can access the other log files by browsing /logs.
A future improvement would be to listdir logs to create all links.
With this commit, a kind of markdown is allowed in log messages and
build.description.
The following patterns are supported:
**strong**
~~strikethrough~~
__underline__
`code`
[link](target)
@icon-font-awesome-class
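A rough sketch of how such patterns could be rendered to HTML (the replacement markup below is an assumption; the real runbot rendering may differ):

```python
import re

MARKDOWN_PATTERNS = [
    (r'\*\*(.+?)\*\*', r'<strong>\1</strong>'),            # **strong**
    (r'~~(.+?)~~', r'<s>\1</s>'),                          # ~~strikethrough~~
    (r'__(.+?)__', r'<u>\1</u>'),                          # __underline__
    (r'`(.+?)`', r'<code>\1</code>'),                      # `code`
    (r'\[(.+?)\]\((.+?)\)', r'<a href="\2">\1</a>'),       # [link](target)
    (r'@icon-([\w-]+)', r'<i class="fa fa-\1"></i>'),      # @icon-<fa class>
]


def markdown_to_html(text):
    for pattern, replacement in MARKDOWN_PATTERNS:
        text = re.sub(pattern, replacement, text)
    return text


print(markdown_to_html('**failed** in `test_ui`, see [build](https://runbot.example/build/1)'))
```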
When a build.config is copied, the config steps are not copied.
The steps have to be explicitly ordered, otherwise it leads to a
traceback when trying to copy a 'run' step, which has to be the last
one.
When creating a subbuild from a custom python job/cron, some data can
be given through config_data or extra_params to change another step's behaviour.
An example is to give a specific -i to an install job in order to
install different modules than the parent build. However, since we
are using the same install step in this case, all databases will
have the same name, and the name may not be correct with regard to
the database content.
This commit allows giving a config parameter on the build defining the default
db_name to use in an install step.
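A small sketch of the idea (the `_add_child` call is commented out because the exact API is not shown here; field names follow the commit message):

```python
# Data a parent python step could pass when creating a child build so that
# the install step uses a matching database name.
config_data = {'db_name': 'l10n_be', 'extra_params': '-i l10n_be'}

# In the parent step, something along these lines would create the child:
# build._add_child({'config_data': config_data})


def resolve_db_name(config_data, default='all'):
    """How the install step could pick the database name (sketch)."""
    return config_data.get('db_name', default)


assert resolve_db_name(config_data) == 'l10n_be'
assert resolve_db_name({}) == 'all'
```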
If a branch triggers a hidden build on another one (usually sticky),
the last build of the branch will be considered to be this one and
trigger a new build. The same problem can occur when cloning a subbuild
to slightly change a parameter instead of making an exact rebuild,
since the build type may still be normal in this case.
This commit simply ignores subbuilds in this case.
When a new commit is found, a rebuild is forced on sticky branch
builds in repositories that depend on the new commit's repository.
If one of the repos is protected by groups, the main page gives access
errors for public users and finally leads to a CPU overload.
With this commit, the feature is removed as it will be re-implemented in
custom config steps.
On non-sticky branches, when a new build is found while another is
already testing, the older build is killed. This happens while the
main runbot instance is discovering new commits and creating new builds.
As a result, concurrent updates may occur while the builders access the
concerned build.
With this commit, this garbage collecting procedure occurs during the
scheduler loop that runs on runbot builder hosts.
Also, the logic changed in a way that the kill is requested only if the
host needs room to handle pending builds.
When a build age reaches the gc_days parameter, its database is dropped
and its directory is removed.
With this commit, two fields are added in order to keep some builds
longer than the defined gc_days.
The gc_delay field on the build allows adding a delay (in number of
days) that is added to its gc_days to compute the gc_date.
The gc_date field is the date when the cleaning will occur.
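A minimal sketch of the date computation (the default gc_days value below is an assumption):

```python
from datetime import datetime, timedelta


def compute_gc_date(create_date, gc_days=30, gc_delay=0):
    """gc_date = creation date + configured gc_days + per-build gc_delay."""
    return create_date + timedelta(days=gc_days + gc_delay)


build_created = datetime(2020, 5, 1)
print(compute_gc_date(build_created))               # cleaned after the default delay
print(compute_gc_date(build_created, gc_delay=15))  # kept 15 extra days
```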
Also, a test is added and the RunbotCase test class is improved to allow
the stop of a patcher.
With this commit, it will be possible to play with different chrome
versions without impacting all runbot instances.
Chrome issues summary:
* 71: innerText linefeed issue
* 74: random chrome crash
* 80: V8 pointer compression exceeds Odoo memory limits
Since cb05f2b9d8, template1 is used when creating a database; this
allows customizing the template and installing some needed Postgres
extensions.
Unfortunately, it's also a source of build failures. For example, if the
pg_activity util is used on a runbot host, database creation may fail
with a message like this:
source database "template1" is being accessed by other users
It's because pg_activity needs a database and uses template1.
With this commit, template1 is still used by default but can be changed
with a system parameter. That way, a custom template can be created on
runbot hosts and used when creating DB in builds.
When a PR branch target is changed on Github, the change is not applied
in the runbot DB.
With this commit, the Github hook payload is taken into account to
detect such a change and the branch infos are recomputed accordingly.
Also, a button is now available on the branch form in order to manually
recompute those changes.
As pdfminer is going to be used in the Odoo attachement_indexation module and
there is no python3-pdfminer package in Ubuntu Bionic, it was
decided to make it an optional dependency.
In order to run the attachement_indexation tests, the pip package is
added to the runbot Docker image.
At this moment, the Docker image is built at the beginning of each
runbot build. This blocks the _scheduler while the image is built.
With this commit, the image is built before calling the _scheduler and
is not linked to a runbot build.
Also, the necessary dirs are created in the static path before starting
the loop.
Since 857821e4 a screenshot can be saved in the build directory under
the "tests" subdirectory.
Unfortunately, this directory is cleaned in case of local_cleanup.
With this commit, the 'tests' subdirectory is kept in place.
Currently killing, waking or rebuilding a build then redirects to the
repository root which is worse than useless: you've lost the build you
come from, and for rebuilds it's no help getting to the new build.
Since it's very easy to go from a build to its repository, redirect to
the "most useful" build instead:
* if rebuilding the current build (from its page), open the page of
the new build which was just created
* otherwise reload the current page whatever it is
If a docker is killed from outside, the start file will still be there but
not the end file (when restarting the docker service for instance).
This commit adds a docker_state specific to non-running dockers
with a start file, which should be handled like the unknown state:
this state is acceptable for a while, but the build should be killed
if this state remains for too long.
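A minimal sketch of the state detection (marker file names and state labels are assumptions for illustration):

```python
import os


def docker_state(container_name, build_dir, docker_is_running):
    started = os.path.exists(os.path.join(build_dir, 'logs', '%s_start' % container_name))
    ended = os.path.exists(os.path.join(build_dir, 'logs', '%s_end' % container_name))
    if ended:
        return 'END'
    if started:
        # Start file present but docker not running: tolerated for a while,
        # the build should be killed if this lasts too long.
        return 'RUNNING' if docker_is_running else 'GHOST'
    return 'UNKNOWN'
```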
When two steps in the same build need to exchange information, some
hacks have to be used, e.g. using the extra_params field to store comma
separated values.
With this commit, a config_data field is added along with a
JsonDictField that automatically transforms the data into json.
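A minimal sketch of such a field, assuming an Odoo-style Char subclass that serializes dicts to JSON (the actual implementation may differ):

```python
import json

from odoo import fields


class JsonDictField(fields.Char):
    """Store a dict as JSON text, always expose it as a dict on the record."""

    def convert_to_column(self, value, record, values=None, validate=True):
        return json.dumps(value) if value else None

    def convert_to_cache(self, value, record, validate=True):
        if isinstance(value, dict):
            return json.dumps(value)
        return value or '{}'

    def convert_to_record(self, value, record):
        return json.loads(value or '{}')
```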
When changing the sticky value in a branch form, it triggers the
computes of previous version and intermediate versions.
In the onchange situation, the branch.id is a NEW id and it fails with a
traceback.
With this commit, a verification is made to ensure that the id is there.
As the Odoo requirements were recently updated, the last RUN entry in
the Docker image was rebuilt on our runbot instances.
Since that moment the coverage builds are failing with an import error.
After investigation, it appears that the latest coverage version modifies the sys.path.
A bug report was written for coverage: https://github.com/nedbat/coveragepy/issues/919
For that reason, this commit freezes the coverage version until this bug
is fixed.
Also, the os import problem in container.py is fixed.
The ORM does not support a non_searchable.non_stored dependency.
Thus, the closest_sticky.previous_version dependency will log an error
when previous_version is written.
This dependency is useful to make the compute recursive, avoiding having
both record and record.closest_sticky in self, in that order, which makes record.previous_version
empty in all cases.
Writing self on sticky will mitigate the problem, but it is still possible to
have computation errors if defined_sticky is not sticky (which is not a normal use case).
This reverts commit 54f9b9b546.
The main reason is linked to an inconsistency in the state compute because
of errors in the nb_ computations.
This was meant to avoid concurrent updates, which is not such a big problem
now since workers are no longer using crons (retry on failure is faster).
When trying to remove test_tags on a build_error, the validation fails
because it tries to iterate on False. Also, the ValidationError
exception was not properly imported.
With this commit, the validation is fixed and a test is added.
When a repo is set to "no_build", there is no way to force a branch to
build from the backend.
With this commit, a button is added on the branch form to request a rebuild of the
branch, even when the repo is set to "no_build".
With migration tests coming to runbot, it will be useful to have a quick
way to obtain the branches related to the current build.
This commit adds fields for the closest sticky branch, the previous version
and the intermediate stickies.
Example when last sticky is saas-13.2:
branch_name: master-test-tri
closest_sticky: master
previous_version: 13.0
intermediate_stickies: saas-13.1, saas-13.2
This commit adds the possibility to add custom checks to python steps,
as well as ignoring the triggered result if a log of level error/warning
is not considered a problem.
As the --upgrade-paths option does not work as expected in Odoo, a
symbolic link has to be created in odoo/addons/base/maintenance pointing
to the migration scripts.
The runbot uses Docker read-only volumes to access the sources that are
shared between builds, preventing the creation of such a link.
With this commit, a symbolic link is created right after the export of a
commit only when the repo is a "server" repo.
This link is broken outside of the Docker volumes but uses the mount
points of the sources inside the container.
Two ir.config_parameter's are used to enable and configure this feature:
* runbot_migration_ln: the relative path where the link should be placed
(typically odoo/addons/base/maintenance)
* runbot_migration_repo_id: the id of the migration scripts repo, used
to determine the name of the mount point inside the Docker container
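A minimal sketch of the link creation after the export, using the two parameters above (the mount point path inside the container is an assumption):

```python
import os


def create_migration_ln(export_path, migration_ln, migration_repo_name):
    """Create the symlink inside the exported sources; it is broken on the
    host but resolves inside the Docker container."""
    if not migration_ln:
        return  # feature disabled: runbot_migration_ln not set
    link_path = os.path.join(export_path, migration_ln)   # e.g. odoo/addons/base/maintenance
    os.makedirs(os.path.dirname(link_path), exist_ok=True)
    target = '/data/build/%s' % migration_repo_name       # assumed mount point in the container
    if not os.path.islink(link_path):
        os.symlink(target, link_path)
```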
A change is also made in the "reverse dependency build" to avoid the
creation of a build in the migration repo for each push in its
dependencies. Simply set the no_build field on this migration repo.
`getmtime` returns a float with 6 decimal digits while postgres only stores 5.
Depending on the rounding, _get_refs has a 1/2 chance to make an update
when it shouldn't. Rounding below the psql precision before comparing and
storing should fix this.
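A minimal sketch of the rounding (the file used to get the mtime is illustrative):

```python
import os


def get_ref_time(repo_path):
    # Round to 4 decimals, below the 5 digits stored by postgres, so the
    # value read back from the database compares equal to a fresh one.
    return round(os.path.getmtime(os.path.join(repo_path, 'FETCH_HEAD')), 4)
```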
Sometimes, the scheduler may have a hard time creating builds.
The transaction can be very long when there are many repos and
a lot of new commits. Writing get_ref_time on the repo will then fail
due to a concurrent update rolling back the whole transaction.
This is supposedly because of hooks occurring during the transaction.
With this new model, hooks will only perform an insert and shouldn't
interfere with ref_times.
docker_is_running is ambiguous since we don't know if the docker was started at least once.
This new feature adds tools to know whether a docker was started or not.
The main reason is that docker_run may sometimes take more than 15 seconds,
creating unpredictable errors on a build when the second step is launched while
the previous one is still running. Hopefully this fix will help to solve this
issue and detect late docker runs.
When rebuilding a build/subbuild, the rebuilt commit won't have the same
sha as the branch's last commit. find_new_commit
will kill the running builds in the branch and create a new one,
which may be a duplicate of one of the killed builds.
This fix only takes normal builds into account when checking for
existing builds.
This commit was initially there for tests, when no repo exists, but
get_param will also crash if the commit does not exist, which may be
a problem on a user rebuild.
A build can sometimes fail and be stuck in a running state
without corresponding sources. In this case, sources are not gc'ed anymore.
This commit fixes that by always applying the gc, even when an inconsistency
is detected.
When creating an .odoorc file to store configuration specific to
the runbot, the command-line options were used as keys.
The problem is that the Odoo config file uses underscores instead of dashes
for option keys.
Also, the Command object was not recreated with the config_tuples
parameter. Because of that, when adding a command to the list, the
config_tuples were lost.
As a consequence, on the Odoo runbot instance, the data dir was created
in the default dir and thus, not included in the zip file of the dump,
causing some runbot steps to fail.
A lot of things have to be mocked during runbot tests; as a consequence,
a lot of patch decorators accumulate in a big stack upon some test
methods.
Also, a lot of mocks are used multiple times among tests.
With this commit, a new RunbotClass is added that comes with patches
ready to be started. A start_patcher helper method is available to start
a patch and add the appropriate stop in a cleanup.
Also, when a build is created in the tests, the _get_params method is
always called, resulting in an annoying git warning.
With this commit, a create_build method is added on the test class; that
way, _get_params is always mocked when a build is created.
_find_new_commits checks if a build with the current branch HEAD exists
before creating a build. This is crucial to avoid creating a new build
at each loop turn. The problem is that in some rare cases, when
force-pushing an old head on a branch, the build won't appear and the only
way to update the branch is to find the corresponding build that may be
hidden in history. This may be confusing for the user, who will rebuild the
created build with a commit that doesn't represent the head of the branch.
This commit only searches for the last build of each branch, in order to
only skip build creation if the last build has the same hash. The newly
created build should be marked as the duplicate of the first one.
When starting an odoo instance with Docker, a very long command line is
computed and appears in the logs.
With this commit, an .odoorc configuration file is written in the build
dir and mounted in the Docker container.
Previously, the runbot .odoorc/.openerprc file was mounted to share some
parameters. Now, if that file exists, its content is merged with the build's
.odoorc.
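A minimal sketch of the merge (paths and section name are assumptions; build-specific values take precedence):

```python
import configparser
import os


def write_build_odoorc(build_dir, build_options, shared_rc='~/.odoorc'):
    config = configparser.ConfigParser()
    shared_rc = os.path.expanduser(shared_rc)
    if os.path.exists(shared_rc):
        config.read(shared_rc)                    # start from the shared runbot rc
    if not config.has_section('options'):
        config.add_section('options')
    for key, value in build_options.items():      # build values win over shared ones
        config.set('options', key, str(value))
    rc_path = os.path.join(build_dir, '.odoorc')
    with open(rc_path, 'w') as rc_file:
        config.write(rc_file)
    return rc_path                                # mounted in the Docker container
```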
When using auto_tags, most of the time, the enhanced version is used.
For example, using "-:TestPoSStock" to disable the test class.
If the tested Odoo version does not support this kind of tag, they are
considered as simple tags, thus disabling all tests.
It's the case for Odoo saas-11.3.
With this commit, the auto_tags are only used on Odoo versions that
support the new test tags.
When runbot is installed to test custom addons, we don't
want to build all odoo commits, but we need to update branches
in order to make _get_closest_branch work.
This commit allows a user to set odoo in poll mode
with no_build set to True, to create branches only.
(And a small fix for additionnal_env)
Since 81fefee, the container.py CLI does not work as expected.
With this commit, the CLI is working, a new arg was added to test
flamegraphs and the dump is adapted to mimic the runbot.
Also, a small issue is fixed in the zip file creation. Before the zip
creation, the directory is changed; if the directory change fails, the
zip is created from the current directory, which is removed by zip at the
end of the process. That could lead to the deletion of the build dir.
A typical use case when an error is detected is to disable
the test by adding a negated test-tags entry on the config
steps 'all' and 'split_all'. This commit helps
to do that by adding test_tags management on build errors.
The user defines a test_tag that would only execute the failing test.
If a config step has the enable_auto_tags flag, the test tag will
be negated and added to the config test-tags.
This commit also adds some information for monitoring.
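A minimal sketch of the tag aggregation (the helper below is illustrative, not the runbot implementation):

```python
def build_test_tags(step_test_tags, error_test_tags, enable_auto_tags):
    """Negate the test_tags of open build errors and append them to the
    config step's own test-tags when enable_auto_tags is set."""
    tags = [step_test_tags] if step_test_tags else []
    if enable_auto_tags:
        tags += ['-%s' % tag for tag in error_test_tags if tag]
    return ','.join(tags)


print(build_test_tags('/account', [':TestPoSStock', ':TestUi.test_01'], True))
# /account,-:TestPoSStock,-:TestUi.test_01
```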
The current dump version doesn't include the filestore. This new
version adds the filestore, trying to match the odoo backup format
in order to ease restores.
The manifest.json file is not created since it isn't useful,
but an info.json is added with build info.
Creating multi-build configs can be tedious. One must create 2 build
configs and 2 build config steps in the right order.
With this commit, a simple wizard is added that creates those 4
configurations by simply filling 4 fields.
Also, a new field, group, is added in order to be able to gather
configs and config steps into groups. The group is a Many2one on a
config.
While at it, the runbot menu has been rearranged a bit, with everything
about configs in a parent menu named Configs.
Config and config step tree views have been enhanced to show the
config group and add some filters in the search views.
With this commit, a new boolean field "flamegraph" is added on the
build_config to allow a flamegraph generation.
In order to be able to generate a flamegraph during a runbot build, the
flamegraph package is added to the Docker image as well as the
flamegraph.pl tool.
Dump a db at the end of a build, using a new 'finals' cmd part
added in order to execute the dump even if the build fails.
Add a link in the last step log to download the dump.
In different situations, a docker container may stay alive even if the
build global_state is done. This can lead to a build failure when a
build wants to go into the running state and tries to expose the same ports as
the leftover build.
This reverts commit 1207daded1.
Too quick a review: setting a default value is a good idea, but since the field is a float now,
the default value should be time.time.
Currently, some Odoo modules are blacklisted in a set hardcoded in the
runbot code. In some cases, one needs to blacklist custom modules,
preferably in a config_step.
With this commit, the repo.modules, branch.modules and
config_step.install_modules fields are concatenated into a comma-separated
list of fnmatch patterns. The patterns can be prefixed with a dash to
exclude the matching module(s).
Co-authored by @Xavier-Do
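A minimal sketch of how such a pattern list could be resolved (the helper is illustrative, not the actual runbot code):

```python
from fnmatch import fnmatch


def filter_modules(available_modules, patterns):
    """Patterns prefixed with a dash exclude matching modules, others include them."""
    modules = set()
    for pattern in (p.strip() for p in patterns.split(',') if p.strip()):
        if pattern.startswith('-'):
            modules -= {m for m in available_modules if fnmatch(m, pattern[1:])}
        else:
            modules |= {m for m in available_modules if fnmatch(m, pattern)}
    return sorted(modules)


available = ['account', 'l10n_be', 'l10n_ch', 'hw_escpos', 'website_blog']
print(filter_modules(available, '*,-hw_*,-l10n_ch'))
# ['account', 'l10n_be', 'website_blog']
```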
When a build is running, a cron, an evil query or something else can
start to fill and bloat the runbot ir_logging table.
With this commit, a log_counter field is added on the build, starting at
100. The SQL trigger decrements this counter after a line is inserted.
When the counter drops to 0, the last log line contains a message
stating that the limit has been reached. Further log lines are dropped
for this build step.
The counter is reset to a default of 100 before each step.
This value is configurable through an optional ir.config_parameter
runbot_maxlogs.
The runbot itself is still able to add log lines through the build _log
method.
Thanks @Xavier-Do for the smart idea.
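A pure-Python sketch of the limiting behaviour (the real mechanism is the SQL trigger on the logging table; this only illustrates the counter logic):

```python
class BuildLogLimiter:
    def __init__(self, limit=100):
        self.log_counter = limit
        self.lines = []

    def insert(self, message):
        self.log_counter -= 1
        if self.log_counter == 0:
            self.lines.append('Log limit reached for this step')  # final kept line
        elif self.log_counter < 0:
            return  # dropped, as the trigger would do
        else:
            self.lines.append(message)

    def reset(self, limit=100):
        # done before each step; the limit could come from runbot_maxlogs
        self.log_counter = limit


limiter = BuildLogLimiter(limit=3)
for i in range(10):
    limiter.insert('log line %d' % i)
print(limiter.lines)  # two normal lines, then the limit message
```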
When a build only creates sub-builds, the build_time is very small (a few seconds)
and this information is not relevant. This commit propagates the end_time to the parent build
if the parent build is done or running.
When a build_error active field is changed, the onchange leads to a
traceback. Anyway, the onchange was not a good idea as it only reflects
UI changes.
With this commit, the write method is overridden to change the
child_ids active field too. Also, the active_test context is used to
correctly compute the child_ids and children_build_ids.
A test is also added for all that.
The new feature in odoo/enterprise#4879 needs the firebase-admin
package. As it cannot be added to the requirements.txt, the package is
added in the Dockerfile to be able to test it on the runbot instances.
- Add a keep running flag on the build to allow a build to stay in
running state until the flag is switched off (or the build is killed)
- Do not update configs and config_steps data
- Add a first/last_seen_build and first/last_seen_date on build.error
- Children error builds now include the parent builds too
- Use a notebook on build.error form view to display builds and linked
errors
- Update result when a build triggers a change from 'warn' to 'ko' too
- Add the sticky flag on the error logs stored sql view
When a build error appears with the same fingerprint as an already known
one which was supposedly fixed, the build is simply added to the known
build error.
In order to keep an eye on such reappearing bugs and keep the fixing
history separated, this commit simply creates a new build_error.
Old build errors with the same hash (or their child_ids' hashes) appear in
a computed field error_history_ids.
When using the runbot frontend, it's sometimes very frustrating to
copy a branch name; some mouse gymnastics are necessary.
With this commit, a copy to clipboard button is added near the branch
name on the frontend.
The new feature in odoo/enterprise#4892 needs the dbfread module.
As this lib is not required for other Odoo modules, it cannot be added
to the requirements.txt file.
In order to run the tests and use this new feature on the runbot, this
commit installs the dbfread lib in the Docker image.