A prototype of this feature was added some time ago.
It was never really tested. This commit improves the parameter format
and makes the dependency closest_branch_id optional,
since a repo/sha is all we need.
A Docker container can take some time before it is seen as running after
docker_run. This issue can appear when we speed up the scheduling loop. To
avoid it, we could add a time condition before considering a container as not
running, but we don't want to wait too long since some jobs are fast.
This solution checks whether a job is a docker run before waiting, and also
updates job_start after a checkout, since a checkout can take some time when
a git fetch is performed.
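A minimal sketch of the idea; the helper and field names (docker_is_running, job_start, the grace period) are assumptions:

    # minimal sketch; helper and field names are assumed
    from datetime import datetime

    GRACE_PERIOD = 15  # seconds to give Docker before trusting "not running"

    def job_is_finished(build, job_method):
        if not getattr(job_method, 'is_docker_run', False):
            return True  # only docker-run jobs need the grace period
        if docker_is_running(build.dest):
            return False
        elapsed = (datetime.now() - build.job_start).total_seconds()
        # a very young job may simply not be visible to Docker yet
        return elapsed >= GRACE_PERIOD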
When a build is killed, its result is set to 'manually killed',
overwriting an 'error' or 'warn' result.
This commit removes that behaviour in order to keep the error result
in this case.
If a user really wants to keep a database up for a long time, they can
wake it up multiple times.
Using the last job end as reference allows a database to be kept alive longer.
The main motivation of this commit is to be able to notify the github status
only when all children are done.
Until now, children were only used for dev branches and nightly builds. The
need to use this system for staging requires a stricter github_status behaviour.
Before this commit, a parent would not send a github status since it only
creates children. And children are not aware of the other children's state,
so sending a success may be wrong if another one failed.
Asking the parent to send the github_status looks like the easiest solution:
- If the top parent config has update_github_state set to False, we also want to take that into account.
- If a child wants to contact github for a failfast, the parent will be in 'error' global_state too and will send
the message immediately.
- If a child wants to contact github for a success, we actually want to wait for the last child; the parent will
be in 'waiting' global_state and notify nothing (or pending).
Only the last child will be able to notify success, since the global_state will be running or done at this step.
Orphan builds won't have any impact on the result in this scenario.
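A minimal sketch of the delegation; the field and method names are assumptions:

    # minimal sketch; field and method names are assumed
    def _github_status(self):
        for build in self:
            if build.parent_id:
                build.parent_id._github_status()  # let the top parent decide
            elif build.config_id.update_github_state:
                if build.global_state == 'error':
                    build._notify_github('failure')  # failfast
                elif build.global_state in ('running', 'done'):
                    ok = build.global_result == 'ok'
                    build._notify_github('success' if ok else 'failure')
                # in 'waiting' global_state: notify nothing (or pending)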
Some data is logged at each loop turn even if nothing interesting was done:
- ... builds [] were allocated to runbot
- reload nginx
That kind of info was interesting for debugging, but now this noise makes the
logs heavier and more difficult to read.
Nginx will now be reloaded only if the configuration file changed, which
avoids a log entry at each loop turn.
We also display the difference between the existing sources and the sources
that should be there, instead of the complete lists.
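A minimal sketch of the difference-only logging; the variable names are assumptions:

    # minimal sketch; variable names are assumed
    import logging
    _logger = logging.getLogger(__name__)

    to_add = set(expected_sources) - set(existing_sources)
    to_remove = set(existing_sources) - set(expected_sources)
    if to_add or to_remove:
        _logger.info('Sources to export: %s, to remove: %s',
                     sorted(to_add), sorted(to_remove))
    # nothing is logged when the sets already match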
Sources can easily be exported again if needed, since they are in the bare
repository most of the time. To avoid using too much space, this commit
garbage collects all sources at the beginning of a long_running cron.
The only real side effect is that it becomes impossible to wake up
a build that was force-pushed, since its sources cannot be fetched anymore.
We could imagine keeping the sources of recent builds, maybe for
48 hours, but keeping build-specific data (logs, database)
is more interesting.
When a build is completely dead, with its directory and db deleted, the
wake-up system fails.
With this commit, a wake-up is not allowed on such dead builds.
In order to store things other than logs that should be accessible by
end users, for example screenshots and screencasts, a "tests" directory
is allowed through the nginx template in the builds directories.
Also, the "with" context manager is used to open the nginx configuration,
to ensure that the file descriptor is released during long-running crons.
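A minimal sketch of the context-manager change; the paths and the template call are assumptions:

    # minimal sketch; paths and the template call are assumed
    import os

    content = env['ir.ui.view'].render_template('runbot.nginx_config', values)
    with open(os.path.join(nginx_dir, 'nginx.conf'), 'wb') as f:
        f.write(content)
    # the descriptor is closed here, even if the cron runs for many more minutes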
When an Odoo instance is run in a Docker container, it listens on all
interfaces by default, and a bridge interface is used to communicate with
the outside world.
This bridge interface is necessary to allow the instance to send logging
messages into the runbot postgresql database.
Even though Docker isolates the container from the outside world, it's
still possible to reach the Odoo instance from the runbot host and, worse,
from other Docker containers using the same bridge interface.
Confirming Murphy's law, it finally happened with the commit
odoo/enterprise@0ba0ef99de, which scanned the ip range of the interface,
disturbing other builds.
One solution would be to create a bridge interface for each instance to
isolate each Docker container, but that would imply a big change to garbage
collect forgotten bridges ...
An 'icc' (Inter Container Communication) option exists for the Docker
daemon and defaults to True. Setting it to False was tested and works,
but it appears that this option can be messed up by the firewall on the
runbot host.
Finally, this commit prevents the Odoo instance from listening on the
bridge interface by only listening on 127.0.0.1 during tests. A local_only
parameter is added on the _cmd method; it is true by default and means
that the Odoo instance will only listen on 127.0.0.1. This parameter is
set to False for the running step, to allow the running build to be
contacted through the Docker exposed ports.
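A minimal sketch of the switch; the real command building is more involved, and only the --http-interface option is standard Odoo:

    # minimal sketch; the real command building is more involved
    def _cmd(self, local_only=True):
        cmd = ['python3', 'odoo-bin', '-d', self.dest]
        if local_only:
            # tests: only reachable from inside the container
            cmd += ['--http-interface=127.0.0.1']
        return cmd

    # the running step calls _cmd(local_only=False) so that the Docker
    # exposed ports can forward outside traffic to the instance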
The requirements path and python version were derived from
the server path in cmd. Since for coverage we add a 'python' before the
server, it is difficult to know which element of the cmd is the server.
A solution here is simply to define the requirements install and the
python version when building cmd, since we have access to all the
build/source information at that point. We also add the python part in
every case, and the coverage params are now _cmd python_params.
The _cmd method now returns a Command object instead of a
list; it behaves like a list for the cmd part but also contains
a pres and a posts list.
pres are the requirements install, preparation steps, ...
cmd is the original cmd list; elements can be appended or added, which
allows keeping the existing python jobs without too many changes.
posts are post-cmd commands, like building the coverage result.
This commit also fixes an issue with create_job dependencies.
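A minimal sketch of such a Command wrapper; the implementation details are assumptions:

    # minimal sketch; implementation details are assumed
    class Command:
        def __init__(self, pres, cmd, posts):
            self.pres = pres or []    # e.g. [['pip3', 'install', '-r', 'requirements.txt']]
            self.cmd = cmd or []      # the odoo command itself
            self.posts = posts or []  # e.g. [['python3', '-m', 'coverage', 'html']]

        def __getattr__(self, name):
            # delegate list behaviour (append, extend, ...) to cmd
            return getattr(self.cmd, name)

        def __add__(self, other):
            return Command(self.pres, self.cmd + list(other), self.posts)

        def build(self):
            # chain everything into a single shell line for docker_run
            parts = self.pres + [self.cmd] + self.posts
            return ' && '.join(' '.join(p) for p in parts if p)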
Multibuilds can generate a lot of checkouts, especially for small
and fast jobs, which can overload runbot discs since we try not
to clean builds immediately (to ease bug fixing and allow wake-ups).
This commit proposes to store the sources in a single place, so that
docker can add them as a ro volume in the build directory.
The checkout is also moved to the install jobs, so that
builds containing only create-build steps won't check out
the sources.
This change implies using --addons-path correctly, since odoo
and enterprise addons won't be merged into the same repo anymore.
This will allow testing addons the way a dev would, with a closer
command line.
This implies changing the code structure a little: some changes
were made to remove not-so-useful fields on build, and some
hard-coded logic (manifest_names and server_names) is now
stored on the repo instead.
This change implies that a build CANNOT write in its sources.
It shouldn't be the case anyway, but it means that runbot cannot be
tested on runbot until data is written elsewhere than in static.
Other approaches are possible, like bind-mounting the sources
in the build directory instead of adding ro volumes in docker.
Unfortunately, this would require giving the runbot user sudo
access to mount, and changing the docker config to allow mounts
in volumes, which is not the case by default. A plus of that
solution would be the ability to make an overlay mount.
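A minimal sketch of the read-only volume; the paths and call shape are assumptions:

    # minimal sketch; paths and the call shape are assumed
    docker_cmd = [
        'docker', 'run', '--rm',
        # shared checkouts, mounted read-only inside the build dir
        '-v', '%s:/data/build/odoo:ro' % odoo_source_dir,
        '-v', '%s:/data/build/enterprise:ro' % enterprise_source_dir,
        # build-specific, writable data (logs, datadir) stays per build
        '-v', '%s:/data/build/logs' % build_log_dir,
        image_tag, '/bin/bash', '-c', shell_cmd,
    ]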
In some conditions, Docker can take a little time to start a container.
In that case, if the runbot checks that the container is running before
it has started, the runbot considers the job as finished. It then tries to
grep the logs and, as expected, does not find the "Modules loaded" line.
With this commit, we consider young builds (less than 15 sec) as
running, giving Docker more time to start the container.
When creating multi-build config steps, the force_build option is often
forgotten. In that case, the multi builds are detected as duplicates of
the first one.
With this commit, when asking for more than one multi build, force_build
is set to True.
When updating repos, the order is odoo/odoo, odoo-dev/odoo, odoo/enterprise, odoo-dev/enterprise.
The problem is that if a community hash and an enterprise hash arrive exactly
between the odoo/odoo and odoo/enterprise updates, enterprise could have a
newer version than community when creating builds.
By inverting this order, we have less chance of hitting this corner case (as
unlikely as it may be).
get_ref_time was cast from float to datetime and thus,
milliseconds were lost. Storing it as a float makes the code
easier to read and avoids the rounding that was breaking
this feature.
When getting new refs, a lot of them are really old, and
find_new_commits is called for each one, browsing branches every time.
With this commit, refs older than the configured max_age are ignored.
Co-authored-by: Xavier Dollé (xdo@odoo.com)
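A minimal sketch of the age filter; the parameter name and ref format are assumptions:

    # minimal sketch; parameter name, ref format and the call are assumed
    from datetime import datetime, timedelta

    max_age = int(icp.get_param('runbot.max_age', default=30))  # in days
    cutoff = datetime.now() - timedelta(days=max_age)
    for name, sha, date in refs:  # as parsed from git for-each-ref
        if date < cutoff:
            continue  # stale ref: skip the expensive branch/build lookup
        repo._find_new_commits(name, sha)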
Accessing children can create rollbacks, especially for builds with
a lot of them, since other runbots concurrently access and update
states. This commit tries to improve this by incrementing and
decrementing counters instead of counting all the children each time.
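A minimal sketch of the counter update; the field names are assumptions:

    # minimal sketch; field names are assumed
    def _move_state(self, old_state, new_state):
        parent = self.parent_id
        if not parent or old_state == new_state:
            return
        # decrement the counter of the state we leave and increment the
        # one we enter: no need to read (and lock) every sibling record
        counters = {'pending': 'nb_pending', 'testing': 'nb_testing'}
        if old_state in counters:
            parent[counters[old_state]] -= 1
        if new_state in counters:
            parent[counters[new_state]] += 1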
A slow query was detected on the runbot, causing latency when loading
any page. After some investigation (thanks jle), we found that an index
on local_state could improve speed.
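In Odoo, the index can be requested directly on the field definition; the selection values shown here are illustrative:

    # index=True makes the ORM create a btree index on the column
    local_state = fields.Selection(
        [('pending', 'Pending'), ('testing', 'Testing'), ('done', 'Done')],
        default='pending', index=True)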
When updating github statuses, it happens that we get a "Bad gateway"
from github. In that case, the error is logged in the runbot logs and
that's it. As a consequence, when the runbot_merge is waiting for a status
on the staging branch and this kind of error occurs, the runbot_merge
times out and the users vainly search for the reason.
With this commit, the runbot tries to update the status at least twice.
If it still fails, an INFO message is logged on the build itself.
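A minimal sketch of the retry; the github helper and the build log method are assumptions:

    # minimal sketch; the github helper and log method are assumed
    def _github_status_notify_all(self, repo, sha, status):
        for attempt in (1, 2):
            try:
                repo._github('/statuses/%s' % sha, status)
                return
            except Exception:
                if attempt == 2:
                    # surface the problem on the build itself
                    self._log('github_status',
                              'Status could not be sent to github')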
Nightly builds have a low priority, but once they have a slot, they keep it.
When pushing a branch or asking robodoo to nicely merge a branch for the
third time at 22:00, there may be no slot left, since all the nightly builds
have been created.
With this commit, scheduled builds are only assigned if there is no other
build to create, and a free slot is always kept for other builds.
When using a "python job", it's sometimes useful to write a file or
create a directory.
Instead of giving a wide open access to the os module in the
"_run_python" context, this commit adds a write_file and make_dirs
methods on the build which is usable in the _run_python eval context.
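A minimal sketch of such helpers; the path resolution (_path) is an assumption:

    # minimal sketch; the path resolution (_path) is assumed
    import os

    def write_file(self, file, data):
        path = self._path(file)  # resolve inside the build directory
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, 'w') as f:
            f.write(data)

    def make_dirs(self, dirs):
        os.makedirs(self._path(dirs), exist_ok=True)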
When using the read method in a "python job", it's sometimes needed to
read a file in binary mode.
With this commit, the mode can be specified when calling the method.
Since the introduction of "python jobs", we have spotted various needs that
were not fulfilled. In order to add flexibility to "python jobs", this commit
adds some useful objects to the _run_python eval context.
Also, the glob.glob function is exposed instead of the whole glob module
to avoid giving access to the os module via glob.os.
When a runbot instance is scheduling builds, the number of builds
depends on a global ir.config_parameter. Even if one of the runbot
instances runs on a more powerful system, its number of workers is
limited by this global parameter.
With this commit, this parameter still exists but can be overridden by a
host-specific ir.config_parameter.
For example, if the host 'runbot24.odoo.com' has more cpu power, the
number of workers for this host can be specified in the
ir.config_parameter named 'runbot24.odoo.com.workers'.
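A minimal sketch of the lookup; the global parameter name and the default are assumptions:

    # minimal sketch; global parameter name and default are assumed
    import socket

    icp = self.env['ir.config_parameter']
    workers = icp.get_param('%s.workers' % socket.getfqdn())  # host override
    if not workers:
        workers = icp.get_param('runbot.workers', default=6)
    workers = int(workers)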
In a create config, a parent result is computed based on the children
results.
In some situations, it can be handy to ignore the result of some
sub-builds.
Example: the nightly tests are just the children of one nightly build
with a create config. The external tests fail randomly and, as a
consequence, the nightly result is always red. On the other hand,
keeping the tests running just to have the logs is a good idea.
With this commit, a config_step of type create can be marked as
orphan_result; that way, its result is not taken into account in the
parent build result.
When the quickconnect button is used, the last running build is
searched for among the last 10 builds. If no running build is found, the
last one is rebuilt, even if it's a nightly build.
With this commit, the quickconnect build is chosen only among the ones
with the same config.
With the recursive states computation, schedule is
more likely to run into transactional errors.
This is particularly a problem when external
operations are done during the transaction,
like running a docker container.
Adding some commits will help to reduce
transactional errors and ensure that the db
is consistent with the docker states.
When searching for new refs to create builds, the git for-each-ref
command is run and each ref is searched in the database, which is a
somewhat heavy operation.
With this commit, the timestamp of the last database update from the
refs is stored in a field on the repo. This timestamp is checked each
time a for-each-ref is needed, running the operation only when
necessary.
This commit aims to replace static jobs with fully configurable build configs.
Each build has a config (custom or inherited from the repo or branch).
Each config has a list of steps.
For now, a step can test/run odoo or create a new child build. A python job is
also available.
To mimic the previous behaviour of runbot, a default config is available with
three steps: an install of base, an install+test of all modules, and a last
step for run.
Multibuilds are replaced by a config containing creation steps.
The created builds are not displayed in the main views, but are available
on the parent build log page. The result of a parent takes the results of
all children into account.
This new mechanism will help to create custom behaviours for specific
use cases, and later help to parallelise work.
The cpu limit used in job_20 uses the runbot_timeout config_parameter
since b539112a7e. When measuring coverage, this parameter is multiplied,
which leads to an error because the return type of the
ir.config_parameter.get_param method is str.
With this commit, this value is converted into an integer before usage
in job_20.
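A minimal sketch of the fix; the parameter name and the coverage factor are assumptions:

    # minimal sketch; parameter name and coverage factor are assumed
    timeout = int(icp.get_param('runbot.timeout', default=1800))  # str -> int
    cpu_limit = timeout * 2 if coverage else timeout  # safe once converted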
When running Odoo in the Docker container, the username used to connect
to the database is the one defined in the docker container
(actually odoo).
A problem may arise if the user of the runbot process is not the same.
An authentication error is then raised by postgres because of the
username mismatch.
With this commit, the '-r' parameter of Odoo is added to the command,
with the username used by the runbot process.
While at it, unused imports are removed.
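A minimal sketch of the addition; only the '-r' flag itself is standard Odoo:

    # pass the host-side user as the db user; '-r' is the standard Odoo flag
    import getpass

    cmd += ['-r', getpass.getuser()]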
When a build exceeds the cpu limit, it is simply killed by the kernel.
As a safeguard, the "Initiating shutdown." sentence is searched for
in the log file, and the build is marked as "ko" if it is not found.
Unfortunately, there is no period (.) at the end of the sentence in the
Odoo logs (see: https://github.com/odoo/odoo/blob/12.0/odoo/service/server.py#L444),
so this condition was never fulfilled.
On top of that, this was masked by the first part of the condition,
checking that 'test/common.py' has no "post_install" string.
The "test" directory does not exist in Odoo (but "tests" exists), so
the condition was always falsy.
As a result, a build could be marked as "ok" when it was killed and no
errors were found before the kill.
With this commit:
* The legacy grep for post_install is removed, as it now exists in
all supported Odoo versions.
* The period typo is fixed.
* A log is inserted when the final sentence is not found.
* The cpu_limit is set to the same value as the runbot_timeout parameter
for better consistency.
* The time-exceeded log message is now logged on the build instead
of in the runbot log.
Co-authored-by: @Xavier-Do
At the end of the _update_git method, the "git fetch" command is run.
That makes it difficult to override its behavior (for example
to avoid fetching pull requests).
With this commit, the command is moved into a new small method that
can easily be overridden.
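A minimal sketch of the extraction; the method name and the refspecs are assumptions:

    # minimal sketch; method name and refspecs are assumed
    def _update_fetch_cmd(self):
        self._git(['fetch', '-p', 'origin',
                   '+refs/heads/*:refs/heads/*',
                   '+refs/pull/*/head:refs/pull/*'])

    # an override that skips pull requests only has to redefine
    # _update_fetch_cmd without the refs/pull refspec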
When searching for new builds by parsing git refs, the new branches are
created as well as the pending builds, all in the same _find_new_commits
method.
With this commit, this behavior is split into two methods; that way,
it's now possible to create missing branches without creating new
builds. The closest_branch detection is enhanced because all the new
branches are created before the builds (separate loops).
The find_new_commits method uses an optimized way to search for
existing builds. Before this commit, a build search was performed for
each git reference, potentially a huge number.
With this commit, a raw sql query is performed to create a set of tuples
(branch_id, sha), which is enough to decide whether a build already exists.
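A minimal sketch of the existence check; the table and column names are assumptions:

    # minimal sketch; table and column names are assumed
    self.env.cr.execute("""
        SELECT b.branch_id, b.name
          FROM runbot_build b
          JOIN runbot_branch br ON br.id = b.branch_id
         WHERE br.repo_id = %s
    """, (repo.id,))
    existing = set(self.env.cr.fetchall())  # {(branch_id, sha), ...}

    for branch, sha in new_refs:
        if (branch.id, sha) not in existing:
            create_pending_build(branch, sha)  # hypothetical helper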
A test was added to verify that new refs lead to pending builds.
Also, a performance test was added but is skipped by default, because it
needs an existing repo with 20000 branches in the database, so it will
not work with an empty database. This test showed a performance gain
from 8 sec to 2 sec when creating builds from new commits.
co-authored by @Xavier-Do
Before this commit, dependencies (i.e. the community commit to use when testing enterprise)
were computed at checkout, when the build was going from pending to testing state, and
were not stored.
Since the duplicate detection is done at create time, get_closest_branch_name was called
in a loop for each possible duplicate candidate, then one last time at checkout. The main idea
of this pr is to store the build dependencies on the build at create time, making the duplicate
detection faster (especially when the build name matches many indirect builds).
The side effect of this change is that the build dependencies won't be affected if a new
commit is pushed between the build creation and the checkout. The build is fully
determined at creation; get_closest_branch is only called once per build.
The duplicate detection will also be more precise, since we are matching on the groups of
commits that were used to run the build, and not only on the branch name.
Some work has also been done to rework the closest branch detection in order to manage new
corner cases. Hopefully, everything should work as before (or in a better way).
In the near future, it will also be possible to use this information to make an "exact rebuild"
or to find the corresponding community build.
Pr: #117
When some special builds are scheduled during the night, free slots on
the runbot instances are used. Depending on the number of scheduled builds,
all the slots can be used. That prevents people from using the runbot for
normal builds during this time.
To mitigate the problem, the scheduled builds were postponed to the
middle of the night ... the CET night. It means that it could be morning
in India.
With this commit, a build priority is given to normal builds. On the
other hand, scheduled builds are pushed to the end of the queue.
So even if there are plenty of builds during the Belgian night, if
someone pushes a commit in between, it will be built in priority, before
the scheduled pending builds.
When using a local git repo, the repo name does not contain a colon,
making the frontend crash.
With this commit, a non-stored computed field 'short_name' is added to
compute a shorter version of the name.
When the docker_run function is called, the odoo command is decorated
with a pip command to install the required packages.
This pollutes the docker_run function if a runbot job_ method wants to
use docker for something other than starting an odoo instance (like
pg_dump, for example).
With this commit, the command modification is made in an optional helper
function named build_odoo_cmd.
The docker_run function now takes the command to run as a string instead
of a list of the odoo cmd and its parameters.
Asking for the kill of a build which is the duplicate of another fails,
because the state of this build is "duplicate", so the _ask_kill method
has no effect on it.
With this commit, the effect of _ask_kill is applied to the duplicate in
the above-mentioned case.
When searching the builds for the frontend, the resulting query can last
a very long time (up to 7 sec).
With this commit, the search result is strictly limited to 100 builds,
the limit query parameter is removed, and the search string length is
limited to 60 chars.
The guess_result method is now optimized to guess the results of testing
builds only; the others take the same value as the final result.
A few tests were added for this method.
Thanks @KangOl for the optimization code.
When github reaches the hook controller, the repo hook_time field is
updated. That way, a git fetch is done only when the hook_time is newer
than the last fetch. If the hook_time is updated during the long-running
cron that runs the _cron_fetch_and_schedule method, the hook_time is cached
and only the old hook time is seen until the cron's end; the cursor
commit is not enough. As a result, the new builds are scheduled in the
next cron run.
With this commit, the cache is invalidated after the commit; that way,
the hook_time field contains the correct value.
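A minimal sketch of the pattern, in Odoo API terms:

    # commit first, then drop the cached values so hook_time is
    # re-read from the database on the next access
    self.env.cr.commit()
    self.invalidate_cache()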
When a PR is created in odoo/enterprise without a corresponding
PR in odoo/community BUT with a corresponding branch in odoo-dev/community,
the closest_branch detection fails. Moreover, the duplicate detection
fails too.
As a consequence, the PR build will probably fail, because it will be
built with the default target branch, which may not be suited for it.
If the built branch succeeds, it leads to inconsistent results.
With this commit, a new case is added to _get_closests_branch_name
to handle this situation.
The server_match field also reflects the difference, as this case will
be marked as 'no PR'.
When a PR also exists in odoo/community, the server_match field will be
'exact PR'. This change should not require a migration.
This commit also adds a bunch of tests for the closest branch name
detection and the duplicates.
Co-authored by @Xavier-Do
When a runbot build ends without error but with one or more warnings,
the status is not sent to github. As a result, the PR stays in pending
state.
With this commit, the github status is set to failure when a build ends
with a "warn" result.
At checkout time, when a build has no server (e.g. enterprise),
the dependency repo that contains the server needs to be extracted too.
It happens that this dependency repo is not up to date.
With this commit, the dependency repo is updated before its extraction.
When searching for duplicate builds, a git ls-remote is used to verify
that the branch still exists. This command is time consuming (up to 2
seconds).
If the number of builds is significant, it can last a very long time.
When a user pushes one or more new branches without new commits, the
number of duplicate builds found may be very large (more than 92).
This loop blocks the cron worker in charge of creating new builds.
This quick fix limits the number of duplicates to 1, but if the
closest name is not the same, it will not be considered a duplicate.
When a runbot executes the _cron_fetch_and_build method, the repo is
updated only if the webhook time is newer than the last fetch
time.
As the cron is now split into long-running crons, the hook_time field is
cached. The runbot instance that sees a new pending build uses this
cached value to estimate whether a repo update is needed.
With this commit, the repo update is done right before exporting the
repo, and only if the commit hash is not found.
As a bonus, the environment is reset in the long-running cron of the
runbot builders to update the cached values.
The runbot cron is executed on each runbot instance. When the number of
instances scales, the time needed for an instance to obtain the cron
increases.
With this commit, the original runbot_cron is removed. Instead, a cron
has to be created to run the _cron_fetch_and_schedule method.
This method fetches the repos and creates the pending builds. This cron is
intended to run on only one runbot instance. The method needs a host
parameter to specify which runbot instance will be in charge of this
task.
On the other hand, a dedicated cron has to be manually created for each
runbot instance that will have the build task.
Those crons only have to call the _cron_fetch_and_build method with the
runbot hostname as a parameter. This method then self-assigns
pending builds if there are slots available.
All available build slots are reserved in a single locked SQL query.
Both methods are intended to last a long time, just a few
minutes below the cron timeout, to maximize cron productivity.
The timeout is randomized to avoid deadlocks if the runbot instances are
started at the same time.
So the --limit-time-real parameter has to be set to a minimum of 180
sec (600 or 1200 are probably better targets).
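A minimal sketch of such a reservation; the table and column names are assumptions (SKIP LOCKED requires PostgreSQL 9.5+):

    # minimal sketch; table and column names are assumed
    import socket

    self.env.cr.execute("""
        SELECT id FROM runbot_build
         WHERE local_state = 'pending'
         ORDER BY id
         LIMIT %s
           FOR UPDATE SKIP LOCKED
    """, (available_slots,))
    build_ids = [r[0] for r in self.env.cr.fetchall()]
    # concurrent builders skip the locked rows, so each instance
    # reserves a disjoint set of pending builds
    self.env['runbot.build'].browse(build_ids).write(
        {'host': socket.getfqdn()})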
When a user checks the runbot frontend, the guess_result field is used
to change the color of the build state, but github is not notified of
this guessed result.
As a consequence, the runbot_merge is not aware that the build has failed
and will keep waiting.
With this commit, as soon as guess_result detects a failure, the
status is sent to github; that way, runbot_merge stops waiting
sooner.
When running the _job_10 method, a database is created with the base
module alone. Tests are enabled during this job. Those tests are run again
by the _job_20 method. Moreover, even if the tests fail during _job_10,
they are not taken into account in the final result. The _job_10 method
lasts approximately 4 min.
With this commit, the tests are no longer enabled during _job_10.
Adapt the test for eb7f5de. The mentioned commit fixes an issue that occurs
when updating the github status. A test already existed, but it assumed that
the build is in 'done' state when reaching job_29.
With this commit, the build used in the test is set to 'testing' state, as
in the real cases.
Also, a new test is added for job_00_init, which also sends a
github status, but with a minimal build.
Finally, the runtime that should appear in the status description was
forgotten in the previous commit. Now the runtime is always sent with
the github status.
Since d7c7e54, the github status is sent in job_29. At this moment, the
state of the build is still 'testing'. For that reason, the github
status is set to 'pending'.
With this commit, once a result is available, the github status is
updated with the right value, even if the build state is 'testing'.
When a build reaches the job30_run method, the results from the previous
testing methods are computed.
Since the previous commit 8c73e6a901, this job can be skipped; in that
case, the results are not set.
With this commit, the results are computed in a separate method.
Since 8c73e6a, it's possible to skip jobs of a build by using the
job_type field on the branch. If a branch's job_type is set to 'none', the
builds are created but stay in 'pending' state.
With this commit, the build is not even created if the job_type is
'none'.
Since the runbot_merge module, some branches do not need to be built,
for example the tmp.* branches.
Some other branches do need to be tested, but it can be useless to
keep them running, for example the staging branches.
Finally, some builds are generated by server actions during the night.
Those builds do not need to be kept running, whatever the branch
configuration.
For example, the master branch can be configured to create builds with
testing and running, while the nightly multiple builds are generated with
testing only.
For that purpose, this commit adds a job_type selection field on the
branch. That way, a branch can be configured by selecting the type of
jobs wanted.
The same kind of job_type field is also added on the build; it uses the
branch's value if nothing is specified at build creation.
A decorator is used on the job_ methods to specify their job types.
For example, a job method decorated with 'testing' will run if the
branch/build job_type is 'testing' or 'all'.
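A minimal sketch of such a decorator; the names are assumptions:

    # minimal sketch; names are assumed
    def job_type(*types):
        def decorator(func):
            func._job_types = types  # tag the method with its job types
            return func
        return decorator

    @job_type('testing')
    def _job_20_test_all(self, build, log_path):
        ...

    # at schedule time, a job is skipped unless it matches:
    # run it if build.job_type == 'all' or build.job_type in job._job_types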
When a build is created, the --log-db command line argument is built
using the same db and credentials as the ones used by the runbot.
With this commit, this argument is built based on a postgres connection
URI given as an ir.config_parameter in the settings.
A dedicated role must be created beforehand on the runbot postgresql
server, according to the given URI.
Also, care should be taken to give minimal privileges to this user, only
granting "update" on the table ir_logging_id_seq and
"insert,select,update" on the table ir_logging.
When a build is running, the smtp host is localhost.
Since the Docker builds, localhost is the container, which does not listen
on smtp port 25. Mails are lost in limbo.
With this commit, the default gateway of the Docker network is used as
the smtp host for the builds. It's the responsibility of the runbot host to
catch the smtp traffic coming from the containers.
This bridge interface exists by default on a system where Docker is
running. However, Docker is affected by this issue:
https://github.com/moby/moby/issues/26799
The first time the Docker daemon is installed, the gateway is not
defined on the bridge interface. When the Docker daemon is restarted,
the gateway is correctly defined. Pay attention that restarting the
Docker daemon will kill all the running/testing builds.
When building Odoo, the instance is started on the same host as the
runbot. It means that all the required python packages have to be
installed on each runbot host, with the same versions. Also, there is no
real separation between builds. Finally, from a security point of view,
arbitrary code could be executed on the runbot host.
With this commit, the runbot uses Docker containers to build Odoo.
During the tests, the Odoo http ports are not exposed to the outside,
meaning that nobody can interact with that instance.
The Docker image used for the containers is valid for Odoo branches 10.0,
11.0, 12.0 and master.
When building, right before starting the Odoo tests, the tested branch's
requirements.txt is now taken into account to adapt the container.
On a runbot host, the "docker ps -a" command can be used to list
the current builds. The containers are named using the build
dest field and the currently running job. For example:
123456-12-0-123456_job_30_run
Prerequisites:
Docker has to be installed on the runbot hosts, and the user that runs
the runbot should be able to use Docker. Typically, the runbot user has
to be added to the docker unix group.
On the first build, the Docker image will be built from scratch. It
can last several minutes, locking the runbot cron during this time.
It means that on a multi-runbot configuration, this process will be
repeated for each runbot, and during this time there will be no builds.
To avoid such a situation, the Docker image can be built from the
command line. The container.py file can be started like this:
python3 container.py build /tmp/build_dir
The /tmp/build_dir directory will be created to store the Dockerfile.
When the process is done, the "docker images" command should show an
image tagged runbot_tests in the odoo repository. At that point, the
runbot instance can be started; it will use this image for the builds.
Api change:
The 'job_*' methods signature has changed; the lock_path is not needed anymore.
Docker image information:
Currently, the Docker image is built on Ubuntu bionic to
benefit from the python 3.6 version.
Chrome and phantomjs are both installed.
The latest wkhtmltopdf (0.12.5) is installed, as recommended on our wiki:
https://github.com/odoo/odoo/wiki/Wkhtmltopdf
When a PR is a duplicate of a branch, only the branch CLA status is
updated. The same issue for build statuses was fixed in commit 4f1a55da9.
With this commit, there is a new method that can be used in runbot_cla.
This method updates the given status of a commit in each repo.
When a commit is built several times, each time the status is updated, a
request is sent to github for each build.
As a side effect, if the first build is a failure and the last one a
success, github is wrongly updated to failure.
This bug is particularly annoying on PRs in conjunction with the
runbot_merge, preventing the PR from being merged even after several
rebuilds.
With this commit, only the latest build per repository is used to update
the github status. The amount of github status updates is also reduced.
In the case of a PR, the name contains 'refs/pull/3175', the branch_name
contains '3175' and, because of the previous fix e095170f8c, the
pull_head_name contains something like 'blah:12.0-something'.
In that case, the _get_closest_branch_name reaches the fallback.
With this commit, the target branch name is stored in a new field and is
used as the fallback.
When searching for a matching PR in a target repo, the match was made on
the 'base ref', which is the base branch that the PR targets.
This means that the second case of the _get_closest_branch method was
never reached.
Example:
An enterprise PR is created targeting Odoo branch 12.0, with a
matching community PR whose branch has the same name.
The corresponding PR is never found by the runbot, because the first
rule matches: '12.0' is the base ref of the PR.
This could have been fixed by using the branch pull_head_name field to
build the domain, but that leads to a problem with the second rule:
if a PR branch is named 'patch-1', the domain will match each PR with
the same pull_head_name.
With this commit, the pull_head_name field stores the pull head
label. It's not a problem with older PRs, as a newly created PR in
enterprise will not accidentally match older ones.
Another issue appeared with github branch auto-naming like 'patch-n'.
If someone creates a PR with a branch auto-named 'patch-1'
targeting the community repo, and later creates an unrelated PR
targeting another repo depending on the community, they will be
matched.
To avoid that, pull head names that end with 'patch-n' are not
stored. Those pull requests will be built against the target branch
head.
The previous behavior was a blocking point for the runbot_merge,
which is based on PRs only. It means that when a community PR was
wrongly matched with an enterprise PR, runbot_merge would not
merge the PRs.
This commit makes it possible to prioritize a branch when scheduling builds.
Its main purpose is the runbot_merge module: it avoids having the
staging branches as sticky, which pollutes the main repo view.
Also, such a branch can benefit from the auto-kill feature when a newer
build is found, freeing resources for other builds.
Closes #43
In the near future, Odoo will use Chrome Headless instead of phantomjs.
Chrome needs a port to listen on, and it was decided that it will be
http_port + 2.
With this commit, we ensure that this port is not used by another build.
Closes #30
The unicode icon added in the build subject is not clear for the users.
In that state, it's not easy to add a title on the icon or the subject.
With this commit, a build type field is added to differentiate the
builds and add the appropriate icon and title.
Closes: #24
When a build with coverage is killed during the tests, the coverage
result is set to 100%.
With this commit, the coverage result is verbosely skipped if the coverage
file does not exist.
Also, the coverage module is no longer used to retrieve the coverage
value, as it was already calculated. It's faster to grep the html file
than to recompute the value.
As the coverage builds take longer than normal builds, the
timeout is increased for those kinds of builds (as was already done for
the CPU limit).
Finally, the omitted patterns were wrong and are now fixed with this
commit.
Closes #25
When the Odoo instances are spawned for tests, they have a time limit
set on their CPU usage. This limit is hard-coded in the _spawn method
calls.
As the number of tests increases, their duration increases too.
As this limit is inherited by subprocesses, if a phantomjs test
lasts too long, only that test is killed.
Finally, when coverage is enabled, the test duration is
increased by a factor of roughly 1.5.
With this commit, the cpu_limit of the two main test jobs is
increased, and it is increased further when coverage is used.
Closes: #23
When a new commit is found and a new build is created, if a user
pushes more commits to the same branch a few minutes later, it causes
more builds in testing phase, resulting in wasted CPU.
With this commit, previous builds in testing phase in non-sticky
branches are killed when a new build is created.
Closes: #20
When someone tries to log in to an old runbot build that is not running
anymore, they land on the runbot instance that was running the build.
Also, all the running builds were allowed on all runbot instances,
leading to the same behavior.
With this commit, only the builds that are running on the runbot
instance can be reached; the others get a 404.
closes #21
When coverage is activated on a branch, the coverage result is not
stored.
With this commit, the coverage result is stored on the build.
The last result is shown on the frontend for sticky branches.
Also, an extra_parameter field is added on the build model.
When a build fails in a repo that depends on another repo, it's difficult
to figure out which commit in the depending repo it comes from.
With this commit, when a new commit is found in a repo's sticky
branch, the latest build with the same branch name in the depending
repo is forced to rebuild.
The refresh method is deprecated and invalidate_cache should be used
instead. Also, since the new API, the cache is automatically
invalidated, hence this removal.
Currently, when a build is a duplicate of another build, its github status
is not updated.
E.g. a pull request may not have a github status while its corresponding
branch has one.
With this commit, all builds will have their github status updated based
on the corresponding build status.
Thanks for your help @xmo-odoo
closes: #3
When code coverage was processed, the 'coverage' utility was called the
same way regardless of the Odoo version. That was the cause of two
problems:
1) In some OS packages, the 'coverage' executable was renamed to
'python-coverage' and 'python3-coverage'
2) Since version 11.0, Odoo needs python3
With this commit, the coverage module is called through python's '-m'
argument, and the python version is chosen from the Odoo executable
shebang.
closes: #12
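A minimal sketch of the version detection; the helper names are assumptions:

    # minimal sketch; helper names are assumed
    def get_py_version(server_path):
        with open(server_path) as f:
            shebang = f.readline().strip()  # e.g. '#!/usr/bin/env python3'
        return 'python3' if 'python3' in shebang else 'python'

    python = get_py_version(server_path)
    cmd = [python, '-m', 'coverage', 'run', server_path] + odoo_args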
When a server is built based on dependency_ids, only the branch ref was
logged. In this situation, it's difficult to reproduce a build locally
in the exact same conditions.
With this commit, the latest commit hash and message of this
branch are also logged.
When searching for the closest branch name in a target repo, if the PR
head names are equal, a branch method was called on a dictionary, causing
a traceback.
With this commit, the method is called on a branch object instance.
closes #11
When installing a clean runbot instance, a bad query is made because
there is not any build yet.
With this commit, the query is not made when there are no build
directories found on the filesystem.
Instead of marking most fields as `copy=False` to be able to use the
copy method for rebuilds, we create the build explicitly. We also
forbid copying builds, as it doesn't make much sense to begin with.
As for duplicates, it wasn't always possible to rebuild them. The
rebuild now injects a specific context key (force_rebuild). This allows
duplicates to undergo a rebuild. The side effect of writing on previous
builds is also removed [1].
[1]: it's not obvious from the diff, but the porting to the V8 API
should have yielded
duplicate.write({'duplicate_id': build_id.id})
instead of
build_id.write({'duplicate_id': build_id.id})
Before this commit, the job_end was always None because of a time2str
function with no return, so the job_time was the same as the job age.
Also, the result and times were copied from the previous build; the
displayed times and results were wrong.
The previous code of runbot and runbot_cla was made for Odoo API version
8.0. This commit makes it work with Odoo API 11.0 and Python 3.
Also, the present refactoring splits the code into multiple files to
make it easier to read (I hope).
The main change due to Python 3 is the job locking mechanism:
since PEP 446, file descriptors are non-inheritable by default.
A new method (os.set_inheritable) was introduced to explicitly make an
fd inheritable. Also, the close_fds parameter of the subprocess.Popen
method is now True by default.
Finally, PEP 3151 changed the exception raised by fcntl.flock from IOError
to OSError (and IOError became an alias of OSError).
As a consequence of all that, the runbot locking mechanism used to check
whether a job is finished was not working in Python 3.
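A minimal sketch of the adjusted lock-based check, assuming the historical lock-file approach:

    # minimal sketch, assuming the historical lock-file approach
    import fcntl
    import os

    def lock(filename):
        fd = os.open(filename, os.O_CREAT | os.O_RDWR, 0o600)
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os.set_inheritable(fd, True)  # PEP 446: keep the fd across exec
        return fd

    def locked(filename):
        fd = os.open(filename, os.O_CREAT | os.O_RDWR, 0o600)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:  # PEP 3151: was IOError in Python 2
            return True  # the lock is held: the job is still running
        finally:
            os.close(fd)
        return False

    # the job process must be spawned with close_fds=False so that it
    # inherits the lock fd: subprocess.Popen(cmd, close_fds=False)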