Hooks can represent label changes, closed PRs, ....
We only want to fetch when a push or synchronize event is sent.
TODO We may also want to catch retarget events later in order to update the branch.
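A minimal sketch of that filtering; FETCH_EVENTS, hook_time and the handler itself are assumed names, not the actual runbot controller:

    import time

    FETCH_EVENTS = {'push', 'synchronize'}  # the only events worth a fetch

    def handle_hook(repo, event_type):
        # mark the repo so the next scheduling loop fetches it
        if event_type in FETCH_EVENTS:
            repo.hook_time = time.time()
        # label changes, closed PRs, ... are deliberately ignored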
The indirect state was written on the parent, leading to inconsistent info.
Indirect was also using the last build regardless of build_type. Now, indirect
will only use normal builds, to avoid a red chain after a sticky rebuild.
A prototype of this feature was added some time ago.
Not really tested, though; this commit improves the parameter format
and makes the dependency closest_branch_id not required,
since a repo/sha is all we need.
Docker can take some time to be considered as running after docker_run. This
issue can appear when we speed up the scheduling loop. To avoid it, we can add a
time condition before deciding that a docker is not running, but we don't want to
wait too long since some jobs are fast.
This solution checks whether a job is a docker run before waiting, and also
updates job_start after a checkout, since a checkout can take some time when
a git fetch is performed.
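A hedged sketch of that decision; the attributes (is_docker_step, container_name, job_start) and the docker_is_running callable are assumptions, not the actual runbot API:

    import time

    def docker_state(job, docker_is_running):
        if not job.is_docker_step:                 # only docker_run steps need this
            return 'done'
        if docker_is_running(job.container_name):  # caller-provided check
            return 'running'
        if time.time() - job.job_start < 15:       # container may not be up yet
            return 'starting'
        return 'done'

    # after a checkout, job_start would be refreshed (build.job_start = time.time())
    # so a long git fetch does not eat into the grace period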
When a build was killed, its result was set to manually killed,
overwriting the 'error' or 'warn' result.
This commit removes that behaviour in order to keep the error result
in this case.
If children are killed, they all look the same in the parent view,
making it difficult to find the failed one in staging branches.
This commit displays the result rather than the status in priority when the
build is in failure.
If a user really wants to keep a database up for a long time, they have the
possibility to wake it up multiple times.
Using the last job end as reference will allow keeping a database alive longer.
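A hedged sketch of the reference change; the field names (last_job_end, build_date) and the four-hour delay are assumptions:

    from datetime import datetime, timedelta

    MAX_AGE = timedelta(hours=4)  # assumed keep-alive delay

    def should_shut_down(build):
        # the last wake-up, not the original build date, is the reference
        reference = build.last_job_end or build.build_date
        return datetime.now() - reference > MAX_AGE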
The main motivation of this commit is to be able to notify the github status only
when all children are done.
Until today, children were only used for dev branches and nightly builds. The need to
use this system for stagings requires enforcing the github_status behaviour.
Before this commit, a parent wouldn't send a github status, since it only
creates children. And children are not aware of the other children's state,
so sending a success may be wrong if another one failed.
Asking the parent to send the github_status looks like the easiest solution:
- If the top parent config has update_github_state set to False, we also want to take that into account.
- If a child wants to contact github for failfast, the parent will be in the error global_state too and will send
the message immediately.
- If a child wants to contact github for success, we actually want to wait for the last child; the parent will
be in the waiting global_state and notify nothing (or pending).
Only the last child will be able to notify success, since the global_state will be running or done at this step.
Orphan builds won't have any impact on the result with this scenario.
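Those rules boil down to something like this hedged sketch; the field names are assumptions and post_status stands in for the actual GitHub call:

    def github_status(parent, post_status):
        if not parent.config.update_github_state:
            return                    # top parent opted out: do nothing
        if parent.global_state == 'error':
            post_status('failure')    # failfast: sent as soon as a child fails
        elif parent.global_state == 'done':
            post_status('success')    # only once the last child is finished
        # in the 'waiting' state, notify nothing (or 'pending')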
Some data is logged at each loop turn even if nothing interesting was done:
- ... builds [] where allocated to runbot
- reload nginx
That kind of info was interesting for debugging, but now this noise makes
the logs heavier and more difficult to read.
Reloading nginx will be done only if the file changed, which avoids
a log line at each loop turn.
We also display the difference between existing sources and the sources that should
be there, instead of complete lists.
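The nginx part could look like this hedged sketch; the reload command is an assumption, runbot may signal nginx differently:

    import os
    import subprocess

    def rewrite_nginx_config(path, content):
        if os.path.isfile(path):
            with open(path) as f:
                if f.read() == content:
                    return              # nothing changed: no reload, no log
        with open(path, 'w') as f:
            f.write(content)
        subprocess.call(['/usr/sbin/nginx', '-s', 'reload'])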
Sources can easily be exported again if needed, since they are in the bare repository
most of the time. To avoid using too much space, this commit will garbage collect
all sources at the beginning of a long_running cron.
The only real side effect is that it will be impossible to wake up
a build that was force-pushed, since its source cannot be fetched anymore.
We could imagine keeping the sources of recent builds, maybe for
48 hours, but keeping build-specific data (logs, database)
is more interesting.
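A hedged sketch of such a collection pass; source_export_path and local_state are assumed names:

    import os
    import shutil

    def gc_sources(sources_dir, builds):
        # keep only the exports still referenced by unfinished builds;
        # anything else can be re-exported from the bare repository
        used = {b.source_export_path for b in builds if b.local_state != 'done'}
        for name in os.listdir(sources_dir):
            path = os.path.join(sources_dir, name)
            if path not in used:
                shutil.rmtree(path, ignore_errors=True)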
This commit adds a wake-up button in place of the connect button when a build is
not running and can be woken up.
The connect button will also be visible immediately when local_state is 'running',
since we don't need to wait for sub-builds to finish.
When a PR is updated, closed or unreviewed, if it is part of an
active staging, that staging gets cancelled (yay). However, if the
PR was part of a pending *split*, the split would *not* get
cancelled / updated (to remove the PR from it), and the PR could go on
to get staged as if everything were right in the world, which is an
issue.
It doesn't look like it actually happened (at least I got no echo of
it), but it almost did at least once.
Fixes #160
Also add a test for it & feedback for an approved PR failing CI, and fix a
corner case with it (a warning might not be sent immediately on CI failure,
depending on status requirement ordering).
Fixes #158
* when rebasing, store a map of rebased to source commits (sketched
below); that way it'll be possible to link cherry-picked forward ports
to the originally integrated commit rather than just the one from the
PR (which was likely not itself integrated, as the straight merge mode
is somewhat rare: of 5600 PRs merged so far, only 100 were straight
merged)
* while at it, store the "merge head" of the PR (whether squashed,
merged or rebased) and put *that* in the commit message
Fixes #161
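A hedged sketch of building that map during the rebase; copy_commit stands in for whatever recreates a commit on a new parent:

    def rebase(commits, dest_head, copy_commit):
        mapping = {}                        # {rebased_sha: source_sha}
        head = dest_head
        for commit in commits:
            new_sha = copy_commit(commit, parent=head)
            mapping[new_sha] = commit['sha']
            head = new_sha
        return head, mapping                # head is the PR's "merge head"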
When a build is completely dead, with its directory and db deleted, the
wake-up system fails.
With this commit, a wake-up is not allowed on such dead builds.
In order to store other things than logs that could be accessible to
end users, for example screenshots and screencasts, a "tests" directory
is allowed through the nginx template in the builds directories.
Also, the "with" context manager is used to open the nginx configuration,
to ensure that the file descriptor is released during long-running crons.
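The file handling boils down to this hedged sketch:

    def write_nginx_settings(path, rendered):
        # the context manager releases the file descriptor right away,
        # instead of keeping it open for the lifetime of the cron worker
        with open(path, 'w') as settings_file:
            settings_file.write(rendered)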
When an Odoo instance is run in a Docker container, it listens on all
interfaces by default, and a bridge interface is used to communicate with
the outside world.
This bridge interface is necessary to allow the instance to send logging
messages into the runbot postgresql database.
Even though Docker isolates the container from the outside world, it's
still possible to reach the Odoo instance from the runbot host and, worse,
from other Docker containers using the same bridge interface.
Confirming Murphy's law, it finally happened with commit
odoo/enterprise@0ba0ef99de, which scanned the ip range of the interface,
disturbing other builds.
One solution would be to create a bridge interface for each instance to
isolate each Docker container, but that would imply a big change to garbage
collect forgotten bridges ...
An 'icc' (Inter Container Communication) option exists for the Docker
daemon and defaults to True. Setting it to False was tested and works,
but it appears that this option can be messed up by the firewall on the
runbot host.
Finally, this commit prevents the Odoo instance from listening
on the bridge interface by only listening on 127.0.0.1 during
tests. A local_only parameter is added on the _cmd method; it is
true by default and means that the Odoo instance will only listen on
127.0.0.1. This parameter is set to False for the running step, to allow
the running build to be contacted through the Docker exposed ports.
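A hedged sketch of that parameter, assuming a recent Odoo where the option is named --http-interface (older versions use --xmlrpc-interface):

    def _cmd(build, local_only=True):
        cmd = ['python3', 'odoo-bin', '--addons-path', build.addons_path]
        if local_only:
            # bind to the loopback only: the host and sibling containers
            # on the shared bridge can no longer reach the instance
            cmd += ['--http-interface', '127.0.0.1']
        return cmd

    # test steps keep the default; the running step passes local_only=False
    # so the build stays reachable through Docker's exposed ports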
The requirements path and python version were defined from the
server in cmd. Since for coverage we add a 'python' part before the server,
it is difficult to tell which element of the cmd is the server.
A solution here is simply to define the requirements install and
python version when building the cmd, since we have access to all
build/source information at that point. We also add the python part in
every case, and the coverage params are now _cmd python_params.
The _cmd method now returns a Command object instead of a
list. It behaves like a list for the cmd part, but also contains
a pres and a posts list.
pres are the requirements install, preparation, ...
cmd is the original cmd list; elements can be appended or added, which
allows keeping the existing python jobs without too many changes.
posts are post-cmd commands, like building the coverage result.
This commit also fixes an issue with create_job dependencies.
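A hedged sketch of such an object; the actual implementation may differ:

    class Command:
        def __init__(self, cmd=None, pres=None, posts=None):
            self.cmd = list(cmd or [])      # the odoo invocation itself
            self.pres = list(pres or [])    # e.g. the requirements install
            self.posts = list(posts or [])  # e.g. building the coverage result

        def __iter__(self):
            return iter(self.cmd)

        def __add__(self, other):
            return Command(self.cmd + list(other), self.pres, self.posts)

        def append(self, value):
            # existing python jobs can keep appending as if it were a list
            self.cmd.append(value)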
Multibuild can generate a lot of checkouts, especially for small
and fast jobs, which can overload runbot discs, since we are trying not
to clean builds immediately (to ease bug fixing and allow wake-ups).
This commit proposes to store sources in a single place, so that
docker can add them as ro volumes in the build directory (see the
sketch below).
The checkout is also moved to the install jobs, so that
builds containing only create-build steps won't check out
the sources.
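A hedged sketch of the volume arguments, where sources is an assumed list of (host_path, mount_point) pairs:

    def docker_volumes(build_dir, sources):
        args = ['-v', '%s:/data/build' % build_dir]   # writable build dir
        for host_path, mount_point in sources:
            # ro: a build cannot modify the shared sources
            args += ['-v', '%s:%s:ro' % (host_path, mount_point)]
        return args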
This change implies using --addons-path correctly, since odoo
and enterprise addons won't be merged into the same repo anymore.
This will allow testing addons with a command line closer to
what a dev would use.
This implies changing the code structure a little; some changes
were made to remove not-so-useful fields on build, and some
hard-coded logic (manifest_names and server_names) is now
stored on the repo instead.
This change implies that a build CANNOT write in its sources.
That shouldn't be the case anyway, but it means that runbot cannot be
tested on runbot until data is written elsewhere than in static.
Other possibilities exist, like bind-mounting the sources
in the build directory instead of adding ro volumes in docker.
Unfortunately, that would require giving the runbot user sudo access
to mount, and changing the docker config to allow mounts
in volumes, which is not the case by default. A plus of that
solution would be the ability to make an overlay mount.
In some conditions, Docker can take a little time to start a container.
In that case, if the runbot checks whether the container is running before
it has started, runbot considers the job as finished. It then tries to grep
the logs and, as expected, does not find the "Modules loaded" line.
With this commit, we consider young builds (less than 15 sec old) as
running, giving Docker more time to start the container.
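A hedged sketch of the check, with assumed parameter names:

    import time

    def container_running(is_up, job_start, grace=15):
        # a container that is not visible yet still counts as running
        # while the job is younger than `grace` seconds
        return is_up or time.time() - job_start < grace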
When creating multi-build config steps, the force_build option is often
forgotten. In that case, the multi builds are detected as duplicates of
the first one.
With this commit, when asking for more than one multi build,
force_build is changed to True.
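A hedged sketch of that default, assuming fields named number_build and force_build:

    def prepare_step_values(values):
        if values.get('number_build', 1) > 1:
            # sibling builds share the same commits and would otherwise
            # be detected as duplicates of the first one
            values['force_build'] = True
        return values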
When updating repos, the order is odoo/odoo, odoo-dev/odoo, odoo/enterprise, odoo-dev/enterprise.
The problem with this is that if a community hash and an enterprise hash arrive exactly
between the odoo/odoo and odoo/enterprise updates, enterprise could have a newer version
than community when creating builds.
By inverting this order, we have fewer chances to hit this corner case (as unlikely as
it may be).
get_ref_time was cast from float to datetime and thus
milliseconds were lost. Storing it as a float makes the code
easier to read and avoids the rounding that was breaking
this feature.
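A hedged sketch; reading the mtime of FETCH_HEAD is an assumption about where the reference time comes from:

    import os

    def get_ref_time(repo_dir):
        # keep the raw float: sub-second precision survives, whereas the
        # datetime cast dropped the milliseconds and broke comparisons
        return os.path.getmtime(os.path.join(repo_dir, 'FETCH_HEAD'))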