This commit adds the possibility to add custom checks to python steps,
as well as to ignore the triggered result when a log line of level
error/warning is not actually considered a problem.
As the --upgrade-paths option does not work as expected in Odoo, a
symbolic link has to be created in odoo/addons/base/maintenance pointing
to the migration scripts.
The runbot uses Docker read-only volumes to access the sources that are
shared between builds, preventing the creation of such a link.
With this commit, a symbolic link is created right after the export of a
commit only when the repo is a "server" repo.
This link is broken outside of the Docker volumes but uses the mount
points of the sources inside the container.
Two ir.config_parameter entries are used to enable and configure this feature:
* runbot_migration_ln: the relative path where the link should be placed
(typically odoo/addons/base/maintenance)
* runbot_migration_repo_id: the id of the migration scripts repo, used
to determine the name of the mount point inside the Docker container
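A minimal sketch of the link creation, with assumed paths (the real values
come from the two parameters above and the build layout):

```python
import os

# Assumed paths: `build_dir` is the exported server checkout on the host,
# `container_migration_src` is where the migration repo sources will be
# mounted *inside* the container (derived from runbot_migration_repo_id).
build_dir = '/home/runbot/build/123456-13.0'              # assumption
container_migration_src = '/data/build/datas/migration'   # assumption
link = os.path.join(build_dir, 'odoo/addons/base/maintenance')  # runbot_migration_ln

os.makedirs(os.path.dirname(link), exist_ok=True)
if not os.path.islink(link):
    # dangling on the host, but valid once the sources are mounted in the container
    os.symlink(container_migration_src, link)
```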
A change is also made in the "reverse dependency build" to avoid
creating a build in the migration repo for each push in its
dependencies. To do so, simply set the no_build field on this migration repo.
`getmtime` returns a 6-digit float while PostgreSQL only stores 5.
Depending on rounding, _get_refs has a 1/2 chance of making an update
when it shouldn't. Rounding below the psql precision before comparing and
storing should fix this.
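A minimal sketch of the rounding, assuming the mtime of the refs directory
is the value being compared and 4 digits stands in for the database precision:

```python
import os

def get_ref_time(git_dir, digits=4):
    """Return the refs mtime rounded below the database precision
    (the path and the 4-digit precision are placeholders), so comparing
    the fresh value with the stored one doesn't flip-flop on rounding."""
    return round(os.path.getmtime(os.path.join(git_dir, 'refs/heads')), digits)

# usage sketch, `stored_time` being the value read back from the repo record:
# if get_ref_time(repo_git_dir) != stored_time:
#     ... fetch and store the new ref time ...
```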
Sometimes, the scheduler may have a hard time creating builds.
The transaction can be very long when there are many repos and
a lot of new commits. Writing get_ref_time on the repo will then fail
due to a concurrent update rolling back the whole transaction.
This is supposedly caused by hooks occurring during the transaction.
With this new model, a hook will only perform an insert, and shouldn't
interfere with ref_times.
docker_is_running is ambiguous since we don't know whether the container was ever started.
This commit adds tools to know if a docker was started or not.
The main reason is that docker_run may sometimes take more than 15 seconds,
creating unpredictable errors on the build when the second step is launched
while the previous one is still running. Hopefully this fix will help solve this
issue and detect late docker runs.
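A rough sketch of such a check using the plain docker CLI (not the runbot's
actual helper): a container name absent from `docker ps -a` means it was
never started.

```python
import subprocess

def docker_state(container_name):
    """Distinguish 'running', 'ended' and 'not started' from the docker CLI."""
    def names(*extra):
        out = subprocess.run(['docker', 'ps', '--format', '{{.Names}}', *extra],
                             capture_output=True, text=True, check=True).stdout
        return set(out.split())

    if container_name in names():
        return 'running'
    if container_name in names('-a'):
        return 'ended'
    return 'not started'
```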
When rebuilding a build/subbuild, the commit of the rebuilt build won't
have the same sha as the branch's last commit. find_new_commit
will then kill the running builds in the branch and create a new one,
which may be a duplicate of one of the killed builds.
This fix only takes normal builds into account when checking for
existing builds.
This check was initially there for tests, when no repo exists, but
get_param will also crash if the commit does not exist, which may be
a problem on user rebuild.
A build can sometimes fail and be stuck in a running state
without corresponding sources. In this case, the sources are not garbage collected anymore.
This commit fixes that by always applying the gc, even when an inconsistency
is detected.
When creating an .odoorc file to store configuration specific to
the runbot, the command line options were used as keys.
The problem is that the Odoo config file uses underscores instead of dashes
for option keys.
Also, the Command object was not recreated with the config_tuples
parameter. Because of that, when adding a command to the list, the
config_tuples were lost.
As a consequence, on the Odoo runbot instance, the data dir was created
in the default dir and thus not included in the zip file of the dump,
causing some runbot steps to fail.
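A minimal sketch of the key conversion (the option tuples below are examples,
not the runbot's actual ones):

```python
import configparser

# CLI options use dashes, the Odoo config file expects underscores,
# so convert the keys when writing the [options] section.
config_tuples = [('data-dir', '/data/build/datadir'),   # example
                 ('db-host', '127.0.0.1')]              # example

parser = configparser.ConfigParser()
parser['options'] = {key.replace('-', '_'): value for key, value in config_tuples}
with open('.odoorc', 'w') as rc_file:
    parser.write(rc_file)
```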
A lot of things have to be mocked during runbot tests; as a consequence,
a lot of patch decorators accumulate in a big stack on top of some test
methods.
Also, a lot of mocks are used multiple times across tests.
With this commit, a new RunbotClass is added that comes with patches
ready to be started. A start_patcher helper method is available to start
a patch and register the appropriate stop in a cleanup.
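A sketch of such a helper built on the standard library (the class name is
taken from the message above, the patch target is an example):

```python
from unittest import TestCase
from unittest.mock import patch

class RunbotClass(TestCase):
    """Shared test class carrying ready-to-start patches."""

    def start_patcher(self, name, target, return_value=None):
        patcher = patch(target, return_value=return_value)
        if not hasattr(self, 'patchers'):
            self.patchers = {}
        self.patchers[name] = patcher.start()
        self.addCleanup(patcher.stop)   # stopped automatically after the test
        return self.patchers[name]

# usage sketch inside a test method (target path is illustrative):
#   self.start_patcher('git', 'odoo.addons.runbot.models.repo.Repo._git', '')
```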
Also, when a build is created in the tests, the _get_params method is
always called, resulting in an annoying git warning.
With this commit, a create_build method is added on the test class; that
way, _get_params is always mocked when a build is created.
_find_new_commits checks whether a build exists with the current branch HEAD
before creating a build. This is crucial to avoid creating a new build
at each loop turn. The problem is that in some rare cases, when
force-pushing an old head on a branch, the build won't appear and the only
way to update the branch is to find the corresponding build that may be
hidden in history. This may be confusing for the user, who will rebuild the
created build with a commit that doesn't represent the head of the branch.
This commit only searches for the last build of each branch, in order to
only skip build creation if the last build has the same hash. The newly
created build should be marked as the duplicate of the first one.
When starting an Odoo instance with Docker, a very long command line is
computed and appears in the logs.
With this commit, an .odoorc configuration file is written in the build
dir and mounted in the Docker container.
Previously, the runbot .odoorc/.openerprc file was mounted to share some
parameters. Now, if that file exists, its content is merged with the build's
.odoorc.
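A sketch of the merge with configparser (the paths and the data_dir value are
examples):

```python
import configparser
import os

def write_build_odoorc(build_dir, user_rc=os.path.expanduser('~/.odoorc')):
    """Read the runbot user's rc file if it exists, overlay the
    build-specific options and write the result into the build dir,
    which is then mounted in the container."""
    config = configparser.ConfigParser()
    config.read([user_rc])      # silently ignored when the file is missing
    if not config.has_section('options'):
        config.add_section('options')
    config.set('options', 'data_dir', '/data/build/datadir')   # example value
    with open(os.path.join(build_dir, '.odoorc'), 'w') as rc_file:
        config.write(rc_file)
```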
The previous version incorrectly browsed the PR *number* (rather than its
ID), so at best it would do nothing and at worst it might go and notify the
wrong PR entirely.
Discussing #238 with @odony, the main concern was the difficulty of
understanding if things merged in one repo were related to things
merged in another repo: currently, knowing this requires going to the
merged PR, getting its label, and checking the PRs with the same HEAD
in the other repository to see if there's a correlation (e.g. PRs
merged around the same time).
The current structure of the mergebot makes it reasonably easy to add
the other PRs of the batch in the pseudo-headers, such that we get
links to all "related" PRs in the head commit (and links back from the
commits, which is probably less useful, but...)
Fixes #238
1. if we try to stage a PR and realize we'd stored / checked the wrong
head, cancel the staging and notify the PR
2. provide a command to forcefully update pr heads (or at least check
that a PR's head is up to date)
Closes #241
Before this:
* the forwardport bot always sets itself as the committer when it
creates a forward-port branch
* when creating squashed / conflict commits, it becomes both author
and committer
This is not great as it loses a fair amount of authorship information
and doesn't properly give credit where credit is due. Improve this in
the following ways:
* use the original authorship information as-is when forward-porting
commits
* if the forward port fails, use the original authorship information
as-is if there's a single commit to forward port
* otherwise, if there's only a single author / committer across the
  branch being forward-ported, use that; if there are multiple, give up
  and use the identity of the 'bot: since the branch will probably
  need to be re-done in full, the authorship information of the
  placeholder commit should not matter overly much
Uses git's magic ENV variables as that's pretty much the only way to
properly override the COMMITTER date: conf items only provide for
author and committer *identity* (name and email), and while `commit`
allows overriding the *authorship* date (`--date`) it doesn't provide
any option for the *commit* date.
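A minimal sketch of the override (identity and date values are examples):

```python
import os
import subprocess

# git's environment variables are the only practical way to set the
# *committer* date as well as the author date.
env = dict(os.environ,
           GIT_AUTHOR_NAME='Original Author',
           GIT_AUTHOR_EMAIL='author@example.com',
           GIT_AUTHOR_DATE='2019-11-06T10:03:21+0100',
           GIT_COMMITTER_NAME='Original Author',
           GIT_COMMITTER_EMAIL='author@example.com',
           GIT_COMMITTER_DATE='2019-11-06T10:03:21+0100')
subprocess.run(['git', 'commit', '-m', 'forward port of ...'], env=env, check=True)
```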
Fixes #255
When closing a PR, github completely separates the events "close the
PR" and "comment on the PR" (even when using "comment and close" in
the UI, a feature which isn't even available in the API). It doesn't
aggregate the notifications either, so users following the PR for
one reason or another get 2 notifications / mails every time a PR
gets merged, which is a lot of traffic, even more so with
forward-ported PRs multiplying the amount of PRs users are involved
in.
The comment on top of the closure itself is useful though: it allows
tracking from the PR exactly where and how the PR was merged; this
information should not be lost.
While more involved than a simple comment, *deployments* seem like
a suitable solution: they allow providing links as permanent
information / metadata on the PRs, and apparently don't trigger
notifications to users.
Therefore, modify the "close" method so it doesn't do
"comment-and-close", and provide a way to close PRs with non-comment
feedback: when the feedback's message is structured (parsable as
JSON), assume it's intended as deployment-bound notifications.
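A sketch of that dispatch (the function name is an example):

```python
import json

def classify_feedback(message):
    """Structured feedback (a valid JSON object) is treated as
    deployment-bound metadata, anything else stays a closing comment."""
    try:
        payload = json.loads(message)
    except (TypeError, ValueError):
        return 'comment', message
    if isinstance(payload, dict):
        return 'deployment', payload
    return 'comment', message
```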
TODO: maybe add more keys to the feedback event payload, though in my
tests (odoo/runbot#222) none of the deployment metadata
outside of "environment" and "target_url" is listed on the PR
UI
Fixes #224
It should have already been working; an additional check was added for
update-then-retarget just in case, but that worked out of the box. So
not sure why odoo/odoo#40106 failed.
Closes #256
If odoo is configured with a logfile, log to a separate file in the
same directory.
* log request / response when querying github
* log *received* requests for webhooks
Either way log the entire request metadata, though only the first 400
bytes/chars of the entity bodies.
This is intended to help mostly with post-mortem debugging: timestamps
from the main log can be correlated with the timestamps from the
github log in order to have more relevant information, both for
internal use and to send to gh support.
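A sketch of the separate-file setup and the truncation (the logger and file
names are assumptions):

```python
import logging
import os

def github_logger(odoo_logfile):
    """Put a github log next to Odoo's logfile, if one is configured."""
    logger = logging.getLogger('github_requests')
    if odoo_logfile:
        handler = logging.FileHandler(
            os.path.join(os.path.dirname(odoo_logfile), 'github_requests.log'))
        handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
        logger.addHandler(handler)
    return logger

def log_exchange(logger, method, url, headers, body):
    # keep only the first 400 characters of the entity body
    logger.info('%s %s %s\n%s', method, url, headers, (body or '')[:400])
```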
Closes #257
When using auto_tags, most of the time the enhanced version is used,
for example "-:TestPoSStock" to disable a test class.
If the tested Odoo version does not support this kind of tag, they are
treated as plain tags, thus disabling all tests.
It's the case for Odoo saas-11.3.
With this commit, the auto_tags are only used on Odoo versions that
support the new test tags.
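A sketch of that guard (the set of unsupported versions is illustrative; the
message only names saas-11.3):

```python
# placeholder: versions known not to understand the enhanced tags
UNSUPPORTED_TAG_VERSIONS = {'saas-11.3'}

def auto_test_tags(odoo_version, disabled_test_classes):
    """Return '-:TestClass' style tags, or nothing at all when the
    tested version would misread them as plain tags."""
    if odoo_version in UNSUPPORTED_TAG_VERSIONS:
        return ''
    return ','.join('-:%s' % cls for cls in disabled_test_classes)

# auto_test_tags('13.0', ['TestPoSStock'])      -> '-:TestPoSStock'
# auto_test_tags('saas-11.3', ['TestPoSStock']) -> ''
```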
Ensure that the commits we're creating are based on the commit we're
expecting.
This is the second cause (and really the biggest issue) of the "Great
Reset" of master on November 6: a previous commit explains the issue
with non-linear github operations (update a branch, get the branch
head, they don't match).
The second issue is why @awa-odoo's PR was merged with a reversion of
@tivisse's as part of its first commit.
The stage for this issue is set by the incoherence noted above:
having updated a branch, getting that branch's head afterward may
still return the old head. However, either delays allow that update to
be visible *or* different operations can have different views of the
system. Regardless, it's possible that `repos/merges` "sees" a
different branch head than a `git/refs/heads` which preceded it by a
few milliseconds. This is an issue because github's API does not
provide a generic "rebase" operation, and the operation is thus
implemented by hand:
1. get the head of the branch we're trying to rebase a PR on
2. for each commit of the PR (oldest to newest), *merge* the commit on the
   base and associate the merge commit with the original
3. reset the branch to the head we stored previously
4. for each commit of the PR, create a new commit with:
- the metadata of the original
- the tree of the merge commit
- the "current head" as parent
then update the "current head" to that commit's ref'
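A condensed sketch of that hand-rolled rebase against GitHub's REST endpoints
(session/auth setup, error handling and response details are simplified; it
also includes the base-parent assertion this commit adds, described below):

```python
import requests

API = 'https://api.github.com'

def rebase(session, repo, branch, pr_commits):
    """`pr_commits` is assumed to be oldest-to-newest dicts with 'sha' and 'message'."""
    # 1. remember the head of the branch we are rebasing the PR on
    head = session.get(f'{API}/repos/{repo}/git/ref/heads/{branch}'
                       ).json()['object']['sha']
    original = head
    # 2. merge each PR commit on the base, keeping the merge commits
    merges = []
    expected_parent = head
    for c in pr_commits:
        m = session.post(f'{API}/repos/{repo}/merges',
                         json={'base': branch, 'head': c['sha']}).json()
        # the check added by this commit: the merge's left (base) parent
        # must be the head we expect, otherwise cancel the staging
        assert m['parents'][0]['sha'] == expected_parent
        expected_parent = m['sha']
        merges.append(m)
    # 3. reset the branch to the head stored at (1)
    session.patch(f'{API}/repos/{repo}/git/refs/heads/{branch}',
                  json={'sha': original, 'force': True})
    # 4. recreate each commit with its original metadata, the merge's
    #    tree and the "current head" as sole parent
    for c, m in zip(pr_commits, merges):
        new = session.post(f'{API}/repos/{repo}/git/commits',
                           json={'message': c['message'],
                                 'tree': m['commit']['tree']['sha'],
                                 'parents': [head]}).json()
        head = new['sha']
    session.patch(f'{API}/repos/{repo}/git/refs/heads/{branch}',
                  json={'sha': head, 'force': True})
    return head
```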
If the head fetched at (1) and the one the first merge of (2) sees are
different, the first commit created during (4) will look like it has
not only its own changes but also all the changes between the two
heads, as github records not changes but snapshots of working
copies (that's what a git tree is, a complete snapshot of the entire
state of a working copy).
As a result, we end up not only with commits from a previous staging,
but the first commit of the next PR rolls back the changes of those
commits, making a mess of the entire thing twice over. And because the
commits of the previous staging get reverted, even if there was a good
reason for them to fail (not the case here, it was a false positive)
the new staging might just go through.
As noted at the start, mitigate that by asserting that the merge
commits created at (2) have the "base parent" (left parent / parent
from the base branch) we were expecting, and cancel the staging if
that's not the case.
This can probably be removed if / when odoo/runbot#247 happens.
When the runbot is installed to test custom addons, we don't
want to build every odoo commit, but we still need to update the branches
in order to make _get_closest_branch work.
This commit allows a user to set odoo in poll mode
with no_build set to True, to create branches only.
(And a small fix for additionnal_env)
Since 81fefee, the container.py CLI does not work as expected.
With this commit, the CLI works again, a new arg was added to test
flamegraphs, and the dump is adapted to mimic the runbot.
Also, a small issue is fixed in the zip file creation. Before the zip
creation, the directory is changed; if the directory change fails, the
zip is created from the current directory, which is removed by zip at the
end of the process. That could lead to the deletion of the build dir.
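One way to avoid that failure mode (a sketch, not necessarily the actual fix):
give zip an explicit working directory instead of chdir-ing the process.

```python
import subprocess

def zip_dump(dump_dir, zip_path):
    """Archive the dump dir; `zip_path` should be absolute so the archive
    never ends up inside the directory being zipped."""
    subprocess.run(['zip', '-rq', zip_path, '.'], cwd=dump_dir, check=True)
```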
It's a waste to lose the entire staging if it's only a short blip /
delay thing, so retry multiple times. Add a utility function to make
backoff functions easier (though the UI is not great ATM).
Also log the "left" parent of a merge commit (which should be the
"base") when creating it, for additional post-mortem information.
A typical use case when an error is detected is to disable
the offending test by adding a negated test-tags entry on the config
steps 'all' and 'split_all'. This commit helps with
that by adding test_tags management on build errors.
The user defines a test_tag that will only execute the failing test;
if a config step has the enable_auto_tags flag, the test tag will
be negated and added to the config's test-tags.
This commit also adds some information for monitoring.
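A sketch of the negation step (the field and record shapes are assumptions):

```python
def negated_test_tags(build_errors):
    """Collect the test_tags recorded on build errors and negate them so
    the config step skips those tests."""
    tags = sorted({e['test_tags'] for e in build_errors if e.get('test_tags')})
    return ','.join('-%s' % tag for tag in tags)

# negated_test_tags([{'test_tags': ':TestPoSStock'}]) -> '-:TestPoSStock'
```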
Turns out not only can that operation fail, it can also succeed
but have its effect delayed. To try and guard against that,
immediately check that we get the correct ref' after having reset it.
This is the cause of the November 6 mess: when preparing a staging,
the mergebot does the following:
1. get the head of <branch>
2. hard-reset tmp.<branch> to that
3. start merging PRs, which requires getting the current state of
tmp.<branch> back
On the 6th, these steps looked like this:
```text
2019-11-06 10:03:21,588 head(odoo/odoo, master) -> ab6d0c38512e4944458b0b6f80f38d6c26b6b597
2019-11-06 10:03:22,375 set_ref(update, odoo/odoo, tmp.master, ab6d0c38512e4944458b0b6f80f38d6c26b6b597 -> 200 (OK)
2019-11-06 10:03:28,674 head(odoo/odoo, tmp.master) -> de2a852e7cc1f390e50190cfc497bc253687fba8
2019-11-06 10:03:30,292 head(odoo/odoo, tmp.master) -> de2a852e7cc1f390e50190cfc497bc253687fba8
```
So the 'bot fetched the commit at the head of master (ab6d0c), reset
tmp.master to that... and then got a different commit when it fetched
the tmp head to stage a PR on it.
That different head was, of course, a previous rejected staging. When
the new staging succeeded, it brought the entire thing in and made a
mess.
This was compounded by an issue I still have to investigate: the
staging of the new PR took the wrong base commit *but the right base
tree*; as a result, the first thing it did was *reverse the entire
previous commit* (without that we could probably have left it as-is
rather than need to force-push master -- twice).
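A minimal sketch of the guard described at the top of this message
(session/auth setup omitted):

```python
import requests

API = 'https://api.github.com'

def set_ref_checked(session, repo, branch, sha):
    """Force-update the ref, then immediately read it back and bail out
    if GitHub reports a different head."""
    session.patch(f'{API}/repos/{repo}/git/refs/heads/{branch}',
                  json={'sha': sha, 'force': True}).raise_for_status()
    seen = session.get(f'{API}/repos/{repo}/git/ref/heads/{branch}'
                       ).json()['object']['sha']
    if seen != sha:
        raise AssertionError(f'reset {branch} to {sha} but head is {seen}')
```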
The current dump version doesn't include the filestore. This new
version adds the filestore, trying to match the odoo backup format
in order to ease restore.
The manifest.json file is not created since it isn't useful,
but an info.json is added, with build info.
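A sketch of the resulting archive layout (the argument shapes are assumptions):

```python
import json
import zipfile

def write_dump(zip_path, sql_path, filestore_files, build_info):
    """dump.sql and filestore/ mimic Odoo's backup format, plus an
    info.json with build metadata instead of manifest.json."""
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as archive:
        archive.write(sql_path, 'dump.sql')
        for src, name in filestore_files:       # [(host path, store name), ...]
            archive.write(src, 'filestore/%s' % name)
        archive.writestr('info.json', json.dumps(build_info))
```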
Creating multi-build configs can be tedious. One must create 2 build
configs and 2 build config steps in the right order.
With this commit, a simple wizard is added that creates those 4
records by simply filling 4 fields.
Also, a new field, group, is added in order to be able to gather
configs and config steps into groups. The group is a Many2one on a
config.
While at it, the runbot menu has been rearranged a bit, with everything
about configs in a parent menu named Configs.
The config and config step tree views have been enhanced to show the
config group, and some filters were added to the search views.