Docker container names are derived from the dest and step name. The dest
is itself derived from the branch name.
In some rare cases, a character not allowed by Docker ends up in the
container name computed by the runbot.
With this commit, a sanitize_container_name function is used to strip
disallowed characters at the container utility level.
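A minimal sketch of what such a function might look like (the actual implementation in the container utility may differ; Docker requires container names matching `[a-zA-Z0-9][a-zA-Z0-9_.-]*`):

```python
import re

def sanitize_container_name(name):
    # keep only the characters Docker allows in container names
    name = re.sub(r'[^a-zA-Z0-9_.-]', '', name)
    # the first character must be alphanumeric
    return re.sub(r'^[^a-zA-Z0-9]+', '', name)
```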
The postgresql-client in the Dockerfile is the one provided by the
Debian package. When the postgresql server on the host has a higher
version than the client, some builds may fail (for example, dumping a
database with pg_dump).
With this commit, the postgresql-client 12 from the postgresql repo is
used in the Dockerfile.
While the head got updated (properly), the squash flag did not, which
could lead to odd results. Since a PR can only be reopened if it was
regular-pushed to (not after a force push), there are two scenarios:
* the PR is updated to have 0 commits, closed, pushed to with one commit,
then reopened; after reopening the PR would be marked as !squash and
would ask for a merge method (that's what happened with
odoo/odoo#51763)
* the PR has a single commit, is closed, pushed to, then reopened;
after reopening the PR would still be marked as squash and potentially
straight rebased without asking for a merge method
Nothing would break per se, but both scenarios are undesirable.
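A sketch of the kind of fix involved, assuming a handler for the `reopened` webhook event (handler and field names hypothetical):

```python
def handle_reopened(pr, event):
    # resync the squash flag along with the head: both may have
    # changed while the PR was closed
    pr.write({
        'head': event['pull_request']['head']['sha'],
        'squash': event['pull_request']['commits'] == 1,
    })
```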
Close#373
The exponential backoff offsets from the write_date of the child
PRs; however it doesn't reset, so the offset gets bumped up way
more than originally expected or designed if the child PRs are under
active development for some reason.
Fix this by adding a field specifically recording the date of merge of
a PR, and computing the backoff offset against that instead. This should
provide more regular and reliable backoff.
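A minimal sketch of the idea, assuming a `merge_date` field and a reminder counter (names hypothetical):

```python
from datetime import datetime, timedelta

def should_remind(pr):
    # offset from the fixed merge date rather than write_date, which
    # moves every time a child PR is written to
    next_reminder = pr.merge_date + timedelta(days=2 ** pr.reminder_count)
    return datetime.utcnow() >= next_reminder
```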
Fixes#369
Given a PR batch getting forward-ported together, if one of the PRs
has a conflict the others should be considered "in conflict" as well,
and should have a note pointing in that direction and indicating that
the PR should be approved the normal way eventually. Which they do.
However, the message is confusing as it gets bolted onto the normal
non-conflicting message: either the note that it's part of a chain,
or (worse, because it gives conflicting indications) the "terminal"
message recommending using the forwardbot to approve the entire
chain.
I've no idea why I did it that way instead of just adding a case to
the conditional, and the commit message provides no indication. So
perform that change; it seems innocuous, and hopefully there weren't
good reasons, since forgotten, for doing it the other way around.
Fixes#367
Runbot can send statuses multiple times for the same hash:
- if a transaction fails in the scheduler and is retried
- if multiple subbuilds are failing
This leads to multiple issues:
- when github receives more than one failure status, the mergebot is
notified multiple times and sends multiple mails (mainly for forward ports)
- github answers `422 Unprocessable Entity for url...` after
1000 statuses.
This fix proposes to limit the number of statuses (see the sketch below):
- by not sending statuses for orphan builds (the parent status will never change)
- by storing the last sent status, to avoid notifying multiple times
- by sending statuses post-commit, to avoid contacting github when the transaction fails.
This will also slightly reduce transaction time by removing an http request.
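A sketch of the deduplication part, assuming a field storing the last sent state (field and helper names hypothetical):

```python
def _github_status(self):
    for build in self:
        state = build._get_github_state()  # hypothetical helper
        if state == build.last_github_state:
            # github was already notified of this state, skip it
            continue
        build.last_github_state = state
        # the actual request is deferred until after the commit, so a
        # failed transaction doesn't leave github and the DB out of sync
        build._send_github_status_post_commit(state)  # hypothetical
```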
Sometimes, a `git fetch` fails, with an error code 128 for example.
When this happens, the runbot host is immediately disabled.
While investigating such cases, we found that simply retrying the
fetch command works.
With this commit, the fetch command is tried up to 5 times with an
increasing delay before the runbot host is disabled.
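A sketch of the retry loop, assuming a `_git` wrapper on the repo (names and delays hypothetical):

```python
import logging
import time

_logger = logging.getLogger(__name__)

def _git_fetch(repo, max_tries=5):
    # retry the fetch with an increasing delay; only give up (and let
    # the caller disable the host) once every attempt has failed
    for attempt in range(1, max_tries + 1):
        try:
            return repo._git(['fetch', '-p', 'origin'])  # hypothetical wrapper
        except Exception:
            if attempt == max_tries:
                raise
            _logger.warning('git fetch failed, retrying (%s/%s)', attempt, max_tries)
            time.sleep(5 * attempt)
```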
When it updated tagging, e82de3136b also
incorrectly replaced a `pr` with `pr.display_name`, probably a leftover
from an attempt to update a callsite from `str(pr)` to
`pr.display_name` which I missed when reverting that.
Anyway, at that point `pr` is an integer (as it comes from an SQL
query), not an object.
Provides a `skipci` command to PR reviewers. This makes it so the
followup PRs (after the first one) get created immediately, without
waiting for CI to succeed on a given forward-port PR.
This can be useful if for some reason a change *must* be merged in
branch N+1 before it can be merged in branch N.
Fixes#363
e9e08fec3c attempted to fix the issue
but obviously failed, as it still occurs: when creating a PR through
the API, it's possible that the webhook gets triggered fast enough that
the transaction creating the PR from the webhook commits before we get
around to creating our own PR from the API call. In which case the
forward port process aborts.
The process is re-run later on and generally succeeds fully, but we're
left with a dangling PR we created but couldn't do anything with, as
its use broke.
This issue seems to be getting more frequent so it's becoming quite
important to fix it. Therefore we give our Raging Bull a Big Gun and
now he has 20 attack *cough cough* we lock the bloody table down
tight (only allow concurrent `SELECT`) until we've got the PR back and
we've done the updates we need to it and nobody can mess with it...
probably.
This is not ideal, as it's going to block updates to completely
unrelated PRs, but it doesn't seem like postgres really allows for
locking out creations without locking out the rest, short of using
advisory locks maybe? E.g. in the `create` override take a
`pg_advisory_xact_lock_shared`, then take a `pg_advisory_xact_lock` in
the forward-port process; that way we're just blocking the concurrent
creation of PRs during forward port, but creations don't block one
another and we don't block updates.
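A sketch of that advisory-lock alternative (not what this commit implements; `cr` is the database cursor and the lock key is arbitrary):

```python
PR_LOCK_KEY = 2281  # arbitrary application-chosen advisory lock key

# in the create() override: shared lock, so concurrent PR creations
# don't block one another, only the forward-port process does
cr.execute("SELECT pg_advisory_xact_lock_shared(%s)", [PR_LOCK_KEY])

# in the forward-port process: exclusive lock, held until the end of
# the transaction, blocking only concurrent PR creations
cr.execute("SELECT pg_advisory_xact_lock(%s)", [PR_LOCK_KEY])
```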
Application-level locks wouldn't really work as the 'bot could be
deployed using multi-worker scenarios so we'd need cross-process locks
or something.
Hopefully fixes#352
Since we store the target_branch_name, filtering out pull head names
that contain `patch-` is not necessary anymore.
This commit is a first step towards a clean refactoring.
When a build is done, various numerical information can be extracted
from the log files, e.g. the global query count or tests query count.
The extraction regular expression could be hard-coded in a custom step,
but there is no place to store the retrieved information.
In order to compare results, we need to store them.
With this commit, a new model `runbot.build.stat` is used to store
key/value pairs linked to a build/config_step, so that extracted
values can be stored.
Another model, `runbot.build.stat.regex`, is used to store regular
expressions that can be used to grep log files and extract values.
The regular expression must contain a named group like this:
`(?P<value>.+)`
The text captured by this group MUST be castable to a float.
Optionally, another named group can be used in the regular expression
like this:
`(?P<key>.+)`
This `key` group will then be used to augment the key name in the
database.
Example:
Consider a log line like this one:
`odoo.addons.website_blog.tests.test_ui tested in 10.35s`
A regular expression like this, named `test_duration`:
`odoo.addons.(?P<key>.+) tested in (?P<value>\d+\.\d+)s`
should store the following key/value pair:
`{
    'key': 'test_duration.website_blog.tests.test_ui',
    'value': 10.35
}`
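A sketch of how such a regex could be applied to the log lines (only the `key`/`value` group behaviour comes from the description above; the surrounding function is hypothetical):

```python
import re

def extract_stats(regex_name, pattern, log_lines):
    stats = {}
    for line in log_lines:
        match = re.search(pattern, line)
        if not match:
            continue
        # the optional 'key' group augments the key name in the database
        key = match.groupdict().get('key')
        full_key = '%s.%s' % (regex_name, key) if key else regex_name
        # the 'value' group MUST be castable to a float
        stats[full_key] = float(match.group('value'))
    return stats
```

With the `test_duration` example above, this yields `{'test_duration.website_blog.tests.test_ui': 10.35}`.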
A `generic` boolean field is present on the build.stat.regex model,
meaning that when no regexes are linked to a make_stats config step,
all the generic regexes will be applied.
A wizard is added to help with the creation of the regular expressions,
allowing one to test whether they work against a user-provided example.
A _make_stats method is added to the ConfigStep model; it is called
during the _schedule of a build, just before calling the next step.
The regex search is only applied in steps that have the `make_stats`
boolean field set to true. Also, the build's branch has to be flagged
`make_stats` too, or the build can have a `make_stats` key in its
config_data field.
The `make_stats` field on the branch is a stored computed field; that
way, sticky branches automatically get `make_stats` set to true.
Finally, an SQL view is used to facilitate stats visualisation.
The logic of the partner merge wizard is to collect all relevant data
from source partners, write them to a destination partner, then remove
the sources.
This... doesn't work when the field in question has a UNIQUE
constraint (like github_login): copying the value from a source onto
the dest blows the constraint, and so the copy fails. In that case
the user first has to *move over* the unique field's value, then they
can use the wizard.
Just fix it for the github login: take all sources, remove (and store)
their github logins, then write the login onto the dst.
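A sketch of that fix, assuming the wizard knows its source partners and destination (names hypothetical):

```python
def _handle_unique_github_login(self, sources, dst):
    # clear the UNIQUE github_login from every source first, so that
    # writing it onto the destination cannot violate the constraint
    login = next((p.github_login for p in sources if p.github_login), False)
    sources.write({'github_login': False})
    if login and not dst.github_login:
        dst.github_login = login
```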
An alternative would have been to *defer* the constraint, however:
* it only works on unique constraints, not unique indexes
* it requires the constraint to be declared DEFERRABLE
Closes#301
The reminder feature is a bit brutal when people go on holidays or
whatever, as it keeps commenting every day.
This should comment every day for a few days, then quickly taper down.
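One possible taper, assuming a daily cron and the number of days since the relevant event (the exact schedule is an assumption):

```python
def should_remind(days_elapsed):
    # remind daily for the first few days...
    if days_elapsed <= 4:
        return True
    # ...then only on days 8, 16, 32, ... (powers of two)
    return days_elapsed & (days_elapsed - 1) == 0
```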
Closes#285
Up till now, if an FF failed with an exception having neither cause nor
context, the cancel reason would be an empty string.
Fall back on stringifying the exception itself as a last resort.
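A minimal sketch of the resulting fallback chain:

```python
# use the cause or context when available, the exception itself otherwise
reason = str(exc.__cause__ or exc.__context__ or exc)
```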
Because the reminder cron uses groupby to "merge" open PRs related to the
same source and send a single message for all of them (e.g. PR 6548
forward-ported to 6587 and 6591 should have a single reminder message per
day not one per descendant), the PRs with the same source need to be
consecutive in the search sequence.
However no order was specified, so the search would yield PRs in id
order or thereabouts, and if another forward-port PR happened to sit
between the descendants of the original, those would not get coalesced
and would therefore trigger one message per descendant per day (doubling
or tripling the intended spam rate).
Ordering by source_id should fix the issue, as it ought to make all PRs
forward-ported from the same source contiguous, and therefore grouped
together before sending reminder messages.
An alternative solution would be to use `groupby` instead of `search`, but
it would require more modifications as we'd need to re-browse the sources
and descendants, etc...
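A sketch of the ordered search + groupby pattern in question (model and helper names hypothetical; `source_id` per the description above):

```python
from itertools import groupby

# itertools.groupby only merges *adjacent* items, so PRs sharing a
# source must be consecutive in the search results
prs = env['forwardport.pull_requests'].search(  # hypothetical model name
    [('state', '=', 'opened'), ('source_id', '!=', False)],
    order='source_id',
)
for source, descendants in groupby(prs, key=lambda p: p.source_id):
    _send_reminder(source, list(descendants))  # hypothetical helper
```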
First part of fixing #285 as this is likely why odoo/enterprise#7204 got
spammed so much: its descendants were odoo/enterprise#7367 and
odoo/enterprise#7369 and it just so happens that odoo/enterprise#7368 was
*also* a forward port PR, causing the issue explained above.
Currently the PR becomes successful-green as soon as CI fully passes
but before it's merged, which can be an issue as e.g. merging might be
delayed (there's no visible difference between "CI success" and
"staging merged") or it might ultimately failed (FF error).
Create an intermediate color for "successful" stagings which are still
pending merge.
Also add a fallback message for fast-forward errors instead of an
empty string.
Closes#308
Genericise runbot_merge's tagging (move states to the "UI" but only
store / manage actual tags), and remove forwardport.tagging as it's
now redundant.
Closes#232
When coverage is computed, a post command is used to generate the HTML
report. In order to use the coverage results locally, the HTML report
is not enough.
With this commit, an XML report is also generated. It's a single xml
file, downloadable from the build result web page.
The _post_install_command method is renamed to its plural form, as it
was needlessly limited to returning only one command.
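A sketch of the plural method, using coverage's standard `html` and `xml` subcommands (the output paths are assumptions):

```python
def _post_install_commands(self):
    # generate both reports: html for browsing on the build page,
    # xml as a single downloadable file usable locally
    return [
        ['python3', '-m', 'coverage', 'html', '-d', '/data/build/coverage'],
        ['python3', '-m', 'coverage', 'xml', '-o', '/data/build/logs/coverage.xml'],
    ]
```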
From time to time, a git repo gets corrupted, making builds fail in
a chain.
With this commit, the host on which the git fetch fails will be reserved,
provided no more than half the hosts are already reserved.
Before the intensive python steps, every build except create builds had logs.
This is no longer true now that many python steps create builds, leading
to 404 links to nonexistent log files.
Checking that a step is a docker build looks like a good heuristic, since
most of the time the log file will be the one created by docker. In any
case, other log files can be accessed by browsing /logs.
A future improvement would be to listdir logs to create all the links.
Approving a PR which failed CI should trigger a feedback message since
6cb58a322d (#158); the code has not been
removed and the tests still pass.
However, fwbot r+ would go through its own process for r+, which would
explain why that feedback is sometimes gone / lost (cf #327 and #336).
* make fwbot r+ delegate to mergebot r+
* add dedicated logging for this operation to better analyze
post-mortem
* automatically ping the reviewer to specifically tell them they're idiots
* move the feedback item out of the state change bit, send it even if
it's a useless r+ (because it's already r+'d)
* add a test for forward-ports
Closes#327, closes#336
With this commit, a kind of markdown is allowed in log messages and
build.description.
The following patterns are supported:
**strong**
~~strikethrough~~
__underline__
`code`
[link](target)
@icon-font-awesome-class
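A sketch of the kind of substitution involved (the actual implementation and output markup may differ; HTML escaping is omitted here):

```python
import re

def markdown_to_html(text):
    # substitute each supported pattern in turn
    substitutions = [
        (r'\*\*(.+?)\*\*', r'<strong>\1</strong>'),
        (r'~~(.+?)~~', r'<s>\1</s>'),
        (r'__(.+?)__', r'<u>\1</u>'),
        (r'`(.+?)`', r'<code>\1</code>'),
        (r'\[(.+?)\]\((.+?)\)', r'<a href="\2">\1</a>'),
        # the class after @icon- is assumed to be a font-awesome class
        (r'@icon-([\w-]+)', r'<i class="fa \1"></i>'),
    ]
    for pattern, replacement in substitutions:
        text = re.sub(pattern, replacement, text)
    return text
```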
When a build.config is copied, the config steps are not copied.
The steps have to be explicitly ordered, otherwise it leads to a
traceback when trying to copy a 'run' step, which has to be the last
one.
This is useful as the author of the original PR doesn't necessarily
have (write) access to the repository where the forward-port PR was
created. As a result, while they can r+ the PR, they're unable to close
it (via github's interface).
Since the forwardport bot created the PR, it can also close it, which
seems like a useful feature.
Closes#341
Remove original-signed-off-by; it doesn't actually seem useful given the
semantics of signed-off-by according to the kernel doc. Plus it
didn't actually work: the intent was to keep the signoff of the
original PR in the forward-port, but that signoff is not part of the
commit we're cherrypicking (it gets added on the fly when the commit
is merged).
Therefore explicitly get the ack-chain into the PR: when merging an FP
PR, try to integrate the signoff of the original PR, that of the final
FP PR, and while at it that of the last explicit update in the commit
chain (e.g. in case there's been a conflict or something).
Fixes#284
When creating a subbuild from a custom python job/cron, some data can
be given through config_data or extra_params to change another step's
behaviour.
An example is giving a specific -i to an install job in order to
install different modules from the parent build. However, since we
are using the same install step in this case, all databases will
have the same name, and the name may not be correct with regard to
the database content.
This commit allows giving a config_param on the build specifying the
default db_name to use in an install step.
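A sketch of how the install step could pick the name up (the default `dest`-based name and the surrounding method are assumptions):

```python
def _run_install_odoo(self, build, log_path):
    # a parent python step can override the default database name
    # through the build's config data
    db_name = build.config_data.get('db_name', '%s-all' % build.dest)
    return self._run_odoo_install(build, db_name, log_path)  # hypothetical
```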
If a branch triggers a hidden build on another one (sticky usually),
the last build of the branch will be considered to be this one and will
trigger a new build. The same problem can occur when cloning a subbuild
to slightly change a parameter instead of making an exact rebuild,
since the build type may still be normal in this case.
This commit simply ignores subbuilds in this case.
When a new commit is found, a rebuild is forced on sticky branch
builds in repositories that depend on the new commit's repository.
If one of the repos is protected by groups, the main page gives access
errors for public users, finally leading to a CPU overload.
With this commit, the feature is removed, as it will be re-implemented in
custom config steps.