Approving a PR which failed CI should trigger a feedback message since
6cb58a322d (#158); the code has not been removed and the tests still
pass.
However, fwbot r+ goes through its own r+ process, which would explain
why that feedback is sometimes gone / lost (cf #327 and #336).
* make fwbot r+ delegate to mergebot r+
* add dedicated logging for this operation to better analyze
post-mortem
* automatically ping the reviewer to specifically tell them they're idiots
* move the feedback item out of the state change bit, send it even if
it's a useless r+ (because it's already r+'d)
* add a test for forward-ports
Closes #327, closes #336
With this commit, a kind of markdown is allowed in log messages and
build.description.
The following patterns are supported:
**strong**
~~strikethrough~~
__underline__
`code`
[link](target)
@icon-font-awesome-class
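As an illustration only, such patterns could be translated to HTML with a
handful of regular expressions; the function below is a hypothetical
sketch, not the actual runbot filter (the name and the icon handling are
assumptions):

    import html
    import re

    def markdownish_to_html(text):
        """Translate the small markdown subset above into HTML (sketch)."""
        out = html.escape(text)
        out = re.sub(r'\*\*(.+?)\*\*', r'<strong>\1</strong>', out)
        out = re.sub(r'~~(.+?)~~', r'<s>\1</s>', out)
        out = re.sub(r'__(.+?)__', r'<u>\1</u>', out)
        out = re.sub(r'`(.+?)`', r'<code>\1</code>', out)
        out = re.sub(r'\[(.*?)\]\((.*?)\)', r'<a href="\2">\1</a>', out)
        # assume '@' is followed by a font-awesome class, e.g. '@fa-bug'
        out = re.sub(r'@(fa-[\w-]+)', r'<i class="fa \1"></i>', out)
        return out

    print(markdownish_to_html('**install** failed, see [the logs](http://runbot/logs) @fa-bug'))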
When a build.config is copied, the config steps are not copied.
The steps have to be explicitly ordered, otherwise it leads to a
traceback when trying to copy a 'run' step, which has to be the last
one.
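A minimal sketch of the idea, assuming an Odoo copy() override on the
config model (model and field names are illustrative, not necessarily
the actual runbot schema):

    from odoo import models


    class BuildConfig(models.Model):
        _inherit = 'runbot.build.config'  # assumed model name

        def copy(self, default=None):
            # copy the config itself, then recreate its step orderings
            # sorted by sequence so a 'run' step (which must stay last)
            # remains last in the copy
            self.ensure_one()
            config = super().copy(default)
            for step_order in self.step_order_ids.sorted('sequence'):
                step_order.copy({'config_id': config.id})
            return config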
This is useful as the author of the original PR doesn't necessarily
have (write) access to the repository where the forward-port PR was
created. As a result, while they can r+ the PR they're unable to close
it (via github's interface).
Since the forwardport bot created the PR, it can also close it, which
seems like a useful feature.
Closes #341
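For reference, closing a PR through the GitHub REST API is a single
state update; a minimal sketch using requests (the forwardport bot
talks to GitHub through its own helpers, so this standalone version is
illustrative only):

    import requests

    def close_pull_request(token, owner, repo, number):
        """Close a pull request by setting its state to 'closed' (sketch)."""
        r = requests.patch(
            'https://api.github.com/repos/%s/%s/pulls/%s' % (owner, repo, number),
            headers={
                'Authorization': 'token %s' % token,
                'Accept': 'application/vnd.github+json',
            },
            json={'state': 'closed'},
        )
        r.raise_for_status()
        return r.json()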
Remove original-signed-off-by, it doesn't actually seem useful given the
semantics of signed-off-by according to the kernel docs. Plus it
didn't actually work: the intent was to keep the signoff of the
original PR in the forward-port, but that signoff is not part of the
commit we're cherry-picking (it gets added on the fly when the commit
is merged).
Therefore explicitly get the ack-chain into the PR: when merging an FP
PR, try to integrate the signoff of the original PR, that of the final
FP PR, and while at it that of the last explicit update in the commit
chain (e.g. in case there's been a conflict or something).
Fixes #284
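A rough sketch of assembling that ack-chain as Signed-off-by trailers
at merge time (helper and field names are hypothetical, not the actual
forwardport internals):

    def signoff_chain(source_pr, fp_pr, last_updater):
        """Collect Signed-off-by trailers for the merge commit (sketch).

        Gathers the reviewer of the original PR, the reviewer of the
        final forward-port PR, and whoever last pushed an explicit
        update to the commit chain (e.g. a conflict resolution).
        """
        signers = []
        for partner in (source_pr.reviewed_by, fp_pr.reviewed_by, last_updater):
            if partner and partner not in signers:
                signers.append(partner)
        # formatted_email assumed to be "Name <email>"
        return '\n'.join('Signed-off-by: %s' % p.formatted_email for p in signers)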
When creating a subbuild from a custom python job/cron, some data can
be given through config_data or extra_params to change another step's
behaviour.
An example is giving a specific -i to an install job in order to
install different modules from the parent build. However, since the
same install step is used in this case, all databases will have the
same name, and that name may not match the database content.
This commit allows giving a config param on the build specifying the
default db_name to use in an install step.
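As an illustration, a custom python step could spawn such a subbuild
along these lines (the surrounding variables and exact field names are
assumptions about the step context, not the precise runbot API):

    # sketch of a custom python step body: `build` is the current build
    # record, `install_config` an assumed config whose install step
    # honours the db_name parameter
    child = build.env['runbot.build'].create({
        'parent_id': build.id,
        'branch_id': build.branch_id.id,
        'name': build.name,
        'config_id': install_config.id,
        'config_data': {'db_name': 'custom_l10n'},  # force the database name
        'extra_params': '-i l10n_be',               # install a different module set
    })
    build._log('create_build', 'created subbuild %s' % child.id)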
If a branch triggers a hidden build on another one (usually a sticky
branch), the last build of the branch will be considered to be this one
and trigger a new build. The same problem can occur when cloning a
subbuild to slightly change a parameter instead of making an exact
rebuild, since the build type may still be normal in this case.
This commit simply ignores subbuilds in this case.
When a new commit is found, a rebuild is forced on sticky branch
builds in repositories that depend on the new commit's repository.
If one of the repos is protected by groups, the main page gives access
errors for public users and eventually leads to a CPU overload.
With this commit, the feature is removed as it will be re-implemented in
custom config steps.
On non-sticky branches, when a new build is found while another is
already testing, the older build is killed. This happens while the
main runbot instance is discovering new commits and creating new
builds.
As a result, concurrent updates may occur while the builders access the
concerned build.
With this commit, this garbage collecting procedure occurs during the
scheduler loop that runs on the runbot builder hosts.
Also, the logic changed so that the kill is only requested if the
host needs room to handle pending builds.
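A simplified sketch of that builder-side garbage collection (method,
model and field names are hypothetical approximations):

    def _gc_testing(self, host):
        """Kill superseded non-sticky builds, but only if this host
        actually needs the slots for pending builds (sketch)."""
        Build = self.env['runbot.build']
        pending = Build.search_count([('local_state', '=', 'pending')])
        testing = Build.search([
            ('local_state', '=', 'testing'),
            ('host', '=', host.name),
            ('branch_id.sticky', '=', False),
        ])
        if pending <= host.nb_worker - len(testing):
            return  # enough free slots, keep the older builds alive
        for build in testing:
            newer = Build.search_count([
                ('branch_id', '=', build.branch_id.id),
                ('id', '>', build.id),
            ])
            if newer:
                # a newer build exists on the same branch: free the slot
                build._ask_kill()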
When a build's age reaches the gc_days parameter, its database is
dropped and its directory is removed.
With this commit, two fields are added in order to keep some builds
longer than the defined gc_days.
The gc_delay field on the build allows adding a delay (in number of
days) that is added to gc_days to compute the gc_date.
The gc_date field is the date when the cleaning will occur.
Also, a test is added and the RunbotCase test class is improved to allow
stopping a patcher.
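A minimal sketch of how gc_date could be derived from those values,
assuming an Odoo computed field and a gc_days system parameter (the
parameter key and field details are assumptions):

    from datetime import timedelta

    from odoo import api, fields, models


    class RunbotBuild(models.Model):
        _inherit = 'runbot.build'

        gc_delay = fields.Integer('Garbage collect delay', default=0,
                                  help='Extra days to keep this build')
        gc_date = fields.Datetime('Garbage collect date',
                                  compute='_compute_gc_date', store=True)

        @api.depends('create_date', 'gc_delay')
        def _compute_gc_date(self):
            icp = self.env['ir.config_parameter'].sudo()
            gc_days = int(icp.get_param('runbot.gc_days', 30))  # assumed key
            for build in self:
                keep = gc_days + (build.gc_delay or 0)
                build.gc_date = build.create_date and (
                    build.create_date + timedelta(days=keep))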
With this commit, it will be possible to play with different chrome
versions without impacting all runbot instances.
Chrome issues summary:
* 71: innerText linefeed issue
* 74: random chrome crash
* 80: V8 pointer compression exceed Odoo memory limits
* only provide fields which make sense for the mergebot
* provide formatting & searchability for review rights records so
they're visible from the list directly
Since cb05f2b9d8, when creating a database, template1 is used; this
allows customizing the template and installing some needed Postgres
extensions.
Unfortunately, it's also a source of build failures. For example, if the
pg_activity util is used on a runbot host, database creation may fail
with a message like this:
source database "template1" is being accessed by other users
That's because pg_activity needs a database and uses template1.
With this commit, template1 is still used by default but can be changed
with a system parameter. That way, a custom template can be created on
runbot hosts and used when creating DBs in builds.
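A sketch of the idea, assuming the template name is stored as an
ir.config_parameter and that a helper provides an autocommit postgres
admin cursor (the parameter key and the helper are assumptions):

    from psycopg2 import sql

    def _local_pg_createdb(self, dbname):
        """Create the build database from a configurable template (sketch)."""
        template = self.env['ir.config_parameter'].sudo().get_param(
            'runbot.db_template', 'template1')  # assumed parameter key
        with local_pgadmin_cursor() as local_cr:  # assumed autocommit admin cursor
            local_cr.execute(
                sql.SQL("CREATE DATABASE {} TEMPLATE {} ENCODING 'unicode'").format(
                    sql.Identifier(dbname),
                    sql.Identifier(template),
                )
            )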
When a PR branch target is changed on Github, the change is not applied
in the runbot DB.
With this commit, the Github hook payload is taken into account to
detect such a change and the branch infos are recomputed accordingly.
Also, a button is now available on the branch form in order to manually
trigger that recomputation.
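For reference, a GitHub pull_request webhook with action 'edited'
carries the previous base in its 'changes' object; a minimal sketch of
detecting the retarget (controller plumbing, model and helper names are
assumptions):

    def handle_pull_request_hook(env, payload):
        """Recompute a PR branch when its base was edited on GitHub (sketch)."""
        if payload.get('action') != 'edited':
            return
        if 'base' not in payload.get('changes', {}):
            return  # title/body edit, not a retarget
        pr = payload['pull_request']
        branch = env['runbot.branch'].search(
            [('name', '=', 'refs/pull/%s/head' % pr['number'])], limit=1)
        if branch:
            # the new target is in pr['base']['ref']
            branch._recompute_infos(pr)  # hypothetical helper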
This is more of a sanity check as it normally should not be a factor:
labels generally contain the target name, and staging checks are
performed per-target so we're not mixing multiple targets anyway.
But if, say, a third party creates a fix-foo branch for A and a
fix-foo branch for B, we want to ensure they're not considered batched
together.
Rather than trying to fix up the various bits where we search and
wondering what index we should be using, make the column a CIText.
For the mergebot, the main use case would be properly handling
delegate=XXX: currently if XXX is not a case-sensitive match we're
going to create a new partner with the new github login and
give *them* delegation, and the delegation to the intended target
isn't going to work correctly.
Also try to install the citext extension if it's not in the database,
and run the database-creation process with `check=True` so if that
fails we properly bubble up the error and don't try to run tests on a
corrupted / broken DB.
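A sketch of those two bits for the test setup, assuming psycopg2 and
the standard createdb CLI (the actual fixture code may differ):

    import subprocess

    import psycopg2

    def create_test_db(dbname, template='template0'):
        # check=True bubbles the error up instead of letting tests run
        # against a broken / missing database
        subprocess.run(['createdb', dbname, '-T', template], check=True)
        with psycopg2.connect(dbname=dbname) as conn, conn.cursor() as cr:
            # make case-insensitive text columns available
            cr.execute('CREATE EXTENSION IF NOT EXISTS citext')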
Fixes #318
As the odds of having more projects, or more repos with different
requirements in the same project, increase, the need to have different
sets of reviewers for different repositories increases.
As a result, rather than being trivial boolean flags, the review info
should probably depend on the user / partner and the repo. Turns out
the permission checks had already been extracted into their own
function, so most of the mess comes from testing utilities which went
and configured their review rights as needed.
Incidentally it might be that the test suite could just use something
like a sequence of commoditized accounts which get configured as
needed and not even looked at unless they're used.
Before this change, `r-` on a pr[p=0] does essentially nothing. At
most it will unstage if the PR had been (somewhat unnecessarily) r+'d
in the past but then the PR will get re-staged immediately.
To avoid this odd behaviour, if r- is sent to a p=0 PR, not only is
the PR unreviewed (if it was reviewed), it always gets unstaged, and
its priority gets reset to 1 (high priority but doesn't bypass CI and
review). Also send a comment on that subject so followers of the PR
are notified.
Fixes #313
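A condensed sketch of that command handling (attribute and helper
names approximate the mergebot's internals and are not the exact code):

    def handle_review_minus(pr, author):
        """Handle an r- command on a PR (sketch)."""
        feedback = []
        if pr.state in ('approved', 'ready'):
            pr.reviewed_by = False  # drop the previous r+
        if pr.priority == 0:
            pr.priority = 1
            feedback.append(
                "PR priority reset to 1, as pull requests with priority 0 "
                "ignore review state.")
        pr.unstage('r- by %s' % author)  # hypothetical: cancel the current staging
        if feedback:
            pr.post_comment('\n'.join(feedback))  # hypothetical feedback helper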
During freezes it can be useful to notify viewers that nothing is
going to forward port or merge for a while, and that this is
intentional (not something that's broken).
Fixes #307
The staging cron was already essentially split between "check if one
of the stagings is successful (and merge it)" and "check if we should
create a staging" as these were two separate loops in the cron.
But it might be useful to disable these two operations separately,
e.g. we might want to stop the creation of new stagings but let the
existing stagings complete.
The actual splitting is easy but it turns out a bunch of tests were
"optimised" to only run the merge cron. Most of them didn't blow up
but it seems more prudent to fix them all.
Fixes odoo/runbot#310
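Conceptually the split boils down to two independently scheduled
entry points, so either one can be paused on its own (method names
below are placeholders, not necessarily the actual cron endpoints):

    from odoo import api, models


    class Project(models.Model):
        _inherit = 'runbot_merge.project'  # assumed model name

        @api.model
        def _cron_process_stagings(self):
            # first half: check whether running stagings succeeded and merge them
            for project in self.search([]):
                project._check_stagings()

        @api.model
        def _cron_create_stagings(self):
            # second half: stage ready PRs; can be disabled (e.g. during a
            # freeze) while the cron above keeps completing in-flight stagings
            for project in self.search([]):
                project._create_stagings()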
The PR creation had been fixed to always validate even without a
commit found (in case there was no need for a commit), but the update
of a PR in such a situation was not tested, and thus naturally did not
work because why would it work if it wasn't tested?
Also remove the conditional skip on updating a PR to a new head.
The test was checking things would work properly with
required_statuses being an empty string, because I'd also forgotten an
empty field gets stored as `False` in the database, so when trying
things out live neither the PRs nor the stagings would work, as their
assumption that they could simply split required_statuses would
always fail.
Update the test to better match expectations, and hopefully this is
the end of that saga.
PRs transitioning to 'ready' had been checked and tested, but it turns
out I had completely forgotten to test that stagings would validate
properly, so of course they didn't.
The issue here was I'd forgotten `''.split(',')` returns `['']` rather
than `[]`, so on an empty required_statuses the staging validator
would keep looking for a status matching the context `''` and would
never find it, keeping the staging pending until timeout. So most
likely the problem could have been resolved by just adding a condition
to
[r.strip() for r in repomap[c.sha].required_statuses.split(',')]
but I'd already done all the rest of the reorganisation by that point,
the tests pass and I think it's a somewhat better logic. Therefore I'll
go with that for now.
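For the record, the gotcha and the minimal filter that would have
papered over it (illustrative only, since the reorganised logic handles
the empty case differently):

    >>> ''.split(',')
    ['']   # not [], so the validator keeps looking for a status named ''
    >>> [r.strip() for r in ''.split(',') if r.strip()]
    []     # the one-line filter that would have avoided the stuck staging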
* properly handle empty required_statuses during staging validation
* remove the final postcondition, if we're missing commits which don't
require any statuses we should not care
* expand test to include up to merging PRs
* automatically create dummy commits when creating stagings, that way
the relevant commits are in the database (can't hurt)
PS: another alternative would have been to filter out or skip ahead
on commits which don't require any statuses, aka cmap &
required_statuses / cmap would not even have that entry