Broken (can't run odoo at all):
- In Odoo 17.0, the `pre_init_hook` takes an env rather than a cursor;
  update `_check_citext` accordingly (see the sketch after this list).
- Odoo 17.0 rejects `@attrs` and doesn't say where they are or how to
  update them (fun): hunt down every `attrs={'invisible': ...}` and try
  to fix it.
- Odoo 17.0 warns on non-multi creates; update them. Most were very
  reasonable to convert, one very much wasn't.
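A minimal sketch of the hook change, assuming a `_check_citext` along
these lines (the actual body may differ):

```python
from odoo.exceptions import UserError

def _check_citext(env):
    # Odoo 17.0 hands pre_init_hook an Environment; the cursor which
    # used to be passed directly is now reached through `env.cr`
    env.cr.execute("SELECT 1 FROM pg_extension WHERE extname = 'citext'")
    if not env.cr.fetchone():
        raise UserError("The citext extension must be installed")
```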
Test failures:
- Odoo 17.0 deprecates `name_get` and doesn't use it as a *source*
  anymore; replace `name_get` overrides with overrides of
  `_compute_display_name` (sketched after this list).
- Multiple tracking changes:
- `_track_set_author` takes a `Partner` not an id.
- `_message_compute_author` still requires overriding in order to
  handle record creation, since standard Odoo doesn't support author
  overriding there.
- `mail.tracking.value.field_type` has been removed; the field type
  now needs to be retrieved from the `field_id`.
- Some tracking orderings have changed, requiring a few tests to be
  adjusted.
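A minimal sketch of the `name_get` conversion, with a hypothetical
display format:

```python
from odoo import api, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    # replaces the old name_get() override: 17.0 only ever reads
    # display_name, it no longer sources it from name_get()
    @api.depends('repository.name', 'number')
    def _compute_display_name(self):
        for pr in self:
            pr.display_name = f"{pr.repository.name}#{pr.number}"
```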
Also added a few flushes before SQL queries which are not (obviously,
at least) at the start of a cron or controller; no test failure was
observed, but better safe than sorry (probably).
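Shape of such a flush, with hypothetical model and query:

```python
# make sure pending ORM writes are visible to the raw SQL below
self.env['runbot_merge.commit'].flush_model()
self.env.cr.execute("SELECT id FROM runbot_merge_commit WHERE to_check")
```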
odoo/odoo@1341d52 added native support for tracking / overriding the
tracking message, so we don't need `change-message` anymore. The calls
had to be modified a bit as `_track_set_log_message` has to be invoked
on the records for which the tracking message is being set / updated
(sketched below).
Sadly the same for authors was only added in 16.3 *and* it's
unsuitable for our needs, as we want to set the author without knowing
the scope of affected records (at least in the controller).
That way hopefully in 17.0 we can remove the `_message_compute_author`
override and things should work out. And maybe for 19.0 we can get the
ability to set a per-model (or even global) fallback author into
standard.
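Shape of the updated call:

```python
# must be invoked on the records whose tracking message is being set
prs._track_set_log_message("forward-port chain updated")  # message hypothetical
# 16.3's equivalent for authors would be prs._track_set_author(partner),
# which requires knowing `prs` up-front, hence the retained override
```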
Because the first operation the notification task performs is updating
the commit, this has to be flushed in order to read back the commit
hash (probably fixed in a later version).
This means if updating the flag triggers a serialization
failure (which happens and is normal) we transition to the `exception`
handler, which *tries to retrieve the sha again* and we get an "in
failed transaction" error on top of the current "serialization
failure", leading to a giant traceback.
Also an unhelpful one since a serialization failure is expected in
this cron.
- Move the reads with no write dependency out of the `try`: there's no
  reason for them to fail, and notably memoize the `sha` (see the
  sketch after this list).
- Split the handling of serialization failures out of the normal one,
and just log an `info` (we could actually log nothing, probably).
- Also set the precommit data just before the commit, in case staging
tracking is ever a thing which could use it.
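A rough sketch of the resulting flow (names hypothetical, not the
actual cron body):

```python
import logging

import psycopg2.errors

_logger = logging.getLogger(__name__)

def _notify(env, staging):
    # no write dependency: memoize outside the `try` so the exception
    # handlers never have to query a failed transaction
    sha = staging.head  # hypothetical field
    try:
        staging.active = False
        env.cr.commit()
    except psycopg2.errors.SerializationFailure:
        # expected under concurrency, just note it and let the cron retry
        _logger.info("serialization failure updating staging %s (%s)",
                     staging.id, sha)
        env.cr.rollback()
    except Exception:
        _logger.exception("failed to update staging %s (%s)", staging.id, sha)
        env.cr.rollback()
```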
Fixes#909
The staging cron turns out to be pretty reasonable to trigger, as we
already have a handler on the transition of a batch to `not blocked`,
which is exactly when we want to create a staging (that and the
completion of the previous staging).
The batch transition is in a compute which is not awesome, but on the
flip side we also cancel active stagings in that exact scenario (if it
applies), so that matches.
The only real finesse is that one of the tests wants to observe the
instant between the end of a staging (and creation of splits) and the
start of the next one. Because the staging cron is triggered by the
failure of the previous staging, this is now "atomic": observing it
requires disabling the staging cron, which means the trigger is
skipped entirely, so the test has to trigger the staging cron by hand.
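The trigger itself is a one-liner wherever the batch becomes unblocked
(xmlid hypothetical):

```python
# fire the staging cron as soon as a batch stops being blocked
self.env.ref('runbot_merge.staging_cron')._trigger()
```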
The merge cron is the one in charge of checking staging state and
either integrating the staging into the reference branch (if
successful) or cancelling the staging (if failed).
The most obvious trigger for the merge cron is a change in staging
state from the states computation (transition from pending to either
success or failure). Explicitly cancelling / failing a staging marks
it as inactive so the merge cron isn't actually needed.
However another major trigger is *timeout*, which doesn't have a
trivial signal. Instead, it needs to be hooked on the `timeout_limit`,
and has to be re-triggered at every update to the `timeout_limit`,
which in normal operations mostly comes from "pending" statuses
bumping the timeout limit. In that case, `_trigger` at the
`timeout_limit`, as that's when we expect a status change (sketched
below).
The worst-case scenario with this is parasitic wakeups of the cron,
but half a dozen unnecessary wakeups in an hour is still probably
better than a wakeup every minute.
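Sketch of the re-arming, wherever `timeout_limit` gets written (xmlid
hypothetical):

```python
# schedule a wakeup of the merge cron at the (possibly bumped) timeout
# instead of immediately
self.env.ref('runbot_merge.merge_cron')._trigger(at=staging.timeout_limit)
```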
These are pretty simple to convert: an item is added to a work
queue (table), then a cron regularly scans through the table,
executing the items and deleting them.
That means the cron trigger can just be added on `create` and things
should work out fine.
There are just two wrinkles in the port_forward cron (see the sketch
after this list):
- It can be requeued in the future, so it needs a conditional trigger
  in `write`.
- It is disabled during freeze (maybe something to change); as a
  result triggers don't enqueue at all while it's disabled, so we need
  to trigger the cron immediately once the freeze re-enables it.
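A sketch of the conversion for such a queue model (model, field, and
xmlid names hypothetical):

```python
from odoo import api, fields, models

class PortForwardQueue(models.Model):
    _inherit = 'forwardport.batches'

    @api.model_create_multi
    def create(self, vals_list):
        records = super().create(vals_list)
        # enqueueing work is exactly when the cron has something to do
        self.env.ref('forwardport.port_forward')._trigger()
        return records

    def write(self, vals):
        if 'retry_after' in vals:
            # requeued in the future: wake the cron up at that point
            self.env.ref('forwardport.port_forward')._trigger(
                at=fields.Datetime.to_datetime(vals['retry_after']))
        return super().write(vals)
```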
Merge errors are logical failures, not technical ones; it doesn't make
sense to log them since there's nothing to be done technically: a PR
having consistency issues or a conflict is "normal". As such those
messages are completely useless, and just take unnecessary space in
the logs, making them harder to use.
Instead of sending them to logging, log staging attempts to the PR
itself, and only do normal logging of the operation as an indicative
item. And remove a bunch of `expect_log_errors` which no longer hold.
While at it, fix a missed issue in forward porting: if the `root.head`
doesn't exist in the repo, its `fetch` will immediately fail before
`cat-file` can even run, so the second check is redundant and the
first one needs to be handled properly. Do that. And leave checking
for *that* specific condition as an error written to the logs, as it
should mean there's something very odd with the repository (how can a
pull request have a head we can't fetch?).
Apparently I'd already fixed that in
286c1fdaee but it has yet to be
deployed.
While at it, add a feedback message to clarify that, unlike `r+`, `r-`
on forward ports does *not* propagate.
Fixes#912
- Update branch display names to be prefixed with the project, as
  branches can be hard to differentiate when filtering by, or trying
  to set, targets, given some target names are extremely
  common (e.g. `master`/`main`) and not all fields are filtered by
  project (or even can be).
- Add a proper menu item and list view for batches, maybe it'll be of
use one day.
- Move the label up in PR search, as it's more likely to be needed
  than author or target.
- Put PRs first in the mergebot menu, as it's *by far* the most likely
item to look for, unless it's staging in order to cancel one.
Previously PR descriptions were displayed as raw text in the PR
dashboard. While not wrong per se, this was pretty ugly and not always
convenient, as e.g. links had to be copied by hand.
Push descriptions through pymarkdown to render them, with a few
customisations (see the sketch after this list):
- Enabled footnotes, tables, and fenced code blocks because GFM has
  them. This doesn't quite put pymarkdown's base behaviour on par with
  GFM (and py-gfm ultimately gave up on that effort, moving to just
  wrap github's own markdown renderer instead).
- Don't allow raw html: it's too much of a hassle to do correctly, and
  very few people ever use it (mostly me, I think).
- Added a bespoke handler / renderer for github-style references.
Note: uses positional captures because it started that way, and named
captures are not removed from the positional sequence, so mixing and
matching is not very useful; plus Python does not support identically
named groups (even mutually exclusive ones), so all 4 repo captures
and all 3 issue number captures would need different names...
- And added a second bespoke handler for our own opw/issue references
leading to odoo.com, that's something we can't do via github[^1] so
it's a genuine value-add.
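Roughly, assuming Python-Markdown underneath (the extension names are
real built-ins, the bespoke reference handlers would be custom
`Extension` subclasses registered alongside them):

```python
import markdown

# footnotes, tables and fenced code blocks get the base behaviour
# GFM-ish; raw HTML is refused separately
html = markdown.markdown(
    pr.message,
    extensions=['footnotes', 'tables', 'fenced_code'],
)
```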
Fixes#889
[^1]: github can do it (though possibly not with the arbitrary
unspecified nonsense I got when I tried to list some of the
reference styles, some folks need therapy), but it's not available
on our plan
I thought I'd removed the error message when approving an
already-approved PR, but apparently not?
However, we can improve the message in that specific case, to make the
expected operation clearer.
Fixes#906
After seeing it be used, I foresee confusion around the current
behaviour (where it sets the limit), as one would expect the `fw=`
flags to affect one another where that looks like it would make sense:
no/default/skipci/skipmerge all specify how to forward-port, so
`fw=default` not doing anything after you've said `fw=no` (possibly by
mistake) would be fucking weird.
Also since the author can set limits, allow them to reset the fw
policy to default (keep skipci for reviewers), and for @d-fence add a
`fw=disabled` alias.
Fixes#902
If a PR is cancel=staging, even if it's not the
urgentest (priority=alone), odds are good it's being staged to fix the
split. And even if it's not, it probably can't hurt.
So cancel splits in order to stage it. This may be slightly harmful if
the split is legit and has nothing to do with the PR being
prioritised, but that seems like the less likely scenario. And having
to update staging priorities on the fly seems like a bad idea. Though
obviously it might do nothing if the PRs are in "default" priority.
Also simplify the unstage trigger from the PRs becoming ready:
- the user is useless as it's always the system user
- the batch id is not really helpful
Noticed that while writing up the docs on the wiki: it seems like an
unnecessary restriction, and an inconvenient one to boot. The author
could r+, then realize they forgot an update they needed to make on
the fw, so they should be able to cancel the staging without needing a
reviewer.
d4fa1fd353 added tracking to changes
from *comments* (as well as a few hacks around authorship transfer);
however it missed two things:
First, it set the `change-author` during comments handling only, so
changes from the `PullRequest` hook (e.g. open, synchronise, close,
edit) don't get attributed to their actual source and instead just
fall back to uid(1). This is easy enough to fix as the `sender` is
always provided; it can be resolved to a partner which is then set as
the author of whatever changes happen.
Second, I actually missed one of the message hooks: there are both
`_message_log` and `_message_log_batch` and they don't call one
another, so both have to be overridden for tracking to be
consistent (sketched below). In this case, specifically, the
*creation* of a tracked object goes through
`_message_log_batch` (since that's a very generic message and so works
on every tracked object created during the transaction... even though
batch has a message per record anyway...) while *updates* go through
`_message_log`.
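Sketch of the paired overrides (author resolution hypothetical):

```python
from odoo import models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    # the two hooks don't call one another, so both need the override
    # for authorship to be tracked consistently
    def _message_log(self, **kwargs):
        kwargs.setdefault('author_id', self._change_author().id)
        return super()._message_log(**kwargs)

    def _message_log_batch(self, bodies, **kwargs):
        kwargs.setdefault('author_id', self._change_author().id)
        return super()._message_log_batch(bodies, **kwargs)
```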
Fixes#895
- When a redundant approval is sent to a PR, notify but don't ignore
  the entire command set; there's no actual risk.
- Indicate that the entire comment was ignored when finding something
which does not parse.
Fixes#892, fixes#893
The commit cron needs to be triggered any time we:
- create a new commit
- update a commit to set its `to_check`
So do that in create and write as well as the SQL query in the
webhook handler.
This should mean we don't need the periodic cron anymore, but for
safety's sake keep running it every 30mn for now.
TBF even if we miss triggers, the next `status` webhook hitting will
check all the relevant commits anyway...
This is useful to repro issues.
60c4b5141d added `inverse=readonly`
hooks to various newly computed fields to ensure they can not be
*written* to, which would either overwrite the content (stored) or be
silently dropped (non-stored).
However because they're `inverse` hooks, this had the effect of making
the fields writeable from the backend UI, since the ORM uses `inverse`
as a signal that a field is writeable. This then caused the web client
to send values for those fields, which are not necessarily even
visible in the form, leading to write errors when trying to save a PR
creation.
By marking the fields as `readonly` explicitly, we make sure that
doesn't happen, and we can create PRs from the backend UI (kinda: I
think the label is still an issue).
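That is, roughly (field hypothetical):

```python
from odoo import fields, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    squash = fields.Boolean(
        compute='_compute_squash', store=True,
        # `inverse=` flips the field to writeable as far as the web
        # client is concerned, so `readonly` must be stated explicitly
        inverse='_inverse_readonly', readonly=True,
    )
```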
The method was not marked as a create; as a result it did not allow
creating commits via the UI (annoying for testing / reproducing issues
involving statuses).
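Presumably just the missing marker, something like:

```python
from odoo import api, models

class Commit(models.Model):
    _inherit = 'runbot_merge.commit'

    @api.model_create_multi  # without this the ORM does not treat the
    def create(self, vals_list):  # override as a "create" method
        return super().create(vals_list)
```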
If a PR gets approved *then* fails CI, there should be a notification
warning the author & reviewer since
48e08b657b, it even has a test, which
passes (in fact it has *two*, one of which is redundant, so merge
`test_ci_failure_after_review` into the later `test_ci_approved`).
*However* this is in runbot_merge. It turns out that in
fafa7ef437 some refactoring was done in
order to override the notification and customise it for *forward
ports* with a failed status... except that override never called its
`super()`, so as soon as forwardport is installed the base
notification stops working, and that's been that since October
2019 (it had been added in March that year, ignoring deployment lag).
This can be revealed by adding the corresponding check to the
*forwardport* tests.
This was a pain to track down, thankfully it reproduced relatively
easily locally.
While this could be resolved in the override, might as well fold it
into the base method in furtherance of #789: the mergebot is only
used by odoo, and only with both modules combined, so splitting them
is not useful. Furthermore, things should work fine with forwardport
installed but unused.
Fixes#894
- Instead of warning about the merge method on ready PRs, also warn on
*approved* (but exclude staged just cuz), as that's really when the
user wants to know that they forgot to set the merge method
- The cron only triggers hourly, but *if* a user approves a PR *and*
the merge method is not set yet, chances are good they'll need a
reminder (if they `r+ rebase-merge` or w/e the cron will just ignore
the PR and it's no skin off our back), so `_trigger` the cron for
validation.
- Also do the same when skipchecks is set as it's very similar.
In reality we might want to hook off of the state transitioning to
reviewed but I'm not sure there's good ways to do that (triggering a
cron inside a compute doesn't seem like a good idea).
Update a pair of tests which would approve a multi-commit PR without
setting a merge method, just because the helper they use to build the
PR happens to create multiple commits.
Fix#891
This is a minor issue, as the PRs of a commit are only listed from the
form view, so the compute is pretty much always called with a single
record; but it's still an unforced error which can easily be fixed.
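The fix is just making the compute properly vectorial (names
hypothetical):

```python
from odoo import models

class Commit(models.Model):
    _inherit = 'runbot_merge.commit'

    def _compute_prs(self):
        # assign every record of `self`, not just the first one
        for commit in self:
            commit.pr_ids = self.env['runbot_merge.pull_requests'].search([
                ('head', '=', commit.sha),
            ])
```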
Without fw-bot being its bearer, "ignore" is a lot less clear than it
used to be, as it looks to be asking to ignore the PR entirely (as if
it were targeted to an unmanaged branch).
Deprecate this command, and tack the shortcut onto the fw
subcommand. It is slightly sub-par, as technically it does not quite
fit with the other subcommands, and furthermore it can't be disabled
via fw=default... although maybe it could be? Maybe instead of setting
the limit, fw=no could set that value to the forwardport mode, and the
fw_policy users could check that? It would require some more finessing
tho:
- `DEFAULT` would need to be accessible to the author as well as the
reviewers so the author could toggle between `NO` and `DEFAULT`.
- There should probably be a warning of some sort when setting a limit
to an unportable PR.
- The dashboards would need to take `NO` in account (though I guess
that's just defaulting the limit to the target).
And filter it to only consider branches in the same project as the PR,
with a lower sequence than its target's. That way it's harder to fuck
up when trying to set limits from the backend.
Setting the PR state directly really doesn't work, as it doesn't
correctly save (and can get overwritten by any dependency, of which
there are many).
This caused setting odoo/odoo#165777 in error to fail, leading to it
being re-staged (and failing) repeatedly, and the PR being spammed
with comments.
- create a more formal helper for preventing direct setting of
  computed fields (without an actual inverse), sketched after this
  list
- replace direct state setting by setting the corresponding
  dependency, e.g. `error` for error and `skipchecks` to force a PR to
  ready
- add a `skipchecks` inverse to the PR so it can also set itself as
  reviewed, which is convenient; might be worth also adding stuff to
  `Batch.write`
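A sketch of what the helper and its use could look like (names and
selection values hypothetical):

```python
from odoo import fields, models
from odoo.exceptions import UserError

def readonly(field_name):
    """Inverse hook rejecting writes outright, for computed fields
    with no sensible inverse."""
    def _inverse(self):
        raise UserError(f"{field_name} can not be set directly")
    return _inverse

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    state = fields.Selection(
        [('opened', 'Opened'), ('ready', 'Ready'), ('error', 'Error')],
        compute='_compute_state', store=True,
        inverse=readonly('state'), readonly=True,
    )
```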
Add support for the ability to validate *stagings* over RPC rather
than via webhook. This may later be expanded to PRs as well.
The core motivation for this is to avoid bouncing through github which
sometimes drops the ball on statuses, and it's frustrating to have a
staging time out because GH fucked up.
Implemented via RPC, requiring both the staging itself (by id) and the
head commit being affected, as that is necessary to know what CIs are
required for that head and correctly report cross branch on the
various PRs.
Fix#881 (kinda)
Rather than compute staging state directly from commit statuses, copy
statuses into the staging's `statuses_cache` then compute state based
on that. Also refactor `statuses` and `staging_end` to be computed
based on the statuses cache instead of the commits (or `state`
update).
The goal is to allow non-webhook validation of stagings, for direct
communications between the mergebot and the CI (mostly runbot).
Computed on the fly for now. Formatted nicely in the frontend, there
does not seem to be any sort of duration widget in the backend so
just display the integer number of seconds.
Fixes#865
if rebased. Untouched commits (straight merge) remain unaltered, but
all rebased or squashed commits now get signoff and `Related` headers
added on top of the previously added `part-of`.
Implement by generalising `_build_merge_message` to `_build_message`
and having `add_self_references` delegate to it, which removes some of
the redundancy / differential handling.
Update the `part_of` helper to also add the S-O-B header to the PR,
although it currently does not reference the entire forward port
chain.
Fixes#876
Shove a bunch of stuff in notebook tabs, add a few
affordances (e.g. github and frontend links, links from m2m), surface
a few missing fields.
Hopefully makes the backend form both easier to navigate and easier to
administrate from.
Initially wanted to skip this only for FW PRs, but after some thinking
I feel this info could still be valuable even for non-fw PRs which
were never merged in the first place.
Requires a few adjustments to not break *everything*: `batch.prs`
excludes closed PRs by default, as most processes don't expect to be
faced with closed PRs inside a batch, and we *especially* want to
avoid that before the batch is merged (as we'd risk staging a closed
PR).
However since PRs don't get removed from batches anymore (and batches
don't get deleted when they have no PRs), we may now have a bunch of
batches whose PRs (usually a single one) are all closed. This has two
major side-effects:
- a new PR may get attached to an old batch full of closed PRs (as
batches are filtered out on being *merged*), which is weird
- the eventual list of batches gets polluted with a bunch of
irrelevant batches which are hard to filter out
The solution is to reintroduce an `active` field, as a stored compute
field based on the state of batch PRs. This way if all PRs of a batch
are closed it switches to inactive, and is automatically filtered out
by search which solves both issues.
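Sketch of the reintroduced field (relation and field names
hypothetical):

```python
from odoo import api, fields, models

class Batch(models.Model):
    _inherit = 'runbot_merge.batch'

    active = fields.Boolean(compute='_compute_active', store=True)

    @api.depends('all_prs.closed')
    def _compute_active(self):
        # live while at least one PR remains open; inactive batches
        # then drop out of default searches automatically
        for batch in self:
            batch.active = any(not pr.closed for pr in batch.all_prs)
```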
Batch ordering in stagings is important in order to correctly
reconstitute the full project history.
In the old mergebot, since batches were created on the fly during
staging, this information was reified by the batch ids. But since
batch ids are now persistent and there is no relationship between the
creation of a batch and its merging (especially not relative to other
batches), this is an issue: reconstituting sub-staging git history
would be impossible.
Which is not the worst, but is not great.
The solution is to reify the join table between stagings and batches
in order for *that* to keep history (simply via the sequential PK),
and in converting to the new system carefully generate the new links
in an order matching the old batch ids.
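Conceptually (model name hypothetical):

```python
from odoo import fields, models

class StagingBatch(models.Model):
    _name = 'runbot_merge.staging.batch'
    _description = "history-keeping link between stagings and batches"

    staging_id = fields.Many2one('runbot_merge.stagings', required=True)
    batch_id = fields.Many2one('runbot_merge.batch', required=True)
    # the sequential primary key of this table now carries the ordering
    # information previously reified by on-the-fly batch ids
```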
Given a batch which has been merged, then forward-ported to multiple
branches (because skipci was set or ci passed on the repos the batch
covers), there might come the need to add a PR for one of the
uncovered repos. This raises the question of what to do with it: since
the forward-ports for the batch already exist, it's not going to get
forward-ported normally, nor may we want it to, possibly?
Options are:
- don't do anything: such additions don't get ported. This is
  incongruous and unexpected, as by default PRs are forward-ported,
  and if the batch weren't an intermediate (but e.g. a conflict) it
  probably would be ported forward
- port on merge: this allows configuring the PR properly (as it might
  need its own limit), but it means further batches may get
  unexpectedly merged (or at least retried) without the additional PR,
  even though we likely want it in
- immediately port the additional PR on creation: this makes the limit
  harder or impossible to configure, but it makes the *batch sequence*
  more consistent
We ended up selecting the latter: it feels closer to the updates
system, and it creates more consistent batches through the
sequence. It's also technically easier to ad-hoc port a PR through a
bunch of branches than it is to update the "normal" forward-port
process to handle partial fixups.
Test and refine the handling of batch forward ports around branch
deactivation, especially with the differential mode. Notably, fix an
error in the conversion of the FW process to batches: individual PR
limits were not correctly taken into account during forward port
unless *all* PRs were done, even though that is a primary motivation
for the change.
Partial forward porting should now work correctly, and the detection
and handling of differential next target should be better handled to
boot.
Significantly rework the interplay between batches and PRs being
closed in order to maintain the sequencing / consistency of forward
port sequences: previously a batch would get deleted if all its PRs
were closed, but that is an issue when it is part of a forward port
sequence, as we then lose information.
Instead, detach the PRs from the batch as before, but have the batch
skip unlinking if it has historical value (a parent or child
batch). Currently the batch's state is a bit weird, as it doesn't get
merged, but...
While at it, significantly simplify `_try_closing`, as it turns out to
have a ton of incidental / historical complexity from old attempts at
fixing concurrency issues, which should not be necessary anymore and
in fact actively interferes with the new and more compute-heavy state
of things.
Thank god I have a bunch of tests because once again I forgot / missed
a bunch of edge cases in doing the conversion, which the tests
caught (sadly that means I almost certainly broke a few untested edge
cases).
Important notes:
Handling of parent links
------------------------
Unlike PRs, batches never lose their parent info; the link is
permanent, which is convenient for trawling through a forward port
sequence (currently implemented very inefficiently, maybe we'll
optimise that in the future).
However this means a batch having a parent and the batch's PRs having
parents are slightly different pieces of information. One of the edge
cases I missed is that of conflicting PRs, which are de-parented and
have to be merged by hand before being forward-ported further; I had
originally replaced the checks on a PR and its siblings having parents
with just checking the batch.
Batches & targets
-----------------
Batches were originally conceived as being fixed to a target, with
their PRs having that target; a PR being retargeted would move it from
one batch to another.
As it turns out this does not work in the case where people retarget
forward-port PRs, which I know they do because of #551
(2337bd8518). I could not think of a
good way to handle this issue as-is, so scrapped the moving-PRs
idea. Instead, one of the coherence checks for a batch being ready is
that all its PRs have the same target, and a batch only has a target
if all its PRs have the same target.
It's possible for somewhat odd effects to arise: notably, if a PR is
closed (removed from its batch), the other PRs are retargeted, and the
closed PR is then reopened, it will now be on a separate batch even if
it also gets retargeted. This is weird. I don't quite know how I
should handle it; maybe batches could merge if they have the same
target and label? However batches don't currently have a label, so...
Improve limits
--------------
Keep limits on the PRs rather than lifting them to the batch: if we
can add/remove PRs to/from batches, then having different limits on
different PRs of the same batch is reasonable.
Also leave limit unset by default: previously, the limit was eagerly
set to the tip (accessible) branch. That doesn't really seem
necessary, so stop doing that.
Also remove completely unnecessary `max` when trying to find a PR's
next target: `root` is either `self` or `self.source_id`, so it should
not be possible for that to have a later target.
And for now ensure the limits are consistent per batch: a PR defaults
to the limit of their batch-mate if they don't have one, and if a
limit is set via command it's set on all PRs of a batch.
This commit does not allow differential limits via commands, they are
allowed via the backend but not really tested. The issue is mostly
that it's not clear what the UX should look like to have clear and not
super error prone interactions. So punt on it for now, and hopefully
there's no hole I missed which will create inconsistent batches.
It's a bit weird and inconsistent to have a PR being staged while
unreviewed or unapproved or w/e. If we compute the state based on
skipchecks then everything is consistent.
Also remove the implicit override of all statuses when explicitly
marking the PR as `ready`: it risks creating difficult-to-understand
states, and it's unnecessary since `skipchecks` gets set.
Also, as when setting skipchecks, set the current user as reviewer on
all PRs of the batch which lack one.
Move staging cancellation to the batch, remove its (complicated)
handling from the PRs.
This loses some precision in the cancellation message, but that could
likely be recovered (in part) by adding more precise checks &
diagnostic extractions in the compute.
Simplifies the `ready_prs` query a bit and allows it to be converted
to an ORM search by moving the priority check outside. This also means
the caller barely needs to post-process the records list compared to
the previous state of affairs.
`ready_prs` now returns *either* the "alone" batches, or the non-alone
batches, rather than mixing both into a single sequence. This requires
correctly applying the search filters so as not to retrieve the
priority of batches in error or targeting other branches.
Staging readiness is a batch-level concern, and many of the markers
are already there, though a few need to be aggregated from the PRs. As
such, staging has no reason to be performed in terms of PRs anymore;
it should be performed via batches directly.
There is a bit of a mess in order to not completely fuck up when
retargeting PRs (implicitly via the freeze wizard, or explicitly), as
for now we're moving PRs between batches in order to keep the
batches *mostly* target-bound.
Some of the side-effects in managing the coherence of the targeting
and moving PRs between batches is... not great. This might need to be
revisited and cleaned up with those scenarios better considered.