That's a bit of a weird one: apparently the boolean_toggle widget has
an `autosave` option which should be `true` by default, saving the
row as soon as the toggle is toggled[^1]. But in 15.0 and 18.0 it
seems to have no effect: the `boolean_toggle` always just stores the
change in the parent form, and that gets committed on save.
In 16.0 and 17.0 however it does have an effect, toggling the control
will immediately save its value *without going through the parent
form*, meaning the `Project.write` override managing new/existing
branches is not called, thus `Project._followup_prs` is not called,
and ultimately the followup forward ports are not created.
After contacting AAB to get more info (and grepping a bit):
- autosave was added (enabled by default) in 16.0 after the owl
rewrite (odoo/odoo@28e6b7eb83)
- toggle was added in 17.0
(odoo/odoo@a449b05221)
- feature was removed in 18.0
(odoo/odoo@6bd2c1fdfb)
Which explains why the issue occurs in 16.0 and 17.0, and not in 15.0
or 18.0.
Fixes #1051
[^1]: but apparently not going through the parent form...
This is a bit of an odd case which was only noticed because of
persistent forwardport.batches, which ended up having a ton of related
tracebacks in the logs (the mergebot kept trying to create forward
ports from Jan 27th to Feb 10th, thankfully the errors happened in git
so did not seem to eat through our API rate limiting).
The issue was triggered by the addition of odoo/enterprise#77876 to
odoo/odoo#194818. This triggered a completion job which led to the
creation of odoo/enterprise#77877 to odoo/enterprise#77880, so far so
good.
Except the bit of code responsible for creating completion jobs only
checked if the PR was being added to a batch with a descendant. That
is the case for odoo/enterprise#77877 to odoo/enterprise#77879 (not
odoo/enterprise#77880 because that's the end of the line). As a
result, those triggered 3 more completion jobs, which kept failing in
a loop because they tried pushing different commits to their
next-siblings (without forcing, leading git to reject the non-ff push,
hurray).
A completion request should only be triggered by the addition of a
new *source* (a PR without a source) to an existing batch with
descendants, so add that to the check. This requires updating
`_from_json` to create PRs in a single step (rather than one step to
create based on github's data, and another one for the hierarchical
tracking) as we need the source to be set during `create` not as a
post-action.
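A minimal sketch of the tightened check; model and field names follow
the mergebot's conventions but are assumptions here:

```python
def should_complete(pr, batch) -> bool:
    """Trigger a completion request only when a *source* PR (one
    without a source of its own) joins a batch with descendants;
    forward ports created by the bot must not re-trigger completion.
    """
    return not pr.source_id and bool(batch.descendants)
```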
Although there was a test which could have triggered this issue, the
test only had 3 branches so was not long enough to trigger the issue:
- Initial PR 1 on branch A merged then forward-ported to B and C.
- Sibling PR 2 added to the batch in B.
- Completed to C.
- Ending there as C(1) has no descendant batch, leading to no further
completion request.
Adding a 4th branch did surface the issue by providing space for a
new completion request from the creation of C(2). Interestingly, even
though the test harness attempts to run all triggered crons to
completion there can be misses, so the test can fail in two different
ways (both now checked for):
- there's a leftover forwardport.batch after we've created all our
forwardports
- there's an extra PR targeting D, descending from C(2)
- in almost every case there's also a traceback in the logs, which
does fail the build thanks to the `env` fixture's check
For historical reasons pretty much all tests used to use the contexts
legal/cla and ci/runbot. While there are a few tests where we need the
interactions of multiple contexts and that makes sense, on the vast
majority of tests that's just extra traffic and noise in the
test (from needing to send multiple statuses unnecessarily).
In fact on the average PR where everything passes by default we could
even remove the required statuses entirely...
Should limit the risk of cases where the fork contains outdated
versions of the reference branches and we end up with odd, outdated
bases for branches & updates, which can lead to confusing
situations.
Skipmerge creates forward-ports before the source PR is even merged.
- In a break from the norm, skipmerge will create forwardports even in
the face of conflicts.
- It will also not *detach* pull requests in case of conflicts, this
is so the source PR can be updated and the update correctly cascades
through the stack (likewise for any intermediate PR though *that*
will detach as usual).
Note that this doesn't really look at outstandings, especially as they
were recently updated, so it might need to be fixed up in case of
freakout, but I feel like that should not be too much of an issue, the
authors will just get their FW reminders earlier than usual. If that's
a hassle we can always update the reminder job to ignore forward ports
whose source is not merged I guess.
Fixes #418
Not entirely sure about just allowing any PR author to set the merge
method as it gives them a lot more control over commits (via the PR
message), and I'm uncertain about the ramifications of doing that.
However if the author of the PR is classified as an
employee (via a provisioned user linked to the github account) then
allow it. Should not be prone to abuse at least.
Fixes #1036
All `pull_request` events seem to provide the `commits` count
property. As such we can use them all to check the `squash` state even
if we don't otherwise care for the event.
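Roughly what that allows, as a hedged sketch (model and payload field
names are assumptions):

```python
def refresh_squash(env, payload: dict) -> None:
    # every pull_request event payload carries `commits`, so the
    # squash flag can be refreshed on any action, even ones we
    # otherwise ignore
    pr = env['runbot_merge.pull_requests'].search([
        ('repository.name', '=', payload['repository']['full_name']),
        ('number', '=', payload['pull_request']['number']),
    ])
    if pr:
        pr.squash = payload['pull_request']['commits'] == 1
```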
Also split out notifying an approved pull request about its missing
merge method into a separate cron from the one notifying a PR that its
siblings are ready.
Fixes #1036
The forward port process adds a uniquifier to the branch name as it's
possible to reuse branch names for different sets of PRs (and some
individuals do that systematically) which can cause collisions in the
fw branches.
Originally this uniquifier was random by necessity, however now that
batches are reified and forward port is performed batch-wise we don't
need the randomness: we can just use the source batch's id, as it's
unique per sequence of forward ports. This means it'll eventually be
possible for an external system to retrieve the source(s) of a forward
port by reading the batch object, and it's also possible to correlate
forward ports through the batch id (although not possible to find the
source set without access to the mergebot's information).
Do the same thing for backports, because why not.
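Illustrative only — the exact naming format here is an assumption,
the point is the source batch id standing in for the random token:

```python
def fw_branch_name(target: str, source_branch: str,
                   source_batch_id: int) -> str:
    # the source batch id is unique per sequence of forward ports, so
    # it can serve as the uniquifier where randomness was needed before
    return f"{target}-{source_branch}-{source_batch_id}-fw"

fw_branch_name("17.0", "fix-widget", 1234)  # '17.0-fix-widget-1234-fw'
```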
- Update `get_reviewers` to return all active users, as that makes
  more sense for the server-side operations; this is now redundant
  with `list_users`, so eventually internals should be fixed and it
  removed.
- Update `remove_users` to deactivate the user as well as move it to
  portal: since the mergebot doesn't use portal, keeping enabled
  portal users doesn't make much sense.
- Reorganise `provision_user` to make the update path more obvious.
Fixes #968
pycharm is a bit dumb so it considers `[ ]` to be unnecessary even in
`re.VERBOSE` regexes. But it has a bit of a point in that it's not
super clear, so inject the space characters by (unicode) name, the
behaviour should be the exact same but the regexes should be slightly
less opaque.
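For illustration, a toy verbose regex (not one from the codebase)
using the named escape:

```python
import re

# `\N{SPACE}` is handled by the `re` module (Python 3.8+) and
# survives re.VERBOSE (which strips bare spaces), while being rather
# more legible than `[ ]`
PATTERN = re.compile(r"""
    \d{1,2} : \d{2}      # hour:minute
    \N{SPACE}            # a single, explicitly-named space
    (?: AM | PM )
""", re.VERBOSE)

assert PATTERN.match("10:30 AM")
```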
Link stagings to the staging commits instead of the commits directly
in order to have the repository name and not need to try the commit on
every repository.
Also make the trees editable: in combination with `readonly="1"` this
does not actually make the trees editable, but it does prevent
clicking on a row from opening a dialog with the linked objects,
which is convenient if one wants to copy the commit hash.
Having the staged and merged heads via an easy to reach endpoint is
nice, but having *just* the hashes is not as it requires basically
exhaustively checking commits against repos to see which is
where. That's annoying.
Make the head serializations a map of repository to commit
hash. That's a lot more convenient.
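That is, instead of a bare list of hashes, the serialization is
shaped something like this (hashes made up):

```python
# before: a bare list of hashes; which repo does each belong to?
heads = ["dfe2e69cbb12b346e4c9e90b1445765b6ac13a5f"]

# after: a map of repository name to commit hash
heads = {
    "odoo/odoo": "dfe2e69cbb12b346e4c9e90b1445765b6ac13a5f",
    "odoo/enterprise": "a29ee01c0d31a17cbd26d7b331b4f9d29757a3a1",
}
```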
Hopefully this is the last fix to the patcher. From the start of the
implementation I relied on the idea that `git show` was adding a line
composed of a single space (and a newline) before and after the patch
message, as that is what I observed in my terminal, and it's
consistent with RFC 3676 signatures (two dashes, a space, and a
newline).
Turns out that single space, while present in my terminal indeed, was
completely made up by `less(1)`. `git show` itself doesn't generate
that, neither does it appear when using most pagers, or even when
piping the output of `less` into something (a file, an other pager,
...). It's pretty much just something `less(1)` sends to a terminal
during interactive sessions to fuck with you.
Fixes #1037
- use lingo `staged` and `merged` for clarity (instead of `heads` and
`commits`)
- convert from json-rpc to regular HTTP to make it easier to call
- add the staged and merged heads when returning staging data
- return more complete objects (rather than a list of ids) when
matching stagings based on commits / heads
Fixes #1018
The old controller system required `type='json'` which only did
JSON-RPC and prevented returning proper responses.
As of 17.0 this is not the case anymore, `type='http'` controllers can
get `content-type: application/json` requests just fine and return
whatever they want. So change that:
- Switch from `type='json'` to `type='http'`.
- Return `Response` objects with nice status codes (and text
mimetypes, as otherwise werkzeug defaults to html).
- Update ping to bypass normal flow as otherwise it requires
authentication and event sources which is annoying (it might be a
good idea to have both in order to check for configuration, however
it's not possible to just send a ping via the webhook UI on github
so not sure how useful that would be).
- Add some typing and improve response messages while at it.
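A hedged sketch of the new shape (route, names, and dispatch are
illustrative, not the actual controller):

```python
from odoo import http
from odoo.http import Response

class Webhooks(http.Controller):
    @http.route('/mergebot/hook', type='http', auth='none',
                methods=['POST'], csrf=False)
    def hook(self):
        event = http.request.httprequest.headers.get('X-Github-Event')
        if event == 'ping':
            # bypassed entirely: no auth, no event-source lookup
            return Response("pong", status=200, mimetype='text/plain')
        if event not in ('pull_request', 'status', 'issue_comment'):
            # ignored events are not *failures* as far as webhooks care
            return Response(f"ignored {event}", status=200,
                            mimetype='text/plain')
        ...  # actual processing
        return Response("ok", status=200, mimetype='text/plain')
```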
Note: some "internal" errors (e.g. ignoring event actions) are
reported as successes because as far as I can tell webhooks only
support success (2xy) / failure (4xy, 5xy) and an event that's ignored
is not really *failed* per se.
Some things are reported as failures even though they are on the edge
because that can be useful to see what's happening e.g. comment too
large or unable to lock rows.
Fixes #1019
Probably should create a mixin for this: when a model is used as a
task queue for a cron, the cron should automatically be triggered on
creation. Requiring an explicit trigger after a creation is error
prone and increases the risk that some of the triggers will be
forgotten/missed.
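Something along these lines (the mixin doesn't exist yet, and
`_cron_xmlid` is a hypothetical hook):

```python
from odoo import api, models

class CronQueueMixin(models.AbstractModel):
    _name = 'cron.queue.mixin'
    _description = "auto-trigger a cron when queue entries are created"
    _cron_xmlid = None  # each queue model points this at its cron

    @api.model_create_multi
    def create(self, vals_list):
        records = super().create(vals_list)
        if self._cron_xmlid:
            # ir.cron._trigger enqueues an immediate run of the cron
            self.env.ref(self._cron_xmlid)._trigger()
        return records
```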
Adds a very limited ability to try and look for false-positive /
non-deterministic staging errors. It tries to err on the side of
limiting false *false positives*, so it's likely to miss many.
Currently has no automation / reporting, just sets a flag on the
stagings which are strongly believed to have failed due to false
positives.
While at it, add link between a "root" staging and its splits. It's
necessary to clear the "false positive" flag, and surfacing it in the
UI could be useful one day.
Fixes #660
Rather than add individual tunnel methods to conftest, just allow
specifying a tunnel script and have that do whatever.
Protocol is uncomplicated: workers run the `$tunnel` with an arbitrary
port, script should create a tunnel to `localhost:$port`, print the
ingress of the tunnel to `STDOUT` with a terminating newline then
close `STDOUT`, and wait for `SIGINT` or `SIGTERM`, doing whatever
cleanup they need when receiving either of those.
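For instance, a minimal adapter implementing the protocol (the
`some-tunnel` CLI and its flags are made up):

```python
#!/usr/bin/env python3
import signal
import subprocess
import sys

port = sys.argv[1]
tunnel = subprocess.Popen(
    ['some-tunnel', '--to', f'localhost:{port}'],  # hypothetical CLI
    stdout=subprocess.PIPE, text=True,
)
# print the ingress with a terminating newline, then close STDOUT
print(tunnel.stdout.readline().strip(), flush=True)
sys.stdout.close()
# block until SIGINT or SIGTERM, then tear the tunnel down
signal.sigwait({signal.SIGINT, signal.SIGTERM})
tunnel.terminate()
tunnel.wait()
```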
`ngrok` and `localtunnel` adapter scripts are provided out of the box,
though the ngrok one doesn't *really* work when using xdist without a
pre-spawned ngrok worker. Then again using xdist against github actual
is suicidal (because concurrency limits + rate limits) so likely
irrelevant.
Fixes #729
Requires parsing not just the commit messages (github plain doesn't
collate the info from there) but also the descriptions: apparently
github only adds those to the references if the PR targets the
default branch. That's not a limitation we want.
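The sort of parsing involved, sketched (github's actual keyword list
is longer; this regex is illustrative):

```python
import re

# subset of github's closing keywords: close(s/d), fix(es/ed),
# resolve(s/d), followed by a repository-local issue reference
LINKS = re.compile(
    r'\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)',
    re.IGNORECASE,
)

def linked_issues(message: str) -> set[int]:
    return {int(n) for n in LINKS.findall(message)}

assert linked_issues("Fixes #777 and closes #778") == {777, 778}
```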
Meaning at the end of the day, the only thing we're calling graphql
for is explicit manual linking, which can't be tested without
extending the github API (and then would only be testable on DC), and
which I'm not sure anyone bothers with...
Limitations
-----------
- Links are retrieved on staging (so if one is added later it won't be
taken in account).
- Only repository-local links are handled, not cross-repository.
Fixes #777
In that case, ignore the reopen, close the PR, and tell the idiot to
fuck off.
Also the case where a PR is reopened while its batch is staged was
already handled, but probably not tested: it was implicit in
forcefully updating the HEAD of the PR, which triggers an unstage
since c8a06601a7.
Now that scenario is tested, which should lower the odds of breaking
it in the future.
Fixes #965
The `to_pr` helper was added a *while* ago to replace the pretty
verbose process of looking up a PR with the right number in the right
repository after a `make_pr`. However while I did some ad-hoc
replacement of existing `search` calls as I had to work with existing
tests I never did a full search and replace to try and excise searches
from the test suite.
This is now done. I've undoubtedly missed a few, but a hundred-odd
lines of code have been simplified.
BS5 namespaced the data-attributes it uses to trigger JS
behaviours. So for dropdowns `@data-toggle` doesn't do anything
anymore, one has to use `@data-bs-toggle`. Missed that while testing
the migrations.
Also seems like `@aria-expanded` was misapplied when I added the
dropdowns:
> When a menu is displayed, the button object that toggles the
> visibility of that menu has aria-expanded="true" set. When the menu
> is hidden, aria-expanded can be omitted. If specified when the menu
> is hidden, it should be set as aria-expanded="false".
Since the dropdowns are hidden by default, the button should be
`@aria-expanded="false"`.
As the number of projects is starting to grow pretty large, provide
quick links and stable jump targets for projects on the home page /
main dashboard.
Fixes #991
- don't *fail* in `_compute_identity`, it causes issues when the token
is valid but doesn't have `user:email` access as the request is
aborted and saving doesn't work
- make `github_name` and `github_email` required rather than ad-hoc
requiring them in `_compute_identity` (which doesn't work correctly)
- force copy of `github_name` and `github_email`, with o2ms being
!copy this means duplicating projects now works out of the box (or
should...)
Currently errors in `_compute_identity` are reported via logging,
which is not great as it's not visible in the UI; this should
probably get moved to the chatter eventually but that's not currently
enabled on projects.
Fixes #990
Show patch metadata on the patch screen, so it's easier to understand
what the parser sees in case of issues.
Behaviour is not *entirely* consistent: `file_ids` is correctly set
but it looks like during the initial `web_read` it gets stripped out
in at least some cases, and the files list is empty even though files
have been found in the patch. nm.
Fixes #987
- Apparently if a user is on windows the ACE editor can swap out
  their line ends from unix to windows. The patch parsers were
  predicated upon all patches being in unix mode (because git, and
  diff).
  Fix up both parsers to convert windows-style line ends to unix
  before trying to parse the patch data. Also add a few fallbacks to
  limit
the odds of an unhelpful `StopIteration` (though that might hide
errors more than reveal them...)
- Make sure we support `format-patch --no-signature`, just requires
using the correct partition direction: I assume I used `rpartition`
as a form of micro-optimisation *but*
- If the separator is not found the "patch body" ends up in the
third parameter rather than the first, which makes the fallback
difficult.
- There doesn't seem to be anything preventing *multiple* signature
separators in a message, and logically the first one should hold
and the rest is all part of the signature.
As a result, for both reasons we need to look *forwards* for the
signature separator, not backwards. Hence `str.partition`.
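Concretely, the difference (toy helper, not the actual parser):

```python
SIGNATURE_SEP = "\n-- \n"  # RFC 3676: two dashes, a space, a newline

def split_message(message: str) -> tuple[str, str]:
    # partition looks *forwards*: with no separator the body lands in
    # the first slot, and with several separators the first one wins.
    # rpartition would do the opposite on both counts.
    body, sep, signature = message.partition(SIGNATURE_SEP)
    return (body, signature) if sep else (message, "")

assert split_message("no signature here") == ("no signature here", "")
```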
Fixes #992
Turns out to not work well in 17.0, and after consideration moc hasn't
really used the auto-update feature (or the required-prs gate in
general to be honest), usually he knows what PRs he's waiting for and
only validates once he's confirmed every which way.
So it's probably not worth fixing the thing. According to jpp, this
should probably use something based on bus subscriptions to update
just the field (though tbf the `root.update` call doesn't really seem
to be "deep" anymore, so in reality rather than update the *form*'s
record I should probably have tried reloading the required_pr_ids
records to fetch the new color).
Closes #997
Basically the next part of aa1df22657,
which requires replacing @attrs by the corresponding attribute &
python predicates: new attrs have been added to 15.0 since.
Because of tracking, `test_patch_acl` needs access to message subtypes
during patch creation. If the user *only* has
`runbot_merge.group_patcher` or `runbot_merge.group_admin` they don't
have any of the "core" user groups (public, portal, internal) and thus
don't have access to mail subtypes.
Fix that by having the runbot_merge groups imply being an internal
user.
`get_content` round-trips the text part through `ascii` with
`error=replace`, so if the input is not ascii it screws up
tremendously, which leads to either failing to apply patches (the more
likely situation) or corrupting the patches.
`get_payload`, if called without `decode`, pretty much just returns
the payload unless it needs a decoding pass (e.g. because it contains
raw surrogates, but that should not be an issue for us). So this is
really what we want.
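A toy illustration (the mangling `get_content` would do is as
described above, restated in the comment):

```python
from email import message_from_string

msg = message_from_string(
    "Content-Type: text/plain; charset=utf-8\n"
    "\n"
    "Patch touches fichier_événement.py\n"
)
# get_payload() without decode hands the text back as-is, whereas
# get_content() would round-trip through ascii with errors=replace
# and mangle the non-ascii file name
assert "événement" in msg.get_payload()
```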
While at it, increase `patch`'s verbosity in case it can give us more
info.
If a branch is created via SQL directly, one might forget to set its
`write_date`, leading the computation of the `last_modified` to be
*really* unhappy as it's asked to compare a bunch of datetime objects
to `False` in order to compute the `max`.
Substitute missing `write_date` with `datetime.min` so they are
extremely unlikely to influence the computation until and unless the
branch gets updated.
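i.e. something like (sketch, not the exact compute):

```python
from datetime import datetime

def last_modified(records) -> datetime:
    # records created through raw SQL may lack a write_date;
    # substitute datetime.min so they (almost) never win the max()
    return max(r.write_date or datetime.min for r in records)
```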
If a comment causes an unknown PR to be fetched, it's a bit odd to
ping the author (and possibly reviewer) anyway as they're not super
concerned (and technically we could be ignoring the purported /
attempted reviewer).
So if a fetch job was created because of a comment, remember the
comment author and ping *them* instead of using the default ping
policy.
Fixes #981
If staging gets re-enabled on a branch (or the branch itself gets
re-enabled), immediately run a staging cron as there may already be
PRs waiting and no trigger enqueued: cron triggers have no payload,
they just get removed when the cron runs. That means if a bunch of
PRs become ready for branch B while staging is disabled, the cron
runs, stages nothing on that branch (because staging is disabled),
then deletes all the triggers.
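A hedged sketch of the fix (model, field, and cron xmlid are
assumptions):

```python
from odoo import models

class Branch(models.Model):
    _inherit = 'runbot_merge.branch'  # model name assumed

    def write(self, vals):
        res = super().write(vals)
        # if staging (or the branch itself) was just re-enabled, run
        # the staging cron now: waiting PRs left no trigger behind
        if vals.get('staging_enabled') or vals.get('active'):
            self.env.ref('runbot_merge.staging_cron')._trigger()
        return res
```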
Fixes #979
Turns out you can configure format-patch with `--no-prefix` and some
people (*cough cough* mat) have that in their standard setup, so the
assumption of needing to strip 1 level of prefix does not necessarily
hold.
Also fix a few more issues:
- some people (*cough cough* still mat) also use `-n` by default,
  which adds the series sequence (`n/m`) even for a single patch;
  handle that correctly (see the sketch after this list)
- logging patch application errors is pretty useful when patching
fails and I'm trying to get the information via logs, do that
- especially when I decide to add error messages to tracking *but
forgot to show the chatter by default*, fix that as well
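For the `-n` case, roughly this (toy helper, not the actual parser):

```python
import re

# strip the `[PATCH]` / `[PATCH n/m]` prefix that `format-patch -n`
# adds even to a single patch
SUBJECT = re.compile(r'^\[PATCH(?:\s+\d+/\d+)?\]\s*(.*)$')

def clean_subject(subject: str) -> str:
    m = SUBJECT.match(subject)
    return m.group(1) if m else subject

assert clean_subject("[PATCH 1/1] fix the thing") == "fix the thing"
```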
The commit-based patcher worked first try, and patch-based would have
worked too if not for those meddling kids. In the future it might be a
good idea to reify the stripping level (`-p`) on the patch object
though, and maybe provide a computed preview of the list of files to
patch, so issues are easier for the operator to diagnose.
- fix incorrect view specs (the action id comes first)
- add a wizard form and hook it into the PR, completely forgot to do
that
- usability improvements: filter branches to be in the same project as
the PR being backported, and older than the current PR's branch
The latter is a somewhat incomplete condition: ideally we'd want to
only allow selecting branches preceding the target of the *source* of
the PR being backported, that way we don't risk errors when
backporting forward-ports (the condition should be checked in the
final action but still).
Also we're only filtering by sequence, so we're missing the name part
of the ordering, hence if multiple branches have the same sequence we
may not allow selecting some of the preceding branches.
pymarkdown's footnotes plugin *saves footnotes across invocations by
default*. Even if I understand the documented use case it seems wild
that it's not opt-in...
Anyway, disable that by resetting all internal state. Thanks rfr for
the initial report that things were looking odd.
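Assuming the Python-Markdown package, the reset looks something like:

```python
import markdown

MD = markdown.Markdown(extensions=['footnotes'])

def render(text: str) -> str:
    try:
        return MD.convert(text)
    finally:
        # footnote (and other extension) state survives across
        # convert() calls unless the instance is explicitly reset
        MD.reset()
```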