"Run manually" is a bit meh, as it runs the cron synchronously (so you
have to wait for it, and hope it doesn't run for longer than the
request timeout which may be smaller than the cron timeout) and it can
run in a subtly different environment than normal, which can lead to
different behaviour.
Instead add a button to enqueue a cron trigger, now that they exist
that's much closer to what we actually want, and it does run the cron
in a normal / expected environment.
This can yield odd effects when playing around e.g. opening freeze
wizards then cancelling them. There is a check that there are no
wizards left, but if a freeze is performed then another is created
and cancelled, it will immediately re-enable forward ports,
unexpectedly.
Since forever, if a PR gets added to an existing forward ported batch
it will be forward ported immediately, leading to a forwardport having
a source with no merge date.
Since the addition of the skipmerge feature, it's even more likely
than before.
As a result, the `outstanding` listing should not assume the
merge_date of a source is always set.
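A minimal sketch of the guard (the record shapes are made up for illustration, the real listing works on ORM records):

```python
from datetime import datetime

# With skipmerge, or a PR added to an already forward-ported batch, a
# forwardport's source may have no merge date yet.
sources = [
    {"id": 1, "merge_date": datetime(2025, 1, 1)},
    {"id": 2, "merge_date": None},  # source not merged yet
]

def is_outstanding(source, cutoff):
    """Only sources which *have* a merge date older than the cutoff count;
    unmerged sources are skipped instead of blowing up on a None compare."""
    return source["merge_date"] is not None and source["merge_date"] < cutoff

outstanding = [s["id"] for s in sources if is_outstanding(s, datetime(2025, 6, 1))]
```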
Hit issues where this didn't seem to work after deactivating 17.2 and
trying to recover from the autoport being broken (see previous
commits), but apparently this says everything works just fine so
╮( ̄▽ ̄"")╭
Fixes #1052
In the case where multiple batch completions are triggered (see 3
commits earlier) one of the issues is that the "already ported" check
is incorrect, as it checks the (source, target) pair against the PR
used to trigger the completion check. That PR *should* be the one PR
which was just added to a batch, but if one of its descendants is used
to re-trigger a port then the dupe check fails and we try to port the
thing again.
Remember to deref' the subject's source in order to perform the dupe
check, as well as set the source for the forwardport we create.
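The fixed dupe check boils down to something like this (names and record shapes are hypothetical, for illustration):

```python
def already_ported(existing_prs, subject, target):
    """The dupe check must key on the subject's *root source*, not on the
    subject itself, since a descendant may be the PR which re-triggered
    the completion check."""
    source = subject["source_id"] or subject["id"]
    return any(
        p["source_id"] == source and p["target"] == target
        for p in existing_prs
    )

ports = [{"id": 11, "source_id": 10, "target": "17.0"}]
# a descendant (source_id=10) re-triggering the check must still match
descendant = {"id": 11, "source_id": 10}
```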
Fixes #1052
When an fw batch fails, log a message to its chatter (so the reason
for the failure doesn't necessarily have to be hunted down in the
logs, although depending on the quality of the error that might still
be an issue).
Also if a batch keeps failing (fails for more than a day at a retry
every hour -- increased from a previous 30 minutes), stop retrying it
and flag it in the list view, as it's clearly not going to go through.
This is because we hit an issue with a completion fw batch created on
January 27th which kept failing until it was deleted on February
10th. Thankfully it failed on `git push` and git operations apparently
are not rate limited at all, but still it's not great stewardship to
keep trying a forwardport which keeps failing. Retrying in case of
transient failure makes sense, but after 24 attempts over a day it's
either not transient, or it's not working because github is down and
hammering it won't help.
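The cutoff logic can be sketched as follows (constants match the message, the function name is made up):

```python
from datetime import datetime, timedelta

RETRY_INTERVAL = timedelta(hours=1)   # was 30 minutes
GIVE_UP_AFTER = timedelta(days=1)     # ~24 attempts at one per hour

def should_retry(first_failure, now):
    """After a day of hourly failures the problem is either not transient
    or github is down, and hammering it won't help; flag the batch instead."""
    return now - first_failure < GIVE_UP_AFTER

start = datetime(2025, 1, 27, 9, 0)
```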
That's a bit of a weird one: apparently the boolean_toggle widget has
an `autosave` option which should be `true` by default, saving the
row as soon as the toggle is toggled[^1]. But in 15.0 and 18.0 it
seems to have no effect: the `boolean_toggle` always just stores the
change in the parent form and that gets committed on save.
In 16.0 and 17.0 however it does have an effect, toggling the control
will immediately save its value *without going through the parent
form*, resulting in the override to `Project.write` managing
new/existing branches to not be called, thus not calling
`Project_followup_prs`, and ultimately not creating the followup
forward ports.
After contacting AAB to get more info (and grepping a bit):
- autosave was added (enabled by default) in 16.0 after the owl
rewrite (odoo/odoo@28e6b7eb83)
- toggle was added in 17.0
(odoo/odoo@a449b05221)
- feature was removed in 18.0
(odoo/odoo@6bd2c1fdfb)
Which explains why the issue occurs in 16.0 and 17.0, and not in 15.0
or 18.0.
Fixes #1051
[^1]: but apparently not going through the parent form...
This is a bit of an odd case which was only noticed because of
persistent forwardport.batches, which ended up having a ton of related
tracebacks in the logs (the mergebot kept trying to create forward
ports from Jan 27th to Feb 10th, thankfully the errors happened in git
so did not seem to eat through our API rate limiting).
The issue was triggered by the addition of odoo/enterprise#77876 to
odoo/odoo#194818. This triggered a completion job which led to the
creation of odoo/enterprise#77877 to odoo/enterprise#77880, so far so
good.
Except the bit of code responsible for creating completion jobs only
checked if the PR was being added to a batch with a descendant. That
is the case of odoo/enterprise#77877 to odoo/enterprise#77879 (not
odoo/enterprise#77880 because that's the end of the line). As a
result, those triggered 3 more completion jobs, which kept failing in
a loop because they tried pushing different commits to their
next-siblings (without forcing, leading git to reject the non-ff push,
hurray).
A completion request should only be triggered by the addition of a
new *source* (a PR without a source) to an existing batch with
descendants, so add that to the check. This requires updating
`_from_json` to create PRs in a single step (rather than one step to
create based on github's data, and another for the hierarchical
tracking) as we need the source to be set during `create` not as a
post-action.
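The corrected trigger condition reduces to something like this (simplified, names hypothetical):

```python
def should_trigger_completion(pr, batch_has_descendants):
    """Only the addition of a new *source* (a PR with no source of its own)
    to a batch with descendants warrants a completion job; forwardports
    themselves must not re-trigger one, or each descendant spawns another
    completion job in a loop."""
    return pr["source_id"] is None and batch_has_descendants
```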
Although there was a test which could have covered this issue, the
test only had 3 branches so was not long enough to trigger it:
- Initial PR 1 on branch A merged then forward-ported to B and C.
- Sibling PR 2 added to the batch in B.
- Completed to C.
- Ending there as C(1) has no descendant batch, leading to no further
completion request.
Adding a 4th branch did surface the issue by providing space for a
new completion request from the creation of C(2). Interestingly,
even though the test harness attempts to run all triggered crons to
completion there can be misses, so the test can fail in two different
ways (both now checked for):
- there's a leftover forwardport.batch after we've created all our
forwardports
- there's an extra PR targeting D, descending from C(2)
- in almost every case there's also a traceback in the logs, which
does fail the build thanks to the `env` fixture's check
Looks like `--force-with-lease` can work in multi-head updates, and
this should allow just one push per repository for the entire sequence
of updates, then commit the entire thing if it succeeds.
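A sketch of what such a single multi-head push could look like, built as a command line (the tuple shape is an assumption):

```python
def push_command(remote, updates):
    """Build a single `git push` for a whole sequence of forwardport branch
    updates (updates: list of (branch, expected_old_sha, new_sha)). Each
    ref gets its own --force-with-lease=<ref>:<expected> guard, so the push
    only goes through if the remote branches are where we last saw them."""
    cmd = ["git", "push"]
    for branch, old, _new in updates:
        cmd.append(f"--force-with-lease=refs/heads/{branch}:{old}")
    cmd.append(remote)
    cmd.extend(f"{new}:refs/heads/{branch}" for branch, _old, new in updates)
    return cmd
```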
For historical reasons pretty much all tests used to use the contexts
legal/cla and ci/runbot. While there are a few tests where we need the
interactions of multiple contexts and that makes sense, on the vast
majority of tests that's just extra traffic and noise in the
test (from needing to send multiple statuses unnecessarily).
In fact on the average PR where everything passes by default we could
even remove the required statuses entirely...
Should limit the risk of cases where the fork contains outdated
versions of the reference branches and we end up with an odd,
out-of-date basis for branches & updates, which can lead to
confusing situations.
- Use f-strings where appropriate.
- Get the repo url from github instead of composing it by hand
- Resolve URI template for the contents endpoint instead of
hand-building the URL. URI template implementation is janky AF.
Since the tunnel value is now a path to a script to run (and get the
public facing address out of), it's pretty easy to give the path to
the wrong program e.g. `--tunnel ngrok` will run ngrok itself, which
is going to yield an error message and do nothing, and then break when
we try to use the nonsensical result (e.g. an empty string or an error
message) as a URL.
So assert that the result does parse as a URL-ish with at least a
scheme and netloc.
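The validation can be as simple as (function name hypothetical):

```python
from urllib.parse import urlparse

def validate_tunnel(output):
    """The tunnel script's output must look like a URL: at minimum a scheme
    and a netloc. Running the wrong program (e.g. `--tunnel ngrok` running
    ngrok itself) yields an empty string or an error message, which fails
    this check early instead of breaking later."""
    url = urlparse(output.strip())
    if not (url.scheme and url.netloc):
        raise ValueError(f"tunnel script did not output a URL: {output!r}")
    return output.strip()
```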
Skipmerge creates forward-ports before the source PR is even merged.
- In a break from the norm, skipmerge will create forwardports even in
the face of conflicts.
- It will also not *detach* pull requests in case of conflicts, this
is so the source PR can be updated and the update correctly cascades
through the stack (likewise for any intermediate PR though *that*
will detach as usual).
Note that this doesn't really look at outstandings, especially as they
were recently updated, so it might need to be fixed up in case of
freakout, but I feel like that should not be too much of an issue, the
authors will just get their FW reminders earlier than usual. If that's
a hassle we can always update the reminder job to ignore forward ports
whose source is not merged I guess.
Fixes #418
There should only be 1 PR with a given parent at most anyway, but
still if it's a boolean we can stop at the first occurrence if any. pg
might even be able to do something more efficient in that case.
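In plain Python terms, the equivalent of stopping at the first occurrence:

```python
def has_descendant(prs, parent_id):
    """Existence check instead of a count: any() stops at the first match,
    much like a SQL EXISTS or an ORM search(..., limit=1)."""
    return any(p["parent_id"] == parent_id for p in prs)
```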
Not entirely sure about just allowing any PR author to set the merge
method as it gives them a lot more control over commits (via the PR
message), and I'm uncertain about the ramifications of doing that.
However if the author of the PR is classified as an
employee (via a provisioned user linked to the github account) then
allow it. Should not be prone to abuse at least.
Fixes #1036
All `pull_request` events seem to provide the `commits` count
property. As such we can use them all to check the `squash` state even
if we don't otherwise care for the event.
Also split out notifying an approved pull request about its missing
merge method into a separate cron from the one notifying a PR that its
siblings are ready.
Fixes #1036
The forward port process adds a uniquifier to the branch name as it's
possible to reuse branch names for different sets of PRs (and some
individuals do that systematically) which can cause collisions in the
fw branches.
Originally this uniquifier was random by necessity, however now that
batches are reified and forward port is performed batch-wise we don't
need the randomness we can just use the source batch's id as it's
unique per sequence of forward ports. This means it'll eventually be
possible for an external system to retrieve the source(s) of a forward
port by reading the batch object, and it's also possible to correlate
forward ports through the batch id (although not possible to find the
source set without access to the mergebot's information).
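As an illustration only -- the actual naming scheme isn't spelled out here -- a deterministic uniquifier based on the batch id could look like:

```python
def fw_branch_name(target_branch, source_branch, batch_id):
    """Hypothetical naming scheme: the source batch's id replaces the old
    random uniquifier. It is unique per sequence of forward ports, so the
    name is deterministic and siblings can be correlated through it."""
    return f"{target_branch}-{source_branch}-{batch_id}-fw"
```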
Do the same thing for backports, because why not.
- Update `get_reviewers` to return all active users, as that makes
more sense for the server-side operations. This is now redundant
with `list_users`, so eventually internals should be fixed and it
removed.
- Update `remove_users` to deactivate the user as well as move it to
portal: since the mergebot doesn't use portal, keeping enabled portal
users doesn't make much sense.
- Reorganise `provision_user` to make the update path more obvious.
Fixes #968
The API doesn't correctly match the server-side `Environment`/`Model`
API, but neither did `with_user`, so probably fine for now, can
revisit this thing if I start needing it more than once in a blue
moon.
After 6 months of sitting unmerged, forward ports will now trigger a
monthly email.
Or would, if an email server was configured on the mergebot, but
adding this now should allow debugging its working and tracking the
generation of emails through the "Delivery Failed" collection. This
way if we ever decide to actually do the thing we can just enable an
SMTP server.
Fixes #801
Previously the alert would show the total number of outstanding
forward ports, which most people don't care about (if they're even
logged in, which probably isn't the case), so that was mostly useful
for... me.
Add the current user's tally to the message as well. If the user is
logged in. Which is always, because the alert only shows up if the
user has read access to PRs.
Fixes #801
In the case where both author and reviewer of a PR have the same
team (which is very common e.g. self review or a team member approving
your PR) then the `outstanding_per_group` entry would be incremented
twice, leading to overcounting outstanding forward ports for the team.
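The de-duplicated tally can be sketched as (data shapes made up):

```python
from collections import Counter

def outstanding_per_group(prs, teams_of):
    """Collect the author's and reviewer's teams into a *set* before
    incrementing, so a PR where both belong to the same team (self-review,
    teammate approval) is counted once per team, not twice."""
    tally = Counter()
    for pr in prs:
        for team in set(teams_of[pr["author"]]) | set(teams_of[pr["reviewer"]]):
            tally[team] += 1
    return tally

teams = {"alice": ["R&D"], "bob": ["R&D"]}
counts = outstanding_per_group([{"author": "alice", "reviewer": "bob"}], teams)
```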
Fixes #801
pycharm is a bit dumb so it considers `[ ]` to be unnecessary even in
`re.VERBOSE` regexes. But it has a bit of a point in that it's not
super clear, so inject the space characters by (unicode) name, the
behaviour should be the exact same but the regexes should be slightly
less opaque.
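For example, the behaviour is identical but the space is now spelled out:

```python
import re

# Literal whitespace is ignored in a VERBOSE pattern, so spaces have to be
# escaped somehow; spelling them by unicode name is less opaque than `[ ]`.
SPACED = re.compile(r"""
    forward \N{SPACE} port
""", re.VERBOSE)
```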
- only remind weekly initially (not daily)
- root reminders on the forward port's creation, not the source's
merge date
- cap reminder interval at 4 weeks (instead of doubling every time)
- track reminders on forward ports, don't share between siblings
- remove `forwardport_updated_before` from the testing system, it's
now possible to just update `reminder_next` to a past date and see
if it gets triggered (it should do nothing on sources, or on forward
ports in a state which does not warrant concern)
Fixes #801
Link stagings to the staging commits instead of the commits directly
in order to have the repository name and not need to try the commit on
every repository.
Also make the trees editable: in combination with `readonly="1"` this
does not actually make them editable, but it does prevent a click on
a row from opening a dialog with the linked objects, which is
convenient if one wants to copy the commit hash.
Having the staged and merged heads via an easy to reach endpoint is
nice, but having *just* the hashes is not as it requires basically
exhaustively checking commits against repos to see which is
where. That's annoying.
Make the head serializations a map of repository to commit
hash. That's a lot more convenient.
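Illustration with made-up repository names and hashes:

```python
import json

# A map of repository to commit hash is directly usable, unlike a bare
# list of hashes which must be checked against every repository to find
# where each commit lives.
staging = {
    "staged": {"org/server": "1111111", "org/addons": "2222222"},
    "merged": {"org/server": "3333333", "org/addons": "4444444"},
}
payload = json.loads(json.dumps(staging))
```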
Hopefully this is the last fix to the patcher. From the start of the
implementation I relied on the idea that `git show` was adding a line
composed of a single space (and a newline) before and after the patch
message, as that is what I observed in my terminal, and it's
consistent with RFC 3676 signatures (two dashes, a space, and a
newline).
Turns out that single space, while present in my terminal indeed, was
completely made up by `less(1)`. `git show` itself doesn't generate
that, neither does it appear when using most pagers, or even when
piping the output of `less` into something (a file, another pager,
...). It's pretty much just something `less(1)` sends to a terminal
during interactive sessions to fuck with you.
Fixes #1037
- use lingo `staged` and `merged` for clarity (instead of `heads` and
`commits`)
- convert from json-rpc to regular HTTP to make it easier to call
- add the staged and merged heads when returning staging data
- return more complete objects (rather than a list of ids) when
matching stagings based on commits / heads
Fixes #1018
The old controller system required `type='json'` which only did
JSON-RPC and prevented returning proper responses.
As of 17.0 this is not the case anymore, `type='http'` controllers can
get `content-type: application/json` requests just fine and return
whatever they want. So change that:
- Drop `type='json'`.
- Return `Response` objects with nice status codes (and text
mimetypes, as otherwise werkzeug defaults to html).
- Update ping to bypass normal flow as otherwise it requires
authentication and event sources which is annoying (it might be a
good idea to have both in order to check for configuration, however
it's not possible to just send a ping via the webhook UI on github
so not sure how useful that would be).
- Add some typing and improve response messages while at it.
Note: some "internal" errors (e.g. ignoring event actions) are
reported as successes because as far as I can tell webhooks only
support success (2xx) / failure (4xx, 5xx) and an event that's ignored
is not really *failed* per se.
Some things are reported as failures even though they are on the edge
because that can be useful to see what's happening e.g. comment too
large or unable to lock rows.
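A hypothetical outcome-to-status mapping along those lines (the specific codes are illustrative, not the actual ones):

```python
def webhook_status(outcome: str) -> int:
    """GitHub webhooks only distinguish success (2xx) from failure
    (4xx/5xx), so an ignored event action is still reported as a success,
    while noteworthy edge cases are reported as failures to make them
    visible in the webhook UI. The mapping below is made up."""
    if outcome in ("handled", "ignored"):
        return 200
    if outcome == "comment too large":
        return 400
    return 503  # e.g. unable to lock rows
```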
Fixes #1019
Probably should create a mixin for this: when a model is used as a
task queue for a cron, the cron should automatically be triggered on
creation. Requiring an explicit trigger after a creation is error
prone and increases the risk that some of the triggers will be
forgotten/missed.
Most of the repos we create we want to hook into the odoo instance,
but in some rare cases we want repositories for just push/pull, in
that case webhooks configuration just creates log noise (as the events
get rejected on intake).
- add support for reading commit entries
- add support for reading tree entries in non-recursive mode
Previously the helper would only return blob entries, which does not
allow testing submodule support.
Seems like remnants of old debugging of the rate limiting, and can
create odd situations when it fires as it replaces `sys.stdout` with
an in-memory buffer, which doesn't have a `fileno()`, which breaks
`server._move`.
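The breakage is easy to reproduce: an in-memory buffer standing in for `sys.stdout` has no file descriptor, so anything calling `fileno()` on it (as `server._move` does) blows up:

```python
import io

# StringIO has no underlying OS-level file, so fileno() raises instead of
# returning a descriptor.
buf = io.StringIO()
try:
    buf.fileno()
    has_fileno = True
except io.UnsupportedOperation:
    has_fileno = False
```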