Commit Graph

39 Commits

Author SHA1 Message Date
Xavier Morel
3fe29ba8f6 [IMP] forwardport: batch list
Since b45ecf08f9 forwardport batches
which fail have a delay set in order to avoid spamming. However that
delay was not displayed anywhere, which made things confusing as the
batch would not get run even after creating new triggers.

Show the delay if it's set (to a value later than now), as a relative
delta for clarity (as normally the delay is in minutes, so a full-blown
date is difficult to read / apprehend), and allow viewing and setting
it in the form view.
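
As a rough illustration of the relative-delta display (the helper below
is made up for this note, not the mergebot's actual code):

    from datetime import datetime
    from typing import Optional

    def format_delay(retry_at: datetime, now: datetime) -> Optional[str]:
        # only display a delay that is still in the future
        if retry_at <= now:
            return None
        minutes = max(1, round((retry_at - now).total_seconds() / 60))
        return f"retrying in ~{minutes} minute{'s' if minutes != 1 else ''}"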

Fixes #982
2024-11-18 14:18:25 +01:00
Xavier Morel
cf4d162907 [ADD] *: PR backport wizard
This is not a full user-driven backport thingie for now, just one
admins can use to facilitate things and debug issues with the
system. May eventually graduate to a frontend feature.

Fixes #925
2024-10-22 11:41:58 +02:00
Xavier Morel
10ca096d86 [FIX] forwardport: cron create prototype
Apparently when I added these to trigger the corresponding cron, the
lack of decorator caused them to not work at all, at least when
invoked from the interface. Consequently I notably wasn't able to
force a forward port by creating one such task when trying to work
around #953
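
The shape of the fix is roughly the following (model name, field and
cron XML id are invented for this sketch; assumes a recent Odoo where
`ir.cron._trigger` exists):

    from odoo import api, fields, models

    class ForwardPortTask(models.Model):
        _name = 'forwardport.task.sketch'
        _description = "illustrative work-queue item"

        pr_id = fields.Integer()

        @api.model_create_multi  # the missing decorator
        def create(self, vals_list):
            records = super().create(vals_list)
            # enqueue the processing cron as soon as an item is created
            self.env.ref('forwardport.cron_port_forward_sketch')._trigger()
            return records
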
2024-09-16 12:46:44 +02:00
Xavier Morel
029957dbeb [IMP] *: trigger-ify task queue type crons
These are pretty straightforward to convert: an item is added to a
work queue (table), then a cron regularly scans through the table,
executing the items and deleting them.

That means the cron trigger can just be added on `create` and things
should work out fine.

There are just two wrinkles in the port_forward cron:

- It can be requeued in the future, so it needs conditional triggering
  in `write`.
- It is disabled during freeze (maybe something to change); as a
  result triggers don't enqueue at all, so we need to trigger it
  immediately after the freeze re-enables the cron.
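
A hedged sketch of the requeue case and the post-freeze kick (model,
field and cron names are illustrative):

    from odoo import fields, models

    class PortForwardQueue(models.Model):
        _name = 'forwardport.queue.sketch'
        _description = "illustrative port_forward queue item"

        retry_after = fields.Datetime()

        def write(self, vals):
            res = super().write(vals)
            if vals.get('retry_after'):
                # requeued in the future: schedule the cron for that time
                self.env.ref('forwardport.cron_port_forward_sketch')._trigger(
                    fields.Datetime.to_datetime(vals['retry_after']))
            return res

    # after un-freezing, nothing was enqueued while the cron was disabled,
    # so it has to be kicked by hand:
    #   cron = env.ref('forwardport.cron_port_forward_sketch')
    #   cron.active = True
    #   cron._trigger()
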
2024-08-02 15:14:50 +02:00
Xavier Morel
dd17730f4c [IMP] *: crons tests running for better triggered compatibility
Mergebot / forwardport crons need to run in a specific order to flow
into one another correctly. The default ordering being unspecified, it
was not possible to use the normal cron
runner (instead of the external driver running crons in sequence one
at a time). This can be fixed by setting *sequences* on crons, as the
cron runner (`_process_jobs`) will use that order to acquire and run
crons.

Also override `_process_jobs`, however: the built-in cron runner
fetches a static list of ready crons, then runs that.

This is fine for the normal situation where the cron runner runs in a
loop anyway, but it's an issue for the tests, as we expect that cron A
can trigger cron B, and we want cron B to run *right now* even if it
hadn't been triggered before cron A ran.

We can replace `_process_job` with a cut-down version which does
that (cut down because we don't need most of the error handling /
resilience: there are no concurrent workers, there's no module being
installed, versions must match, ...). This allows e.g. the cron
propagating commit statuses to trigger the staging cron, and both will
run within the same `run_crons` session.
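
A simplified sketch of what such a test cron runner can look like (an
approximation rather than the actual `_process_job` override; assumes
Odoo 15+ where `ir.cron.trigger` and `method_direct_trigger` exist,
and the `sequence` ordering is the one this commit sets up):

    from odoo import fields

    def run_crons(env, max_passes=10):
        Trigger = env['ir.cron.trigger'].sudo()
        for _ in range(max_passes):
            pending = Trigger.search([('call_at', '<=', fields.Datetime.now())])
            if not pending:
                break
            # run the triggered crons in their configured order, so e.g. the
            # statuses cron runs before the staging cron it may itself trigger
            for cron in pending.mapped('cron_id').sorted('sequence'):
                cron.method_direct_trigger()
            # consume the triggers we just serviced; crons triggered *during*
            # this pass are picked up by the next iteration
            pending.unlink()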

Something I didn't touch is that `_process_jobs` internally creates
completely new environments, so there is no way to pass context into
the cron jobs anymore (whereas it works for `method_direct_trigger`);
this means the context values have to be shunted elsewhere for that
purpose, which is gross. But even though I'm replacing `_process_jobs`,
changing that seems a bit too much of a change in cron execution
semantics, so I left it out.

While at it though, silence the spammy `py.warnings` stuff I can't do
much about.
2024-08-02 09:00:34 +02:00
Xavier Morel
b1d3278de1 [CHG] forwardport: perform forward porting without working copies
The goal is to reduce maintenance and odd disk interactions &
concurrency issues, by not creating concurrent clones, not having to
push forks back into the repository, etc. It also removes the need to
clean up "scratch" working copies, though that does not look to have
been an issue in a while.

The work is done on isolated objects without using or mutating refs,
so even concurrent work should not be a problem.

This turns out to not be any more verbose (less so if anything) than
using `cherry-pick`, as that is not really designed for scripted /
non-interactive use, or for squashing commits thereafter. Working
directly with trees and commits is quite a bit cleaner even without a
ton of helpers.

Much of the credit goes to Julia Evans for [their investigation of
3-way merges as the underpinnings of cherry-picking][3-way merge],
this would have been a lot more difficult if I'd had to rediscover the
merge-base trick independently.
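
For context, the core of the technique looks roughly like this (a
hedged sketch assuming git >= 2.40 for `merge-tree --merge-base`; the
helper names are made up, this is not the mergebot's implementation):

    import subprocess

    def git(*args, cwd):
        return subprocess.run(
            ['git', *args], cwd=cwd, check=True, capture_output=True, text=True,
        ).stdout.strip()

    def cherry_pick_bare(repo, commit, onto, message):
        # 3-way merge using the commit's parent as the merge base
        res = subprocess.run(
            ['git', 'merge-tree', '--write-tree',
             f'--merge-base={commit}^', onto, commit],
            cwd=repo, capture_output=True, text=True,
        )
        tree = res.stdout.splitlines()[0]   # first line is the resulting tree oid
        if res.returncode:                  # conflicts: the markers are already
            raise RuntimeError(res.stdout)  # merged into the tree's files
        # wrap the tree in a new commit on top of `onto`; no ref or index touched
        return git('commit-tree', tree, '-p', onto, '-m', message, cwd=repo)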

A few things have been changed by this:

- The old trace/stderr from cherrypick has disappeared as it's
  generated by cherrypick, but for a non-interactive use it's kinda
  useless anyway so I probably should have looked into removing it
  earlier (I think the main use was investigation of the inflateinit
  issue).
- Error on emptied commits has to be hand-rolled as `merge-tree`
  couldn't care less; this is not hard but is a bit annoying.
- `merge-tree`'s conflict information only references raw commits,
  which makes sense, but requires updating a bunch of tests. Then
  again so does the fact that it *usually* doesn't send anything to
  stderr, so that output usually disappears as well.

Conveniently `merge-tree` merges the conflict markers directly into
the files / tree, so we don't have to mess about moving them back out
of the repository and into the working copy as I assume cherry-pick
does, which means we don't have to try and commit them back in either.
That is a huge part of the gain over faffing about with the working
copy.

Fixes #847

[3-way merge]: https://jvns.ca/blog/2023/11/10/how-cherry-pick-and-revert-work/
2024-07-08 14:37:14 +02:00
Xavier Morel
ceb2bd457e [IMP] forwardport: automatically approve freeze backfills
if the corresponding master PR was approved. That the backfill needs
approval can be considered a race condition: its dual is approved
and *might* have been merged before the freeze, but was not, so a
backfill was needed.

Copying over the approval is a convenience feature with pretty much no
actual risk I can think of.

Fixes #884
2024-06-07 15:02:28 +02:00
Xavier Morel
e7e81bf375 [IMP] *: handle the addition of a new PR to a fw-ported batch
Given a batch which has been merged, and forward-ported, to multiple
branches (because skipci was set or CI passed on the repos the batch
covers), there might come the need to add a PR for one of the
uncovered repos.

This raises the question of what to do with it: since the forward
ports for the batch already exist, it's not going to get
forward-ported normally, and we may not even want it to be.

Options are:

- don't do anything: such additions don't get ported; this is
  incongruous and unexpected, as by default PRs are forward-ported,
  and if the batch weren't an intermediate one (but e.g. a conflict)
  the PR probably would be ported forward
- port on merge: this allows configuring the PR properly (as it might
  need its own limit), but it means further batches may get
  unexpectedly merged (or at least retied) without the additional PR
  even though we likely want it in
- immediately port the additional PR on creation: this makes the limit
  harder or impossible to configure, but it makes the *batch sequence*
  more consistent

We ended up selecting the last option: it feels closer to the updates
system, and it creates more consistent batches through the
sequence. It's also technically easier to ad-hoc port a PR through a
bunch of branches than it is to update the "normal" forward-port
process to handle partial fixups.
2024-05-29 07:55:07 +02:00
Xavier Morel
a4a067e7e9 [CHG] *: move forward-porting over to batches
Thank god I have a bunch of tests because once again I forgot / missed
a bunch of edge cases in doing the conversion, which the tests
caught (sadly that means I almost certainly broke a few untested edge
cases).

Important notes:

Handling of parent links
------------------------

Unlike PRs, batches never lose their parent info: the link is
permanent, which is convenient to trawl through a forward port
(currently implemented very inefficiently; maybe we'll optimise that
in the future).

However this means the batch having a parent and the batch's PRs
having parents are slightly different pieces of information. One of
the edge cases I missed is that of conflicting PRs, which are
deparented and have to be merged by hand before being forward-ported
further; I had originally replaced the checks on a PR and its siblings
having parents by just a check on the batch.

Batches & targets
-----------------

Batches were originally conceived as being fixed to a target, with PRs
having that target; a PR being retargeted would move it from one batch
to another.

As it turns out this does not work in the case where people retarget
forward-port PRs, which I know they do because #551
(2337bd8518). I could not think of a
good way to handle this issue as is, so I scrapped the moving-PRs
idea. Instead, one of the coherence checks for a batch being ready is
that all its PRs have the same target, and a batch only has a target
if all its PRs have the same target.
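
A sketch of that coherence rule (model and field names are
illustrative, not the actual mergebot schema):

    from odoo import api, fields, models

    class Batch(models.Model):
        _name = 'runbot_merge.batch.sketch'
        _description = "illustrative batch"

        prs = fields.One2many('runbot_merge.pull_requests', 'batch_id')
        target = fields.Many2one(
            'runbot_merge.branch', compute='_compute_target', store=True)

        @api.depends('prs.target')
        def _compute_target(self):
            for batch in self:
                targets = batch.prs.mapped('target')
                # a batch only has a target if all its PRs agree on one
                batch.target = targets if len(targets) == 1 else False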

It's possible for somewhat odd effects to arise: notably, if a PR is
closed (removed from its batch), the other PRs are retargeted, and the
closed PR is then reopened, it will now be on a separate batch even if
it also gets retargeted. This is weird. I don't quite know how I
should handle it; maybe batches could merge if they have the same
target and label? However batches don't currently have a label, so...

Improve limits
--------------

Keep limits on the PRs rather than lifting them to the batch: if we
can add/remove PRs to and from batches, having different limits on
different PRs of the same batch is reasonable.

Also leave the limit unset by default: previously, the limit was
eagerly set to the tip (accessible) branch. That doesn't really seem
necessary, so stop doing that.

Also remove completely unnecessary `max` when trying to find a PR's
next target: `root` is either `self` or `self.source_id`, so it should
not be possible for that to have a later target.

And for now ensure the limits are consistent per batch: a PR defaults
to the limit of its batch-mates if it doesn't have one, and if a limit
is set via command it's set on all PRs of the batch.

This commit does not allow differential limits via commands; they are
allowed via the backend but not really tested. The issue is mostly
that it's not clear what the UX should look like to keep the
interactions clear and not super error-prone. So punt on it for now,
and hopefully there's no hole I missed which will create inconsistent
batches.
2024-05-24 09:08:56 +02:00
Xavier Morel
473f89f87d [CHG] *: persistent batches
This probably has latent bugs, and is only the start of the road to v2
(#789): PR batches are now created up-front (alongside the PR), with
PRs attached and detached as needed, hopefully such that things are
not broken (tests pass but...). This required a fair number of
adjustments to code not taking batches into account, or creating
batches on the fly.

`PullRequests.blocked` has also been updated to rely on the batch to
get its batch-mates, such that it can now be a stored field with the
right dependencies.

The next step is to better leverage this change:

- move cross-PR state up to the batch (e.g. skipchecks, priority, ...)
- add fw info to the batch, perform forward-ports batchwise in order
  to avoid redundant batch-selection work, and allow altering batches
  during fw (e.g. adding or removing PRs)
- use batches to select stagings
- maybe expose staging history of a batch?
2024-05-23 07:58:58 +02:00
Xavier Morel
f44b0c018e [IMP] forwardport: allow updating the fw limit after merging the source
Currently, once a source PR has been merged it's not possible to set
or update a limit, which can be inconvenient (e.g. people might have
forgotten to set it, or realise after the fact that setting one is not
useful, or might realise after the fact that they should *unset* it).

This PR relaxes that constraint (which is not just a relaxation, as it
requires a bunch of additional work and validation); it should now be
possible to:

- update the limit to any target, as long as that target is at or
  above the current forwardport tip's
  - with the exception of a tip being forward ported, as that can't be
    cancelled
- resume a forward port stopped by a previously set limit (just
  increase it to whatever)
- set a limit from any forward-port PR
- set a limit from a source, post-merge

The process should also provide pretty chatty feedback:

- feedback on the source and closest root
- feedback about adjustments if any (e.g. setting the limit to B but
  there's already a forward port to C, the 'bot should set the limit
  to C and tell you that it happened and why)

Fixes #506
2023-10-06 13:19:01 +02:00
Xavier Morel
9de18de454 [CHG] *: move repo cache from forwardbot to mergebot
If the stagings are going to be created locally (via a git working
copy rather than the github API), the mergebot part needs to have
access to the cache, so move the cache over. Also move the maintenance
cron.

In an extremely minor way, this prefigures the (hopeful) eventual
merging of the ~~planes~~ modules.
2023-08-25 15:04:48 +02:00
Xavier Morel
ed0fd88854 [ADD] runbot_merge: sentry instrumentation
Currently sentry is only hooked from the outside, which doesn't
necessarily provide sufficiently actionable information.

Add a few hooks to (try and) report odoo / mergebot metadata:

- add the user to WSGI transactions
- add a transaction (with users) around crons
- add the webhook event info to webhook requests
- add a few spans to the long-running crons, when they cover multiple
  units per iteration (e.g. a span per branch being staged)
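
As a rough sketch of that kind of instrumentation, using the plain
sentry_sdk API (cron and model names are illustrative; the real hooks
live in the mergebot's own integration code):

    import sentry_sdk

    def run_staging_cron(env, branches):
        # attach the acting user to the cron transaction
        sentry_sdk.set_user({'username': env.user.name})
        with sentry_sdk.start_transaction(op='cron', name='runbot_merge.staging'):
            for branch in branches:
                # one span per branch being staged, so slow branches stand out
                with sentry_sdk.start_span(op='staging', description=branch.name):
                    stage_branch(env, branch)  # hypothetical unit of work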

Closes #544
2023-06-21 14:26:19 +02:00
Xavier Morel
270dfdd495 [REF] *: move most feedback messages to pseudo-templates
The current system makes it hard to iterate on feedback messages and
make them clearer; this should improve things a touch.

Use a bespoke model to avoid concerns with qweb rendering
complexity (we just want GFM output and should not need logic).

Also update the fwbot test setup to always configure an fwbot name, in
order to avoid ping messages closing the PRs they're talking
about. That took a while to debug, and given the old message I assume
I'd already hit it and just been too lazy to fix it. This requires
updating a bunch of tests as fwbot pings are sent *to*
`fp_github_name`, but sent *from* the reference user (because that's
the key we set).

Note: noupdate on CSV files doesn't seem to work anymore, which isn't
great. But instead set tracking on the template's templates; it's not
quite as good but should be sufficient.

Fixes #769
2023-06-14 16:01:45 +02:00
Xavier Morel
c35b721f0e [IMP] forwardport: gc/maintenance of local repo caches
The current system makes / lets GC run during fetching. This has a few
issues:

- the autogc consumes resources during the forward-porting
  process (not that it's hugely urgent but it seems unnecessary)
- the autogc commonly fails due to the combination of large repository
  (odoo/odoo) and low memory limits (hardmem for odoo, which get
  translated into soft ulimits)

As a result, the garbage collection of the repository sometimes stops
entirely, leading to an increase in repository size and a decrease in
performance.

To mitigate this issue, disable the automagic gc and maintenance
during normal operation, and instead add a weekly cron which runs an
aggressive GC with memory limits disabled (as far as they can get, if
the limits are imposed externally there's nothing to be done).
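
Concretely, the weekly maintenance could be shaped like this (a
sketch; the exact option set and repository handling are assumptions,
not the actual cron code):

    import subprocess

    def maintain(repo_path):
        def git(*args):
            subprocess.run(['git', '-C', repo_path, *args], check=True)

        # keep day-to-day fetches from triggering gc on their own
        git('config', 'gc.auto', '0')
        # aggressive repack; lift the pack memory ceilings as far as git
        # allows (external hard limits still apply regardless)
        git('-c', 'pack.windowMemory=0', '-c', 'pack.deltaCacheSize=0',
            'gc', '--aggressive', '--prune=now')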

The maintenance is implemented using a full lockout of the
forward-port cron and an in-place GC rather than a copy/gc/swap, as
doing this maintenance in the small hours of the weekend (Saturday to
Sunday night) seems like a non-issue: currently an aggressive GC of
odoo/odoo (using the default aggressive options) takes a total of 2:30
wallclock (5h user) on a fairly elderly machine (it's closer to 20mn
wallclock and 2h user on my local machine; it also turns out the cache
repos are kinda badly configured, leading to ~30% more objects than
necessary, which doesn't help).

For the record, a fresh checkout of odoo/odoo right now yields:

    | Overall repository size      |           |
    | * Commits                    |           |
    |   * Count                    |   199 k   |
    |   * Total size               |   102 MiB |
    | * Trees                      |           |
    |   * Count                    |  1.60 M   |
    |   * Total size               |  2.67 GiB |
    |   * Total tree entries       |  74.1 M   |
    | * Blobs                      |           |
    |   * Count                    |  1.69 M   |
    |   * Total size               |  72.4 GiB |

If this still proves insufficient, a further option would be to deploy
a "generational repacking" strategy:
https://gitlab.com/gitlab-org/gitaly/-/issues/2861 (though apparently
it's not yet been implemented / deployed on gitlab so...).

But for now we'll see how it shakes out.

Close #489
2022-12-01 10:57:32 +01:00
Xavier Morel
b45ecf08f9 [IMP] forwardport: handling of missing magic refs
Github can fail to create the magic refs on PRs
(`refs/pull/?/head`). Since forwardport relies on these refs to fetch
PR content, this is an issue when it occurs, as the forward ports fail
in a loop.

After discussion with Github support, it turns out Github enabled
`allowReachableSHA1InWant` a while back, meaning it's possible to
fetch content by commit (rather than ref) as long as the content is
"in network".

Use this property as fallback when checking if we can see the PR head
before forward porting.
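
A sketch of that fallback (repository handling and the helper are
illustrative):

    import subprocess

    def fetch_pr_head(repo_path, remote, pr_number, head_sha):
        def fetch(refspec):
            return subprocess.run(
                ['git', '-C', repo_path, 'fetch', remote, refspec],
            ).returncode == 0

        if not fetch(f'refs/pull/{pr_number}/head'):
            # magic ref missing: fall back to fetching the commit itself,
            # which works because the content is "in network"
            if not fetch(head_sha):
                raise RuntimeError(f"could not fetch head of PR #{pr_number}")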

Also:

- remove explicit configuration of GC during fetch; it doesn't disable
  the autogc (yet?) but that's likely going to happen anyway
- update logging and logger hierarchy during forward port to make
  things clearer and easier to extract, although based on PR id rather
  than number
- rate limit failing forward ports to avoid running them on every cron
  (~ every minute); run them every ~30mn instead. This provides higher
  odds of recovery with less log garbage in case of transient github
  failure, and if the PR is stuck it limits the log pollution

Fixes #658
2022-12-01 10:57:32 +01:00
Xavier Morel
65c2ffc997 [MERGE] from 13.0
Get the mergebot updates from 13.0 made since the runbot was upgraded.

NOTE: updates forwardport.models.forwardport.Queue with slots for
compatibility with commit
odoo/odoo@ea3e39506a "use slots for
BaseModel", otherwise we get

    TypeError: __bases__ assignment: 'ForwardPortTasks' object layout differs from 'BaseModel'
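
The compatibility change boils down to something like this (a sketch;
`_process_item` as the mixin's hook is inferred from other commits in
this log):

    class Queue:
        # an empty __slots__ keeps this mixin from adding __dict__/__weakref__
        # to the models that inherit it (e.g. ForwardPortTasks), so their
        # instance layout stays compatible with the slotted BaseModel and the
        # registry's __bases__ reassignment keeps working
        __slots__ = ()

        def _process_item(self):
            raise NotImplementedError
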
2022-08-23 14:41:35 +02:00
Xavier Morel
f430c014c1 [IMP] *: review mergebot & forwardbot messages for pinging
Old messages were quite inconsistent in their pinging of the PR author
and reviewer.

Reviewed the messages (probably missed some, but...) and tried to more
consistently ping when the feedback requires some sort of action in
order to proceed.

Fixes #592
2022-06-30 15:07:49 +02:00
Xavier Morel
2337bd8518 [FIX] forwardport: chain crash on insert in forward-port cron
After two of the freezes, the logs showed serial crashes in the
forwardport cron when trying to find the insertion point for a new
forward-port.

The first time was not really diagnosed; the second time the cause was
found to be a retargeted PR which led to a failure of the "insertion"
forward port, which did not take that possibility into account (it
assumed -- sensibly I believed -- that an intermediate FP following a
branch insertion would always succeed; sadly the malevolent universe
had other plans).

So only insert the new forward port inside its sequence (if necessary)
if the forward port actually succeeded; otherwise ignore it.

Fixes #551
2022-02-10 13:51:33 +01:00
Xavier Morel
e05cc77a57 [FIX] forwardport: don't forwardport freeze PRs
The freeze wizard has support for merging freeze / release PRs on each
of the newly created branches. But since this would be done by, well,
merging, those PRs would get forward-ported to master, and would have
to be closed there.

This creates additional work for the freeze master, and noise /
parasitic PRs.

Obviously it's possible for the freeze master to set some nonsense `up
to` (nonsense because the "real" limit doesn't exist yet at that
point), but really it never makes any sense to forward-port release
PRs, so the wizard should take care of it.
2022-02-08 10:11:57 +01:00
Xavier Morel
c6755a045a [FIX] forwardport: possible race in forwardport followup update
Normally when a forwardport is updated the forward-bot cascades the
update onto its followups (if it has any), but takes care to keep the
followups attached as they were not updated "by hand".

In the case of odoo/odoo#77677 however that did not work and the
followup PRs got detached. Looking at the logs, it became obvious
that there was a race condition: either Git took a long time to
respond to the push, or there was an IO slowdown which led to the
"local update" taking a very long time. Either way this allowed the
"synchronize" webhook from github to arrive before the current
transaction was committed, rolling back said transaction and making
the forwardbot assume this was a "real" sync and detach the followup
from its parent.

Locking the PR row up-front ought to fix the issue. Also move the
local update to before having pushed: the "extra" commits in the local
cache don't matter too much even if pushing to github fails, as
they'll be cleaned up by a GC eventually.

Also migrate the `-f` on push to `--force-if-includes` in order to
avoid possible race conditions on the push (since we're not fetching
the current branch, use the full-syntax explicit CAS form; that's
exactly what we're looking for anyway).
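
The explicit CAS form of the push roughly corresponds to the
following (a sketch with illustrative names, not the actual push code;
the commit additionally mentions `--force-if-includes`):

    import subprocess

    def cas_push(repo_path, remote, branch, new_sha, expected_remote_sha):
        # only update the remote ref if it still points at the commit we
        # last saw, regardless of what the local remote-tracking ref says
        subprocess.run(
            ['git', '-C', repo_path, 'push',
             f'--force-with-lease=refs/heads/{branch}:{expected_remote_sha}',
             remote, f'{new_sha}:refs/heads/{branch}'],
            check=True,
        )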

Fixes #541 (hopefully)
2021-10-20 14:36:50 +02:00
Xavier Morel
678d2216b8 [IMP] forwardport: provide clearer picture of conflicts
On conflicts in multi-commit PRs, developers sometimes get confused as
to what happened and why.

If a conflict occurs and the source pull request had multiple
commits, list all the source commits and show which one broke.

Related to #505
2021-08-24 15:39:47 +02:00
Xavier Morel
82174ae66e [IMP] *: add draft support to mergebot, kinda
* Remove the forwardport creating PRs in draft; that was mostly to
  avoid codeowners triggering, but we've removed the github one and
  hand-rolled it, so not a concern anymore.
* Prevent merging `draft` PRs: the mergebot rejects approval on draft
  PRs and insults people.

TBD (maybe): try to create *conflicting* forward-port PRs in draft so
it's clearer they need to be *fixed*? The issue of not being able to
do that on all private repositories remains, so~~

Fixes #500
2021-08-24 15:39:47 +02:00
Xavier Morel
6b1f698c23 [IMP] forwardport: handling of updates causing conflicts on followups
If a PR is updated and has extant forward-ports, those forward-ports
get updated automatically ("followup").

However there is an issue if the update causes a conflict in the
followup: the conflict gets silently pushed, and may fairly easily get
merged if it occurs in an area which the CI doesn't cover.

It's unclear what the policy really should be for this issue, and
there is no real way to *block* a pull request at the moment (save by
putting it in error at the mergebot level I guess?), so for now
clearly notify the user on both the modified PR and the followup,
with a comment on both.

We may want to revisit this eventually.

Fixes #467
2021-08-24 15:39:47 +02:00
Xavier Morel
0b1e33da7c [IMP] forwardport: mitigate cat-file not finding commit on updates
Fix #457 hopefully: I didn't manage to repro it / create a test for it.

It looks like in some cases during the update process the PR ref lags
behind the branch itself. This means `forwardport.updates` creates a
new commit, pushes it, then on the next iteration updates the local
cache, tries to find the commit we just pushed... and that fails.

I can only assume this is because when there's enough load on the
github side the update to the `info/refs` pseudo-file can fall
behind (it's now 4MB and holding nearly 65k refs).

So cheat: take the commit we just pushed to the dev remote
and... immediately push it to the local cache under a dummy branch,
which we then delete. Since we only gc objects older than "1 day ago",
this should not get vacuumed.
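
In code, the workaround looks roughly like this (remote handling and
the throwaway branch name are illustrative):

    import subprocess

    def pin_in_cache(cache_path, source_repo, sha):
        def git(repo, *args):
            subprocess.run(['git', '-C', repo, *args], check=True)

        tmp = f'tmp.cache-pin-{sha[:12]}'
        # push the commit we just created straight into the local cache
        # under a dummy branch so the objects are present there
        git(source_repo, 'push', cache_path, f'{sha}:refs/heads/{tmp}')
        # the ref itself is not needed, only the objects it dragged along
        git(cache_path, 'branch', '-D', tmp)
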
2021-03-02 14:28:32 +01:00
Xavier Morel
2aeecb68b9 [FIX] forwardport: completely update PR data when forwarding updates
The process did properly update the state, but not the squash state.

It's somewhat unclear whether the state should be fully reset and
require reapproval though. Maybe only the validation should be reset?
The CI will eventually run and either succeed (re-validating) or
fail (devalidating, hopefully) but I'm not entirely sure this is
correct.
2021-03-02 14:28:32 +01:00
Xavier Morel
db9e86f760 [IMP] forwardport: reliability of PR reminders
The exponential backoff offsets from the write_date of the child PRs,
but it doesn't reset, so the offsetting gets bumped up way more than
originally expected or designed if the child PRs are under active
development for some reason.

Fix this by adding a field to specifically record the date of merge of
a PR, and check the backoff offset against that field instead. This
should provide more regular and reliable backoff.
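
The resulting check is essentially the following (the `merge_date`
name follows this description; the doubling cadence is illustrative):

    from datetime import datetime, timedelta

    def reminder_due(merge_date: datetime, reminders_sent: int, now: datetime) -> bool:
        # remind after 1, 2, 4, 8, ... days counted from the (fixed) merge
        # date, so activity on the child PRs no longer inflates the offset
        return now >= merge_date + timedelta(days=2 ** reminders_sent)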

Fixes #369
2020-05-26 15:56:36 +02:00
Xavier Morel
6b5731f175 [FIX] forwardport: PR update through a closed PR
Fixes #328
2020-02-26 16:21:24 +01:00
Xavier Morel
9aac1b4a3e [ADD] forwardport: special handling of adding branches to projects
If a new branch is added to a project, there's an issue with *ongoing*
forward ports (forward ports which were not merged before the branch
was forked from an existing one): the new branch gets "skipped" and
might be missing some fixes until those are noticed and backported.

This commit hooks into updating projects to try and see if the update
consists of adding a branch inside the sequence, in which case it
tries to find the FP sequences to update and queues up new
"intermediate" forward ports to insert into the existing sequences.

Note: had to increase the cron socket limit to 2mn as 1mn blew up the
big staging cron in the test (after all the forward-port PRs are
approved).

Fixes #262

[FIX]
2020-01-29 15:59:43 +01:00
Xavier Morel
7c46a2006f [FIX] forwardport: the fix
Of course I forgot the most relevant bit
2019-10-18 12:01:47 +02:00
Xavier Morel
5f8041552b [FIX] forwardport: apparently git/refs/heads can fuzzy-match
If the ref we asked for does not exist, github apparently decides to
fall back to prefix-matching. So if we're trying to delete the
already-deleted branch A and someone called their branch A-x, we're
going to get that as a result.

Thankfully they were apparently smart enough to return a list even if
there's only a single fuzzy match. So if we get a list (instead of a
dict) as the response to git/refs/heads, assume the branch was already
deleted, as if we had gotten a 404.
2019-10-18 11:22:13 +02:00
Xavier Morel
13d76fdfb9 [FIX] forwardport: the fix
it's a per-call parameter not a per-instance one
2019-10-18 08:11:27 +02:00
Xavier Morel
c1cef0c18b [FIX] forwardport: gh api raises by default, avoid that here 2019-10-18 08:02:04 +02:00
Xavier Morel
ea410ab6d1 [ADD] forwardport: automatic branch deleter
If a PR is *merged*, enqueue it for deletion (with a 2-week delay).

Mainly to avoid FW branches staying around long after they've been
merged (and possibly eventually closed?); this will also clean up
regular merged branches, including historical merges forgotten by
their author.

Fixes #230
2019-10-17 11:55:20 +02:00
Xavier Morel
401787b7ae [FIX] forwardport: co-dependent FPs where one PR is updated
In the case where we have a co-dependent forward port (co-dependent
PRs got forward-ported together, as they should be) where *one* of
the PRs got explicitly updated, the batch would "fall into a hole",
being handled as neither "this is part of a forward-port sequence" nor
"this is a new merge to forward-port" (the latter being the proper
one).

Modify & remove guards which checked that either no or all PRs in a
batch have parents: should be either all or not all.

Fixes #231
2019-10-15 08:54:25 +02:00
Xavier Morel
bad016b830 [FIX] forwardport: queue reliability changes
The previous version would break if `_process_item` itself committed,
which was bad.
2019-10-11 09:13:55 +02:00
Xavier Morel
036ae3a8ee [IMP] forwardbot: reduce length of fw branch name
* shorten the postfix, forwardbot is now a bigram!
* shorten the uniquifier: go from 5 to 3 bytes, and use urlsafe base64;
  that way we only have a 4-char uniquifier instead of 8 (quick sketch
  below)
* while at it, fix deprecated calls to logging.warn (should be
  logging.warning)
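
A quick illustration of the size claim (the branch-name format in the
comment is made up):

    import base64, os

    def uniquifier() -> str:
        # 3 random bytes map exactly onto 4 urlsafe base64 characters,
        # with no padding needed
        return base64.urlsafe_b64encode(os.urandom(3)).decode()

    # e.g. f"forward-port-{uniquifier()}" -> something like "forward-port-a1_Z"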

Fixes #226
2019-10-10 11:37:27 +02:00
Xavier Morel
557878afe9 [IMP] forwardport: processing queue reliability
The queue would get items to process one at a time, process, commit,
and go to the next. However this is an issue if one of the items fails
systematically for some reason (aka it's not just a transient
failure): the cron fails, then restarts at the exact same point, and
fails again with the same issue, leading to the following items never
getting processed.

Fix by getting all the queue contents at once, processing them one by
one and "skipping" any item which fails (leaving it in place so it can
get re-processed later).
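
A sketch of that processing loop (as a mixin over a hypothetical queue
model defining `_process_item`; not the actual code):

    import logging

    _logger = logging.getLogger(__name__)

    class Queue:
        """Illustrative mixin: concrete queue models define `_process_item`."""
        __slots__ = ()

        def _process(self):
            # fetch the whole queue up-front, then handle items one at a time,
            # skipping (but keeping) any item which fails so the rest still runs
            for item in self.search([]):
                try:
                    item._process_item()
                    item.unlink()
                    self.env.cr.commit()
                except Exception:
                    _logger.exception("failed to process %s, skipping it", item)
                    self.env.cr.rollback()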

That way, even if an item causes issues, the rest of the queue gets
processed normally. The interruption was an issue following
odoo/enterprise#5670 not getting properly updated in the
backend (the backend didn't get notified of the last two updates /
force-pushes to the PR, so it was trying to forward-port a commit
which didn't exist - and failing).
2019-10-10 08:41:33 +02:00
Xavier Morel
f671dcc828 [ADD] forwardbot
* Cherrypicking is handrolled because there seems to be no easy way to
  programmatically edit commit messages during the cherrypicking
  sequence: `-n` basically squashes all commits and `-e` invokes a
  subprocess. `-e` with `VISUAL=false` kinda sorta works (in that it
  interrupts the process before each commit), however there don't seem
  to be clean status codes, so it's difficult to know if the
  cherrypick failed or if it's just waiting for the commit of this step.

  Instead, cherrypick commits individually then edit / rewrite their
  commit messages:

  * add a reference to the original commit
  * convert signed-off-by to something else as the original commit was
    signed off but not necessarily this one

* Can't assign users when creating PRs: only repository collaborators
  or people who commented on the issue / PR (which we're in the
  process of creating) can be assigned.

  PR authors are as likely to be collaborators as not, and we can have
  non-collaborator reviewers. So pinging via a regular comment seems
  less fraught as a way to notify users.
2019-09-05 10:00:07 +02:00