Commit Graph

30 Commits

Author SHA1 Message Date
Xavier Morel
98bb01edfc [ADD] runbot_merge, tests: opentelemetry support
Mostly for tests: it can be really difficult to correlate issues as
there are 3 different processes involved (the test runner, the odoo
being tested, and dummy-central (as github)) and the intermixing of
logs between the test driver and the odoo being tested is
not *amazing*.

- `pytest-opentelemetry`'s `--export-traces` is the signal for running
  tests with tracing enabled; that way just having
  `pytest-opentelemetry` installed does not do anything untoward.
- Until chrisguidry/pytest-opentelemetry#34 is merged, should probably
  use the linked branch, as the default / base mode of having a single
  trace for the entire test suite is not great when there are 460
  tests, especially as local clients (opentelemetry-desktop-viewer,
  venator) mostly work on traces and aren't very good at zooming in on
  *spans*, at least currently.
- Additionally, the conftest plugin adds hooks for propagating through
  the xmlrpc client (communications with odoo) and enables the
  requests instrumentor from the opentelemetry python contribs.
- The dummy `saas_worker` was moved into the filesystem, that makes it
  easier to review and update.
- A second such module was added for the otel instrumentation *of the
  odoo under test*, that instruments psycopg2, requests, wsgi, and the
  server side of xmlrpc.
- The git ops were instrumented directly in runbot_merge, as I've
  tried to keep `saas_tracing` relatively generic, in case it could be
  moved to community or used internally eventually.

Some typing was added to the conftest hooks and fixtures, and they
were migrated from tmpdir (py.path) to tmp_path (pathlib) for
consistency, even though technically the `mkdir` of pathlib is an
annoying downgrade.

Fixes #835
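
For reference, a minimal sketch of the kind of wiring described above, assuming
the standard opentelemetry-python contrib packages; the actual conftest plugin's
hook and fixture names are not reproduced here.

```python
from opentelemetry import trace
from opentelemetry.instrumentation.requests import RequestsInstrumentor

def pytest_configure(config):
    # only instrument when tracing was requested, so merely having the
    # plugin installed does not change behaviour
    if config.getoption('--export-traces', default=False):
        RequestsInstrumentor().instrument()

tracer = trace.get_tracer('mergebot-tests')

def traced_xmlrpc_call(proxy, method, *args):
    # wrap an xmlrpc call in a span so test-side and odoo-side activity
    # can be correlated in the exported traces
    with tracer.start_as_current_span(f'xmlrpc:{method}'):
        return getattr(proxy, method)(*args)
```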
2025-02-24 15:43:11 +01:00
Xavier Morel
e3b4d2bb40 [IMP] *: cleanup status contexts in tests
For historical reasons pretty much all tests used to use the contexts
legal/cla and ci/runbot. While there are a few tests where we need the
interactions of multiple contexts and that makes sense, on the vast
majority of tests that's just extra traffic and noise in the
test (from needing to send multiple statuses unnecessarily).

In fact on the average PR where everything passes by default we could
even remove the required statuses entirely...
2025-02-06 14:55:28 +01:00
Xavier Morel
645a10a7ca [IMP] runbot_merge: de-randomise fw/bp uniquifier
The forward port process adds a uniquifier to the branch name, as it's
possible to reuse branch names for different sets of PRs (and some
individuals do that systematically), which can cause collisions in the
fw branches.

Originally this uniquifier was random by necessity; however, now that
batches are reified and forward porting is performed batch-wise, we don't
need the randomness: we can just use the source batch's id, as it's
unique per sequence of forward ports. This means it'll eventually be
possible for an external system to retrieve the source(s) of a forward
port by reading the batch object, and it's also possible to correlate
forward ports through the batch id (although not possible to find the
source set without access to the mergebot's information).

Do the same thing for backports, because why not.
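
A rough sketch of the change, with hypothetical variable names; the exact
branch-name format is not reproduced here.

```python
def fw_branch_name(target_branch: str, label: str, source_batch_id: int) -> str:
    # before: uniquifier = secrets.token_hex(3) -- a random token, impossible to correlate
    uniquifier = str(source_batch_id)  # stable, unique per forward-port sequence
    return f"{target_branch}-{label}-{uniquifier}-fw"
```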
2025-01-30 09:57:43 +01:00
Xavier Morel
509c152156 [IMP] *: modernise tests via to_pr
The `to_pr` helper was added a *while* ago to replace the pretty
verbose process of looking up a PR with the right number in the right
repository after a `make_pr`. However, while I did some ad-hoc
replacement of existing `search` calls as I had to work with existing
tests, I never did a full search and replace to try and excise searches
from the test suite.

This is now done. I've undoubtedly missed a few, but a hundred-odd
lines of code have been simplified.
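
For illustration, roughly the kind of simplification this enables (argument
names and the exact search domain are from memory and may not match the helper
exactly):

```python
# before: manual lookup after every make_pr
pr = repo.make_pr(title='title', body='body', target='master', head='change')
pr_id = env['runbot_merge.pull_requests'].search([
    ('repository.name', '=', repo.name),
    ('number', '=', pr.number),
])

# after: the helper performs the lookup (and can assert unicity)
pr_id = to_pr(env, pr)
```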
2024-12-02 16:32:53 +01:00
Xavier Morel
679d556c90 [FIX] project creation: handling of mergebot info
- don't *fail* in `_compute_identity`: it causes issues when the token
  is valid but doesn't have `user:email` access, as the request is
  aborted and saving doesn't work
- make `github_name` and `github_email` required rather than ad-hoc
  requiring them in `_compute_identity` (which doesn't work correctly);
  see the sketch below
- force copy of `github_name` and `github_email`; with o2ms being
  !copy this means duplicating projects now works out of the box (or
  should...)

Currently errors in `_compute_identity` are reported via logging, which
is not great as it's not visible in the UI; they should probably get moved
to the chatter eventually but that's not currently enabled on projects.

Fixes #990
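
A sketch of the field-level part of the change, assuming the project model is
`runbot_merge.project`; the compute itself is left out.

```python
from odoo import fields, models

class Project(models.Model):
    _inherit = 'runbot_merge.project'

    # required at the field level rather than ad-hoc checked in
    # _compute_identity, and copied so duplicating a project works even
    # though its o2ms are not copied
    github_name = fields.Char(required=True, copy=True)
    github_email = fields.Char(required=True, copy=True)
```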
2024-12-02 16:32:53 +01:00
Xavier Morel
d9e6d39448 [IMP] mergebot_test_utils: minor style fixes 2024-10-22 10:51:30 +02:00
Xavier Morel
4215be770d [IMP] runbot_merge: move read_tracking_value to utils
And use it in test_trivial_flow instead of the kinda half-assed manual
version.
2024-10-07 08:06:03 +02:00
Xavier Morel
c8a06601a7 [FIX] *: unstage on status going from success to failure
And unconditionally unstage when the HEAD of a PR is synchronised.

While a rebuild on a PR which was previously staged can be a false
positive (e.g. because it hit a non-deterministic test the second time
around), it can also be legitimate, e.g. an auto-rebase of an old PR to
check it out. In that case the PR should be unstaged.

Furthermore, as soon as the PR gets rebuilt it goes back into
`approved` (because the status goes to pending and currently there's
no great way to suppress that in the rebuild case without also fucking
it up for the sync case). This would cause a sync on the PR to be
missed (in that it would not unstage the PR), which is broken. Fix
that by not checking the state of the PR beforehand: it seems to be an
unnecessary remnant of older code, and not really an optimisation (or
at least one likely not worth bothering with, especially as we then
proceed to perform a full PR validation anyway).

Fixes #950
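
A rough sketch of the resulting rule; `_unstage` and the surrounding plumbing
are stand-ins for the actual implementation.

```python
def on_synchronize(pr, new_head):
    # a head change always drops the current staging, regardless of pr.state
    pr.head = new_head
    pr._unstage(f"head updated to {new_head}")

def on_status_change(pr, context, old, new):
    # a required status flipping from success to failure also unstages
    if old == 'success' and new in ('failure', 'error'):
        pr._unstage(f"status {context} went from {old} to {new}")
```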
2024-09-18 15:19:13 +02:00
Xavier Morel
81f133de15 [IMP] test utils: don't ACL-protect repos by default
This is quite frustrating when trying to access the frontend from a
test, especially combined with the lack of in-log feedback (before the
previous commit).
2024-06-12 15:09:42 +02:00
Xavier Morel
fec3d39d19 [ADD] *: per-repository webhook secret
Currently webhook secrets are configured per *project*, which is an
issue both because different repositories may have different
administrators (which creates safety concerns), and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary to have multiple projects
share the same secret as well, which then mandates sharing the secret
across even more repositories, compounding the first issue.

This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories: it's cleaner and easier to manage
and set up progressively.

This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.

For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures; those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad-hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as `forwardport` where it's all ad-hoc, event sources setup was just
appended as is. This should probably be cleaned up at some point, with
the various requirements collected and organised into a small set of
fixtures doing the job more uniformly.

Fixes #887
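
The validation itself follows github's documented `X-Hub-Signature-256` scheme,
now keyed on a per-repository secret; a minimal sketch (the secret lookup and
controller wiring are assumptions):

```python
import hashlib
import hmac

def check_signature(payload: bytes, signature_header: str, repo_secret: str) -> bool:
    if not repo_secret:
        return True  # repository without a configured secret: nothing to check
    if not signature_header or not signature_header.startswith('sha256='):
        return False
    digest = hmac.new(repo_secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature_header, 'sha256=' + digest)
```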
2024-06-06 11:07:57 +02:00
Xavier Morel
c1e2e5a2e0 [REF] forwardport: update re_matches to not use a regex
Using a regex as the pattern is quite frustrating due to all the
escaping necessary, which in this refactoring I found out I'd missed,
multiple times.

Convert the pattern to something bespoke but not too complicated; we
may want to add anchoring support and a bit more finesse in the
future, but for now straightforward "holes" seem to work well. I've
added support for capturing and even named groups, even if this is as
yet unnecessary and unused.

Fixes #861

[^1]: https://docs.pytest.org/en/stable/reference.html#pytest.hookspec.pytest_assertrepr_compare
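
A toy version of such a matcher, to illustrate the idea; the actual hole syntax
and helper used by the test utilities are assumptions here.

```python
import re

def matches(pattern: str, text: str) -> dict | None:
    """Literal text is matched verbatim (no escaping needed), `$$` is an
    anonymous hole matching anything, `$name$` is a named capturing hole."""
    parts = []
    for i, chunk in enumerate(re.split(r'\$(\w*)\$', pattern)):
        if i % 2:  # odd chunks are hole names (possibly empty)
            parts.append(f'(?P<{chunk}>.*?)' if chunk else '.*?')
        else:      # even chunks are literals, escaped so callers never have to
            parts.append(re.escape(chunk))
    m = re.fullmatch(''.join(parts), text, re.DOTALL)
    return m.groupdict() if m else None
```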
2024-06-04 14:18:04 +02:00
Xavier Morel
3f4519d605 [CHG] runbot_merge: add signoff & related to all commits
if rebased. Untouched commits (straight merge) remain unaltered, but
all rebased or squashed commits now get signoff and `Related` headers
added on top of the already previously added `part-of`.

Implement by generalising `_build_merge_message` to `_build_message`
and having `add_self_references` delegate to it, which removes some of
the redundancy / differential handling.

Update the `part_of` helper to also add the S-O-B header to the PR,
although it currently does not reference the entire forward port
chain.

Fixes #876
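
A simplified sketch of the message building; the trailer values and the exact
set of headers are illustrative.

```python
def add_trailers(message: str, *, signoff: str, part_of: str, related: list[str]) -> str:
    trailers = [f'Related: {r}' for r in related]
    trailers.append(f'Signed-off-by: {signoff}')
    trailers.append(f'Part-of: {part_of}')
    return message.rstrip() + '\n\n' + '\n'.join(trailers) + '\n'

# e.g. add_trailers("[FIX] thing", signoff="Jane Doe <jane@example.com>",
#                   part_of="odoo/odoo#12345", related=["odoo/enterprise#678"])
```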
2024-05-30 10:59:07 +02:00
Xavier Morel
232aa271b0 [ADD] runbot_merge: PR dashboard V2
Displays the entire batch set as a table, along both the
repository axis (linked PRs) and the branch axis (forward
ports). Should provide a much more complete overview.

Adds a copy of the dashboard as a raster render, to link from the PR:
as usual SVG is shit, content-based viewboxes are hell and having to
duplicate the entire CSS because `<img/>`-linked CSS can't run is
gross. And there's no payoff since the image is not interactive
anyway.

Performing manual ad-hoc table rendering via pillow is not
significantly worse: it works fine, and it's possible to do *really*
good conditional request handling (hopefully) because I've basically
got all the information I need right here.

In fact it might make sense to upgrade the regular HTML page with
similar conditional request handling, at least for the last-update
bit if not the etag.

Fixes #771, fixes #770
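
A sketch of what that conditional request handling could look like: derive a
validator from the data that would be drawn and short-circuit with a 304; the
renderer and controller wiring are hypothetical.

```python
import hashlib
from werkzeug.wrappers import Response

def batch_image_response(request, rows, last_update):
    etag = hashlib.sha1(repr((rows, last_update)).encode()).hexdigest()
    if request.headers.get('If-None-Match') == etag:
        return Response(status=304)
    png = render_table_png(rows)  # hypothetical pillow-based renderer
    return Response(png, mimetype='image/png',
                    headers={'ETag': etag, 'Last-Modified': last_update})
```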
2024-05-29 07:55:07 +02:00
Xavier Morel
e7e81bf375 [IMP] *: handle the addition of a new PR to a fw-ported batch
Given a batch which has been merged, and been forward-ported, to
multiple branches (because skipci was set or ci passed on the repos
the batch covers).

There might come the need to add a PR for one of the uncovered
repos. This raises the question of what to do with it: since the
forward-ports for the batch already exist, it's not going to get
forward-ported normally, nor may we necessarily want it to.

Options are:

- don't do anything: such additions don't get ported, which is
  incongruous and unexpected as by default PRs are forward-ported, and
  if the batch wasn't an intermediate (but e.g. a conflict) it
  probably would be ported forward
- port on merge, this allows configuring the PR properly (as it might
  need its own limit) but it means further batches may get
  unexpectedly merged (or at least retied) without the additional PR
  even though we likely want it in
- immediately port the additional PR on creation, this makes the limit
  harder or impossible to configure but it makes the *batch sequence*
  more consistent

We ended up selecting the latter: it feels closer to the updates
system, and it creates more consistent batches through the
sequence. It's also technically easier to ad-hoc port a PR through a
bunch of branches than it is to update the "normal" forward-port
process to handle partial fixups.
2024-05-29 07:55:07 +02:00
Xavier Morel
6972749fee [IMP] forwardport: remove duplicate-ish make_basic in test_weird
The one from the utils can easily be adapted for the job, and
documented while at it, no need for an extra 50 lines or so of
complete duplicate.
2024-05-24 09:08:56 +02:00
Xavier Morel
d4fa1fd353 [CHG] *: rewrite commands set, rework status management
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.

Hard Errors
===========

The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to a hard
error (ignoring the entire line) if any command is unknown or
invalid.

This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line, some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
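
A minimal sketch of the new behaviour, assuming a `known` mapping of command
names to parsers (the real grammar is considerably richer):

```python
class CommandError(Exception):
    pass

def parse_commands(line: str, known: dict) -> list:
    commands = []
    for token in line.split():
        name, _, arg = token.partition('=')
        if name not in known:
            # previously a silent `continue`, which let stray prose on the
            # command line be ignored or, worse, misinterpreted
            raise CommandError(f"unknown command {name!r}, ignoring the entire line")
        commands.append(known[name](arg))
    return commands
```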

Priority Restructuring
----------------------

The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.

This has now been split along all the axes, with separate command
subsets for:

- staging prioritisation, now separated between `default`, `priority`,
  and `alone`,

  - `default` means PRs are picked in an unspecified order when
    creating a staging, if nothing better is available
  - `priority` means PRs are picked first when staging; however if
    `priority` PRs don't fill the staging the rest will be filled with
    `default`. This mode did not previously exist
  - `alone` means the PRs are picked first, before splits, and only
    `alone` PRs can be part of the staging (which usually matches the
    mode name)
- `skipchecks` overrides both statuses and approval checks for the
  batch, something previously implied in `p=0` but now
  independent. Setting `skipchecks` basically makes the entire batch
  `ready`.

  For consistency this also sets the reviewer implicitly: since
  skipchecks overrides both statuses *and approval*, whoever enables
  this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
  ready again; previously this was also implied (in a more restricted
  form) by setting `p=0`

FWBot removal
=============

While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,

fwbot stops responding
----------------------

Feedback messages are now always sent by the mergebot; the
forward-porting bot should not send any message or notification
anymore.

commands moved to the merge bot
-------------------------------

- `ignore`/`up to` simply change bots
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
  the forward-port policy,

  - `fw=default` is the old `ci` and resets the policy to the default,
    that is, wait for the PR to be merged before creating forward ports,
    and for the required statuses on each forward port to be received
    before creating the next
  - `fw=skipci` is the old `skipci`, it waits for the merge of the
    base PR but then creates all the forward ports immediately (unless
    it gets a conflict)
  - `fw=skipmerge` immediately creates all the forward ports, without
    even waiting for the PR to be merged

    This is a completely new mode, and may be rather broken as until
    now the 'bot has always assumed the source PR had been merged.

approval rework
---------------

Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parent with different access rights".

As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:

- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
  from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
  to which we have review rights

Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.

For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).

Computed State
==============

The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the reference
data gets updated naturally, flowing into the state).

For now however it makes things more complicated as some of the states
have to be separately signaled or updated:

- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
  transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
  good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
  *that* becomes the validation state

Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.

One notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.

This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
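
A compressed sketch of the computation, using the signals listed above; the
actual selection values and ordering of checks are simplified.

```python
from odoo import api, fields, models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'

    state = fields.Selection(
        [('opened', 'Opened'), ('approved', 'Approved'), ('ready', 'Ready'),
         ('merged', 'Merged'), ('closed', 'Closed'), ('error', 'Error')],
        compute='_compute_state', store=True, tracking=True)

    @api.depends('closed', 'error', 'merge_date', 'reviewed_by', 'status', 'skipchecks')
    def _compute_state(self):
        for pr in self:
            if pr.closed:
                pr.state = 'closed'
            elif pr.error:
                pr.state = 'error'
            elif pr.merge_date:
                pr.state = 'merged'
            elif pr.skipchecks or (pr.reviewed_by and pr.status == 'success'):
                pr.state = 'ready'
            elif pr.reviewed_by:
                pr.state = 'approved'
            else:
                pr.state = 'opened'
```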

recursive dependency hack
-------------------------

Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.

This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).

unconditional followup scheduling
---------------------------------

Previously, scheduling the forward-port followup was contingent on the FW
policy, but it's not actually correct if the new PR is *immediately*
validated (which can happen now that the field is computed, if there
are no required statuses *or* all of the required statuses are
overridden by an ancestor) as nothing will trigger the state change
and thus scheduling of the fp followup.

The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports, although it's a bit more
expensive and will lead to more spam.

Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.

Misc Changes
============

- If a PR is marked as overriding / canceling stagings, it now does
  so on retry, not just when initially set.

  This was not handled at all previously, so a PR in P0 going into
  error due to e.g. a non-deterministic bug would be retried and still
  p=0, but a current staging would not get cancelled. Same when a PR
  in p=0 goes into error because something failed, then is updated
  with a fix.
- Add tracking to a bunch of relevant PR fields.

  Post-mortem analysis currently generally requires going through the
  text logs to see what happened, which is annoying.

  There is a nondeterminism / inconsistency in the tracking which
  sometimes leads the admin user to trigger tracking before the bot
  does, leading to the staging tracking being attributed to them
  during tests; this is shoved under the carpet by ignoring the user
  to whom that tracking is attributed.

  When multiple users update tracked fields in the same transaction,
  all the changes are attributed to the first one having triggered
  tracking (?); I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which never ran (because
  I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation

fixes #673, fixes #309, fixes #792, fixes #846 (probably)
2024-05-23 07:58:46 +02:00
Xavier Morel
4b9fb513eb [IMP] *: make to_pr more resilient to webhook delays
Github delivery delays keep getting worse. Depending on what comes
before `to_pr`, this leads it to fail more often, as it runs before the
PR it's looking for has been signaled to the mergebot.

In order to mitigate this issue, add a wait loop in `to_pr`, waiting
up to 4 seconds for the PR it's looking for before aborting.

Also replace manual lookups with `to_pr` in every method of
`TestPRUpdate` while at it, since it hit a few of these issues. And
remove the xfail test case since it seems unlikely github will change
tack (maybe? could be worth testing to be sure).
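
Roughly, the helper now looks something like this (delays and the exact domain
are illustrative):

```python
import time

def to_pr(env, pr, *, attempts=40, delay=0.1):
    """Return the mergebot PR record matching a github PR, waiting up to ~4s
    for the creation webhook to have been processed."""
    for _ in range(attempts):
        pr_id = env['runbot_merge.pull_requests'].search([
            ('repository.name', '=', pr.repo.name),
            ('number', '=', pr.number),
        ])
        if pr_id:
            assert len(pr_id) == 1, f"expected one PR, got {pr_id}"
            return pr_id
        time.sleep(delay)
    raise TimeoutError(f"{pr.repo.name}#{pr.number} never appeared on the mergebot side")
```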
2024-01-16 15:03:45 +01:00
Xavier Morel
cea1b62ac2 [FIX] runbot_merge: commit messages should be trimmed indeed
Reverts commit 85a7890023 which
untrimmed the commits: while it's *probably* true that git and
github's APIs differ in their treatment of whitespace (in that git
pretty much always terminates the commit message with a newline while
github does not, as far as I understand, though I didn't really
validate it) the issue was that github also trims on *output* when
fetching over the API, something the fake did not do.

So rather than update the test data I should have fixed the fake, but
I failed to realise that at the time. I only realised when I decided
to re-run against github actual (something I rarely do anymore as it's
painfully slow) and it went on to choke on every message I'd updated.
2024-01-16 10:51:37 +01:00
Xavier Morel
65ed7c51bc [IMP] *: note to merge using mergebot in conflict message
The message has a lot of info, but left the merging bit
unwritten. Correct this issue.

Fixes #765
2023-08-30 12:10:46 +02:00
Xavier Morel
85a7890023 [CHG] runbot_merge: switch staging from github API to local
It has been a consideration for a while, but the pain of subtly
interacting with git via the ignominious CLI kept it back. Then ~~the
fire nation attacked~~ github got more and more tight-fisted (and in
some ways less reliable) with their API.

Staging pretty much just interacts with the git database, so it's both
a facultative github operator (it can just interact with git directly)
and a big consumer of API requests (because the git database endpoints
are very low level so it takes quite a bit of work to do anything
especially when high-level operations like rebase have to be
replicated by hand).

Furthermore, an issue has also been noticed which can be attributed to
using the github API (and that API's reliability getting worse): in
some cases github will fail to propagate a ref update / reset, so when
staging 2 PRs it's possible that the second one is merged on top of
the temporary branch of the first one, yielding a kinda broken commit
(in that it's a merge commit with a broken error message) instead of
the rebase / squash commit we expected.

As it turns out it's a very old issue but only happened very early so
was misattributed and not (sufficiently) guarded against:

- 41bd82244bb976bbd4d4be5e7bd792417c7dae6b (October 8th 2018) was
  spotted but thought to be a mergebot issue (might have been one of
  the opportunities where ref-checks were added though I can't find
  any reference to the commit in the runbot repo).
- 2be25052e147b151d1d8a5bc73cceb351586ce03 (October 15th, 2019) was
  missed (or ignored).
- 5a9fe7a7d05a9df7186072a7bffd60c6b428fd0e (July 31st, 2023) was
  spotted, but happened at a moment where everything kinda broke
  because of github rate-limiting ref updates, so the forensics were
  difficult and it was attributed to rate limiting issues.
- f10d03bf0f2e8f88f62a5d8356b84f714196130f (August 24th, 2023) broke
  the camel's back (and the head block): the logs were not too
  interspersed with other garbage and pretty clear that github ack'd a
  ref update, returned the correct oid when checking the ref, then
  returned the wrong oid when fetching it later on.

No Working Copy
===============

The working copy turns out to not be necessary: the plumbing commands
we *need* work just fine on a bare repository.

Working without a WC means we had to reimplement the high level
operations (rebase) by hand much as we'd done previously, *but* we
needed to do that anyway as git doesn't seem to provide any way to
retrieve the mapping when rebasing/cherrypicking, and cherrypicking by
commit doesn't work well as it can't really find the *merge base* it
needs.

Forward-porting can almost certainly be implemented similarly (with
some overhead), issue #803 has been opened to keep track of the idea.
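
For illustration, a sketch of replicating a rebase with plumbing only, on a
bare repository; it assumes a git recent enough for `merge-tree --write-tree` /
`--merge-base`, and the module's actual implementation certainly differs.

```python
import subprocess

def git(repo: str, *args: str) -> str:
    return subprocess.run(['git', '-C', repo, *args],
                          check=True, capture_output=True, text=True).stdout.strip()

def rebase(repo: str, commits: list[str], onto: str) -> dict[str, str]:
    """Cherry-pick `commits` (oldest first) onto `onto`, returning the
    old-oid -> new-oid mapping that plain `git rebase` would not expose."""
    mapping = {}
    for c in commits:
        # three-way merge of the commit onto the new base, no working copy needed;
        # a conflict makes merge-tree exit non-zero, aborting the staging
        tree = git(repo, 'merge-tree', '--write-tree', f'--merge-base={c}^', onto, c)
        message = git(repo, 'log', '-1', '--format=%B', c)
        onto = git(repo, 'commit-tree', '-p', onto, '-m', message, tree)
        mapping[c] = onto
    return mapping
```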

No TMP
======

The `tmp.` branches are no more: the process of creating stagings is
based entirely around oids. If staging something fails we can just
abandon the oids (they'll be collected by the weekly GC); we only
need to update the staging branches at the very end of the process.

This simplifies things a fair bit.

For now we have stopped checking for visibility / backoff as we're
pushing via git; hopefully it is a more reliable reference than the
API.

Commit Message Formatting
=========================

There's some unfortunate churn in the test, as the handling of
trailing newlines differs between github's APIs and git itself.

Fixes #247

PS: It might be a good idea to use pygit2 instead of the CLI
    eventually, the library is typed which is nice, and it avoids
    shelling out although that's really unlikely to be a major cost.
2023-08-25 15:06:04 +02:00
Xavier Morel
270dfdd495 [REF] *: move most feedback messages to pseudo-templates
The current system makes it hard to iterate on feedback messages and make
them clearer; this should improve things a touch.

Use a bespoke model to avoid concerns with qweb rendering
complexity (we just want GFM output and should not need logic).

Also update fwbot test setup to always configure an fwbot name, in
order to avoid ping messages closing the PRs they're talking
about; that took a while to debug, and given the old message I assume
I'd already hit it and just been too lazy to fix. This requires
updating a bunch of tests, as fwbot pings are sent *to*
`fp_github_name`, but sent *from* the reference user (because that's
the key we set).

Note: noupdate on CSV files doesn't seem to work anymore, which isn't
great. But instead set tracking on the template's templates; it's not
quite as good but should be sufficient.

Fixes #769
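
A heavily simplified sketch of the bespoke-model idea, with made-up model and
field names: plain GFM text plus simple placeholder substitution, no qweb.

```python
from odoo import fields, models

class FeedbackTemplate(models.Model):
    _name = 'runbot_merge.message.template'  # hypothetical name
    _description = 'pseudo-template for mergebot feedback messages'

    reference = fields.Char(required=True, index=True)
    template = fields.Text(required=True, tracking=True)

    def _format(self, **kwargs):
        # plain substitution, the output is GFM sent as a github comment
        return self.template.format_map(kwargs)
```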
2023-06-14 16:01:45 +02:00
Xavier Morel
2009177ada [IMP] *: allow disabling staging on branch, remove fp target flag
- currently disabling staging only works globally; allow disabling on
  a single branch

  - use a toggle
  - remove a pair of tests which work specifically with `fp_target`,
    can't work with `active` (probably)
  - cleanup search of possible and active stagings, add relevant
    indexes and use direct search of relevant branches instead of
    looking up from the project

- also use toggle button for `active` on branches
- shitty workaround for upgrading DB: apparently mail really wants to
  have a `user_id` to do some weird thing, so need to re-add it after
  resetting everything

Fixes #727
2023-06-14 16:01:42 +02:00
Xavier Morel
a563fcf907 [REM] forwardport: fp_sequence field
It's almost certainly not useful, save as a minor convenience for
tests: decorrelating the branch sequence and the fp sequence seems
like it would only be extremely confusing, and on the mergebot all the
fp_sequence values are set to the default while the sequence values
are set to something useful and sensible (kinda).

Fixes #584
2022-12-08 10:46:22 +01:00
Xavier Morel
ebbe77b849 [IMP] runbot_merge: reorg test
Test seems to fail from time to time with one of the PRs getting
lost. Tried to move code around trying to investigate, can't repro
anymore. Possibly a race condition because the `to_pr` call was
performed too early, before the webhook had run (and thus before the
PR object had been created on the odoo side).

By moving the `to_pr` calls to after the cron run, we really ensure
the webhooks will have run.

Also update `to_pr` to ensure exactly one PR was retrieved, as
currently nothing is checked so we might have gotten none (yet), which
should be noticed early and clearly. In theory this also guards
against multiple PRs, but PRs should be unique on (repo, number).
2022-08-05 15:35:51 +02:00
Xavier Morel
66c2bdc25b [IMP] runbot_merge: error reporting on fast-forward failure
When a staging's fast-forward (to the target branch) failed, the
mergebot would provide no useful information on the staging or the
dashboard.

This is because the reason was set to the HTTP status, which in case
of a fast-forward error is just "422 client error: unprocessable
entity".

Improve this by trying to parse github's response in that case, and
using the JSON error message as failure reason. This provides more
useful failure information like "update is not a fast forward",
"reference does not exist", or a branch protection failure.

Closes #591
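
In practice this amounts to reading the `message` field of github's error
response instead of stopping at the HTTP status; a hedged sketch (the session
and token handling are illustrative):

```python
import requests

def fast_forward(session: requests.Session, repo: str, branch: str, sha: str) -> None:
    r = session.patch(
        f'https://api.github.com/repos/{repo}/git/refs/heads/{branch}',
        json={'sha': sha, 'force': False},
    )
    if r.status_code != 200:
        try:
            # e.g. "Update is not a fast forward" or "Reference does not exist"
            reason = r.json()['message']
        except Exception:
            reason = f'{r.status_code} {r.reason}'  # fall back to the HTTP status
        raise Exception(f'failed to fast-forward {branch}: {reason}')
```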
2022-06-30 15:07:49 +02:00
Xavier Morel
a7808425e3 [IMP] runbot_merge: reject review without email
If a reviewer doesn't have an email set, the Signed-Off-By is an
`@users.noreply.github.com` address which just looks weird in the
final result.

Initially the thinking was that emails would be required for users to
*be* reviewers or self-reviewers, but since those are now o2ms / m2ms
it's a bit of a pain in the ass.

Instead, provide an action to easily try and fetch the public email of
a user from github.

Fixes #531
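
The fetch itself is just github's public users endpoint; a sketch, with the
action/partner wiring left out:

```python
import requests

def fetch_public_email(login: str, token: str):
    r = requests.get(
        f'https://api.github.com/users/{login}',
        headers={'Authorization': f'token {token}'},
    )
    r.raise_for_status()
    return r.json().get('email')  # None if the user has no public email set
```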
2021-10-20 14:36:50 +02:00
Xavier Morel
6096cc61a9 [IMP] *: tag all rebased commits with source PRev
Although it's possible to find what PR a commit was part of with a bit
of `git log` magic (e.g. `--ancestry-path COMMIT.. --reverse`), it's
not the most convenient, and many people don't know about it, leading
them to various debatable decisions to try and mitigate the issue,
such as tagging every commit in a PR with the PR's identity, which
then leads github to spam the PR itself with pingbacks from its own
commits. Which is great.

Add this information to the commits when rebasing them (and *only*
when rebasing them), using a `Part-of:` pseudo-header.

Fixes #482
2021-08-24 15:39:47 +02:00
Xavier Morel
d249417ceb [FIX] forwardport: fix deduplication of authorship in multi-pr conflict
a45f7260fa had intended to use the
original authorship information for the conflict commit even if there were
multiple commits, as long as there was only one author (/ committer)
for the entire sequence.

Sadly the deduplication was buggy as it took the *authorship date* into
account, which basically ensured commits would never be considered as
having the same authorship outside of tests (where it was possible for
commits to be created at the same second).

Related to #505
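
The corrected deduplication boils down to comparing only name and email, never
the date; a sketch with an illustrative commit structure:

```python
def common_authorship(commits):
    authors = {(c['author']['name'], c['author']['email']) for c in commits}
    return authors.pop() if len(authors) == 1 else None
```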
2021-08-24 15:39:47 +02:00
Xavier Morel
6a8c13b1ef [IMP] runbot_merge: show linked PRs during staging and after merging
Previously, a PR's status page would only show the linked / related
PRs when `open`.

Since the relations between PRs remains useful, also make this
information available during staging and after merging.

Fixes #463
2021-08-24 15:39:47 +02:00
Xavier Morel
5c19342bf6 [CHG] runbot_merge, forwardport: remove labelling
Because github materialises every label change in the
timeline (interspersed with comments), the increasing label churn
contributes to PRs being difficult to read and review.

This change removes the update of labels on PRs; instead the mergebot
will automatically send a comment to created PRs, serving as a
notification that the PR was noticed & providing a link to the
mergebot's dashboard for that PR, where users should be able to see the
PR state in detail in case they wonder what's what.

Lots of tests had to be edited to:

- remove any check on the labels of the PR
- add checks on the PR dashboard (to ensure that they're at least on
  the correct "view")
- add a helper to handle the comment now added to every PR by the 'bot
- since that helper is needed by both mergebot and forwardbot, the
  utils modules were unified and moved out of the odoo modules

Probably relevant note: no test was added for the dashboard
ACL, though since I had to explicitly unset the group on the repo used
for tests for things to work it looks to me like it at least excludes
people just fine.

Fixes #419
2020-11-20 07:41:54 +01:00