... by default. Optional statuses we haven't at least received a
`pending` for are irrelevant to the PR's state, so should remain
hidden.
This subtlety was missed when fixing #1062.
Fixes#1066
All `pull_request` events seem to provide the `commits` count
property. As such we can use them all to check the `squash` state even
if we don't otherwise care for the event.
Also split out notifying an approved pull request about its missing
merge method into a separate cron from the one notifying a PR that its
siblings are ready.
Fixes#1036
- Update `get_reviewers` to return all active users, as that makes
more sense for the server-side operations; this makes it redundant
with `list_users`, so eventually the internals should be fixed and it
should be removed.
- Update `remove_users` to deactivate the user as well as move them to
portal: since the mergebot doesn't use portal, keeping enabled portal
users doesn't make much sense.
- Reorganise `provision_user` to make the update path more obvious.
Fixes#968
Having the staged and merged heads via an easy to reach endpoint is
nice, but having *just* the hashes is not, as it requires basically
exhaustively checking commits against repos to see which is
where. That's annoying.
Make the head serializations a map of repository to commit
hash. That's a lot more convenient.
- use lingo `staged` and `merged` for clarity (instead of `heads` and
`commits`)
- convert from json-rpc to regular HTTP to make it easier to call
- add the staged and merged heads when returning staging data
- return more complete objects (rather than a list of ids) when
matching stagings based on commits / heads
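For illustration, the new serialization shape might look like this (a
hypothetical example, repository names and hashes made up):

```python
# hypothetical shape of the new serialization: repository name -> hash
staging = {
    'staged': {'odoo/odoo': '0123abc', 'odoo/enterprise': '4567def'},
    'merged': {'odoo/odoo': '89abcde', 'odoo/enterprise': 'f0123ab'},
}
```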
Fixes#1018
The old controller system required `type='json'` which only did
JSON-RPC and prevented returning proper responses.
As of 17.0 this is not the case anymore, `type='http'` controllers can
get `content-type: application/json` requests just fine and return
whatever they want. So change that:
- Drop `type='json'` in favour of `type='http'`.
- Return `Response` objects with nice status codes (and text
mimetypes, as otherwise werkzeug defaults to html).
- Update ping to bypass normal flow as otherwise it requires
authentication and event sources which is annoying (it might be a
good idea to have both in order to check for configuration, however
it's not possible to just send a ping via the webhook UI on github
so not sure how useful that would be).
- Add some typing and improve response messages while at it.
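A minimal sketch of what such a controller can look like (route and
response body are illustrative, not the actual implementation):

```python
from odoo import http
from odoo.http import Response

class Hooks(http.Controller):
    @http.route('/runbot_merge/hook', type='http', auth='none',
                methods=['POST'], csrf=False)
    def hook(self):
        # explicit status code and a text mimetype, so werkzeug doesn't
        # default to html
        return Response('OK: processed', status=200, mimetype='text/plain')
```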
Note: some "internal" errors (e.g. ignoring event actions) are
reported as successes because as far as I can tell webhooks only
support success (2xy) / failure (4xy, 5xy) and an event that's ignored
is not really *failed* per se.
Some things are reported as failures even though they are borderline,
because that can be useful to see what's happening, e.g. a comment too
large or an inability to lock rows.
Fixes#1019
In that case, ignore the reopen, close the PR, and tell the idiot to
fuck off.
Also the case where a PR is reopened while its batch is staged was
already handled, but probably not tested: it was implicit in
forcefully updating the HEAD of the PR, which triggers an unstage
since c8a06601a7.
Now that scenario is tested, which should lower the odds of breaking
it in the future.
Fixes#965
If a branch is created via SQL directly, one might forget to set its
`write_date`, leading the computation of the `last_modified` to be
*really* unhappy as it's asked to compare a bunch of datetime objects
to `False` in order to compute the `max`.
Substitute missing `write_date` with `datetime.min` so they are
extremely unlikely to influence the computation until and unless the
branch gets updated.
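A minimal sketch of the substitution, assuming `last_modified` is
derived from the `max` of the records' `write_date`s:

```python
from datetime import datetime

def last_modified(branches):
    # branches created straight through SQL may lack a write_date;
    # datetime.min makes them lose the max() instead of breaking it
    return max((b.write_date or datetime.min) for b in branches)
```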
If a comment causes an unknown PR to be fetched, it's a bit odd to
ping the author (and possibly reviewer) anyway as they're not super
concerned (and technically we could be ignoring the purported /
attempted reviewer).
So if a fetch job was created because of a comment, remember the
comment author and ping *them* instead of using the default ping
policy.
Fixes#981
As far as I can tell, serialization failures are properly handled:
- In `handle_status` we let the http layer retry the query, which
pretty much always succeeds.
- In `Commit.notify`, we rollback the application of the current
commit, meaning it'll be processed by the next run of the cron,
which also seems to succeed every time (that is going through the
log I pretty much never notice the same commit being serialization
failure'd twice in a row).
We can trigger that cron for faster action. This last item is not
entirely necessary, as statuses should generally come in fast,
especially if we have concurrency errors, but it can't hurt.
This means the only genuine issue is... sql_db logging a "bad query"
every time there's a serialization failure.
In `handle_status`, just suppress the message outright, if there's an
error other than serialization the http / dispatch layer should catch
and log it.
In `Commit._notify` things are slightly more difficult as the execute
is implicit (`flush` -> `_write` -> `execute`) so we can't pass the
flag by parameter. One option would be to set and unset
`_default_log_exception`, but it would either be a bit dodgy or it
would require using a context manager and increasing the indentation
level (or using a custom context manager).
Instead just `mute_logger` the fucking thing. It's a bit brutish and
mostly used in tests, but not just, and feels like the least bad
option here...
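A sketch of the muting, assuming the offending logger is
`odoo.sql_db` and that the implicit execute happens on flush (model
name assumed):

```python
from odoo.tools import mute_logger

# brutish but effective: silence the spurious "bad query" while the
# implicit flush (-> _write -> execute) may hit a serialization failure
with mute_logger('odoo.sql_db'):
    env['runbot_merge.commit'].flush_model()
```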
Closes#805
For the longest time Github's `change` key was borked when
transitioning a description to and from empty. They fixed that during
2023, which I had already seen and accounted for in
DC (xmo-odoo/dummy_central@1ebed9d418
and xmo-odoo/dummy_central@937e87c2a4)
but this had yet to be taken into account by the mergebot.
This is now done, the code is functionally reverted to what it was
before I realised `changes` was hosed and moved off of it in
3da1874196.
Fixes#743
Because of the false negatives due to github's reordering of events on
retargeting, blocking merge methods can be rather frustrating for the
user, as what's happening and how to solve it isn't clear in that case.
Keep the warnings and all, but remove the blocking factor: if a PR
doesn't have a merge method and is not single-commit, just skip it on
staging. This way, PRs which are actually single-commit will stage
fine even if the mergebot thinks they shouldn't be.
Fixes#957
By updating the staging timeout every time we run `_compute_state`
while still having a `pending` status, we'd actually bump the timeout
*on every successful CI* except for the last one. That was never the
intention and can add an hour or two to the mergebot-side timeout.
Instead, add an `updated_at` attribute to statuses (name taken from
the webhook payload even though we don't use that), read it when we
see `pending` required statuses, and update the staging timeout based
on that if necessary.
That way as long as we don't get *new* pending statuses, the timeout
doesn't get updated.
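A rough sketch of the intended behaviour (`updated_at` per the
webhook payload, the helper and the timeout delta are made up):

```python
from datetime import datetime, timedelta

def bump_staging_timeout(timeout, pending_statuses, ci_delay=timedelta(hours=2)):
    """Extend the timeout only when a pending status is *newer* than
    the current basis, so repeated successes don't keep pushing it."""
    for st in pending_statuses:
        # assumes an ISO-8601 timestamp, as sent by github
        candidate = datetime.fromisoformat(st['updated_at']) + ci_delay
        if candidate > timeout:
            timeout = candidate
    return timeout
```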
Fixes#952
And unconditionally unstage when the HEAD of a PR is synchronised.
While a rebuild on a PR which was previously staged can be a false
positive (e.g. because it hit a non-deterministic test the second time
around) it can also be legitimate e.g. auto-rebase of an old PR to
check it out. In that case the PR should be unstaged.
Furthermore, as soon as the PR gets rebuilt it goes back into
`approved` (because the status goes to pending and currently there's
no great way to suppress that in the rebuild case without also fucking
it up for the sync case). This would cause a sync on the PR to be
missed (in that it would not unstage the PR), which is broken. Fix
that by not checking the state of the PR beforehand, it seems to be an
unnecessary remnant of older code, and not really an optimisation (or
at least one likely not worth bothering with, especially as we then
proceed to perform a full PR validation anyway).
Fixes#950
Broken (can't run odoo at all):
- In Odoo 17.0, the `pre_init_hook` takes an env, not a cursor, update
`_check_citext`.
- Odoo 17.0 rejects `@attrs` and doesn't say where they are or how to
update them, fun, hunt down `attrs={'invisible': ...` and try to fix
them.
- Odoo 17.0 warns on non-multi creates, update them, most were very
reasonable, one very wasn't.
Test failures:
- Odoo 17.0 deprecates `name_get` and doesn't use it as a *source*
anymore, replace its overrides by overrides of `_compute_display_name`
(sketch below).
- Multiple tracking changes:
- `_track_set_author` takes a `Partner` not an id.
- `_message_compute_author` still requires overriding in order to
handle record creation, which in standard doesn't support author
overriding.
- `mail.tracking.value.field_type` has been removed, the field type
now needs to be retrieved from the `field_id`.
- Some tracking orderings have changed and require adjusting a few
tests.
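A hypothetical sketch of the `name_get` migration mentioned above
(model name and display format are illustrative):

```python
from odoo import models

class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'  # assumed model name

    # 17.0: override the compute instead of name_get
    def _compute_display_name(self):
        for pr in self:
            pr.display_name = f'{pr.repository.name}#{pr.number}'
```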
Also added a few flushes before SQL queries which are not (obviously
at least) at the start of a cron or controller, no test failure
observed but better safe than sorry (probably).
odoo/odoo@1341d52 added native support for tracking / overriding the
tracking message, so we don't need `change-message` anymore. The calls
had to be modified a bit as `_track_set_log_message` has to be invoked
on the records for which the tracking message is being set / updated.
Sadly the same for authors was only added in 16.3 *and* it's
unsuitable for our needs as we want to set the author without knowing
the scope of affected records (at least in the controller).
That way hopefully in 17.0 we can remove the `_message_compute_author`
override and things should work out. And maybe for 19.0 we can get the
ability to set a per-model (or even global) fallback author into
standard.
Breakages:
- the entire http.py API was updated, requiring fixes to the uses of
`request.jsonrequest` and to the `WebRequest` patches used to hook in
sentry
- `fontawesome` was moved
- `*[@groups]` are now completely removed from the view if not
matching, so any field inside of them which needs to be used outside
(e.g. attrs) has to be added as invisible outside the element
- discuss removed the mail tracking value helpers from RPC in
odoo/odoo#88547, so reimplement locally (and better)
Detailed statuses are useful in the actual PR dashboard as that allows
direct access to the builds, however in the PR where it's only a
picture it's useless, so fold that information. Also fold it when a PR
is staged.
And while at it add a note / sub-title that the PR is staged.
Fixes#919
The PR dashboard picture provides a great overview of the batch state
both horizontally and vertically *but* apparently people can't for the
life of them go check the actual dashboard when things don't line
up. So expand the "current batch" to a view more similar to the
dashboard *page*, which gives details of the sub-checks being
performed and whether or not they are fulfilled.
Fixes#908
The previous version worked but was extremely plodding and
procedural. Initially I wanted to compose the table in a single pass
but that turns out not to really be possible as the goal for #908 is
to have a "drawer" for extended information about the current batch:
this means different cells of the same row can have different heights,
so we can't one-pass the image either vertically (later cells of the
current column might be wider) or horizontally (later cells of the
current row might be taller).
However what can be done is give the entire thing *structure*,
essentially defining a very cut down and ad-hoc layout system before
committing the layout to raster.
This also deduplicates formatting and labelling information which was
previously in both the computation first step and the rasterisation
second step.
Extract the current table generation into a separate function, and
add another function to render an alert / list of PR targets if the
batch is not consistent.
This means an extra pass on the table contents to precompute the
image size, but we can delay loading fonts until after etag
computation, which might be a bigger gain all things considered: there
aren't many cells in most PR tables, but fonts are rather expensive to
load (I should probably load them at import and cache them in the
module...)
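A simplified sketch of the conditional-request idea, with Pillow's
default font standing in for the real rendering (everything here is
illustrative):

```python
import hashlib
from PIL import Image, ImageDraw, ImageFont

def render(cells, if_none_match=None):
    # the etag only depends on the cell contents, so compute it before
    # paying for font loading or rasterisation
    etag = hashlib.sha1(repr(cells).encode()).hexdigest()
    if if_none_match == etag:
        return None, etag  # caller answers 304 Not Modified
    font = ImageFont.load_default()  # stand-in for the expensive truetype load
    img = Image.new('RGB', (300, 20 * len(cells)), 'white')
    draw = ImageDraw.Draw(img)
    for i, text in enumerate(cells):
        draw.text((5, 20 * i), text, font=font, fill='black')
    return img, etag
```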
Comments which are too long cause `logging` itself to crash, which
kinda sucks. Besides, long comments seem very unlikely to contain
anything for the mergebot to act on.
So just ignore them at intake. Limit is set to 5000 because there
needs to be a limit somewhere and that's about the extent of it.
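The check itself is trivial; a sketch (constant and function names
hypothetical):

```python
MAX_COMMENT_SIZE = 5000  # arbitrary, but there needs to be a limit somewhere

def accept_comment(body: str) -> bool:
    # oversized comments crash logging and carry no commands anyway
    return len(body) <= MAX_COMMENT_SIZE
```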
d4fa1fd353 added tracking to changes
from *comments* (as well as a few hacks around authorship transfer),
however it missed two things:
First, it set the `change-author` during comments handling only, so
changes from the `PullRequest` hook e.g. open, synchronise, close,
edit, don't get attributed to their actual source, and instead just
fall back to uid(1). This is easy enough to fix as the `sender` is
always provided, that can be resolved to a partner which is then set
as the author of whatever changes happen.
Second, I actually missed one of the message hooks: there's both
`_message_log` and `_message_log_batch` and they don't call one
another, so both have to be overridden in order for tracking to be
consistent. In this case, specifically, the *creation* of a tracked
object goes through `_message_log_batch` (since that's a very generic
message and so works on every tracked object created during the
transaction... even though batch has a message per record anyway...)
while *updates* go through `_message_log`.
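A sketch of overriding both hooks (the context key is an assumption
based on the description above):

```python
from odoo import models

class Thread(models.AbstractModel):
    _inherit = 'mail.thread'

    def _message_log(self, *args, **kwargs):
        author = self.env.context.get('change-author')  # assumed key
        if author:
            kwargs['author_id'] = author
        return super()._message_log(*args, **kwargs)

    def _message_log_batch(self, *args, **kwargs):
        # does not call _message_log, so it needs the same treatment
        author = self.env.context.get('change-author')
        if author:
            kwargs['author_id'] = author
        return super()._message_log_batch(*args, **kwargs)
```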
Fixes#895
The commit cron needs to be triggered any time we:
- create a new commit
- update a commit to set its `to_check`
So do that in create and write, as well as in the SQL query in the
webhook handler.
This should mean we don't need the periodic cron anymore, but for
safety's sake keep it running every 30 minutes for now.
TBF even if we miss triggers, the next `status` webhook hitting will
check all the relevant commits anyway...
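A sketch of the triggering via `ir.cron._trigger` (model name and
cron xmlid assumed):

```python
from odoo import api, models

class Commit(models.Model):
    _inherit = 'runbot_merge.commit'  # assumed model name

    @api.model_create_multi
    def create(self, vals_list):
        records = super().create(vals_list)
        self.env.ref('runbot_merge.process_updated_commits')._trigger()
        return records

    def write(self, vals):
        res = super().write(vals)
        if vals.get('to_check'):
            self.env.ref('runbot_merge.process_updated_commits')._trigger()
        return res
```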
Replace the unclear "unchecked" and "unreviewed" by "missing statuses"
and "missing r+", which are hopefully clearer as they better match
other lingo.
Also increase font for attributes, as size 10 was a bit small.
And finally add staging state to caching key, to differentiate "ready"
from "staged" pictures in gh's cache. "ready" should not be necessary
as it ought to be implied by the label.
Currently this just silently returns a 404. Since repos are gated by
default (only accessible to internal users) this can get very
confusing when trying to set up a new repo or when forgetting this
information when writing tests.
The dashboard can be a bit unclear as to the state of a PR when
everything's gone well. Make it more clear / explicit that it's ready
or staged.
Fixes#888
Currently webhook secrets are configured per *project* which is an
issue both because different repositories may have different
administrators and thus creates safety concerns, and because multiple
repositories can feed into different projects (e.g. on mergebot,
odoo-dev/odoo is both an ancillary repository to the main RD project,
and the main repository to the minor / legacy master-wowl
project). This means it can be necessary for multiple projects to
share the same secret, which in turn forces that secret onto more
repositories, compounding the first concern.
This is a pain in the ass, so just detach secrets from projects and
link them *only* to repositories, it's cleaner and easier to manage
and set up progressively.
This requires a lot of changes to the tests, as they all need to
correctly configure the signaling.
For `runbot_merge` there was *some* setup sharing already via the
module-level `repo` fixtures, those were merged into a conftest-level
fixture which could handle the signaling setup. A few tests which
unnecessarily set up repositories ad-hoc were also moved to the
fixture. But for most of the ad-hoc setup in `runbot_merge`, as well
as in `forwardport` where it's all ad-hoc, event sources setup was
just appended as-is. This should probably be cleaned up at some point,
with
the various requirements collected and organised into a small set of
fixtures doing the job more uniformly.
Fixes#887
Displays the entire batch set as a table, along both the repository
axis (linked PRs) and the branch axis (forward ports). Should provide
a much more complete overview.
Adds a copy of the dashboard as a raster render, to link from the PR:
as usual SVG is shit, content-based viewboxes are hell and having to
duplicate the entire CSS because `<img/>`-linked CSS can't run is
gross. And there's no payoff since the image is not interactible
anyway.
Performing manual ad-hoc table rendering via pillow is not
significantly worse, it works fine and it's possible to do *really*
good conditional request handling (hopefully) because I've basically
got all the information I need right here.
In fact it might make sense to upgrade the regular HTML page with
similar conditional request handling, at least for the last-update
bit if not the etag.
Fixes #771, fixes #770
Staging readiness is a batch-level concern, and many of the markers
are already there though a few need to be aggregated from the PRs. As
such, staging has no reason to be performed in terms of PRs anymore,
it should be performed via batches directly.
There is a bit of a mess in order not to completely fuck up when
retargeting PRs (implicitly via the freeze wizard, or explicitly) as
for now we're moving PRs between batches in order to keep the
batches *mostly* target-bound.
Some of the side-effects of managing the coherence of the targeting
and moving PRs between batches are... not great. This might need to be
revisited and cleaned up with those scenarios better considered.
This commit revisits the commands set in order to make it more
regular, and limit inconsistent command-sets, although it includes
pseudo-command aliases for common tasks now removed from the core set.
Hard Errors
===========
The previous iteration of the commands set would ignore any
non-command term in a command line. This has been changed to a hard
error (ignoring the entire line) if any command is unknown or
invalid.
This fixes inconsistent / unexpected interpretations where a user
sends a command, then writes a novel on the same line, some words of
which happen to *also* be commands, leading to merge states they did
not expect. They should now be told to fuck off.
Priority Restructuring
----------------------
The numerical priority system was pretty messy in that it confused
"staging priority" (in ways which were not entirely straightforward)
with overrides to other concerns.
This has now been split along all the axes, with separate command
subsets for:
- staging prioritisation, now separated between `default`, `priority`,
and `alone`,
- `default` means PRs are picked by an unspecified order when
creating a staging, if nothing better is available
- `priority` means PRs are picked first when staging, however if
`priority` PRs don't fill the staging the rest will be filled with
`default`, this mode did not previously exist
- `alone` means the PRs are picked first, before splits, and only
`alone` PRs can be part of the staging (which usually matches the
mode name)
- `skipchecks` overrides both statuses and approval checks, for the
batch, something previously implied in `p=0`, but now
independent. Setting `skipchecks` basically makes the entire batch
`ready`.
For consistency this also sets the reviewer implicitly: since
skipchecks overrides both statuses *and approval*, whoever enables
this mode is essentially the reviewer.
- `cancel` cancels any ongoing staging when the marked PR becomes
ready again, previously this was also implied (in a more restricted
form) by setting `p=0`
FWBot removal
=============
While the "forwardport bot" still exists as an API level (to segregate
access rights between tokens) it has been removed as an interaction
point, as part of the modules merge plan. As a result,
fwbot stops responding
----------------------
Feedback messages are now always sent by the mergebot, the
forward-porting bot should not send any message or notification
anymore.
commands moved to the merge bot
-------------------------------
- `ignore`/`up to` simply change bots
- `close` as well
- `skipci` is now a choice / flag of an `fw` command, which denotes
the forward-port policy,
- `fw=default` is the old `ci` and resets the policy to default,
that is wait for the PR to be merged to create forward ports, and
for the required statuses on each forward port to be received
before creating the next
- `fw=skipci` is the old `skipci`, it waits for the merge of the
base PR but then creates all the forward ports immediately (unless
it gets a conflict)
- `fw=skipmerge` immediately creates all the forward ports, without
even waiting for the PR to be merged
This is a completely new mode, and may be rather broken as until
now the 'bot has always assumed the source PR had been merged.
approval rework
---------------
Because of the previous section, there is no distinguishing feature
between `mergebot r+` = "merge this PR" and `forwardbot r+` = "merge
this PR and all its parents" (with different access rights).
As a result, the two have been merged under a single `mergebot r+`
with heuristics attempting to provide the best experience:
- if approving a non-forward port, the behavior does not change
- else, with review rights on the source, all ancestors are approved
- else, as author of the original, approves all ancestors which descend
from a merged PR
- else, approves all ancestors up to and including the oldest ancestor
to which we have review rights
Most notably, the source's author is not delegated on the source or
any of its descendants anymore. This might need to be revisited if it
proves too restrictive.
For the very specialized need of approving a forward-port *and none of
its ancestors*, `review=` can now take a comma (`,`) separated list of
pull request numbers (github numbers, not mergebot ids).
Computed State
==============
The `state` field of pull requests is now computed. Hopefully this
makes the status more consistent and predictable in the long run, and
importantly makes status management more reliable (because the
reference data gets updated naturally, flowing into the state).
For now however it makes things more complicated as some of the states
have to be separately signaled or updated:
- `closed` and `error` are now separate flags
- `merge_date` is pulled down from forwardport and becomes the
transition signal for ready -> merged
- `reviewed_by` becomes the transition signal for approval (might be a
good idea to rename it...)
- `status` is computed from the head's statuses and overrides, and
*that* becomes the validation state
Ideally, batch-level flags like `skipchecks` should be on, well, the
batch, and `state` should have a dependency on the batch. However
currently the batch is not a durable / permanent member of the system,
so it's a PR-level flag and a messy pile.
One notable change is that *forcing* the state to `ready` now does that
but also sets the reviewer, `skipchecks`, and overrides to ensure the
API-mediated readying does not get rolled back by e.g. the runbot
sending a status.
This is useful for a few types of automated / programmatic PRs
e.g. translation exports, where we set the state programmatically to
limit noise.
recursive dependency hack
-------------------------
Given a sequence of PRs with an override of the source, if one of the
PRs is updated its descendants should not have the override
anymore. However if the updated PR gets overridden, its descendants
should have *that* override.
This requires some unholy manipulations via an override of `modified`,
as the ORM supports recursive fields but not recursive
dependencies (on a different field).
unconditional followup scheduling
---------------------------------
Previously, scheduling the forward-port followup was contingent on the FW
policy, but it's not actually correct if the new PR is *immediately*
validated (which can happen now that the field is computed, if there
are no required statuses *or* all of the required statuses are
overridden by an ancestor) as nothing will trigger the state change
and thus scheduling of the fp followup.
The followup function checks all the properties of the batch to port,
so this should not result in incorrect ports, although it's a bit
more expensive and will lead to more spam.
Previously this would not happen because on creation of a PR the
validation task (commit -> PR) would still have to execute.
Misc Changes
============
- If a PR is marked as overriding / canceling stagings, it now does
so on retry, not just when initially set.
This was not handled at all previously, so a PR in P0 going into
error due to e.g. a non-deterministic bug would be retried and still
p=0, but a current staging would not get cancelled. Same when a PR
in p=0 goes into error because something failed, then is updated
with a fix.
- Add tracking to a bunch of relevant PR fields.
Post-mortem analysis currently generally requires going through the
text logs to see what happened, which is annoying.
There is a nondeterminism / inconsistency in the tracking which
sometimes leads the admin user to trigger tracking before the bot
does, leading to the staging tracking being attributed to them
during tests; this is shoved under the carpet by ignoring the user to
whom that tracking is attributed.
When multiple users update tracked fields in the same transaction
all the changes are attributed to the first one having triggered
tracking (?), I couldn't find why the admin sometimes takes over.
- added and leveraged support for enum-backed selection fields
- moved various fields from forwardport to runbot_merge
- fix a migration which had never worked and which never ran (because
I forgot to bump the version on the module)
- remove some unnecessary intermediate de/serialisation
fixes#673, fixes#309, fixes#792, fixes#846 (probably)
If an untracked PR is closed, especially on an inactive or untracked
branch, the closer (or author) almost certainly doesn't care to
receive 3 different notifications on the subject.
The fix requires a schema change in order to track that we're fetching
the PR due to a `closed` event, as in other cases we may still want to
notify the user that we received the request (and it just happened to
resolve to a closed PR).
Fixes#857
- correctly handle projects without a secret set, we don't want the
requests to blow up by trying to `strip()` a `False` or `None`, that
is dumb, who would do that?
- provide better reporting on signature mismatch: which repo we tried
to access, and the full list of headers
- log when there was no signature matching, either because there was
no signature in the request and no secret on the project, or because
the request is signed but no secret is configured on the repo
The signature validation code seems correct, but there are validation
failures in production, so increase logging around webhook requests to
try and diagnose things better:
- dump the *entire* body to the github_requests logfile
- add the received & computed signatures to the log error
Per the Github webhook documentation:
1. sha1 signatures are deprecated, github recommends sha256 (though
that's unlikely to be a concern anyway), and dummy-central supports
both so it should be no issue.
> If possible, we recommend that you use the x-hub-signature-256
> header for improved security.
2. Non-ascii secrets are supported and should be utf8-encoded to
compute signatures... that's not actually documented as github docs
only mention payload encoding but it seems to make sense anyway.
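A minimal sketch of the `x-hub-signature-256` validation described
above (function name made up):

```python
import hashlib
import hmac

def signature_ok(secret: str, payload: bytes, header: str) -> bool:
    # per point 2, the secret is utf8-encoded to compute the signature
    digest = hmac.new(secret.encode('utf-8'), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f'sha256={digest}', header or '')
```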
Also improve the warning message by replacing the signature (which is
useless) with the delivery id (which could allow introspecting the
hook or something).
`/runbot_merge/stagings`
========================
This endpoint is a reverse lookup from any number of commits to a
(number of) staging(s):
- it takes a list of commit hashes as either the `commits` or the
`heads` keyword parameter
- it then returns the stagings which have *all* these commits as
respectively commits or heads, if providing all commits for a
project the result should always be unique (if any)
- `commits` are the merged commits, aka the stuff which ends up in the
actual branches
- `heads` are the staging heads, aka the commits at the tip of the
`staging.$name` branches, those may be the same as the corresponding
commit, or might be deduplicator commits which get discarded on
success
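A hypothetical reverse lookup using Odoo's JSON-RPC envelope, since
the endpoint was `type='json'` at this point (host and hashes made
up):

```python
import requests

resp = requests.post(
    'https://mergebot.example.com/runbot_merge/stagings',
    json={'jsonrpc': '2.0', 'method': 'call',
          'params': {'heads': ['0123abc', '4567def']}},
)
stagings = resp.json()['result']
```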
`/runbot_merge/stagings/:id`
============================
Returns a list of all PRs in the staging, grouped by batch (aka PRs
which have the same label and must be merged together).
For each PR, the `repository` name, `number`, and `name` in the form
`$repository#$number` get returned.
`/runbot_merge/stagings/:id1/:id2`
==================================
Returns a list of all the *successfully merged* stagings between `id1`
and `id2`, from oldest to most recent. Individual records have the
form:
- `staging` is the id of the staging
- `prs` is the contents of the previous endpoint (a list of PRs
grouped by batch)
`id1` *must* be lower than `id2`.
By default, this endpoint is inclusive on both ends, the
`include_from` and / or `include_to` parameters can be passed with the
`False` value to exclude the corresponding bound from the result.
Related to #768
Allow filtering stagings by state (success or failure), and provide a
control to explicitly update the staging date limit.
Should make it easier to drill through stagings when looking for
specific information.
Related to #751
Fix the outstanding query to use positive `state` filtering instead
of negative, matching 3b52b1aace8674259812a76b1566260937dbcacb.
Also manually create a map of stagings (grouped by branch) sharing a
single prefetch set.
For odoo, the mergebot home page has 12 branches in the odoo project
and 8 in spreadsheet, 6 stagings each. This means 120 queries to
retrieve all the heads (Odoo stagings have 5 heads and spreadsheet's
have 1, but that seems immaterial).
By fixing `_compute_statuses` and creating a single prefetch set for
all stagings of all branches we can fetch all the commits in a single
query instead of 120.
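A sketch of the shared prefetch set via `with_prefetch` (variable
names illustrative):

```python
# one browse over *all* staging ids, whose prefetch set is then shared
# by every per-branch subset, so the heads are read in a single query
Stagings = env['runbot_merge.stagings']
all_ids = [id_ for ids in ids_by_branch.values() for id_ in ids]
prefetch_ids = Stagings.browse(all_ids)._prefetch_ids
stagings_by_branch = {
    branch: Stagings.browse(ids).with_prefetch(prefetch_ids)
    for branch, ids in ids_by_branch.items()
}
```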
A few cases of conflict were missing from the provisioning
handler.
They can't really be auto-fixed, so just output a warning and ignore
the entry, that way the rest of the provisioning succeeds.
Currently sentry is only hooked from the outside, which doesn't
necessarily provide sufficiently actionable information.
Add a few hooks to (try and) report odoo / mergebot metadata:
- add the user to WSGI transactions
- add a transaction (with users) around crons
- add the webhook event info to webhook requests
- add a few spans to the long-running crons, when they cover multiple
units per iteration (e.g. a span per branch being staged)
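A rough sketch of the cron instrumentation with `sentry_sdk` (names
illustrative):

```python
import sentry_sdk

def run_staging_cron(env, branches):
    # a transaction around the cron, with a span per branch being staged
    with sentry_sdk.start_transaction(op='cron', name='runbot_merge.staging'):
        sentry_sdk.set_user({'username': env.user.login})
        for branch in branches:
            with sentry_sdk.start_span(op='staging', description=branch.name):
                try_staging(branch)  # hypothetical helper
```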
Closes#544
- move sentry configuration and add exception-based filtering
- clarify and reclassify (e.g. from warning to info) a few messages
- convert assertions in rebase to MergeError so they can be correctly
logged & reported, and ignored by sentry, also clarify them
(especially the consistency one)
Related to #544
The current system makes it hard to iterate on feedback messages and
make them clearer, this should improve things a touch.
Use a bespoke model to avoid concerns with qweb rendering
complexity (we just want GFM output and should not need logic).
Also update the fwbot test setup to always configure an fwbot name,
in order to avoid ping messages closing the PRs they're talking
about. That took a while to debug, and given the old message I assume
I'd already hit it and just been too lazy to fix it. This requires
updating a bunch of tests as fwbot pings are sent *to*
`fp_github_name`, but sent *from* the reference user (because that's
the key we set).
Note: noupdate on CSV files doesn't seem to work anymore, which isn't
great. But instead set tracking on the template's templates, it's not
quite as good but should be sufficient.
Fixes#769
- github logins are case-insensitive: while the db field is CI, the
dict in which partners are stored for matching is not, and the caller
may not preserve casing.
Thus it's necessary to check the casefolded database values against
casefolded parameters, rather than exact matches (sketch after this
list).
- users may get disabled by mistake or when one leaves the project,
they may also get switched from internal to portal, therefore it is
necessary to re-enable and re-enroll them if they come back.
- while at it remove the user's email when they depart, as they likely
use an organisational email which they don't have access to anymore
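A sketch of the casefolded matching from the first item (the
`github_login` field per the mergebot's partner model, the rest
illustrative):

```python
# index partners by casefolded login so e.g. 'XMO-odoo' matches 'xmo-odoo'
by_login = {
    p.github_login.casefold(): p
    for p in partners
    if p.github_login
}
partner = by_login.get(incoming_login.casefold())
```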
Side-note, but remove the limit on the number of users / partners
being created at once: because there are almost no modules in the
mergebot's instance, creating partners goes quite fast (compared to a
full instance), thus the limitation is almost certainly unnecessary
(creating ~300 users seems to take ~450ms).
Fixes #776