`get_content` round-trips the text part through `ascii` with
`errors=replace`, so if the input is not ascii it screws it up
tremendously, which leads to either failing to apply patches (the more
likely outcome) or corrupting them.
`get_payload`, if called without `decode`, pretty much just returns
the payload unless it needs a decoding pass (e.g. because it contains
raw surrogates, but that should not be an issue for us). So this is
really what we want.
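A minimal sketch of the substitution, using stdlib `email` parsing
(the sample message is made up):

```python
from email import message_from_string, policy

# hypothetical patch mail with non-ascii content
patch = message_from_string(
    "Subject: [PATCH] fix accents\n"
    "\n"
    "très bien\n",
    policy=policy.default,
)

# get_content() runs the decode pass described above and can mangle
# non-ascii text; get_payload() without decode=True returns the
# payload essentially as-is
body = patch.get_payload()
```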
While at it, increase `patch`'s verbosity in case it can give us more
info.
If a branch is created via SQL directly, one might forget to set its
`write_date`, leading the computation of the `last_modified` to be
*really* unhappy as it's asked to compare a bunch of datetime objects
to `False` in order to compute the `max`.
Substitute missing `write_date` with `datetime.min` so they are
extremely unlikely to influence the computation until and unless the
branch gets updated.
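A minimal sketch of the guard, with hypothetical names:

```python
from datetime import datetime

def branches_last_modified(branches):
    # write_date is False for rows inserted via raw SQL; substitute
    # datetime.min so max() only ever compares datetime objects
    return max(
        (b.write_date or datetime.min for b in branches),
        default=datetime.min,
    )
```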
Since b45ecf08f9, forward-port batches which fail have a delay set in
order to avoid spamming. However that delay was not displayed
anywhere, which made things confusing as the batch would not get run
even after creating new triggers.
Show the delay if it's set (to a value later than now), as a relative
delta for clarity (the delay is normally in minutes, so a full-blown
date is difficult to read / apprehend), and allow viewing and setting
it in the form view.
Fixes#982
If a comment causes an unknown PR to be fetched, it's a bit odd to
ping the author (and possibly reviewer) anyway as they're not super
concerned (and technically we could be ignoring the purported /
attempted reviewer).
So if a fetch job was created because of a comment, remember the
comment author and ping *them* instead of using the default ping
policy.
Fixes#981
If staging gets re-enabled on a branch (or the branch itself gets
re-enabled), immediately run a staging cron as there may already be
PRs waiting, and no trigger enqueued: cron triggers have no payload,
they just get removed when the cron runs. That means if a bunch of PRs
become ready for branch B while staging is disabled, the cron runs,
stages nothing on that branch (because staging is disabled), then
deletes all the triggers.
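A sketch of the shape of this, assuming standard Odoo triggered-cron
plumbing; the field and cron ids below are placeholders:

```python
from odoo import models

class Branch(models.Model):
    _inherit = 'runbot_merge.branch'  # hypothetical model name

    def write(self, vals):
        r = super().write(vals)
        # staging (or the branch) may have just been re-enabled; run
        # the staging cron now rather than waiting for a new trigger
        if vals.get('staging_enabled') or vals.get('active'):
            self.env.ref('runbot_merge.staging_cron')._trigger()
        return r
```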
Fixes#979
Turns out you can configure format-patch with `--no-prefix` and some
people (*cough cough* mat) have that in their standard setup, so the
assumption of needing to strip 1 level of prefix does not necessarily
hold.
Also fix a few more issues:
- some people (*cough cough* still mat) also use `-n` by default,
which adds the series sequence (`n/m`) even for a single patch,
handle that correctly
- logging patch application errors is pretty useful when patching
fails and I'm trying to get the information via logs, do that
- especially as I had added error messages to tracking *but forgotten
  to show the chatter by default*, fix that as well
The commit-based patcher worked first try, and patch-based would have
worked too if not for those meddling kids. In the future it might be a
good idea to reify the stripping level (`-p`) on the patch object
though, and maybe provide a computed preview of the list of files to
patch, so issues are easier for the operator to diagnose.
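A rough sketch of the detection, with hypothetical helper names:

```python
import re

# format-patch normally emits `diff --git a/<path> b/<path>`; with
# --no-prefix the a/ b/ are absent and `patch` needs -p0, not -p1
def strip_level(patch_text: str) -> int:
    for line in patch_text.splitlines():
        if line.startswith('diff --git '):
            return 1 if line.startswith('diff --git a/') else 0
    return 1  # default to the usual prefixed form

# -n adds a series marker even for a single patch, e.g.
# "[PATCH 1/1] subject"; make the marker optional when parsing
SUBJECT = re.compile(r'\[PATCH(?: \d+/\d+)?\]\s*(?P<subject>.*)')
```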
- fix incorrect view specs (the action id comes first)
- add a wizard form and hook it into the PR, completely forgot to do
that
- usability improvements: filter branches to be in the same project as
the PR being backported, and older than the current PR's branch
The latter is a somewhat incomplete condition: ideally we'd want to
only allow selecting branches preceding the target of the *source* of
the PR being backported, that way we don't risk errors when
backporting forward-ports (the condition should be checked in the
final action but still).
Also we're only filtering by sequence, so we're missing the name part
of the ordering, hence if multiple branches have the same sequence we
may not allow selecting some of the preceding branches.
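The filter, roughly, with illustrative field names (and only the
sequence comparison, per the caveat above):

```python
def backport_target_domain(pr):
    # candidate branches must be in the same project as the PR and
    # older than its current target; "older" is approximated by the
    # sequence alone, missing the name part of the ordering
    return [
        ('project_id', '=', pr.target.project_id.id),
        ('sequence', '>', pr.target.sequence),
    ]
```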
The initial idea to link an error to another one was a quick solution
to group them when they were related, but this made it challenging
to compute metadata regarding errors:
- The displayed error message was not always consistent with the real
  root cause / the error that led here.
- The aggregates (let's say, linked build ids) could be the ones of
  the error alone, or of all linked error messages. Same for the
  versions, first seen, ... It was confusing to know which list we are
  managing and what the expected result to display is.
Main motivation:
on a standard error page (will be changed to "assignment"), we want to
have the list of error messages related to this one. We want to
know for each message (a real build error) what the version,
first seen, ... are.
This will give more flexibility on the display.
The assigned person/team/test-tags, ... are moved to this model.
The appearance data remain on the build error but are aggregated on
the assignment.
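A hedged sketch of the resulting shape (model and field names are
guesses, not the actual schema): triage data moves to the assignment,
per-message data stays on the error content.

```python
from odoo import fields, models

class BuildError(models.Model):
    """The "assignment": triage data shared by related messages."""
    _name = 'runbot.build.error'  # hypothetical

    responsible = fields.Many2one('res.users')
    team_id = fields.Many2one('runbot.team')
    test_tags = fields.Char()
    error_content_ids = fields.One2many(
        'runbot.build.error.content', 'error_id')

class BuildErrorContent(models.Model):
    """A real build error message, with its own version/first-seen."""
    _name = 'runbot.build.error.content'  # hypothetical

    error_id = fields.Many2one('runbot.build.error')
    version_ids = fields.Many2many('runbot.version')
    first_seen_date = fields.Datetime()
```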
When a type error occurs while trying to format a message in a build
log, the suspicious args are joined with the message. But as the args
may be a tuple, an error occurs when concatenating the message with
the args during the join.
With this commit, we ensure that the args are cast into a list.
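The failure mode and the fix, roughly (the helper's shape is assumed):

```python
def safe_format(message, args):
    try:
        return message % args
    except TypeError:
        # formatting failed: append the suspicious args to the
        # message; args may be a tuple, and list + tuple raises
        # TypeError, so cast to a list (stringified defensively)
        # before the join
        return ' '.join([message] + [str(a) for a in args])
```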
pymarkdown's footnotes plugin *saves footnotes across invocations by
default*. Even if I understand the documented use case, it seems wild
that it's not opt-in...
Anyway, disable that by resetting all internal state. Thanks rfr for
the initial report that things were looking odd.
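The workaround, assuming the renderer is Python-Markdown with the
`footnotes` extension:

```python
import markdown

_renderer = markdown.Markdown(extensions=['footnotes'])

def render(text: str) -> str:
    # reset() drops all internal state, including footnotes
    # accumulated by previous invocations
    _renderer.reset()
    return _renderer.convert(text)
```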
Like limit, fw=no should restrict the table length, in this case to
just the current branch (as we're not forward porting at all).
Before this, `no` would not be applied as a limit visually: the table
would still go up to the main branch, which is very confusing.
Fixes#962
As far as I can tell they are properly handled:
- In `handle_status` we let the http layer retry the query, which
pretty much always succeeds.
- In `Commit._notify`, we roll back the application of the current
  commit, meaning it'll be processed by the next run of the cron
  (which we can trigger for faster action). That also seems to succeed
  every time: going through the logs, I pretty much never notice the
  same commit being serialization-failure'd twice in a row. The
  trigger is not entirely necessary as statuses should generally come
  in fast, especially if we have concurrency errors, but it can't
  hurt.
This means the only genuine issue is... sql_db logging a "bad query"
every time there's a serialization failure.
In `handle_status`, just suppress the message outright, if there's an
error other than serialization the http / dispatch layer should catch
and log it.
In `Commit._notify` things are slightly more difficult as the execute
is implicit (`flush` -> `_write` -> `execute`) so we can't pass the
flag by parameter. One option would be to set and unset
`_default_log_exception`, but it would either be a bit dodgy or it
would require using a context manager and increasing the indentation
level (or using a custom context manager).
Instead just `mute_logger` the fucking thing. It's a bit brutish and
mostly used in tests (though not only), and feels like the least bad
option here...
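The shape of it, sketched (the exact flush call varies across Odoo
versions):

```python
from odoo.tools import mute_logger

def _notify(self):
    ...
    # suppress odoo.sql_db's "bad query" line around the implicit
    # execute (flush -> _write -> execute); a real error still
    # propagates and gets logged by the caller
    with mute_logger('odoo.sql_db'):
        self.env.flush_all()
```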
Closes#805
For the longest time Github's `changes` key was borked when
transitioning a description to and from empty. They fixed that during
2023, which I had already seen and taken into account in
DC (xmo-odoo/dummy_central@1ebed9d418
and xmo-odoo/dummy_central@937e87c2a4),
but it had yet to be taken into account by the mergebot.
This is now done: the code is functionally reverted to what it was
before I realised `changes` was hosed and moved off of it in
3da1874196.
Fixes#743
This is not a full user-driven backport thingie for now, just one
admins can use to facilitate things and debug issues with the
system. May eventually graduate to a frontend feature.
Fixes#925
- replace the manual `token_urlsafe` with the actual `token_urlsafe`
- turn the conditional right side up to make it more readable
- replace `match` with `fullmatch`; should not change anything since
  we end with a greedy universal match, but it is slightly more
  explicit
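Illustrating the first and last items (the pattern is a stand-in, not
the real one):

```python
import re
import secrets

# stdlib token generation instead of a hand-rolled equivalent
token = secrets.token_urlsafe(32)  # the size here is a placeholder

# fullmatch instead of match: equivalent when the pattern already
# ends in a greedy universal match, but it states the intent
# explicitly
assert re.fullmatch(r'#?(\d+).*', '#1234 some title')
```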
Missed it during the previous pass, probably because it's in the
middle of `pull_requests.py`. It's a classic template for triggered
crons since the model is just a queue of actions for the cron.
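The template in question, sketched with hypothetical names: the model
is a queue, and enqueuing work triggers the cron.

```python
from odoo import api, fields, models

class StatusQueue(models.Model):
    _name = 'runbot_merge.status_queue'  # hypothetical

    payload = fields.Char()

    @api.model_create_multi
    def create(self, vals_list):
        # enqueuing work immediately schedules the processing cron
        self.env.ref('runbot_merge.process_cron')._trigger()
        return super().create(vals_list)
```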
The mergebot tests have always been pretty gentle on system load,
which is nice; however it's only by looking at the list of longest
tests that I realised / remembered the hook wait duration is 10
seconds for the benefit of github, which doesn't really matter
locally. This means interaction / cron-heavy tests might only be using
on the order of 10% CPU, which is a waste of time.
TBF this is easily compensated by increasing the concurrency of the
test suite (e.g. from 16 to 32 when I switched machine), but it seems
at least as sensible to lower the webhook wait delay to something
more reasonable. 1s seems to be a good fit here: on my new computer at
n=16 it leads to the test suite running in 15mn at 600% CPU, which is
pretty good on a 6/12 CPU as it loads the system heavily but doesn't
completely bog it down.
Reducing it to 0.5s, the test suite takes the same duration but CPU
load increases to 770% and errors creep up, likely a mix of
concurrency issues in the DB and dummy-central sending webhooks too
slowly as we compete with it for CPU resources (it could actually make
sense to restrict the number of threads tokio can use). Reducing
concurrency could make this work better, but I think at this point
we're in a pretty good state; it's even somewhat reasonable to run the
test suite sequentially (taking about 1h10 but being functionally
invisible in terms of load).
If a PR is closed but part of an ongoing batch, the change in status
of the batch might still be reflected on the PR:
- if a PR is closed and the batch gets staged, the PR shows up as
being staged
- if the PR is merged then the batch gets merged, the PR shows up as
merged
Fixes#914
Also remove the unused `_tagstate` helper property.
When an image is built for the first time and the build fails, an
`ImageNotFound` exception is raised when trying to push the image.
With this commit, a warning is emitted instead of crashing in such a
case.
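The guard, roughly, assuming docker-py and a module logger:

```python
import logging

import docker

_logger = logging.getLogger(__name__)

def push_image(client: docker.DockerClient, image_tag: str):
    try:
        client.images.push(image_tag)
    except docker.errors.ImageNotFound:
        # the build failed, so there is no image to push: warn
        # instead of crashing
        _logger.warning('image %s not found, skipping push', image_tag)
```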
Because of the false negatives due to github's reordering of events on
retargeting, blocking merge methods can be rather frustrating for the
user, as what's happening and how to solve it isn't clear in that
case.
Keep the warnings and all, but remove the blocking factor: if a PR
doesn't have a merge method and is not single-commit, just skip it on
staging. This way, PRs which are actually single-commit will stage
fine even if the mergebot thinks they shouldn't be.
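The staging-side behavior, sketched; the attribute names are
assumptions:

```python
def stageable_prs(batch):
    # rather than blocking on a missing merge method, leave the PR
    # out of the staging: PRs which are actually single-commit
    # (squash) stage fine even if the mergebot thinks they shouldn't
    return [
        pr for pr in batch.prs
        if pr.merge_method or pr.squash
    ]
```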
Fixes#957