- add a logging entry for PR updates
- change the generic log entry to log the sender (of the event) rather
than the PR author
- fix the post-facto PR loader to more systematically and reliably set
a `sender`
Old messages were quite inconsistent in their pinging of the PR author
and reviewer.
Reviewed the messages (probably missed some but...) and try to ping
more consistently when the feedback requires some sort of action in
order to proceed.
Fixes#592
A few fixes and improvements after testing the feature:
- ensure the provisioned users are created as internal (not portal)
- assume oauth is installed and just crash if it's not
- handle a user not having an email (ignore)
- return a value from the json handler, otherwise JsonRequest sends no
payload, which is *weird*
- move into the module's code the various menus added over time
through the UI (queues, configuration, ...)
- update / improve PR layout a tick
- fix "outstanding forward ports" count on the dashboard
- improve hover title / help on dashboard
- add date of last modification (usually date of success / failure)
- make casing more coherent (everything lowercase)
- add an explicit note that the UTC date on the "staged at" label is
the staged-at datetime
- rediscover yet again that the staging information is shown when
hovering on the staging *except for the staged at label*
- improve `PullRequest.unstage` to always insert the PR at the start of the
reason when cancelling the staging, for clarity / traceability
Closes#560, closes#609
New accounts endpoint so the SSO can push new pre-configured
users / employees directly. This lowers the maintenance burden.
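Roughly what the endpoint could look like (a sketch only: the route,
auth mode and field handling here are assumptions, not the actual
implementation):

```python
from odoo import http


class Provisioning(http.Controller):
    # hypothetical route / auth; the SSO POSTs a list of user dicts
    @http.route('/runbot_merge/provision', type='json', auth='user', methods=['POST'])
    def provision_users(self, users):
        Users = http.request.env['res.users']
        internal = http.request.env.ref('base.group_user')
        created = 0
        for entry in users:
            if not entry.get('email'):
                continue  # a user without an email is simply ignored
            if Users.search_count([('login', '=', entry['email'])]):
                continue
            Users.create({
                'name': entry['name'],
                'login': entry['email'],
                'email': entry['email'],
                # make the account internal rather than portal
                'groups_id': [(4, internal.id)],
            })
            created += 1
        # return a value: without one, JsonRequest sends no payload at all
        return {'created': created}
```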
Also remove one of the source partners from the merge test, as
ordering seems wonky for unclear reasons leading to random failures of
that test.
* Adds a changelog page, linked from the main page, with content
automatically loaded from the source. To avoid conflicts, each entry
is its own file and entries are grouped by the month during which
the update will (probably) be deployed
* The last group (most likely "last update") doesn't have a title, the
rest do.
* Add changelog entries from the last update so it's not too empty.
* Also update the layout of the alerts a bit: remove the bottom margin
to reduce wasted whitespace.
* Remove the forwardport creating PRs in draft; that was mostly to
avoid triggering codeowners, but we've removed the github one and
hand-rolled it, so it's not a concern anymore.
* Prevent merging `draft` PRs, the mergebot rejects approval on draft
PRs and insults people.
TBD (maybe): try to create *conflicting* forward-port PRs in draft so
it's clearer they need to be *fixed*? Issue of not being able to do
that on all private repositories remains so~~
Fixes#500
"Uniquifier" commits were introduced to ensure branches of a staging
on which nothing had been staged would still be rebuilt properly.
This means the branches on which something had been staged never
technically *needed* a uniquifier. And those uniquifiers lead to extra
building: once the actually staged PRs get pushed from staging to
their final destination, the pushed commit is unknown to the runbot,
which needs to rebuild it instead of being able to just reuse the
staging it already has.
Thus only add the uniquifier where it *might* be necessary:
technically the runbot should now handle this use case much better,
however there are still issues, e.g. an ancillary build working with
the same branch tip (e.g. the "current master") sending a failure
result would fail the entire staging. The uniquifier guards against
this issue.
Also update rebase semantics to always update the *commit date* of the
rebased commits: this ensures the tip commit is always "recent" in the
case of a rebase-ff (which is common as that's what single-commit PRs
do), as the runbot may skip commits it considers "old".
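A rough sketch of both points via git plumbing (paths, messages and
helper names are made up, not the actual implementation): a uniquifier
reuses the head's tree but gets a fresh sha, and re-created commits
get a current committer date so a rebase-ff tip is never "old":

```python
import datetime
import os
import subprocess


def git(repo_dir, *args, env=None):
    return subprocess.check_output(
        ['git', '-C', repo_dir, *args], text=True, env=env).strip()


def uniquifier(repo_dir, head, message="force rebuild"):
    # same tree as `head`, new sha: the staging differs from the branch
    # without changing its content
    tree = git(repo_dir, 'rev-parse', f'{head}^{{tree}}')
    return git(repo_dir, 'commit-tree', tree, '-p', head, '-m', message)


def commit_with_fresh_date(repo_dir, tree, parent, message):
    # force the committer date to "now" so the rebased tip is never stale
    env = dict(os.environ, GIT_COMMITTER_DATE=datetime.datetime.utcnow().isoformat())
    return git(repo_dir, 'commit-tree', tree, '-p', parent, '-m', message, env=env)
```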
Also update some of the utility methods around repos / commits to be
simpler, and avoid assuming the result is JSON-decodable (sometimes it
is not).
Also update the handling of commit statuses to use postgres' ON
CONFLICT and jsonb support; hopefully this improves (or even fixes)
the serialization errors. Should be compatible with PostgreSQL 9.5
onwards, which is *ancient* at this point.
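The upsert boils down to something like this (a sketch: table / column
names are assumptions, and it presumes a unique index on `sha` and a
jsonb `statuses` column):

```python
import json


def upsert_statuses(cr, sha, new_statuses):
    # one statement: create the row or merge the new statuses into the
    # existing jsonb map, sidestepping the read/modify/write race
    cr.execute("""
        INSERT INTO runbot_merge_commit (sha, statuses)
        VALUES (%s, %s::jsonb)
        ON CONFLICT (sha) DO UPDATE
        SET statuses = runbot_merge_commit.statuses || EXCLUDED.statuses
    """, [sha, json.dumps(new_statuses)])
```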
Fixes#509
If a PR is closed on github and unknown by the mergebot, when fetched
it should be properly sync'd as "closed" in the backend, otherwise the
PR can get in a weird state and cause issues.
Also move the "I fetched the thing" comment before the actual creation
of the PR for workflow clarity, otherwise the reader has the
impression that the 'bot knew about the PR then fetched it anyway.
And improve savepoint management around the fetching: savepoints
should be released in all cases.
Closes#488.
Before this change, a CI override would have to be replicated on most
/ all forward-ports of the base PR. This was intentional to see how it
would shake out, the answer being that it's rather annoying.
Also add a `statuses_full` computed field on PRs for the aggregate
status: the existing `statuses` field is just a copy of the commit
statuses, which (I had forgotten) is kept free of the overrides so the
commit statuses can be displayed "as-is" in the backend (the overrides
are displayed separately). While at it, fix the PR dashboard to use
that new field: that was basically the intention, but then I went and
used the "wrong" field, hence #433.
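A sketch of the aggregate field, assuming both the statuses and the
overrides are stored as JSON maps keyed by context (model and field
names here are illustrative):

```python
import json

from odoo import api, fields, models


class PullRequests(models.Model):
    _inherit = 'runbot_merge.pull_requests'  # hypothetical model name

    statuses_full = fields.Text(compute='_compute_statuses_full')

    @api.depends('statuses', 'overrides')
    def _compute_statuses_full(self):
        for pr in self:
            # commit statuses stay "as-is"; overrides win in the aggregate
            statuses = json.loads(pr.statuses or '{}')
            statuses.update(json.loads(pr.overrides or '{}'))
            pr.statuses_full = json.dumps(statuses)
```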
Mebbe the UI part should be displayed using a computed M2M (?)
as a table or as tags instead? This m2m could indicate whether the
status is an override or an "intrinsic" status.
Also removed some dead code:
* leftover from the removed tagging feature (removed the tag
manipulation but forgot some of the setup / computations)
* unused local variables
* an empty skipped test case
Fixes#439.
Fixes#433.
Because github materialises every label change in the
timeline (interspersed with comments), the increasing label churn
contributes to PRs being difficult to read and review.
This change removes the update of labels on PRs; instead the mergebot
automatically sends a comment to newly created PRs, serving as a
notification that the PR was noticed & providing a link to the
mergebot's dashboard for that PR, where users should be able to see
the PR state in detail in case they wonder what's what.
Lots of tests had to be edited to:
- remove any check on the labels of the PR
- add checks on the PR dashboard (to ensure that they're at least on
the correct "view")
- add a helper to handle the comment now added to every PR by the 'bot
- since that helper is needed by both mergebot and forwardbot, the
utils modules were unified and moved out of the odoo modules
Probably relevant note: no test was added for the dashboard
ACL, though since I had to explicitly unset the group on the repo used
for tests for things to work it looks to me like it at least excludes
people just fine.
Fixes#419
Historically PRs to disabled branches were treated like PRs to
un-managed branches: ignored.
However because they can *already exist* when the branch is disabled,
the effects can be subtly different, and problematically so
e.g. ignoring all PR events on PRs targeting disabled branches means
we can't close them anymore, which is less than great.
So don't ignore events on PRs to disabled branches (creation, sync,
closing, and reopening) but also send feedback on PRs to disabled or
un-managed branches to indicate that they're not merge-able.
Fixes#410
The "blocked" computation would not take branch targets in account, so
PRs with the same label targeting *different branches* (possible if
somewhat rare due to our naming conventions) could block one another,
despite really being unrelated.
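In other words the grouping key needs to include the target, not just
the label; a sketch with assumed field names:

```python
def _related_prs(self):
    # PRs only block one another if they share the label *and* the
    # target branch (field names are assumptions)
    self.ensure_one()
    return self.search([
        ('label', '=', self.label),
        ('target', '=', self.target.id),
        ('id', '!=', self.id),
    ])
```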
Also fix up some messages:
* if a PR is blocked due to having no merge method, it should say
that, not "has no merge" (no merge what?)
* format un-managed branches as `$repo:$branch` in logging messages,
`$repo#$thing` is for issues / PRs and `$branch` alone can be very
unhelpful
Closes#407
Adds an `override` mergebot command. The ability to override is set on
an individual per-context per-repository basis, similar to but
independent from review rights. That is, a given individual may be
able to override the status X on repository A and unable to do so on
repository B.
Overrides are stored in the same format as regular statuses, but
independent from them in order to persist them across builds.
Only PR statuses can be overridden; statuses which are overridable on
PRs would simply not be required on stagings.
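The right itself can be thought of as a (partner, repository, context)
triple; a hypothetical check (model / field names are made up):

```python
def can_override(env, partner, repository, context):
    # override rights are granted per status context and per repository,
    # independently of review rights
    return bool(env['res.partner.override'].search_count([
        ('partner_ids', 'in', partner.id),
        ('repository_id', '=', repository.id),
        ('context', '=', context),
    ]))
```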
An alternative to implementing this feature in the mergebot would be
to add it to individual status-generating tools on a per-need
basis.
Pros of that alternative:
* display the correct status on PRs, currently the PR will be failing
status-wise (on github) but correct as far as the mergebot is
concerned
* remove complexity from the mergebot
Cons of that alternative:
* each status-generating tool would have to implement some sort of ACL
system
* each status-generating tool would have to receive & parse PR
comments
* each status-generating tool would have to maintain per-pr state in
order to track overrides
Some sort of helper library / framework ought to make that rather easy
though. It could also be linked into the central provisioning system
thing.
Closes#376
While the head gets updated (properly), the squash flag did not, which
could lead to odd results. Since a PR can only be reopened if it was
regular-pushed to (not after a force push) there are two scenarios:
* the PR is updated to have 0 commits, closed, pushed to with one
commit, then reopened; after reopening the PR would be marked as
!squash and would ask for a merge method (that's what happened with
odoo/odoo#51763)
* the PR has a single commit, is closed, pushed to, then reopened;
after reopening the PR would still be marked as a squash and
potentially straight rebased without asking for a merge method
Nothing would break per se but both scenarios are undesirable.
Close#373
As the odds of having more projects, or more repos with different
requirements in the same project, increase, the need to have different
sets of reviewers for different repositories increases.
As a result, rather than being trivial boolean flags, the review info
should probably depend on the user / partner and the repo. Turns out
the permission checks had already been extracted into their own
function so most of the mess comes from testing utilities which went
and configured their review rights as needed.
Incidentally it might be that the test suite could just use something
like a sequence of commoditized accounts which get configured as
needed and not even looked at unless they're used.
Despite the existing dedup', sometimes the "xxx failed on this
forward-port PR" message would still get multiplied due to split builds
e.g. in odoo/odoo#43935 4 such messages appear within ~5 minutes, then
one more 10mn later.
This is despite all of them having the same "build" (target_url) and
status (failure). Since the description is the only thing that's not
logged I assume that's the field which varies and makes the dedup'
fail. Therefore:
* add the description to the logging (when getting a status ping)
* exclude the description when checking if a new status should be
taken into account or ignored: the build (and thus url) should change
on rebuild
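So the comparison only keeps the stable parts of a status, roughly
(a sketch, assuming statuses are stored as dicts):

```python
def status_changed(previous, incoming):
    # ignore the free-form description: split builds re-send the same
    # state and target_url with slightly different descriptions
    key = lambda s: (s.get('state'), s.get('target_url'))
    return key(previous or {}) != key(incoming or {})
```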
Hopefully fixes#281
A while back I implemented name_get/display_name to print PRs using
the canonical github format (owner/repo#number), however it looks like
some of the logging calls were still using bespoke formatting.
When an employee sadly leaves Odoo,
the Odoo production database (odoo.com) will call these routes
in order to remove the reviewer rights automatically.
So a user who no longer works for Odoo can't "r+" Github PRs.
This is related to odoo/internal#617
If odoo is configured with a logfile, log to a separate file in the
same directory.
* log request / response when querying github
* log *received* requests for webhooks
Either way log the entire request metadata, though only the first 400
bytes/chars of the entity bodies.
This is intended to help mostly with post-mortem debugging: timestamps
from the main log can be correlated with the timestamps from the
github log in order to have more relevant information, both for
internal use and to send to gh support.
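Setting that up could look like this, reusing odoo's `logfile` option
to find the directory (a sketch; logger and file names are made up):

```python
import logging
import os

from odoo.tools import config


def setup_github_logger():
    # mirror github traffic into its own file, next to the main odoo log
    logfile = config['logfile']
    if not logfile:
        return
    handler = logging.FileHandler(
        os.path.join(os.path.dirname(logfile), 'github_requests.log'))
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logging.getLogger('github_requests').addHandler(handler)
```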
Closes#257
The closing or reopening of PRs was not logged at all, which can be
inconvenient when trying to find out why PRs are closed (or not) in
the backend.
Also leverage PR display_name improvements from
3ce3dd9569 for more regular PR names in
logs.
Attempt to avoid some of the comment spam by dedup-ing input (only
signaling when the status actually changes and ignoring identity
transformations) and in case of failing CI keeping the last failed
status and not signaling on the next update if it's the same failure.
Closes#225
* Cherrypicking is handrolled because there seems to be no easy way to
programmatically edit commit messages during the cherrypicking
sequence: `-n` basically squashes all commits and `-e` invokes a
subprocess. `-e` with `VISUAL=false` kinda sorta works (in that it
interrupts the process before each commit), however there don't
seem to be clean status codes so it's difficult to know if the
cherrypick failed or if it's just waiting for a commit at this step.
Instead, cherrypick commits individually then edit / rewrite their
commit messages (see the sketch after this list):
* add a reference to the original commit
* convert signed-off-by to something else as the original commit was
signed off but not necessarily this one
* Can't assign users when creating PRs: only repository collaborators
or people who commented on the issue / PR (which we're in the
process of creating) can be assigned.
PR authors are as likely to be collaborators as not, and we can have
non-collaborator reviewers. So pinging via a regular comment seems
less fraught as a way to notify users.
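A sketch of the per-commit cherry-pick and message rewrite mentioned
above (the trailer names are made up, not the actual ones):

```python
import subprocess


def cherrypick_one(repo_dir, sha, original_message):
    # pick a single commit, then rewrite its message: reference the
    # original commit and neutralize its sign-off, which was given for
    # the original commit, not necessarily for this copy
    subprocess.check_call(['git', '-C', repo_dir, 'cherry-pick', sha])
    message = original_message.replace('Signed-off-by:', 'Original-signed-off-by:')
    message += f'\n\nX-original-commit: {sha}\n'
    subprocess.check_call(['git', '-C', repo_dir, 'commit', '--amend', '-m', message])
```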
A deactivated branch is generally treated as unmanaged which is mostly
correct except for the case of retargeting an existing PR.
When a branch is deactivated the corresponding PRs are not removed, so
it's possible to have live PRs associated with ~unmanaged
branches. When retargeting those PRs to active branches, the mergebot
would assume there was no existing PR and would create a duplicate,
then either get completely lost (before
a84595ea04) or blow up (after the same).
Properly search amongst deactivated branches for retargeting sources
so we update the relevant PR instead of trying to create duplicates.
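In Odoo terms that mostly means not letting the default `active`
filter hide the old target when resolving the retarget; a sketch
(model name and surrounding variables are assumptions):

```python
# when handling the retarget, look the *previous* target up among
# deactivated branches too, so the existing PR record is found and
# updated rather than duplicated
old_target = env['runbot_merge.branch'].with_context(active_test=False).search([
    ('name', '=', previous_target_name),
    ('project_id', '=', project.id),
])
```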
Fixes#169
* extract method to create a PR object from a github result (from the
PR endpoint)
* move some of the remote's fixtures to a global conftest (so they can
be reused in the forwardbot)
On a PR being updated, closed or unreviewed, if it is part of an
active staging that staging would get cancelled (yay). However, if the
PR was part of a pending *split*, then the split would *not* get
cancelled / updated (to remove the PR from it), and the PR could go on
to get staged as if everything were right in the world which is an
issue.
It doesn't look like it actually happened (at least I got no echo of
it), but it almost did at least once.
fixes#160
Previously, creating a PR would validate the head (in case it had
already passed CI) but reopening it would not, which is inconvenient
as the CI would not automatically run on a reopened PR.
Update both the state and the head of the PR on reopen to force a
revalidation, that way if the head has already passed CI the PR will
be reopened validated and there won't be an unclear need to perform an
explicit CI run.
Fixes#119
A status being updated on a commit is a read/modify/update cycle,
meaning it's possible for somebody else (including a concurrent
event?) to update the commit concurrently and conflict, leading to the
webhook blowing up. That's undesirable as it means data loss (whereas
if it blows up on the other side, e.g. in the cron's commit processor,
the cron will just take it up next iteration).
Before this, impacting a commit's statuses on the relevant PR or
staging would be performed immediately / inline with its
consumption. This, however, is problematic if we want to implement
additional processing like #87 (and possibly though probably not #52):
webhook handlers should be kept short and fast, feeding back into
github would not be acceptable.
- flag commits as needing processing instead of processing them
immediately, this uses a partial index as it looks like the
recommended / proper way to index a boolean column in which one of
the values is searched much more than the other (todo: eventually
check if that actually does anything)
- add a new cron for commits processing
- alter tests so they use this new cron (mostly by migrating them to
`run_crons` though not solely as some still need more detailed
management to properly check intermediate steps)
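The partial index bit, as raw SQL in the model's `init` hook (a
sketch; table and column names are assumptions):

```python
def init(self):
    # only a small fraction of commits is awaiting processing at any
    # time, so index only the rows where to_check is true
    self.env.cr.execute("""
        CREATE INDEX IF NOT EXISTS runbot_merge_commit_to_check
        ON runbot_merge_commit (to_check) WHERE to_check
    """)
```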
Fix an issue with closing a staged PR while at it (the "merging" tag
would potentially never be removed).
The choice to keep sync'd PRs in error means it's possible to update
the code and re-run the PR directly without it going through review &
CI again, which is a bit odd. Remove the special case and always reset
a sync'd PR to opened for clarity and simplicity.
closes#71, closes#83
Turns out skipping locks is not very useful when there are no locks
being held because we only touch the PRs *after* the merge has been
applied.
So finally do that: lock all of a staging's PRs before we try to
fast-forward the relevant repositories, so a close command coming back
from github (from having seen the closes #xxx annotation) doesn't
screw us over.
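Concretely, a `SELECT ... FOR UPDATE` on the staging's PR rows before
pushing; a sketch (table name is an assumption):

```python
def lock_staging_prs(cr, pr_ids):
    # take row locks on the staging's PRs; a concurrent "closed" webhook
    # then waits until the merge transaction has committed its state
    cr.execute(
        "SELECT id FROM runbot_merge_pull_requests WHERE id IN %s FOR UPDATE",
        [tuple(pr_ids)],
    )
```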
Closed tagging has been broken since the raw-sql alterations of the close
hook: because it's raw SQL, the write() method doesn't get invoked
anymore and as a result the tagging feedback record is not created,
and never executed.
Add a test to check for the PR's proper tagging, and fix this issue by
explicitly creating a tagging record.
Closes#49
The original issue (staging would get cancelled just as it was being
merged) was not really fixed but traded for a new one: serialization
errors which can lock up the mergebot for a long time, stopping
handling of all incoming signals (possibly/probably because all of
them try to write on the PR which is locked?).
Splitting the tagging cron out should already way improve things as
the status update cron should be way shorter (and thus hold its locks
for a smaller amount of time). This should also avoid the "close"
handler waiting on the extant transaction, and make the "pr update"
transaction much shorter as each staging gets its own transaction.
Send response comments when users mis-interact with robodoo, e.g. send
commands they don't have the right to use, commands which don't make
sense in the PR's state, or tentative interactions with robodoo from
unmanaged PRs.
Because the mergebot cron can run on any runbot, it's apparently possible
that a staging gets merged and the "closed" feedback from github
overwrites the merged status which the mergebot is supposed to set
despite the supposed protection.
The webhook used the "sender" of the event as comment author, however
if the comment is edited by a maintainer github sends a
"issue_comment" event with that maintainer as sender.
This means a random user could create a comment with a robodoo
command, and if a registered reviewer happened to edit the comment the
command would suddenly be taken into account. This was not the intention.
I just spent 10mn trying to find out why staging 28 was cancelled
(a p=0 comment). Add a common prefix to all staging cancels to make
them easier to find.
A limit of 50 commits per PR was put in place to avoid rebasing huge
PRs (as a rebase means 1 merge + 1 commit *per source commit*),
however the way it was done would also limit regular merges, and the
way the limitation was implemented was not clear.
* explicitly check that limit in the rebase case
* make error message on PR sizes (rebase 50 or merge 250) clearer
* remove limit from commits-fetching (check it beforehand)
* add a test to merge >50 commits PRs
* fix the local implementation of pulls/:number/commits to properly
paginate
a0063f9df0 slightly improved the error
message on non-PR ci failure (e.g. a community PR makes enterprise
break) by adding the failed commit, but that's still not exactly clear,
even for technical users (plus it requires having access to all the
repos which is not the case for everyone).
This commit further improves the situation by storing the target_url
and description fields of the commit statuses, and printing out the
target_url on failure if it's present.
That way the PR comment denoting build failure should now have a link to
the relevant failed build on runbot, as that's the target_url it
provides.
The change is nontrivial as it tries to be compatible with both the old
and new statuses storage formats, such that there is no migration to perform.
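The compat shim can be a small normalization step (a sketch, assuming
old entries are bare state strings and new ones are dicts):

```python
def to_status(value):
    # old storage: just the state string; new storage: a dict carrying
    # state, target_url and description
    if isinstance(value, str):
        return {'state': value, 'target_url': None, 'description': None}
    return value
```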
After discussion with mat, rco and moc, if a PR is updated it should
be unapproved for safety reasons: if a reviewer approves a PR, that's
what should be merged, if there are things to fix/change a reviewer
should at least rubberstamp the changes to avoid mistakes.
This is a bit more noisy/constraining, but can be changed or tuned
afterwards if it's considered too constraining.
* avoid fetching PRs for un-managed branches if we know up-front
* avoid processing comments with no commands (avoids fetching the
corresponding PR which we know nothing about yet and which may or
may not be for a managed branch)
The old "sync pr" thing is turning out to be a bust, while it
originally worked fine these days it's a catastrophe as the v4 API
performances seem to have significantly degraded, to the point that
fetching all 15k PRs by pages of 100 simply blows up after a few
hundreds/thousands.
Instead, add a table of PRs to sync: if we get notified of a
"compatible" PR (enabled repo & target) which we don't know of, create
an entry in a "fetch jobs" table, then a cron will handle fetching the
PR then fetching/applying all relevant metadata (statuses,
review-comments and reviews).
Also change indexation of Commit(sha) and PR(head) to hash, as btree
indexes are not really sensible for such content (the ordering is
unhelpful and the index locality is awful by design/definition).
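The index change boils down to something like this (a sketch; index
and table names are assumptions):

```python
def init(self):
    # sha1 hex strings are only ever compared for equality, so a hash
    # index is a better fit than btree: no useful ordering, bad locality
    self.env.cr.execute("""
        CREATE INDEX IF NOT EXISTS runbot_merge_commit_sha_hash
        ON runbot_merge_commit USING hash (sha)
    """)
```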