Unstaged changes can be useful or necessary for some tasks,
e.g. an absolute emergency (where even faking the state of a staging is
not really desirable, if that's even possible anymore), or changes
which are so broad they're difficult to stage (e.g. l10n updates).
Add a new object which serves as a queue of patches to direct-apply,
with support for either text patches (udiff style, as output by `git
show` or `format-patch`) or commits to cherry-pick. In the former case,
the part of the show / format-patch output before the diff itself is
used for the commit metadata (author, committer, dates, message),
whereas for the commit version the commit itself is reused as-is.
Applied patches are simply disabled for traceability.
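As an illustration of that split (the function name and return shape are
made up, this is not the mergebot implementation), the metadata section
is simply everything before the first diff line:

    def split_patch(patch_text):
        # everything up to the first "diff --git" line is treated as commit
        # metadata (author, committer, dates, message); the rest is the diff
        # which gets applied directly
        idx = patch_text.find("\ndiff --git ")
        if idx == -1:
            return "", patch_text  # bare diff, no metadata section
        return patch_text[:idx], patch_text[idx + 1:]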
Fixes #926
- fix staging reasons containing escaped quotes (the escaped entity
would render literally to the end user)
- remove extra spacing around PR titles in @title / popovers
- simplify a few view conditionals through judicious use of `t-elif`
and nesting
- make `staging_end` non-computed as it's not computed anymore, just
set if and when the staging gets disabled
(146564a90a)
Given a branch A and a branch B forked from it: if B removes a file
which a PR to A later modifies, Git generates a modify/delete conflict
on forward port (as in, one side modifies a file which the other
deleted).
So far so good, except that while Git does notify the caller of the
issue, the modified file is just dumped as-is into the working copy (or
commit), which essentially resurrects it.
This is an issue, *especially* as the file is already part of a
commit (rather than just an unmerged (`U`) local file): if the
developer fixes the conflict "in place" rather than re-doing the
forward-port from scratch, it's easy to miss the reincarnated file (and
possibly the changes that were part of it too), which at best leaves
parasitic dead code in the working copy. There is also no easy way for
the runbot to check it, as adding unimported standalone files, while
rare, is not unknown, e.g. utility scripts (to say nothing of JS where
we can't really track the usages / imports at all).
To resolve this issue, post-process modify/delete conflicts during
conflict generation to insert artificial conflict markers: the file
becomes syntactically invalid so linters / checkers should complain,
and the minimal check has a step looking for conflict markers which
should fire and prevent merging the PR.
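A minimal sketch of that post-processing, with the marker layout and
helper name assumed rather than taken from the actual code:

    from pathlib import Path

    def mark_modify_delete(workdir: Path, path: str, deleted_in: str) -> None:
        # the file was deleted on one side and modified on the other; Git dumps
        # the modified copy as-is, so wrap it in conflict markers to make it
        # syntactically invalid and trip the conflict-marker check
        f = workdir / path
        content = f.read_text()
        if not content.endswith("\n"):
            content += "\n"
        f.write_text(
            f"<<<<<<< modified\n{content}=======\n>>>>>>> deleted in {deleted_in}\n"
        )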
Fixes #896
A recent task introduced a loop over records which uses `self` inside
the loop body. It doesn't follow the classic `api.multi` looping
pattern, but this isn't an issue since onchanges are always called on a
single record.
Remove the loop from all onchanges to clean this up.
Add warnings when trying to send comments / commands to PRs targeting
inactive branches.
This was missing, leading to confusion, as one warning is clearly not
enough.
Fixes #941
By updating the staging timeout every time we run `_compute_state` and
still have a `pending` status, we'd actually bump the timeout *on
every successful CI* except for the last one, which was never the
intention and can add an hour or two to the mergebot-side timeout.
Instead, add an `updated_at` attribute to statuses (name taken from
the webhook payload even though we don't use that), read it when we
see `pending` required statuses, and update the staging timeout based
on that if necessary.
That way as long as we don't get *new* pending statuses, the timeout
doesn't get updated.
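A rough sketch of the intended logic, with field and helper names
assumed:

    from datetime import timedelta

    def bump_staging_timeout(staging, statuses, limit=timedelta(hours=2)):
        # only look at *pending* required statuses, and use their `updated_at`
        # (from the webhook payload, assumed parsed to a datetime) instead of
        # "now", so a successful status no longer pushes the deadline back
        pending = [s for s in statuses if s.get('state') == 'pending']
        if not pending:
            return
        deadline = max(s['updated_at'] for s in pending) + limit
        if deadline > staging.timeout_limit:
            staging.timeout_limit = deadline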
Fixes #952
- Switch to just `default` ci.
- Autouse fixture to init the master branch.
- Switch to `make_commits`.
- Merge `test_reopen_update` and `test_update_closed_revalidate` into
`test_update_closed`: the former did basically nothing useful and
the latter could easily be folded into `test_update_closed` by just
validating the new commit.
The idea is to help the runbot/QA team monitor other people's tasks
more easily. This way we will have a responsible person for each task
that someone else has assigned.
The goal of this PR is two-fold:
1) We need to be able to better monitor the progress of tasks that
are not our own.
2) It introduces another metric with which we can measure individual
progress, and this can become the main measuring metric for future
non-technical employees whose main goal will be making sure that none
of the tasks become rotten (i.e. show no visible progress for months).
And unconditionally unstage when the HEAD of a PR is synchronised.
While a rebuild on a PR which was previously staged can be a false
positive (e.g. because it hit a non-deterministic test the second time
around), it can also be legitimate, e.g. an auto-rebase of an old PR to
check it out. In that case the PR should be unstaged.
Furthermore, as soon as the PR gets rebuilt it goes back into
`approved` (because the status goes to pending and currently there's
no great way to suppress that in the rebuild case without also fucking
it up for the sync case). This would cause a sync on the PR to be
missed (in that it would not unstage the PR), which is broken. Fix
that by not checking the state of the PR beforehand: it seems to be an
unnecessary remnant of older code, and not really an optimisation (or
at least one likely not worth bothering with, especially as we then
proceed to perform a full PR validation anyway).
Fixes #950
The UX around the split of limit and forward port policy (and
especially moving "don't forward port" to the policy) was not really
considered and caused confusion for both me and devs: after having
disabled forward porting, updating the limit would not restore it, but
there would be no indication of such an issue.
This caused odoo/enterprise#68916 to not be forward ported at merge
(despite looking for all the world like it should be), and while
updating the limit post-merge did force a forward-port, that
inconsistency was just as jarring (also not helped by being unable to
create an fw batch via the backend UI because reasons, see
10ca096d86).
Fix this along most axes:
- Notify the user and reset the fw policy if the limit is updated
while `fw=no`.
- Trigger a forward port if the fw policy is updated (from `no`) on a
merged PR, currently only sources.
- Add check & warning to limit update process so it does *not* force a
port (though maybe it should under the assumption that we're
updating the limit anyway? We'll see).
Fixes #953
- rather than enumerate states, forward-porting should just check if
the statuses are successful on a PR
- for the same consistency reasons explained in
f97502e503, `skipchecks` should force
the status of a PR to `success`: it is very odd that a PR would be
ready without being validated...
Apparently when I added these to trigger the corresponding cron, the
lack of decorator caused them to not work at all, at least when
invoked from the interface; consequently I notably wasn't able to
force a forward port by creating one such task when trying to work
around #953.
When a lot of Docker images are updated at the same time, all runbot
hosts will try to pull them at the same moment.
With this commit, only the images marked as `always_pull` will be
pulled, and if one of them takes too much time to pull, we let the host
make another loop turn before continuing to pull images.
Finally, if a host uses the Docker registry, it will pull the remote
image when running a step; that way the image will be pulled
automatically if needed.
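Roughly, with only `always_pull` taken from the text and the other
names assumed:

    import time

    def pull_images(host, images, max_duration=600):
        # only pull images flagged `always_pull`; if the pulls take too long,
        # stop here and let the host do another loop turn before resuming
        start = time.time()
        for image in images:
            if not image.always_pull:
                continue
            host.pull(image)
            if time.time() - start > max_duration:
                break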
Docker operations are called often and cannot be logged each time.
If a Docker operation is slow, we log it at the end, but while waiting
for this output we have no idea what the process is doing.
This PR proposes to go down a level and get the stream from the Docker
operation, to be able to log earlier when a potentially slow operation
has started.
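As an illustration using docker-py's low-level client (not necessarily
the exact runbot helpers), streaming the build output makes it possible
to log while the operation runs:

    import logging
    import docker

    _logger = logging.getLogger(__name__)

    client = docker.APIClient()
    # decode=True yields dicts as the daemon streams them, so a slow operation
    # can be logged while it runs instead of only once it is finished
    for chunk in client.build(path='.', tag='runbot/example', decode=True):
        if 'stream' in chunk:
            _logger.info(chunk['stream'].rstrip())
        elif 'error' in chunk:
            _logger.error(chunk['error'])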
Today (or really a month ago) I learned: when giving git a symbolic
ref (e.g. a ref name), if it's ambiguous then
1. If `$GIT_DIR/$name` exists, that is what you mean (this is usually
useful only for `HEAD`, `FETCH_HEAD`, `ORIG_HEAD`, `MERGE_HEAD`,
`REBASE_HEAD`, `REVERT_HEAD`, `CHERRY_PICK_HEAD`, `BISECT_HEAD` and
`AUTO_MERGE`)
2. otherwise, `refs/$name` if it exists
3. otherwise, `refs/tags/$name` if it exists
4. otherwise, `refs/heads/$name` if it exists
5. otherwise, `refs/remotes/$name` if it exists
6. otherwise, `refs/remotes/$name/HEAD` if it exists
This means that if a tag and a branch have the same name and only the
name is provided (not the full ref), git will select the tag, which
gets very confusing for the mergebot as it now tries to rebase onto the
tag (which, because that's not fun otherwise, was not even on the
branch of the same name).
Fix by providing full refs to `rev-parse` when trying to retrieve the
head of the target branches. As a defense-in-depth opportunity,
also exclude tags when fetching refs by spec: apparently fetching a
specific commit does not trigger the retrieval of tags, but any sort
of spec will see the tags come along for the ride even if the tags are
not in any way under the fetched ref, e.g. `refs/heads/*` will very
much retrieve the tags despite them being located at `refs/tags/*`.
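Roughly, with the exact invocations assumed rather than copied from the
mergebot code:

    import subprocess

    def branch_head(repo_dir, branch):
        # the full ref avoids picking up a tag that shadows the branch name
        out = subprocess.check_output(
            ['git', '-C', repo_dir, 'rev-parse', f'refs/heads/{branch}'],
            text=True,
        )
        return out.strip()

    def fetch_heads(repo_dir, remote):
        # --no-tags keeps tags from coming along for the ride when fetching by spec
        subprocess.check_call(
            ['git', '-C', repo_dir, 'fetch', '--no-tags', remote,
             f'+refs/heads/*:refs/remotes/{remote}/*'],
        )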
Fixes #922
This is an approximation under the assumption that stored computes
update the `write_date`, and that there's not much else that will be
computed on a batch.
Eventually it might be a good idea for this to be a proper field,
computed alongside the unblocking of the batch.
Fixes #932
Rather than only setting `staging_end` on status change, set it when
the staging gets deactivated. This way cancelling a staging (whether
explicitly or via a PR update) will also end it, as will a staging
timeout, etc., rather than keeping the counter running.
Fixes #931
Reminding users that `r-` on a forward port only unreviews *that*
forwardport is useful, as `r+;r-` is not a no-op (all preceding
siblings are still reviewed).
However it is useless if every sibling is either not approved or
already merged, so avoid sending the warning in that case.
Fixes #934
The FW reminder is useful to remind people of the outstanding forward
ports they need to process, as taking too long can be an issue.
They are, however, not useful if the developer has already done
everything, the PR is ready and unblocked, but the mergebot has fallen
behind and has a hard time catching up. In that case there is nothing
for the developer to do, so pinging them is not productive, it's only
frustrating.
Fixes #930
When disabling tests on runbot by using `test_tags` on a build error,
the tests are disabled on every tested version. This can be annoying
when a test only fails from one version up to another.
With this commit, a min and max version can be specified on a
build_error when the test_tags field is used.
If the min and max are not set, the test will be disabled across all
versions.
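A possible shape for that check; the `min_version` / `max_version`
field names and the numeric comparison are assumptions for the sketch:

    def test_tags_apply(build_error, version_number):
        # only disable the tests on versions inside the configured range;
        # an unset bound means "no limit on that side"
        if build_error.min_version and version_number < build_error.min_version:
            return False
        if build_error.max_version and version_number > build_error.max_version:
            return False
        return True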
A common error on runbot is to generate a link containing an
__init__.py file
[/some/path/to/__init__.py](/some/path/to/__init__.py)
This would be rendered as
<a href="/some/path/to/<ins>init</ins>.py">/some/path/to/<ins>init</ins>.py</a>
breaking both the link and the display of the name.
By default markdown will not render links, avoiding this issue, but it
will remain for the content of the `<a>`, which requires managing some
kind of escaping.
The way to escape markdown is to add a `\` before any special
character. This must be done upfront, before formatting, hence the
addition of the `markdown_escape` method.
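A minimal sketch of such an escaping helper; the exact character set is
illustrative and may differ from the actual `markdown_escape`:

    import re

    def markdown_escape(text):
        # prefix markdown-significant characters with a backslash so dynamic
        # content cannot be interpreted as formatting
        return re.sub(r'([\\`*_{}\[\]()#+\-.!|])', r'\\\1', str(text))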
Our implementation of markdown is not meant to match the exact
markdown specification but to better suit our needs. One of the
requirements is to be able to use it to format messages easily while
adding dynamic content coming from the outside. One error that can
occur is that

    'Some code `%s`' % code

can also cause problems if `code` contains a backtick.
This issue could be solved using indented code blocks, but this would
mean complexifying the generated string, having a dedicated method to
escape the code blocks, ...
Since we have control over the input, we can easily sanitize all
dynamic content to avoid such issues. For code blocks we introduce a
way to escape backticks (\`). It is non-standard but easier to use.
Combined with that, the build._log method now allows passing args with
the values to format the string (similar to logging) but will escape
the params by default (cr.execute spirit).
    name = '__init__.py'
    url = 'path/to/__init__.py'
    code = '# comment `for` something'
    build._log('f', 'Some message [%s](%s) \n `%s`', name, url, code)

name, url and code will all be escaped.
The stats are only kept when the build finishes; this makes sense to
avoid collecting stats for a manually killed build (or one killed
because a newer build was found), but the stats may be interesting
when the build times out.
Add a manual collection of stats in this case.
In some cases, feedback to the PR author that an r+ is redundant went
missing.
This turns out to be due to the convolution of the handling of
approval on forward-port, and the fact that the target PR is treated
exactly like its ancestors: if the PR is already approved the approval
is not even attempted (and so there is no feedback if it's incorrect).
Straighten up this bit and add a special case for the PR being
commented on: it should have the usual feedback if in error or already
commented on.
Furthermore, update `PullRequests._pr_acl` to kinda work out of the
box for forward-ports: if the current PR is a forward port,
`is_reviewer` should check delegation on all ancestors; there doesn't
seem to be any reason to split "source_reviewer", "parent_reviewer",
and "is_reviewer".
Fixes #939
The underscore was previously used in database names, but this is not a
valid character for certificates (the limitations still need some
investigation).
Even when access did work, it was causing issues for IoT and others;
`_` is actually forbidden on saas.
The design_theme database was changed to design-theme, but this is not
accessible through the nginx config. This should solve the issue.
The current runbot infrastructure has 100+ machines that will each
build all docker images.
This is impractical for multiple reasons:
- important load on every machine when the build could be done once
- possible differences for the same image depending on the moment the
base was pulled on the host. An extreme example was the version of
python (3.11 or 3.12) when building the noble docker image before it
was stabilized.
- increased chance of a docker build failure in case of (random)
network problems.
A centralized registry will help with that, and allow future
development to generate more docker images. All docker images will be
exactly the same, the pull time is faster than the build time, there is
only one build per docker image, ...
When there is no Dockerfile marked as `to_build`, the runbot builder
crashes in a loop trying to take the max of an empty sequence.
With this commit, the builder does not even try to build Dockerfiles in
that case.
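The guard is conceptually just this (names assumed):

    def build_docker_images(dockerfiles):
        # skip the whole step when nothing is flagged to build, instead of
        # calling max() on an empty sequence and crashing the builder loop
        to_build = [d for d in dockerfiles if d.to_build]
        if not to_build:
            return
        newest = max(d.write_date for d in to_build)
        ...  # proceed with the build using `newest`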
- Odoo 17 seems to not be adjusting `nolabel` fields to be `colspan=2`
by default, so every such occurrence has to be adjusted by hand or
it gets squeezed in just the labels column.
- Because of the loss of readonly mode, some fields / setups which
previously looked ugly during the rare edit (e.g. PR titles) now
look ugly all the time. Rework the layout and force them to always be
readonly (hopefully we won't need to edit those).
- This is compounded by unfortunate styling I can't find how to
override, e.g. char fields are 100% width even if readonly.
- `<header>` system requires some workarounds to have the right layout
and spacing (notably `header` has a bunch of awful rules which we
need to work around via an interstitial div to set up our own
flexbox).
- reduce font-size and bold-ness a hair as it causes issues in the
backend
- remove font adjustment on root object
- add `text-bg-primary` in the outstanding FP view: apparently BS5 does
not do anything like that by default when setting `bg-primary`, for
some reason, so the current team or user appears in black on dark
blue, leading to sub-par readability
Turns out having the same ids for the feedback template and feedback
object actions was kinda dumb, who'd have thought?
Split them apart so I can get both objects in my menu...
Fix a few issues with migrated bootstrap classes (left -> start, right
-> end), revert a bunch of shitty colors from standard, fixup
backgrounds.
Tried to remove the background overrides, what with having used
variables, but it goes completely wrong for info, and I can't be
arsed (also force primary alerts for the same reason, I don't
understand what either bootstrap or standard does and how to override
it properly but it's shit). It'll keep.
It was all broken and the web client couldn't even load.
The logic is basically the same, except the new client doesn't support
forcing onchanges maybe, so we need to `read()` the entire thing if
the form is not modified.
`message_main_attachment_id` removed from `mail.thread` in
odoo/odoo@f8c7f2e5bb (16.2), however
precise inheritance tracking was only added in
odoo/odoo@b27eb20a41 (17.1), so
stock migration only handles stock models for this migration.
Broken (can't run odoo at all):
- In Odoo 17.0, the `pre_init_hook` takes an env, not a cursor; update
`_check_citext`.
- Odoo 17.0 rejects `@attrs` and doesn't say where they are or how to
update them (fun); hunt down `attrs={'invisible': ...` and try to fix
them.
- Odoo 17.0 warns on non-multi creates; update them, most were very
reasonable, one very much wasn't.
Test failures:
- Odoo 17.0 deprecates `name_get` and doesn't use it as a *source*
anymore, replace overrides by overrides to `_compute_display_name`.
- Multiple tracking changes:
- `_track_set_author` takes a `Partner` not an id.
- `_message_compute_author` still requires overriding in order to
handle record creation, which in standard doesn't support author
overriding.
- `mail.tracking.value.field_type` has been removed, the field type
now needs to be retrieved from the `field_id`.
- Some tracking orderings have changed and require adjusting a few
tests.
Also added a few flushes before SQL queries which are not (obviously
at least) at the start of a cron or controller; no test failure was
observed but better safe than sorry (probably).
When building a docker image, the error is part of the stream, at the
end.
The current behaviour appends the error message at the beginning of
the "result log", breaking the temporality of the output. Adding it at
the end should be more intuitive to read. This will also help to get a
more useful error summary in some cases.
Forgot to declare `defaultstatuses` when it was introduced in
f367a64481. pytest righteously warns
when markers are not declared, so do that (turns out running pytest
with `-We` can be useful, though it also requires `-W
ignore::DeprecationWarning::importlib._bootstrap` because reportlab
does weird shit).
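For reference, a marker can be declared from `conftest.py` so pytest
stops warning; the description string here is made up:

    # conftest.py
    def pytest_configure(config):
        config.addinivalue_line(
            "markers",
            "defaultstatuses: description of the marker goes here",
        )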