A few fixes and improvements after testing the feature:
- ensure the provisioned users are created as internal (not portal)
- assume oauth is installed and just crash if it's not
- handle a user not having an email (ignore)
- return a value from the json handler, otherwise JsonRequest sends no
  payload, which is *weird* (sketched below)
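A minimal sketch of that last point, assuming an Odoo-style JSON controller (route, names and payload are illustrative, not the actual endpoint):

```python
from odoo import http

class Provisioning(http.Controller):
    # with type="json", the method's return value becomes the JSON-RPC
    # "result"; returning None makes JsonRequest send no payload at all
    @http.route('/provision', type='json', auth='user')
    def provision(self, **users):
        created = 0  # create the users as internal (not portal) here
        return {'created': created}
```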
Stagings would be cancelled automatically if the PR's commits were
updated, but not if the target (base) was changed, even though that
has a drastic impact on staging.
Add hooks to unstage PRs if their base is updated, or if their message
is updated and relevant to staging (merge or rebase-merge methods).
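A hedged sketch of what such hooks could look like, as a `write` override on the PR model (field and method shapes are assumptions, not the actual implementation):

```python
def write(self, vals):
    for pr in self.filtered('staging_id'):
        if 'target' in vals and vals['target'] != pr.target.id:
            pr.unstage("target (base) branch was changed")
        elif 'message' in vals and pr.merge_method in ('merge', 'rebase-merge'):
            pr.unstage("message was updated")
    return super().write(vals)
```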
Fixes #604
Extract the creation of a PR link (to github) as a dedicated
template for easier updates, and to use `display_name`
everywhere (instead of reimplementing it by name).
Also implement support for a repo-level groups setting for information
hiding.
Fixes #590
When a staging's fast-forward (to the target branch) fails, the
mergebot would provide no useful information on the staging or the
dashboard.
This is because the reason was set to the HTTP status, which in case
of a fast-forward error is just "422 client error: unprocessable
entity".
Improve this by trying to parse github's response in that case, and
using the JSON error message as the failure reason. This provides more
useful failure information like "update is not a fast forward",
"reference does not exist", or a branch protection failure.
Closes #591
- define in code the various menus added over time through the UI
  (queues, configuration, ...)
- update / improve PR layout a tick
- fix "outstanding forward ports" count on the dashboard
- improve hover title / help on dashboard
- add date of last modification (usually date of success / failure)
- make casing more coherent (everything lowercase)
- add an explicit note that the UTC date on the "staged at" label is
  the staged-at datetime
- rediscover yet again that the staging information shows up when
  hovering on the staging *except on the staged at label*
- improve `PullRequest.unstage` to always insert the PR at the start of
  the reason when cancelling the staging, for clarity / traceability
  (see the sketch below)
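A minimal sketch of that last item (the method shape is an assumption, not the actual code):

```python
def unstage(self, reason, *args):
    # prepend the PR's display_name so cancellation reasons are traceable
    for pr in self:
        pr.staging_id.cancel(f"{pr.display_name} unstaged: {reason % args}")
```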
Closes #560, closes #609
In some conditions, google-chrome crashes and a core dump is written
in the bind-mounted directory.
With this commit, the core dumps are disabled in the containers.
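One way to achieve this, assuming the `core` ulimit is zeroed when starting the container (the commit doesn't spell out the mechanism; image and command are placeholders):

```python
import docker

client = docker.from_env()
client.containers.run(
    'ubuntu:jammy',  # placeholder image
    'sleep 60',      # placeholder command
    detach=True,
    # soft/hard limit of 0 disables core dump files in the container
    ulimits=[docker.types.Ulimit(name='core', soft=0, hard=0)],
)
```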
While using the docker cli as a subprocess was KISS and convenient,
the python Docker SDK is mature, easy to use, available as a Debian
package and much more powerful.
It will make it possible to monitor the containers' memory consumption
and will help to spot memory leaks.
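A minimal sketch of such monitoring with the SDK (the container name is a placeholder):

```python
import docker

client = docker.from_env()
container = client.containers.get('runbot_build')  # placeholder name
stats = container.stats(stream=False)              # one-shot stats snapshot
print(stats['memory_stats']['usage'])              # memory usage in bytes
```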
On the actual runbot deployments, the `git gc` command is handled by a
unix cron. From time to time, some repositories get corrupted and we
suspect that some concurrent operation may be involved, as stated in
the documentation [0].
For those reasons, with this commit, `git gc` will be run by the
runbot clients themselves in order to avoid concurrent operations.
By default, the first gc will occur a few minutes after the start of the
client, and the next ones are scheduled two hours and a few minutes apart.
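A hedged sketch of that scheduling (intervals and names are illustrative, not the actual client code):

```python
import random
import subprocess
import time

next_gc = time.time() + 60 * random.randint(1, 10)  # a few minutes after start

def gc_if_due(repo_path):
    global next_gc
    if time.time() >= next_gc:
        subprocess.run(['git', 'gc'], cwd=repo_path, check=True)
        # reschedule two hours and a few minutes later
        next_gc = time.time() + 2 * 3600 + 60 * random.randint(1, 10)
```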
Also, this commit ensures that the git config is written regularly in
case of change.
[0] https://git-scm.com/docs/git-gc
Given the number of refs in the odoo repo (around 18 million at this
time), parsing the date was significant when filtering old refs.
Using unix time allows a direct comparison without parsing the date,
improving performance from ~7 seconds to ~1.3 seconds.
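For illustration, git's `%(committerdate:unix)` format yields an integer that can be compared directly (the repo path and cutoff are placeholders):

```python
import subprocess
import time

out = subprocess.run(
    ['git', 'for-each-ref', '--format=%(refname) %(committerdate:unix)'],
    cwd='/path/to/repo', capture_output=True, text=True, check=True,
).stdout
cutoff = time.time() - 30 * 24 * 3600  # placeholder threshold for "old" refs
old_refs = [
    ref
    for ref, date in (line.rsplit(' ', 1) for line in out.splitlines())
    if int(date) < cutoff
]
```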
The statuses are currently sent by the leader when builds are created
and by workers on post_commit.
If the leader fails while preparing a batch (while creating builds),
the transaction is rolled back and the statuses are sent again.
The number of statuses to send makes it quite slow, making the
transaction longer and the retries even more expensive. This leads to
preparation times getting quite long, sometimes ten minutes after many
retries.
This commit proposes to send statuses in another, dedicated transaction.
Since statuses are sent in batches, we can also try to use a single
session, and deduplicate commit+context statuses.
This allows removing the post_commit logic.
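A minimal sketch of the deduplication and shared session (the endpoint is github's statuses API; the surrounding names are assumptions):

```python
import requests

session = requests.Session()  # a single session reused for the whole batch

def send_statuses(repo, token, statuses):
    # keep only the last status for each (commit, context) pair
    deduped = {(s['sha'], s['context']): s for s in statuses}
    for (sha, context), s in deduped.items():
        session.post(
            f'https://api.github.com/repos/{repo}/statuses/{sha}',
            json={'state': s['state'], 'context': context},
            headers={'Authorization': f'token {token}'},
        )
```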
A further improvement would be to wait before sending statuses in order
to skip the pending status if the build is very fast.
An option is also added on the remote to skip statuses: if the remote
is a fork, sending the status on the main remote should be enough.
The initial model was a (build_id, key, value) triplet, where key is in
fact two-part information: `category.key` (where key is usually a
module name).
This means that for each build we get one entry per module*category.
With between 200 and 400 modules per build and 4 categories, that's
around 1000 entries per build.
The huge number of total entries leads to a fast overflow of the table
sequence, and it also creates large indexes.
Also, most of the time the js will manage the display of stats, meaning
that python will transform
(build_id, category.key, value)
into
(build_id, {key: value}) for one category.
A new model makes use of a json field to store the values for the
different modules in a json dict, with one entry per category*build
(4 entries per build).
The table will be renamed and migrated later.
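A minimal sketch of such a model, assuming an Odoo version that provides `fields.Json` (model and field names are illustrative):

```python
from odoo import fields, models

class BuildStat(models.Model):
    _name = 'runbot.build.stat'  # placeholder model name

    build_id = fields.Many2one('runbot.build', required=True, index=True)
    category = fields.Char(required=True)  # e.g. one of the 4 categories
    values = fields.Json(default=dict)     # {module: value}, one row per build*category
```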
The initial pull_info_failures feature was introduced before the separate
process for the leader. Since it only covers a rare case, it hasn't
been tested since then.
Testing the new infrastructure without a token, there were many
pull_info_failures, leading to an error on cleanup.
When rendering templates for the git configuration and the nginx
configuration, the templates are rendered as Markup, while a bytes-like
object was expected for saving.
With this commit, the Markup is cast to str and the files are saved
as text files.
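A minimal sketch of the fix (path and content are placeholders):

```python
from markupsafe import Markup

rendered = Markup('server { listen 8080; }\n')  # stand-in for the rendered template
# write in text mode, casting the Markup to str, instead of opening with 'wb'
with open('/tmp/nginx.conf', 'w') as f:  # placeholder path
    f.write(str(rendered))
```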
The `utils.escape` utility was deprecated in werkzeug 2.0.0, so it can
be replaced with our `html_escape` which is itself `markupsafe.escape`.
Also, with this change, double quotes are now escaped as `&#34;`
instead of `&quot;`, so we fix the test too.
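For illustration, the behavioural difference boils down to:

```python
from markupsafe import escape as html_escape

# markupsafe escapes double quotes as a numeric entity,
# where werkzeug's deprecated utils.escape produced "&quot;"
assert str(html_escape('"')) == '&#34;'
```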
New accounts endpoint such that the SSO can push new pre-configured
users / employees directly. This lowers maintenance burden.
Also remove one of the source partners from the merge test, as the
ordering seems wonky for unclear reasons, leading to random failures
of that test.
Some untested code was making running builds available without an
nginx config (using a port and the local address instead of a proper
dns/nginx config).
Remove this part to reduce complexity.
Source cleanup checks multiple places for potentially used commits.
This makes the cleanup logic complex, and limits the usable commits in
python steps. The current use case is to export the merge-base
commits.
The proposed solution is to save the exported commits and clean this
list when the build is done. No commit in this list can be cleaned.
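A hedged sketch of that bookkeeping (model and field names are assumptions, not the actual implementation):

```python
from odoo import fields, models

class Build(models.Model):
    _inherit = 'runbot.build'  # placeholder for the sketch

    # commits exported by python steps; their sources must not be cleaned
    # up until the build is done and this list is cleared
    exported_commit_ids = fields.Many2many('runbot.commit')
```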
The main motivation is to be able to see the existing config_data on
params and edit the configuration on triggers.
A simple version would be to use the `FieldChar` with a simple
`JSON.stringify` formatter, but having some indentation in the field
makes it easier to read and edit.
The multiline requirement makes it closer to the `FieldText` than the
`FieldChar`, but the reset makes the browser freeze.