Before this commit, dependencies (i.e. the community commit to use when testing enterprise)
were computed at checkout, when the build was going from the pending to the testing state,
and were not stored.
Since the duplicate detection was done at create time, get_closest_branch_name was called
in a loop for each possible duplicate candidate, then one last time at checkout. The main
idea of this PR is to store the build dependencies on the build at create time, making the
duplicate detection faster (especially when the build name matches many indirect builds).
The side effect of this change is that the build dependencies won't be affected if a new
commit is pushed between the build creation and the checkout. The build is fully
determined at creation, and get_closest_branch is only called once per build.
The duplicate detection will also be more precise since we are matching on the commit
groups that were used to run the build, and not only on the branch name.
Some work has also been done to rework the closest branch detection in order to manage new corner
cases. Hopefully, everything should work as before (or in a better way).
In the near future, it will also be possible to use this information to make an "exact
rebuild" or to find the corresponding community build.
PR: #117
When some special builds are scheduled during the night, free slots on
runbot instances are used. Depending on the number of scheduled builds,
all the slots can be used. That prevents people from using the runbot for
normal builds during this time.
To mitigate the problem, the scheduled builds were postponed to the
middle of the night ... the CET night. That means it could be morning
in India.
With this commit, a build priority is given to normal builds. On the
other hand, scheduled builds are pushed to the end of the queue.
So even if there are plenty of builds during the Belgian night, if
someone pushes a commit in between, it will be built before the
scheduled pending builds.
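As an illustration, a priority-aware pick of the pending queue could look like the
following sketch (model, field and variable names are assumptions, not the actual
runbot schema):

```python
# Sketch only: normal builds sort before nightly scheduled builds, so
# they grab the free slots first whatever their age.
pending = env['runbot.build'].search(
    [('state', '=', 'pending')],
    order='scheduled asc, id asc',  # hypothetical 'scheduled' flag
    limit=available_slots,
)
pending._schedule()
```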
When using a local git repo, the git name does not contain a colon, making
the frontend crash.
With this commit, a non-stored computed field 'short_name' is added to
compute a shorter version of the name.
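A minimal sketch of such a computed field (the split logic and the inheritance
point are assumptions; the actual runbot implementation may differ):

```python
from odoo import api, fields, models

class Repo(models.Model):
    _inherit = 'runbot.repo'  # hypothetical inheritance point

    short_name = fields.Char(compute='_compute_short_name', store=False)

    @api.depends('name')
    def _compute_short_name(self):
        for repo in self:
            # 'git@github.com:odoo/odoo' -> 'odoo/odoo'; a local path
            # like '/srv/git/odoo' has no colon, so fall back to the
            # last two path components instead of crashing
            repo.short_name = '/'.join(repo.name.split(':')[-1].split('/')[-2:])
```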
When the docker_run function is called, the odoo command is decorated
with a pip command to install the required packages.
This pollutes the docker_run function if a runbot job_ method wants to
use docker for something other than starting an odoo instance (pg_dump,
for example).
With this commit, the command modification is moved into an optional
helper function named build_odoo_cmd.
The docker_run function now takes the command to run as a string instead
of a list containing the odoo command and its parameters.
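A minimal sketch of the split (build_odoo_cmd comes from this commit; the exact
pip invocation and the docker_run signature are assumptions):

```python
# Sketch only: the real runbot helper may differ.
def build_odoo_cmd(odoo_cmd):
    """Decorate an odoo command (a list of arguments) with the pip
    install step and flatten everything into one shell string."""
    return 'pip3 install -r requirements.txt && ' + ' '.join(odoo_cmd)

# odoo jobs wrap their command:
#   docker_run(build_odoo_cmd(['python3', 'odoo-bin', '-d', dbname]), ...)
# other jobs can now pass any shell command directly:
#   docker_run('pg_dump %s > /data/dump.sql' % dbname, ...)
```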
Use the proper / actual "is there any stageable PR" query to check if
a PR is blocked as well; that way they shouldn't be diverging all the
time, even if it might make PR.blocked a bit more expensive.
fixes #111
Asking for the kill of a build which is the duplicate of another fails
because the state of this build is "duplicate", so the _ask_kill method
has no effect on it.
With this commit, the effect of _ask_kill is applied to the duplicate in
the above-mentioned case.
When searching the builds for the frontend, the resulting query can last
a very long time (up to 7 seconds).
With this commit, the search result is strictly limited to 100 builds,
the limit query parameter is removed, and the search string length is
limited to 60 chars.
The guess_result method is now optimized to guess results for testing
builds only. The others have the same value as the final result.
A few tests were added for this method.
Thanks to @KangOl for the optimization code.
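A hedged sketch of the idea behind the optimization (helper and field names are
assumptions):

```python
# Sketch only: a guess is only needed while a build is testing; for
# every other state the guess is simply the known final result.
def _compute_guess_result(self):
    testing = self.filtered(lambda b: b.state == 'testing')
    for build in self - testing:
        build.guess_result = build.result
    for build in testing:
        # hypothetical helper scanning the build logs for errors
        build.guess_result = build._guess_result_from_logs()
```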
When github reaches the hook controller, the repo hook_time field is
updated. That way, a git fetch is done only when the hook_time is newer
than the last fetch. If the hook_time is updated during the long-running
cron that runs the _cron_fetch_and_schedule method, the hook_time is cached
and only the old hook time is seen until the cron's end; the cursor
commit is not enough. As a result, the new builds are only scheduled in the
next cron run.
With this commit, the cache is invalidated after the commit; that way,
the hook_time field contains the correct value.
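A hedged sketch of the fix, using the ORM calls of that Odoo era (the exact spot
in the cron loop is an assumption):

```python
# commit what was done in this cron iteration first...
self.env.cr.commit()
# ...then drop the ORM cache: without this, repo.hook_time keeps
# returning the value read at the start of the long-running cron
self.invalidate_cache()
```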
When a PR is created in odoo/enterprise without a corresponding
PR in odoo/community BUT with a corresponding branch in odoo-dev/community,
the closest_branch detection fails. Moreover, the duplicate detection
fails too.
As a consequence, the PR build will probably fail because it will be
built with the default target branch, which may not be suited for it.
If the branch build succeeds, it leads to inconsistent results.
With this commit, a new case is added in _get_closest_branch_name
to handle this situation.
The server_match field also reflects the difference, as this case will
be marked as 'no PR'.
When a PR also exists in odoo/community, the server_match field will be
'exact PR'. This change should not require a migration.
This commit also adds a bunch of tests for the closest branch name
detection and the duplicates.
Co-authored-by: @Xavier-Do
A status being updated on a commit is a read/modify/update cycle, meaning
it's possible for somebody else (including a concurrent event?) to
concurrently update the commit and conflict, leading to the webhook
blowing up, which is undesirable as it means data loss (whereas if it
blows up on the other side, e.g. in the cron's commit processor, the
cron will just take it up next iteration).
Might eventually extract / generalise this, but for now it's simpler to
just do it in runbot_merge's post_load, that way there's no setup
change (just a small bit of configuration), and it's only enabled on
the instances runbot_merge is installed on.
fixes #97, closes #103
Will comment any time a status update results in a CI failure on a
reviewed pull request. Might be somewhat spammy, we'll see.
No notification if the PR is not reviewed yet.
fixes #87
Before this, impacting a commit's statuses on the relevant PR or
staging would be performed immediately / inline with its
consumption. This, however, is problematic if we want to implement
additional processing like #87 (and possibly though probably not #52):
webhook handlers should be kept short and fast, and feeding back into
github from them would not be acceptable.
- flag commits as needing processing instead of processing them
  immediately; this uses a partial index, as it looks like the
  recommended / proper way to index a boolean column in which one of
  the values is searched much more than the other (todo: eventually
  check whether that actually does anything; a sketch follows this
  list)
- add a new cron for commits processing
- alter tests so they use this new cron (mostly by migrating them to
  `run_crons`, though not solely, as some still need more detailed
  management to properly check intermediate steps)
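A hedged sketch of such a partial index (table and column names are assumptions):
only the rows where the flag is set get indexed, which is the side of the boolean
the processing cron actually searches for.

```python
def init(self):
    # almost every commit row is FALSE and never looked up through
    # this index, so index only the TRUE side
    self.env.cr.execute("""
        CREATE INDEX IF NOT EXISTS runbot_merge_commit_to_process_idx
            ON runbot_merge_commit (to_process)
         WHERE to_process IS TRUE
    """)
```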
Fix an issue with closing a staged PR while at it (the "merging" tag
would potentially never be removed).
Proper RFC5322 makes for much noisier messages, and seems completely
unnecessary as examples of sign-off on the internet don't quote spaces
/ names.
closes #102
* split out truly awaiting PRs from those waiting on an event of some
sort
* if a staging is active but doesn't have a state yet, it should be
considered pending, not cancelled
closes #74
If a PR gets sync'd to a known-valid commit, it should be marked as
valid rather than getting into this weird state where it's merely open
but github knows it passes CI.
Fixes #72
When a runbot build ends without error but with one or more warnings,
statuses are not sent to github. As a result, the PR stays in the
pending state.
With this commit, the github status is set to failure when a build ends
with a "warn" result.
This is somewhat less useful with runbot's fail-fast, as a runbot
failure (false positive or not) will now very quickly trigger an end
to the current staging.
Still, it could be of use.
closes #89
The choice to keep sync'd PRs in error means it's possible to update
the code and re-run the PR directly without it going through review &
CI again, which is a bit odd. Remove the special case and always reset
a sync'd PR to opened, for clarity and simplicity.
closes #71, closes #83
Turns out skipping locks is not very useful when there are no locks
being held, because we only touch the PRs *after* the merge has been
applied.
So finally do that: lock all of a staging's PRs before we try to
fast-forward the relevant repositories, so a close command coming back
from github (from having seen the closes #xxx annotation) doesn't
screw us over.
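A hedged sketch of the locking step (table name and the way the staging's PR ids
are gathered are assumptions):

```python
# take row locks on every PR of the staging before fast-forwarding the
# repos; a concurrent close coming from a github webhook then blocks
# until this transaction commits instead of conflicting with it
self.env.cr.execute(
    "SELECT id FROM runbot_merge_pull_requests WHERE id IN %s FOR UPDATE",
    [tuple(staging_pr_ids)],  # hypothetical: ids of the staging's PRs
)
```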
No test because I don't understand how / why it's triggered, it's just
that some PRs don't have a label. I assumed the issue occurred when
the source branch or even the repo (cross-repo PR) was deleted, but that
doesn't seem to trigger the issue (or in any case not in as short a
time as a test; maybe GH eventually does some vacuuming which causes
the issue?).
Anyway we may eventually want to reclaim these PRs (allowing a lack of
label and treating them like the patch-\d labels: with no semantic
value), however the simplest thing to do for now is to just ignore the
corresponding PR.
closes #101
In remote tests, if the deletion of a test repository fails (because of
a gh glitch) or the repo creation succeeded but reported a failure (for
some reason), the entire run is hosed, because every test trying to
create a similarly named repository will explode.
Alter repomaker to just try to delete the repo, unless in --no-delete
mode, in which case just skip any further test trying to use the same
repository (not deleting the repo is the entire point of --no-delete,
as its purpose is the ability to do post-mortem debugging on
repository state).
closes #99
Github is subject to a fair amount of transient failures, which are
currently ill-logged: an exception is raised and the caller /
responsible might eventually log something, but it's not really
formalised or centralised, and it is thus inconvenient to try and
post-mortem issues with github's support.
Change this such that *almost* all github API calls get extensively
logged (status, reason, all headers, body) on failure.
Also automatically sets debug logging for odoo in local tests, and
alters the fake response constructor thing so it doesn't set a json
mimetype when the body is not valid json.
Closes #98
At checkout time, when a build has no server (e.g. enterprise),
the dependency repo that contains the server needs to be extracted too.
It happens that this dependency repo is not up to date.
With this commit, the dependency repo is updated before its extraction.
When searching for duplicate builds, a git ls-remote is used to verify
that the branch still exists. This command is time consuming (up to 2
seconds).
If the number of builds is significant, it can last a very long time.
When a user pushes one or more new branches without new commits, the
number of duplicate builds found may be very large (more than 92).
This loop blocks the cron worker in charge of creating new builds.
This quick fix limits the number of duplicates to 1, but if the
closest name is not the same, the build will not be considered a duplicate.
When a runbot instance executes the _cron_fetch_and_build method, the
repo is updated only if the webhook time is newer than the last fetch
time.
As the cron is now split into long-running crons, the hook_time field is
cached. The runbot instance that sees a new pending build uses this
cached value to decide whether the repo update is needed.
With this commit, the repo update is done right before exporting the
repo, and only if the commit hash is not found locally.
As a bonus, the environment is reset in the long-running cron of the
runbot builders to update the cached values.
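A hedged sketch of the fetch-if-missing check (helper names are assumptions;
`git cat-file -e` is the standard way to test for a local object):

```python
import subprocess

def _export(self, sha):
    # `git cat-file -e <sha>^{commit}` exits non-zero when the commit
    # is not present in the local bare repository
    missing = subprocess.call(
        ['git', '--git-dir=%s' % self.path,
         'cat-file', '-e', '%s^{commit}' % sha])
    if missing:
        self._update()  # hypothetical helper doing the actual git fetch
    # ... then export the tree as before
```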
The runbot cron is executed on each runbot instance. When the number of
instances scales, the time needed for an instance to acquire the cron
increases.
With this commit, the original runbot cron is removed. Instead, a cron
has to be created to run the _cron_fetch_and_schedule method.
This method will fetch the repo and create pending builds. This cron is
intended to run on only one runbot instance. The method needs a host
parameter to specify which runbot instance will be in charge of this
task.
On the other hand, a dedicated cron has to be manually created for each
runbot instance that will have the build task.
Those crons only have to call the _cron_fetch_and_build method with the
runbot hostname as a parameter. This method will then self-assign
pending builds if there are slots available.
All available build slots are reserved in a single LOCKED SQL query.
Both methods are intended to run for a long time, just a few minutes
below the cron timeout, to maximize cron productivity.
The timeout is randomized to avoid deadlocks if the runbot instances are
started at the same time.
The --limit-time-real parameter therefore has to be set to a minimum of
180 seconds (600 or 1200 are probably better targets).
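A hedged sketch of the slot reservation (table and column names are assumptions):
all free slots are claimed in a single query, and FOR UPDATE SKIP LOCKED keeps two
builders from reserving the same pending builds.

```python
self.env.cr.execute("""
    UPDATE runbot_build
       SET host = %s
     WHERE id IN (
           SELECT id
             FROM runbot_build
            WHERE state = 'pending' AND host IS NULL
            ORDER BY id
            LIMIT %s
              FOR UPDATE SKIP LOCKED)
 RETURNING id
""", [fqdn, available_slots])  # fqdn: this builder's hostname
reserved_ids = [r[0] for r in self.env.cr.fetchall()]
```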
When displaying build logs, all the messages from ir_logging about this
particular build are fetched from the database.
From time to time, the number of logged messages is really huge. Those
messages can also contain multiple lines, multiplying the number of
rows to generate in the html page.
When this happens, the process that generates the template lasts a long
time and ends with a MemoryError. If the end user, bored, hits the
refresh button multiple times, all the workers end up busy building
this template. In the end, all users get a Bad Gateway from nginx.
With this commit, the number of messages taken into account is limited
to 10000.
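A hedged sketch of the capped fetch (the link between ir_logging rows and a build
is an assumption here):

```python
# cap the rows fed to the template: without a limit, a pathological
# build can produce enough log lines to MemoryError the worker
logs = request.env['ir.logging'].sudo().search(
    [('build_id', '=', build.id)],  # hypothetical link to the build
    order='id',
    limit=10000,
)
```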
When a user checks the runbot frontend, the guess_result field is used
to change the color of the build state. But github is not notified of
this guessed result.
As a consequence, runbot_merge is not aware that the build has failed
and will continue to wait.
With this commit, as soon as guess_result detects a failure, the
status is sent to github; that way, runbot_merge will stop waiting
sooner.
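A hedged sketch of the early notification (helper name is an assumption):

```python
# as soon as the guess turns into a failure, tell github right away
# instead of waiting for the build to finish
if build.state == 'testing' and build.guess_result in ('ko', 'warn'):
    build._github_status()  # hypothetical helper posting the commit status
```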
When running the _job_10 method, a database is created with the base
module alone. Tests are enabled during this job. Those tests are run
again by the _job_20 method. Moreover, even if the tests fail during
_job_10, they are not taken into account for the final result. The
_job_10 method lasts approximately 4 minutes.
With this commit, the tests are not enabled during _job_10.
A new module in Odoo needs pyCrypto, but this module alone is too limited
to justify an addition to the requirements file.
PR: https://github.com/odoo/odoo/pull/28816
The number is probably the most common search criterion for PRs (to
track their status / issues). Having to go through custom filters to
find one is a pain in the ass.
This was already done live by editing the view, but that means it gets
lost every time the module is updated.
closes #73