Identifying build errors by fingerprint is not enough.
Most of the time we want to link build errors based on some criteria.
With this commit, a build error can be qualified by using regular
expressions that extract information.
That information is stored in a jsondict field. That way, we can
find similar build errors when they share some qualifiers.
To extract information, the regular expression must use named group
patterns.
e.g:
`^Tour (?P<tour_name>\w+) failed at step (?P<tour_step>.+)`
The above regular expression should extract the tour name and tour step
from a build error and store them like:
`{ "tour_name": "article_portal_tour", "tour_step": "click on 'Help'" }`
The field can be queried to find similar errors. Also, a button is added
on the Build Error Form to search for errors that share the same
qualifiers.
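The extraction described above can be sketched in plain Python with `re`
named groups (the regex and sample message come from the example above;
`extract_qualifiers` is a hypothetical helper name, not the actual
implementation):

```python
import re

# Named groups become the keys of the extracted qualifiers dict.
QUALIFIER_RE = re.compile(r"^Tour (?P<tour_name>\w+) failed at step (?P<tour_step>.+)")

def extract_qualifiers(error_content):
    # groupdict() maps each named group to its captured value,
    # ready to be stored in the jsondict field.
    match = QUALIFIER_RE.match(error_content)
    return match.groupdict() if match else {}
```

Two errors producing the same extracted dict can then be matched by
querying the jsondict field.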
Until now the favicon was handled by the `/favicon.ico` route made
available by `website`.
This commit adds the different favicons and the logic to display the
state of the current page through the favicon.
Duplicate error content should not happen ... but it does.
With this commit, a server action allows relinking error contents and
thus removing error contents that share the same fingerprint.
When digging into deactivated build errors, one cannot easily find why
an error was deactivated and into which error it was merged.
With this commit, a message is added in the chatter to explain where
the error was merged.
The Docker metadata are currently computed only on
some images during the nightly.
This commit aims to get metadata after each Docker image build in order
to be able to rely on them when needed.
The initial idea to link an error to another one was a quick solution
to group them if they were related, but this made it challenging
to compute metadata regarding errors.
- The displayed error message was not always consistent with the real
root cause/the error that led here.
- The aggregates (let's say, linked build ids) could be the ones of the
error, or from all error messages. Same for the versions, first seen, ...
It is confusing to know what is the list we are managing and what is
the expected result to display.
Main motivation:
on a standard error page (will be changed to "assignment"), we want to
have the list of error messages related to this one. We want to know,
for each message (a real build error), the version, first seen, ...
This will give more flexibility on the display.
The assigned person/team/test-tags, ... are moved to this model.
The appearance data remain on the build error but are aggregated on the
assignation.
When a TypeError occurs while formatting a message in a build log, the
suspicious args are joined with the message. But as the args may be a
tuple, an error occurs when concatenating the message with the args
during the join.
With this commit, we ensure that the args are cast into a list.
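A minimal sketch of the fix (the function name is hypothetical; the
real logging code differs):

```python
def join_message_with_args(message, args):
    # args may arrive as a tuple; `[message] + args` would then raise
    # TypeError: can only concatenate list (not "tuple") to list.
    # Casting args to a list makes the concatenation, and thus the
    # join, safe regardless of the original container type.
    return ' '.join([message] + [str(a) for a in list(args)])
```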
When an image is built for the first time and the build fails, an
ImageNotFound exception is raised when trying to push the image.
With this commit, a warning is emitted instead of crashing in such a
case.
A recent task introduced a record loop using self inside.
It doesn't follow the classic api multi loop pattern, but it isn't an
issue since onchange handlers are always called on a single record.
Remove the loop in all onchange methods to clean them up.
The idea is to help the runbot/QA team monitor other people's tasks
more easily. This way we will have a responsible person for each task
that someone else has assigned.
The goal of this PR is two-fold:
1) We need to be able to better monitor the progress of tasks that
are not our own.
2) It introduces another metric with which we can measure individual
progress, and this can become the main metric for future non-technical
employees whose main goal will be making sure that none of the tasks
become rotten (have no visible progress for months).
When a lot of Docker images are updated at the same time, all runbot
hosts will try to pull them at the same moment.
With this commit, only the images marked as `always_pull` will be
pulled, and if one of them takes too much time to pull, we let the host
make another loop turn before continuing to pull the images.
Finally, if a host uses the Docker registry, it will pull the remote
image when running a step, that way the image will be pulled
automatically if needed.
Docker operations are called often and cannot be logged each time.
If a Docker operation is slow, we log it at the end, but while waiting
for this output we have no idea what the process is doing.
This PR proposes to go a level lower and get the stream from the Docker
operation, to be able to log earlier that a potentially slow operation
has started.
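A sketch of the idea, assuming the low-level Docker API yields a stream
of JSON status lines (as docker-py's low-level client does); the helper
name and the fake stream below are illustrative, not the actual code:

```python
import json
import logging

_logger = logging.getLogger(__name__)

def log_docker_stream(stream, log=_logger.info):
    """Consume a Docker operation stream, logging progress as each
    chunk arrives instead of waiting for the operation to finish."""
    statuses = []
    for line in stream:
        chunk = json.loads(line)
        status = chunk.get('status')
        if status:
            log(status)  # emitted immediately, before the operation ends
            statuses.append(status)
    return statuses
```

With a pull, for example, this surfaces "Pulling fs layer" /
"Downloading" messages as they happen, so a slow operation is visible
from its first chunk.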