[DEL] runbot: remove master version

This commit is contained in:
Xavier-Do 2024-02-21 10:01:38 +01:00
parent fffc27d2fa
commit 979d1fd377
232 changed files with 1 additions and 35366 deletions

README.md
View File

@ -1,317 +1 @@
# Odoo Runbot Repository
This repository contains the source code of the Odoo testing bot [runbot.odoo.com](http://runbot.odoo.com/runbot) and related addons.
------------------
## Warnings
**Runbot will delete folders and drop databases to free up space during usage.** Even though only elements created by runbot are concerned, don't use runbot on a server with sensitive data.
**Runbot changes some default Odoo behaviours.** The runbot database may work with other modules, but without any guarantee.
**Runbot is not safe by itself.** This tutorial describes the minimal way to deploy runbot, without too many security considerations. Only trusted code should be executed with this single-machine setup. For more security the builder should be deployed separately with minimal access.
## Glossary/models
Runbot uses a set of concepts in order to cover all the use cases we need:
- **Project**: regroups a set of repositories that work together. Usually one project is enough and a default *R&D* project exists.
- **Repository**: A repository name regrouping the main repo and its forks. Ex: odoo, enterprise
- **Remote**: A remote for a repository. Example: odoo/odoo, odoo-dev/odoo
- **Build**: A test instance, using a set of commits and parameters to run some code and produce a result.
- **Trigger**: Indicates that a build should be created when a new commit is pushed on a repo. A trigger has both trigger repos and dependency repos. Ex: a new commit on runbot -> a build with runbot and a dependency on odoo.
- **Bundle**: A set of branches that work together: all the branches with the same name and all linked PRs in the same project.
- **Batch**: A container for the builds and commits of a bundle. When a new commit is pushed on a branch, if a trigger exists for the repo of that branch, a new batch is created with this commit. After 60 seconds, if no other commit is added to the batch, a build is created for each trigger having a new commit in this batch.
## Processes
Mainly to allow distributing runbot on multiple machines and to avoid cron worker limitations, runbot uses two processes besides the main server.
- **runbot process**: the main runbot process, serving the frontend. This is the odoo-bin process.
- **leader process**: this process should only be started once; it detects new commits and creates builds for the builders.
- **builder process**: this process can run at most once per physical host; it picks unassigned builds and executes them.
## HOW TO
This section gives the basic steps to follow to configure the runbot. The configuration may differ from one use case to another; this one describes how to test addons for Odoo, needing to fetch the Odoo core but without testing vanilla Odoo. As an example, the runbot Odoo addon will be used as a test case. Runbotception.
### DNS
You may configure a DNS entry for your runbot domain as well as a CNAME for all subdomains.
```
* IN CNAME runbot.domain.com.
```
This is mainly useful to access running builds but will also give more freedom for future configurations.
This is not strictly required, but many features won't work without it.
### nginx
An example config is given in the example_scripts folder.
It may need to be adapted depending on your setup, mainly for domain names. This can be done during the install, but serving at least the runbot frontend (proxy pass from port 80 to 8069) is the minimal config needed.
Note that runbot also has a dynamic nginx config listening on port 8080, mainly for running builds.
This config is an ir_ui_view (runbot.nginx_config) and can be edited if needed. The config is applied and updated automatically after some time by the builder process.
It is also advised to adapt this config to work over HTTPS.
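For illustration only, a minimal reverse-proxy server block could look like the sketch below. It only covers the frontend; the example_scripts config (running builds, wildcard subdomains, logs) should be preferred, and the domain name is the placeholder used above.
```
server {
    listen 80;
    server_name runbot.domain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://127.0.0.1:8069;
    }
}
```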
### Requirements
Runbot is an addon for Odoo, meaning that both the Odoo and runbot code are needed to run it. Some tips to configure Odoo are available in the [odoo setup documentation](https://www.odoo.com/documentation/15.0/setup/install.html#setup-install-source) (requirements, postgres, ...). This page will mainly focus on runbot specificities.
You will also need to install docker and other requirements before running runbot.
```bash
sudo apt-get install docker.io python3-unidiff python3-docker python3-matplotlib
```
### Setup
Choose a workspace to clone both repositories and check out the right branch in both of them.
The directory used in the example scripts is `/home/$USER/odoo/`.
Note: it is highly advised to create a dedicated user for runbot. This example creates a new user `runbot`.
```bash
sudo adduser runbot
# needed access rights: docker, postgres
sudo -u postgres createuser -d runbot
sudo adduser runbot docker
sudo systemctl restart docker
# no sudo power needed for now
su runbot
cd
mkdir odoo
cd odoo
```
You may [add valid ssh key linked to a github account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account)
to this user in order to clone the different repositories. You could clone over HTTPS but this may be a problem later to access your private repositories.
It is important to clone the repos with the runbot user:
```bash
git clone --depth=1 --branch=15.0 git@github.com:odoo/odoo.git
git clone git@github.com:odoo/runbot.git
git -C odoo checkout 15.0
git -C runbot checkout 15.0
mkdir logs
```
Note: `--depth=1 --branch=15.0` is optional but will help reduce the disk usage for the odoo repo.
Finally, check that you have access to docker: listing the containers should work without error (the list will be empty).
```bash
docker ps
```
If it is not working, ensure the user is in the docker group, and log out and back in if needed.
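For instance, a quick check, assuming the `runbot` user created above:
```bash
id -nG runbot | grep -qw docker && echo "runbot is in the docker group"
```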
### Install and start runbot
This part only consists of configuring and starting the three services.
Some example scripts are given in `runbot/runbot/example_scripts`
```bash
mkdir ~/bin # if not exist
cp -r ~/odoo/runbot/runbot/example_scripts/runbot ~/bin/runbot
```
Scripts should be adapted, mainly for the `--forced-host-name` parameter in builder.sh:
```bash
sed -i "s/runbot.domain.com/runbot.my_real_domain.com/" ~/bin/runbot/builder.sh
```
*The hostname is initially the machine hostname, but it should be different per process: having the same hostname for the leader and the builder is not ideal. This is why the script uses the forced-host-name parameter.*
*The most important one is the builder hostname since it will be used to build the running build, zip download and log URLs. We recommend setting your main domain name on this process. The nginx config given as an example should be adapted if not.*
Create the corresponding services. You can copy them from the example scripts and adapt them:
```bash
exit # go back to a sudoer user
runbot_user="runbot"
sudo bash -c "cp /home/${runbot_user}/odoo/runbot/runbot/example_scripts/services/* /etc/systemd/system/"
sudo sed -i "s/runbot_user/${runbot_user}/" /etc/systemd/system/runbot.service
sudo sed -i "s/runbot_user/${runbot_user}/" /etc/systemd/system/leader.service
sudo sed -i "s/runbot_user/${runbot_user}/" /etc/systemd/system/builder.service
```
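For reference, a builder unit could look roughly like the following. This is only a hypothetical sketch: the shipped example services may differ (dependencies, flags, restart policy), so prefer the copied files and only adapt the user and paths.
```
[Unit]
Description=runbot builder
After=network.target postgresql.service

[Service]
User=runbot
ExecStart=/home/runbot/bin/runbot/builder.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```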
Enable all services and start the runbot frontend:
```bash
sudo systemctl enable runbot
sudo systemctl enable leader
sudo systemctl enable builder
sudo systemctl daemon-reload
sudo systemctl start runbot
sudo systemctl status runbot
```
The runbot service should now be running.
You can now connect to your backend and preconfigure runbot.
- Install runbot module, if it wasn't done before.
- Navigate to `/web` to leave the website configurator.
- Connect as admin (default password: admin).
Check the Odoo documentation for other needed security configuration (master password). This is mainly needed for production purposes.
You can check that in the `/web/database/manager` page. [More info here](https://www.odoo.com/documentation/15.0/administration/install/deploy.html#security)
Change your admin user login and password
You may want to check the runbot settings (`Runbot > Setting > setting`):
- Default number of workers should be the max number of parallel builds, consider having max `#cpu - 1`
- Modify `Default odoorc for builds` to change the running build master password to something unique ([ideally a hashed one](https://github.com/odoo/odoo/blob/15.0/odoo/tools/config.py#L722), see the sketch after this list).
- Tweak the garbage collection settings if you have limited disk space
- The `number of running build` is the number of parallel running builds.
- `Max commit age (in days)` will limit the max age of commits to detect. Increase this limit to detect older branches.
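For the hashed master password: Odoo accepts an `admin_passwd` hashed with pbkdf2_sha512 through passlib (see the config.py link above), so a hash can be generated with something like the sketch below and pasted into the `Default odoorc for builds` value. Replace `my-master-password` with your own secret.
```bash
python3 -c "from passlib.context import CryptContext; print(CryptContext(schemes=['pbkdf2_sha512']).hash('my-master-password'))"
```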
Finally, start the two other services:
```bash
sudo systemctl start leader
sudo systemctl start builder
```
Several log files should have been created in `/home/runbot/odoo/logs/`, one per service.
#### Bootstrap
Once launched, the leader process should start to do basic work, and the bootstrap will set up some directories in static.
```bash
su runbot
ls ~/odoo/runbot/runbot/static
```
>build  docker  nginx  repo  sources  src
- **repo** contains the bare repositories
- **sources** contains the exported sources needed for each build
- **build** contains the different workspaces for the dockers, containing logs, filestore, ...
- **docker** contains DockerFile and docker build logs
- **nginx** contains the nginx config used to access running instances
All of them are empty for now.
A database defined by the *runbot.runbot_db_template* icp will be created. By default, runbot uses template0. This database will be used as a template for testing builds. You can change this database for more customisation.
Other cron operations are still disabled for now.
#### DOCKER images
A default docker image is present in the database and should automatically be built (this may take some time, check the builder logs).
Depending on your version it may not be enough.
You can modify it to fit your needs, or ask us for the latest version of the Dockerfile while waiting for an official link.
#### Add remotes and repositories
Access the runbot app and go to the `Runbot>Setting>Repositories` menu.
Create a new repo for odoo:
![Odoo repo configuration](runbot/documentation/images/repo_odoo.png "Odoo repo configuration")
- **Name**: `odoo`. It will be used as the directory name to export the sources.
- **Identityfile** is only useful if you want to use another SSH key to access a repo.
- **Project**: `R&D` by default.
- **Modules to install**: `-*` in order to remove them from the default `-i`. This will speed up installation. To install and test all modules, leave this space empty or use `*`. Some modules may be blacklisted individually, by using `*-module,-other_module, l10n_*`.
- **Server files**: `odoo-bin` will let runbot know the possible files to use to launch Odoo. odoo-bin is the one to use for recent versions, but you may want to add other server files for older versions (comma-separated list). The same logic is used for manifest files.
- **Manifest files**: `__manifest__.py`. This field is only useful to configure old versions of odoo.
- **Addons path**: `addons,odoo/addons`. The paths where addons are stored in this repository.
- **Mode**: `poll`, since github won't hook your runbot instance. Poll mode is limited to one update every 5 minutes. *It is advised to set it in hook mode later and hook it manually or from a cron or automated action to have more control*.
- **Remotes**: `git@github.com:odoo/odoo.git` A single remote is added, the base odoo repo. Only branches will be fetched to limit disk usage and branches will be created in the backend. It is possible to add multiple remotes for forks.
Create another project for your repositories `Runbot>Setting>Project`
This is optional, you could use the R&D one, but this may be more noisy since every update in odoo/odoo will be displayed on the same page as your own repo's. Splitting by project also allows you to manage access rights.
Create a repo for your custom addons repo
![Odoo repo configuration](runbot/documentation/images/repo_runbot.png "Odoo repo configuration")
- **Name**: `runbot`
- **Project**: `runbot`.
- **Modules to install**: `-*,runbot` to only install the runbot module.
- No addons path given, to use the repo root as default.
- (optional) For your custom repo, it is advised to configure the repo in `hook` mode if possible, adding a webhook on `/runbot/hook`. Use `/runbot/hook/<repo_id>` to do it manually (see the example after this list).
- **Remotes**: `git@github.com:odoo/runbot.git`
- The remote *PR* option can be checked if needed to fetch pull requests too. This will only work if a GitHub token is given for this repo.
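As an example, a cron job or a manual call can flag a repo for fetching by hitting the hook route. The exact method accepted may depend on the runbot version, so treat this as a sketch; `<repo_id>` is the id visible in the repository form URL.
```bash
curl -s "https://runbot.my_real_domain.com/runbot/hook/<repo_id>"
```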
A config file with your remotes should be created for each repo. You can check the content in `/runbot/static/repo/(runbot|odoo)/config`. The repos will be fetched; this operation may take some time too. After that, you should start seeing empty batches in both projects on the frontend (`/` or `/runbot`).
#### Triggers and config
At this point, runbot will discover new branches and new commits and create bundles, but no build will be created.
When a new commit is discovered, the branch is updated. The commit is then added to a batch, a container for new builds, but only if a trigger corresponding to this repo exists. After one minute without a new commit added to the batch, the different triggers will create one build each.
In this example, we want to create a new build when a new commit is pushed on runbot, and this build needs a commit in odoo as a dependency.
By default the basic config will use the step `all` to test all addons. The installed addons will depend on the repo configuration, but all dependency tests will be executed too.
This may not be wanted because some `base` or `web` tests may be broken; this is the case with the runbot addons. Also, selecting only the tests for the addons
we are interested in will speed up the build a lot.
Even if it would be better to create a new config and steps, we will modify the current `all` config step.
`Runbot > Configs > Build Config Steps`
Edit the `all` config step and set `/runbot` as **Test tags**
We can also check the config we're going to use:
`Runbot > Configs > Build Config`
Optionally, edit the `Default no run` config and remove the `base` step; it only tests the `base` module.
Configs and steps can be useful to create custom test behaviours, but this is out of the scope of this tutorial.
Create a new trigger like this:
`Runbot>Triggers`
- *Name*: `Runbot`. Just for display.
- *Project id*: `runbot`. This is important since you can only choose repos triggering a new build in this project.
- *Triggers*: `runbot`. A new build will be created in the project when pushing on this repo.
- *Dependencies*: `odoo`. Runbot needs odoo to run.
- *Config*: `Default no run`. Will start a build but doesn't make it run at the end. You can still wake up a build.
When a branch is pushed, a new batch will be created, and after one minute the new build will be created if no other change is detected.
CI options will only be used to send status on remotes of trigger repositories having a valid token.
You can either push, or go on the frontend bundle page and use the `Force new batch` button (refresh icon) to test this new trigger.
#### Bundles
Bundles can be marked as `no_build`, so that new commits won't trigger batch creation and the bundle won't be displayed on the main page.
#### Hosts
Runbot is able to share pending builds across multiple hosts. In the present case, there is only one. A new host will never assign a pending build to itself by default.
Go to the Build Hosts menu and choose yours. Uncheck *Only accept assigned build*. You can also tweak the number of parallel builds for this host.
### Modules filters
Modules to install can be filtered by repo, and by config step. The first filter to be applied is the repo one, creating the default list for a config step.
Adding `-module` on a repo will remove the module from the default; it is advised to reflect the default case on the repo. To test only a custom module, adding `-*` on the odoo repo will disable all odoo addons; only dependencies of custom modules will be installed. Some specific modules can also be filtered out using `-module1,-module2`, or some specific modules can be kept using `-*,module1,module2`.
Modules can also be filtered on a config step with the same logic as the repo filter, except that the repo's blacklist can be disabled to allow all modules by starting the list with `*` (all available modules).
It is also possible to add test-tags to a config step to allow more modules to be installed but only test some specific ones. Test tags: `/module1,/module2`
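As a recap for the runbot example used in this tutorial (setting values, not commands):
```
odoo repo       -> Modules to install: -*          (no odoo addon installed by default)
runbot repo     -> Modules to install: -*,runbot   (only the runbot module)
config step all -> Test tags:          /runbot     (only run the runbot tests)
```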
### db template
Db creation will use template0 by default. It is possible to specify a specific template to use in the runbot config *Postgresql template*. It is mainly used to add extensions, and it also avoids issues if template0 is already in use when creating a new database.
It is recommended to generate a `template_runbot` database based on template0 and set this value in the runbot settings:
```
createdb template_runbot -T template0
```
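If the template is mainly there to provide extensions, they can be installed in it once; `unaccent` below is only an example:
```bash
psql template_runbot -c "CREATE EXTENSION IF NOT EXISTS unaccent;"
```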
## Dockerfiles
Runbot uses a Dockerfile Odoo model to define the Dockerfile used for builds and is shipped with a default one. This default Dockerfile is based on Ubuntu Bionic and is intended to build recent supported versions of Odoo.
The model is using Odoo QWeb views as templates.
A new Dockerfile can be created as needed, either by duplicating the default one and adapting the parameters in the view (e.g. changing the key `'from': 'ubuntu:bionic'` to `'from': 'debian:buster'` will create a new Dockerfile based on Debian instead of Ubuntu),
or by providing a plain Dockerfile in the template.
Once the Dockerfile is created and the `to_build` field is checked, the Dockerfile will be built (note that no other operations will occur during the build).
A version or a bundle can be assigned a specific Dockerfile.
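For the plain-Dockerfile option, the template body can contain a regular Dockerfile. The block below is a deliberately minimal, illustrative sketch only; the real default image installs far more (wkhtmltopdf, Chrome, Python requirements, ...):
```
FROM ubuntu:bionic
ENV LANG=C.UTF-8 DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y git python3-pip && rm -rf /var/lib/apt/lists/*
```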
No master version available for runbot, please use the latest stable

File diff suppressed because it is too large

View File

@ -1,2 +0,0 @@
from . import models
from . import controllers

View File

@ -1,14 +0,0 @@
# -*- coding: utf-8 -*-
{
'name': 'forward port bot',
'version': '1.1',
'summary': "A port which forward ports successful PRs.",
'depends': ['runbot_merge'],
'data': [
'data/security.xml',
'data/crons.xml',
'data/views.xml',
'data/queues.xml',
],
'license': 'LGPL-3',
}

View File

@ -1 +0,0 @@
FIX: the deduplication of authorship in case of conflicts in multi-commit PRs

View File

@ -1 +0,0 @@
FIX: loss of authorship on conflicts in multi-commit PRs, such conflicts now generate a commit with no authorship information, which can not be merged

View File

@ -1 +0,0 @@
ADD: better localisation of conflicts in multi-PR commits, list all the commits in the comment and add an arrow pointing to the one which broke

View File

@ -1 +0,0 @@
REM: creation of forward ports in draft mode

View File

@ -1 +0,0 @@
FIX: some feedback messages didn't correctly ping the person being replied to

View File

@ -1 +0,0 @@
IMP: properly notify the user when an update to a pull request causes a conflict when impacted on the followup

View File

@ -1 +0,0 @@
IMP: add the forward-port remote to the repository view, so it can be set via the UI

View File

@ -1 +0,0 @@
IMP: error messages when trying to `@fw-bot r+` on pull requests not under its purview

View File

@ -1 +0,0 @@
ADD: list of outstanding forward-ports

View File

@ -1 +0,0 @@
FIX: allow delegate reviewers *on forward ports* to approve the followups, it worked fine for delegates on the original pull request but a delegation on a forward port would only work for that specific PR (note: only works if the followups don't already exist)

View File

@ -1 +0,0 @@
FIX: rare condition where updating a forwardport would then require all followups to be individually approved

View File

@ -1 +0,0 @@
FIX: don't trigger an error message when using `fw-bot r+` and some of the PRs were already approved

View File

@ -1 +0,0 @@
IMP: layout and features of the "outstanding forward port" page, show the oldest-merged PRs first and allow filtering by reviewer

View File

@ -1 +0,0 @@
IMP: notifications when reopening a closed forward-port (e.g. indicate that they're detached)

View File

@ -1 +0,0 @@
IMP: use the `diff3` conflict style, should make forward port conflicts clearer and easier to fix

View File

@ -1 +0,0 @@
IMP: flag detached PRs in their dashboard

View File

@ -1,15 +0,0 @@
import pathlib
from odoo.addons.runbot_merge.controllers.dashboard import MergebotDashboard
class Dashboard(MergebotDashboard):
def _entries(self):
changelog = pathlib.Path(__file__).parent / 'changelog'
if not changelog.is_dir():
return super()._entries()
return super()._entries() + [
(d.name, [f.read_text(encoding='utf-8') for f in d.iterdir() if f.is_file()])
for d in changelog.iterdir()
]

View File

@ -1,45 +0,0 @@
<odoo>
<record model="ir.cron" id="port_forward">
<field name="name">Check if there are merged PRs to port</field>
<field name="model_id" ref="model_forwardport_batches"/>
<field name="state">code</field>
<field name="code">model._process()</field>
<field name="interval_number">1</field>
<field name="interval_type">minutes</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
</record>
<record model="ir.cron" id="updates">
<field name="name">Update followup FP PRs</field>
<field name="model_id" ref="model_forwardport_updates"/>
<field name="state">code</field>
<field name="code">model._process()</field>
<field name="interval_number">1</field>
<field name="interval_type">minutes</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
</record>
<record model="ir.cron" id="reminder">
<field name="name">Remind open PR</field>
<field name="model_id" ref="model_runbot_merge_pull_requests"/>
<field name="state">code</field>
<field name="code">model._reminder()</field>
<field name="interval_number">1</field>
<field name="interval_type">days</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
</record>
<record model="ir.cron" id="remover">
<field name="name">Remove branches of merged PRs</field>
<field name="model_id" ref="model_forwardport_branch_remover"/>
<field name="state">code</field>
<field name="code">model._process()</field>
<field name="interval_number">1</field>
<field name="interval_type">hours</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
</record>
</odoo>

View File

@ -1,51 +0,0 @@
<odoo>
<record id="action_forward_port" model="ir.actions.act_window">
<field name="name">Forward port batches</field>
<field name="res_model">forwardport.batches</field>
<field name="context">{'active_test': False}</field>
</record>
<record id="tree_forward_port" model="ir.ui.view">
<field name="name">Forward port batches</field>
<field name="model">forwardport.batches</field>
<field name="arch" type="xml">
<tree>
<field name="source"/>
<field name="batch_id"/>
</tree>
</field>
</record>
<record id="form_forward_port" model="ir.ui.view">
<field name="name">Forward port batch</field>
<field name="model">forwardport.batches</field>
<field name="arch" type="xml">
<form>
<group>
<group><field name="source"/></group>
<group><field name="batch_id"/></group>
</group>
</form>
</field>
</record>
<record id="action_followup_updates" model="ir.actions.act_window">
<field name="name">Followup Updates</field>
<field name="res_model">forwardport.updates</field>
</record>
<record id="tree_followup_updates" model="ir.ui.view">
<field name="name">Followup Updates</field>
<field name="model">forwardport.updates</field>
<field name="arch" type="xml">
<tree editable="bottom">
<field name="original_root"/>
<field name="new_root"/>
</tree>
</field>
</record>
<menuitem name="Forward Port Batches" id="menu_forward_port"
parent="runbot_merge.menu_queues"
action="action_forward_port"/>
<menuitem name="Followup Updates" id="menu_followup"
parent="runbot_merge.menu_queues"
action="action_followup_updates"/>
</odoo>

View File

@ -1,46 +0,0 @@
<odoo>
<record id="access_forwardport_batches_admin" model="ir.model.access">
<field name="name">Admin access to batches</field>
<field name="model_id" ref="model_forwardport_batches"/>
<field name="group_id" ref="runbot_merge.group_admin"/>
<field name="perm_read">1</field>
<field name="perm_create">1</field>
<field name="perm_write">1</field>
<field name="perm_unlink">1</field>
</record>
<record id="access_forwardport_updates_admin" model="ir.model.access">
<field name="name">Admin access to updates</field>
<field name="model_id" ref="model_forwardport_updates"/>
<field name="group_id" ref="runbot_merge.group_admin"/>
<field name="perm_read">1</field>
<field name="perm_create">1</field>
<field name="perm_write">1</field>
<field name="perm_unlink">1</field>
</record>
<record id="access_forwardport_branch_remover_admin" model="ir.model.access">
<field name="name">Admin access to branch remover</field>
<field name="model_id" ref="model_forwardport_branch_remover"/>
<field name="group_id" ref="runbot_merge.group_admin"/>
<field name="perm_read">1</field>
<field name="perm_create">1</field>
<field name="perm_write">1</field>
<field name="perm_unlink">1</field>
</record>
<record id="access_forwardport_batches" model="ir.model.access">
<field name="name">No normal access to batches</field>
<field name="model_id" ref="model_forwardport_batches"/>
<field name="perm_read">0</field>
<field name="perm_create">0</field>
<field name="perm_write">0</field>
<field name="perm_unlink">0</field>
</record>
<record id="access_forwardport_updates" model="ir.model.access">
<field name="name">No normal access to updates</field>
<field name="model_id" ref="model_forwardport_updates"/>
<field name="perm_read">0</field>
<field name="perm_create">0</field>
<field name="perm_write">0</field>
<field name="perm_unlink">0</field>
</record>
</odoo>

View File

@ -1,215 +0,0 @@
<odoo>
<template id="alerts" inherit_id="runbot_merge.alerts">
<xpath expr="//div[@id='alerts']">
<t t-set="fpcron" t-value="env(user=1).ref('forwardport.port_forward')"/>
<div t-if="not fpcron.active" class="alert alert-warning col-12" role="alert">
Forward-port is disabled, merged pull requests will not be forward-ported.
</div>
</xpath>
<!-- key block (hopefully) -->
<xpath expr="//div[@id='alerts']" position="inside">
<t t-if="env['runbot_merge.pull_requests'].check_access_rights('read', False)">
<t t-set="outstanding" t-value="env['runbot_merge.pull_requests'].search_count([
('source_id', '!=', False),
('state', 'not in', ['merged', 'closed']),
('source_id.merge_date', '&lt;', datetime.datetime.now() - relativedelta(days=3)),
])"/>
<div t-if="outstanding != 0" class="alert col-md-12 alert-warning mb-0">
<a href="/forwardport/outstanding">
<t t-esc="outstanding"/> outstanding forward-ports
</a>
</div>
</t>
</xpath>
</template>
<template id="pr_background">
<t t-if="p.state == 'merged'">bg-success</t>
<t t-elif="p.state == 'closed'">bg-light</t>
<t t-elif="p.state == 'error'">bg-danger</t>
<t t-else="">bg-warning</t>
</template>
<record id="forwardport.outstanding_fp" model="website.page">
<field name="name">Outstanding forward ports</field>
<field name="type">qweb</field>
<field name="url">/forwardport/outstanding</field>
<field name="website_indexed" eval="False"/>
<field name="is_published">True</field>
<field name="key">forwardport.outstanding_fp</field>
<field name="arch" type="xml">
<t name="Outstanding forward ports" t-name="forwardport.outstanding_fp">
<t t-call="website.layout">
<t t-set="hof" t-value="env['runbot_merge.pull_requests']._hall_of_shame()"/>
<div id="wrap" class="oe_structure oe_empty"><div class="container-fluid">
<ul class="alert bg-light list-inline">
<span t-foreach="hof.reviewers" t-as="count" class="list-inline-item">
<a t-attf-href="?reviewer={{count[0].id}}"
t-field="count[0].display_name"
t-att-title="count[1]"
/>
</span>
</ul>
<h1>List of pull requests with outstanding forward ports</h1>
<t t-set="reviewer" t-value="env['res.partner'].browse(int(request.params.get('reviewer') or 0))"/>
<form method="get" action="" id="reset-filter"/>
<h2 t-if="reviewer" class="text-muted">
merged by <span t-field="reviewer.display_name" t-attf-title="@{{reviewer.github_login}}"/>
<button form="reset-filter" type="submit"
name="reviewer" value=""
title="See All" class="btn fa fa-times"/>
</h2>
<dl><t t-foreach="hof.outstanding" t-as="x">
<t t-set="source" t-value="x[0]"/>
<t t-if="not reviewer or source.reviewed_by == reviewer">
<dt>
<a t-att-href="source.url"><span t-field="source.display_name"/></a>
by <span t-field="source.author.display_name"
t-attf-title="@{{source.author.github_login}}"/>
merged <span t-field="source.merge_date"
t-options="{'widget': 'relative'}"
t-att-title="source.merge_date"/>
<t t-if="not reviewer">
by <span t-field="source.reviewed_by.display_name"
t-attf-title="@{{source.reviewed_by.github_login}}"/>
</t>
</dt>
<dd>
Outstanding forward-ports:
<ul>
<li t-foreach="x.prs" t-as="p">
<a t-att-href="p.url"><span t-field="p.display_name"/></a>
(<span t-field="p.state"/>)
targeting <span t-field="p.target.name"/>
</li>
</ul>
</dd>
</t>
</t></dl>
</div></div>
</t>
</t>
</field>
</record>
<template id="view_pull_request" inherit_id="runbot_merge.view_pull_request">
<xpath expr="//dl[hasclass('runbot-merge-fields')]" position="inside">
<t t-if="pr.state == 'merged'">
<dt>merged</dt>
<dd>
<span t-field="pr.merge_date" t-options="{'widget': 'relative'}"
t-att-title="pr.merge_date"/>
by <span t-field="pr.reviewed_by.display_name"
t-attf-title="@{{pr.reviewed_by.github_login}}"/>
</dd>
</t>
<t t-if="pr.source_id">
<dt>forward-port of</dt>
<dd>
<a t-att-href="pr.source_id.url">
<span t-field="pr.source_id.display_name"/>
</a>
<span t-if="not pr.parent_id"
class="badge badge-danger user-select-none"
title="A detached PR behaves like a non-forward-port, it has to be approved via the mergebot, this is usually caused by the forward-port having been in conflict or updated.">
DETACHED
</span>
</dd>
</t>
<t t-if="pr.forwardport_ids">
<dt>forward-ports</dt>
<dd><ul>
<t t-foreach="pr.forwardport_ids" t-as="p">
<t t-set="bgsignal"><t t-call="forwardport.pr_background"/></t>
<li t-att-class="bgsignal">
<a t-att-href="p.url"><span t-field="p.display_name"/></a>
targeting <span t-field="p.target.name"/>
</li>
</t>
</ul></dd>
</t>
</xpath>
</template>
<record model="ir.ui.view" id="project">
<field name="name">Show forwardport project fields</field>
<field name="inherit_id" ref="runbot_merge.runbot_merge_form_project"/>
<field name="model">runbot_merge.project</field>
<field name="arch" type="xml">
<xpath expr="//sheet/group[2]" position="after">
<group string="Forwardport Configuration">
<group>
<field string="Token" name="fp_github_token"/>
</group>
<group>
<field string="Bot Name" name="fp_github_name"/>
<field string="Bot Email" name="fp_github_email"/>
</group>
</group>
</xpath>
<xpath expr="//field[@name='repo_ids']/tree" position="inside">
<field string="FP remote" name="fp_remote_target"
help="Repository where forward port branches will be created"
/>
</xpath>
<xpath expr="//field[@name='branch_ids']/tree" position="inside">
<field name="fp_target" string="FP to"
help="This branch will be forward-ported to (from lower ones)"
/>
<field name="fp_sequence" string="FP sequence"
help="Overrides the normal sequence"
/>
</xpath>
</field>
</record>
<record model="ir.ui.view" id="repository">
<field name="name">Show forwardport repository fields</field>
<field name="inherit_id" ref="runbot_merge.form_repository"/>
<field name="model">runbot_merge.repository</field>
<field name="arch" type="xml">
<field name="branch_filter" position="after">
<field string="FP remote" name="fp_remote_target"
help="Repository where forward port branches will be created"/>
</field>
</field>
</record>
<record model="ir.ui.view" id="pr">
<field name="name">Show forwardport PR fields</field>
<field name="inherit_id" ref="runbot_merge.runbot_merge_form_prs"/>
<field name="model">runbot_merge.pull_requests</field>
<field name="arch" type="xml">
<xpath expr="//field[@name='state']" position="after">
<field name="merge_date" attrs="{'invisible': [('state', '!=', 'merged')]}"/>
</xpath>
<xpath expr="//sheet/group[2]" position="after">
<separator string="Forward Port" attrs="{'invisible': [('source_id', '=', False)]}"/>
<group attrs="{'invisible': [('source_id', '!=', False)]}">
<group>
<field string="Policy" name="fw_policy"/>
</group>
</group>
<group attrs="{'invisible': [('source_id', '=', False)]}">
<group>
<field string="Original PR" name="source_id"/>
</group>
<group attrs="{'invisible': [('parent_id', '=', False)]}">
<field name="parent_id"/>
</group>
<group colspan="4" attrs="{'invisible': [('parent_id', '!=', False)]}">
<span>
Detached from forward porting (either conflicting
or explicitly updated).
</span>
</group>
<group>
<field string="Forward ported up to" name="limit_id"/>
</group>
</group>
</xpath>
</field>
</record>
</odoo>

View File

@ -1,10 +0,0 @@
def migrate(cr, version):
""" Set the merge_date field to the current write_date, and reset
the backoff to its default so we reprocess old PRs properly.
"""
cr.execute("""
UPDATE runbot_merge_pull_requests
SET merge_date = write_date,
reminder_backoff_factor = -4
WHERE state = 'merged'
""")

View File

@ -1,2 +0,0 @@
def migrate(cr, version):
cr.execute("delete from ir_model where model = 'forwardport.tagging'")

View File

@ -1,4 +0,0 @@
# -*- coding: utf-8 -*-
from . import project
from . import project_freeze
from . import forwardport

View File

@ -1,254 +0,0 @@
# -*- coding: utf-8 -*-
import logging
import uuid
from contextlib import ExitStack
from datetime import datetime
from dateutil import relativedelta
from odoo import fields, models
from odoo.addons.runbot_merge.github import GH
# how long a merged PR survives
MERGE_AGE = relativedelta.relativedelta(weeks=2)
_logger = logging.getLogger(__name__)
class Queue:
__slots__ = ()
limit = 100
def _process_item(self):
raise NotImplementedError
def _process(self):
for b in self.search(self._search_domain(), order='create_date, id', limit=self.limit):
try:
b._process_item()
b.unlink()
self.env.cr.commit()
except Exception:
_logger.exception("Error while processing %s, skipping", b)
self.env.cr.rollback()
self.clear_caches()
def _search_domain(self):
return []
class ForwardPortTasks(models.Model, Queue):
_name = 'forwardport.batches'
_description = 'batches which got merged and are candidates for forward-porting'
limit = 10
batch_id = fields.Many2one('runbot_merge.batch', required=True)
source = fields.Selection([
('merge', 'Merge'),
('fp', 'Forward Port Followup'),
('insert', 'New branch port')
], required=True)
def _process_item(self):
batch = self.batch_id
newbatch = batch.prs._port_forward()
if newbatch:
_logger.info(
"Processing %s (from %s): %s (%s) -> %s (%s)",
self.id, self.source,
batch, batch.prs,
newbatch, newbatch.prs,
)
# insert new batch in ancestry sequence unless conflict (= no parent)
if self.source == 'insert':
for pr in newbatch.prs:
if not pr.parent_id:
break
newchild = pr.search([
('parent_id', '=', pr.parent_id.id),
('id', '!=', pr.id),
])
if newchild:
newchild.parent_id = pr.id
else: # reached end of seq (or batch is empty)
# FIXME: or configuration is fucky so doesn't want to FP (maybe should error and retry?)
_logger.info(
"Processing %s (from %s): %s (%s) -> end of the sequence",
self.id, self.source,
batch, batch.prs
)
batch.active = False
CONFLICT_TEMPLATE = "{ping}WARNING: the latest change ({previous.head}) triggered " \
"a conflict when updating the next forward-port " \
"({next.display_name}), and has been ignored.\n\n" \
"You will need to update this pull request differently, " \
"or fix the issue by hand on {next.display_name}."
CHILD_CONFLICT = "{ping}WARNING: the update of {previous.display_name} to " \
"{previous.head} has caused a conflict in this pull request, " \
"data may have been lost."
class UpdateQueue(models.Model, Queue):
_name = 'forwardport.updates'
_description = 'if a forward-port PR gets updated & has followups (cherrypick succeeded) the followups need to be updated as well'
limit = 10
original_root = fields.Many2one('runbot_merge.pull_requests')
new_root = fields.Many2one('runbot_merge.pull_requests')
def _process_item(self):
Feedback = self.env['runbot_merge.pull_requests.feedback']
previous = self.new_root
with ExitStack() as s:
for child in self.new_root._iter_descendants():
self.env.cr.execute("""
SELECT id
FROM runbot_merge_pull_requests
WHERE id = %s
FOR UPDATE NOWAIT
""", [child.id])
_logger.info(
"Re-port %s from %s (changed root %s -> %s)",
child.display_name,
previous.display_name,
self.original_root.display_name,
self.new_root.display_name
)
if child.state in ('closed', 'merged'):
Feedback.create({
'repository': child.repository.id,
'pull_request': child.number,
'message': "%sancestor PR %s has been updated but this PR"
" is %s and can't be updated to match."
"\n\n"
"You may want or need to manually update any"
" followup PR." % (
child.ping(),
self.new_root.display_name,
child.state,
)
})
return
conflicts, working_copy = previous._create_fp_branch(
child.target, child.refname, s)
if conflicts:
_, out, err, _ = conflicts
Feedback.create({
'repository': previous.repository.id,
'pull_request': previous.number,
'message': CONFLICT_TEMPLATE.format(
ping=previous.ping(),
previous=previous,
next=child
)
})
Feedback.create({
'repository': child.repository.id,
'pull_request': child.number,
'message': CHILD_CONFLICT.format(ping=child.ping(), previous=previous, next=child)\
+ (f'\n\nstdout:\n```\n{out.strip()}\n```' if out.strip() else '')
+ (f'\n\nstderr:\n```\n{err.strip()}\n```' if err.strip() else '')
})
new_head = working_copy.stdout().rev_parse(child.refname).stdout.decode().strip()
commits_count = int(working_copy.stdout().rev_list(
f'{child.target.name}..{child.refname}',
count=True
).stdout.decode().strip())
old_head = child.head
# update child's head to the head we're going to push
child.with_context(ignore_head_update=True).write({
'head': new_head,
# 'state': 'opened',
'squash': commits_count == 1,
})
# push the new head to the local cache: in some cases github
# doesn't propagate revisions fast enough so on the next loop we
# can't find the revision we just pushed
dummy_branch = str(uuid.uuid4())
ref = previous._get_local_directory()
working_copy.push(ref._directory, f'{new_head}:refs/heads/{dummy_branch}')
ref.branch('--delete', '--force', dummy_branch)
# then update the child's branch to the new head
working_copy.push(f'--force-with-lease={child.refname}:{old_head}',
'target', child.refname)
# committing here means github could technically trigger its
# webhook before sending a response, but committing before
# would mean we can update the PR in database but fail to
# update on github, which is probably worse?
# alternatively we can commit, push, and rollback if the push
# fails
# FIXME: handle failures (especially on non-first update)
self.env.cr.commit()
previous = child
_deleter = _logger.getChild('deleter')
class DeleteBranches(models.Model, Queue):
_name = 'forwardport.branch_remover'
_description = "Removes branches of merged PRs"
pr_id = fields.Many2one('runbot_merge.pull_requests')
def _search_domain(self):
cutoff = self.env.context.get('forwardport_merged_before') \
or fields.Datetime.to_string(datetime.now() - MERGE_AGE)
return [('pr_id.merge_date', '<', cutoff)]
def _process_item(self):
_deleter.info(
"PR %s: checking deletion of linked branch %s",
self.pr_id.display_name,
self.pr_id.label
)
if self.pr_id.state != 'merged':
_deleter.info('✘ PR is not "merged" (got %s)', self.pr_id.state)
return
repository = self.pr_id.repository
fp_remote = repository.fp_remote_target
if not fp_remote:
_deleter.info('✘ no forward-port target')
return
repo_owner, repo_name = fp_remote.split('/')
owner, branch = self.pr_id.label.split(':')
if repo_owner != owner:
_deleter.info('✘ PR owner != FP target owner (%s)', repo_owner)
return # probably don't have access to arbitrary repos
github = GH(token=repository.project_id.fp_github_token, repo=fp_remote)
refurl = 'git/refs/heads/' + branch
ref = github('get', refurl, check=False)
if ref.status_code != 200:
_deleter.info("✘ branch already deleted (%s)", ref.json())
return
ref = ref.json()
if isinstance(ref, list):
_deleter.info(
"✘ got a fuzzy match (%s), branch probably deleted",
', '.join(r['ref'] for r in ref)
)
return
if ref['object']['sha'] != self.pr_id.head:
_deleter.info(
"✘ branch %s head mismatch, expected %s, got %s",
self.pr_id.label,
self.pr_id.head,
ref['object']['sha']
)
return
r = github('delete', refurl, check=False)
assert r.status_code == 204, \
"Tried to delete branch %s of %s, got %s" % (
branch, self.pr_id.display_name,
r.json()
)
_deleter.info('✔ deleted branch %s of PR %s', self.pr_id.label, self.pr_id.display_name)

File diff suppressed because it is too large

View File

@ -1,22 +0,0 @@
from odoo import models
class FreezeWizard(models.Model):
""" Override freeze wizard to disable the forward port cron when one is
created (so there's a freeze ongoing) and re-enable it once all freezes are
done.
If there ever is a case where we have lots of projects,
"""
_inherit = 'runbot_merge.project.freeze'
def create(self, vals_list):
r = super().create(vals_list)
self.env.ref('forwardport.port_forward').active = False
return r
def unlink(self):
r = super().unlink()
if not self.search_count([]):
self.env.ref('forwardport.port_forward').active = True
return r

View File

@ -1,52 +0,0 @@
# -*- coding: utf-8 -*-
import re
import pytest
import requests
@pytest.fixture
def default_crons():
return [
'runbot_merge.process_updated_commits',
'runbot_merge.merge_cron',
'runbot_merge.staging_cron',
'forwardport.port_forward',
'forwardport.updates',
'runbot_merge.check_linked_prs_status',
'runbot_merge.feedback_cron',
]
# public_repo — necessary to leave comments
# admin:repo_hook — to set up hooks (duh)
# delete_repo — to cleanup repos created under a user
# user:email — fetch token/user's email addresses
TOKEN_SCOPES = {
'github': {'admin:repo_hook', 'delete_repo', 'public_repo', 'user:email'},
# TODO: user:email so they can fetch the user's email?
'role_reviewer': {'public_repo'},# 'delete_repo'},
'role_self_reviewer': {'public_repo'},# 'delete_repo'},
'role_other': {'public_repo'},# 'delete_repo'},
}
@pytest.fixture(autouse=True, scope='session')
def _check_scopes(config):
for section, vals in config.items():
required_scopes = TOKEN_SCOPES.get(section)
if required_scopes is None:
continue
response = requests.get('https://api.github.com/rate_limit', headers={
'Authorization': 'token %s' % vals['token']
})
assert response.status_code == 200
x_oauth_scopes = response.headers['X-OAuth-Scopes']
token_scopes = set(re.split(r',\s+', x_oauth_scopes))
assert token_scopes >= required_scopes, \
"%s should have scopes %s, found %s" % (section, token_scopes, required_scopes)
@pytest.fixture()
def module():
""" When a test function is (going to be) run, selects the containing
module (as needing to be installed)
"""
# NOTE: no request.fspath (because no request.function) in session-scoped fixture so can't put module() at the toplevel
return 'forwardport'

View File

@ -1,89 +0,0 @@
from utils import Commit, make_basic
def test_single_updated(env, config, make_repo):
""" Given co-dependent PRs getting merged, one of them being modified should
lead to a restart of the merge & forward port process.
See test_update_pr for a simpler (single-PR) version
"""
r1, _ = make_basic(env, config, make_repo, reponame='repo-1')
r2, _ = make_basic(env, config, make_repo, reponame='repo-2')
with r1:
r1.make_commits('a', Commit('1', tree={'1': '0'}), ref='heads/aref')
pr1 = r1.make_pr(target='a', head='aref')
r1.post_status('aref', 'success', 'legal/cla')
r1.post_status('aref', 'success', 'ci/runbot')
pr1.post_comment('hansen r+', config['role_reviewer']['token'])
with r2:
r2.make_commits('a', Commit('2', tree={'2': '0'}), ref='heads/aref')
pr2 = r2.make_pr(target='a', head='aref')
r2.post_status('aref', 'success', 'legal/cla')
r2.post_status('aref', 'success', 'ci/runbot')
pr2.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with r1, r2:
r1.post_status('staging.a', 'success', 'legal/cla')
r1.post_status('staging.a', 'success', 'ci/runbot')
r2.post_status('staging.a', 'success', 'legal/cla')
r2.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr1_id, pr11_id, pr2_id, pr21_id = pr_ids = env['runbot_merge.pull_requests'].search([]).sorted('display_name')
assert pr1_id.number == pr1.number
assert pr2_id.number == pr2.number
assert pr1_id.state == pr2_id.state == 'merged'
assert pr11_id.parent_id == pr1_id
assert pr11_id.repository.name == pr1_id.repository.name == r1.name
assert pr21_id.parent_id == pr2_id
assert pr21_id.repository.name == pr2_id.repository.name == r2.name
assert pr11_id.target.name == pr21_id.target.name == 'b'
# don't even bother faking CI failure, straight update pr21_id
repo, ref = r2.get_pr(pr21_id.number).branch
with repo:
repo.make_commits(
pr21_id.target.name,
Commit('Whops', tree={'2': '1'}),
ref='heads/' + ref,
make=False
)
env.run_crons()
assert not pr21_id.parent_id
with r1, r2:
r1.post_status(pr11_id.head, 'success', 'legal/cla')
r1.post_status(pr11_id.head, 'success', 'ci/runbot')
r1.get_pr(pr11_id.number).post_comment('hansen r+', config['role_reviewer']['token'])
r2.post_status(pr21_id.head, 'success', 'legal/cla')
r2.post_status(pr21_id.head, 'success', 'ci/runbot')
r2.get_pr(pr21_id.number).post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
prs_again = env['runbot_merge.pull_requests'].search([])
assert prs_again == pr_ids,\
"should not have created FP PRs as we're now in a detached (iso new PR) state " \
"(%s)" % prs_again.mapped('display_name')
with r1, r2:
r1.post_status('staging.b', 'success', 'legal/cla')
r1.post_status('staging.b', 'success', 'ci/runbot')
r2.post_status('staging.b', 'success', 'legal/cla')
r2.post_status('staging.b', 'success', 'ci/runbot')
env.run_crons()
new_prs = env['runbot_merge.pull_requests'].search([]).sorted('display_name') - pr_ids
assert len(new_prs) == 2, "should have created the new FP PRs"
pr12_id, pr22_id = new_prs
assert pr12_id.source_id == pr1_id
assert pr12_id.parent_id == pr11_id
assert pr22_id.source_id == pr2_id
assert pr22_id.parent_id == pr21_id

View File

@ -1,356 +0,0 @@
import re
import time
from operator import itemgetter
from utils import make_basic, Commit, validate_all, re_matches, seen, REF_PATTERN, to_pr
def test_conflict(env, config, make_repo, users):
""" Create a PR to A which will (eventually) conflict with C when
forward-ported.
"""
prod, other = make_basic(env, config, make_repo)
# create a d branch
with prod:
prod.make_commits('c', Commit('1111', tree={'i': 'a'}), ref='heads/d')
project = env['runbot_merge.project'].search([])
project.write({
'branch_ids': [
(0, 0, {'name': 'd', 'fp_sequence': 4, 'fp_target': True})
]
})
# generate a conflict: create a h file in a PR to a
with prod:
[p_0] = prod.make_commits(
'a', Commit('p_0', tree={'h': 'xxx'}),
ref='heads/conflicting'
)
pr = prod.make_pr(target='a', head='conflicting')
prod.post_status(p_0, 'success', 'legal/cla')
prod.post_status(p_0, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pra_id, prb_id = env['runbot_merge.pull_requests'].search([], order='number')
# mark pr b as OK so it gets ported to c
with prod:
validate_all([prod], [prb_id.head])
env.run_crons()
pra_id, prb_id, prc_id = env['runbot_merge.pull_requests'].search([], order='number')
# should have created a new PR
# but it should not have a parent, and there should be conflict markers
assert not prc_id.parent_id
assert prc_id.source_id == pra_id
assert prc_id.state == 'opened'
p = prod.commit(p_0)
c = prod.commit(prc_id.head)
assert c.author == p.author
# ignore date as we're specifically not keeping the original's
without_date = itemgetter('name', 'email')
assert without_date(c.committer) == without_date(p.committer)
assert prod.read_tree(c) == {
'f': 'c',
'g': 'a',
'h': re_matches(r'''<<<\x3c<<< HEAD
a
|||||||| parent of [\da-f]{7,}.*
=======
xxx
>>>\x3e>>> [\da-f]{7,}.*
'''),
}
prb = prod.get_pr(prb_id.number)
assert prb.comments == [
seen(env, prb, users),
(users['user'], '''\
This PR targets b and is part of the forward-port chain. Further PRs will be created up to d.
More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
'''),
(users['user'], """@%s @%s the next pull request (%s) is in conflict. \
You can merge the chain up to here by saying
> @%s r+
More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
""" % (
users['user'], users['reviewer'],
prc_id.display_name,
project.fp_github_name
))
]
# check that CI passing does not create more PRs
with prod:
validate_all([prod], [prc_id.head])
env.run_crons()
time.sleep(5)
env.run_crons()
assert pra_id | prb_id | prc_id == env['runbot_merge.pull_requests'].search([], order='number'),\
"CI passing should not have resumed the FP process on a conflicting PR"
# fix the PR, should behave as if this were a normal PR
prc = prod.get_pr(prc_id.number)
pr_repo, pr_ref = prc.branch
with pr_repo:
pr_repo.make_commits(
# if just given a branch name, goes and gets it from pr_repo whose
# "b" was cloned before that branch got rolled back
'c',
Commit('h should indeed be xxx', tree={'h': 'xxx'}),
ref='heads/%s' % pr_ref,
make=False,
)
env.run_crons()
assert prod.read_tree(prod.commit(prc_id.head)) == {
'f': 'c',
'g': 'a',
'h': 'xxx',
}
assert prc_id.state == 'opened', "state should be open still"
assert ('#%d' % pra_id.number) in prc_id.message
# check that merging the fixed PR fixes the flow and restarts a forward
# port process
with prod:
prod.post_status(prc.head, 'success', 'legal/cla')
prod.post_status(prc.head, 'success', 'ci/runbot')
prc.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
assert prc_id.staging_id
with prod:
prod.post_status('staging.c', 'success', 'legal/cla')
prod.post_status('staging.c', 'success', 'ci/runbot')
env.run_crons()
*_, prd_id = env['runbot_merge.pull_requests'].search([], order='number')
assert ('#%d' % pra_id.number) in prd_id.message, \
"check that source / PR A is referenced by resume PR"
assert ('#%d' % prc_id.number) in prd_id.message, \
"check that parent / PR C is referenced by resume PR"
assert prd_id.parent_id == prc_id
assert prd_id.source_id == pra_id
assert re.match(
REF_PATTERN.format(target='d', source='conflicting'),
prd_id.refname
)
assert prod.read_tree(prod.commit(prd_id.head)) == {
'f': 'c',
'g': 'a',
'h': 'xxx',
'i': 'a',
}
def test_conflict_deleted(env, config, make_repo):
prod, other = make_basic(env, config, make_repo)
# remove f from b
with prod:
prod.make_commits(
'b', Commit('33', tree={'g': 'c'}, reset=True),
ref='heads/b'
)
# generate a conflict: update f in a
with prod:
[p_0] = prod.make_commits(
'a', Commit('p_0', tree={'f': 'xxx'}),
ref='heads/conflicting'
)
pr = prod.make_pr(target='a', head='conflicting')
prod.post_status(p_0, 'success', 'legal/cla')
prod.post_status(p_0, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
# wait a bit for PR webhook... ?
time.sleep(5)
env.run_crons()
# should have created a new PR
pr0, pr1 = env['runbot_merge.pull_requests'].search([], order='number')
# but it should not have a parent
assert not pr1.parent_id
assert pr1.source_id == pr0
assert prod.read_tree(prod.commit('b')) == {
'g': 'c',
}
assert pr1.state == 'opened'
# NOTE: no actual conflict markers because pr1 essentially adds f de-novo
assert prod.read_tree(prod.commit(pr1.head)) == {
'f': 'xxx',
'g': 'c',
}
# check that CI passing does not create more PRs
with prod:
validate_all([prod], [pr1.head])
env.run_crons()
time.sleep(5)
env.run_crons()
assert pr0 | pr1 == env['runbot_merge.pull_requests'].search([], order='number'),\
"CI passing should not have resumed the FP process on a conflicting PR"
# fix the PR, should behave as if this were a normal PR
get_pr = prod.get_pr(pr1.number)
pr_repo, pr_ref = get_pr.branch
with pr_repo:
pr_repo.make_commits(
# if just given a branch name, goes and gets it from pr_repo whose
# "b" was cloned before that branch got rolled back
prod.commit('b').id,
Commit('f should indeed be removed', tree={'g': 'c'}, reset=True),
ref='heads/%s' % pr_ref,
make=False,
)
env.run_crons()
assert prod.read_tree(prod.commit(pr1.head)) == {
'g': 'c',
}
assert pr1.state == 'opened', "state should be open still"
def test_multiple_commits_same_authorship(env, config, make_repo):
""" When a PR has multiple commits by the same author and its
forward-porting triggers a conflict, the resulting (squashed) conflict
commit should have the original author (same with the committer).
"""
author = {'name': 'George Pearce', 'email': 'gp@example.org'}
committer = {'name': 'G. P. W. Meredith', 'email': 'gpwm@example.org'}
prod, _ = make_basic(env, config, make_repo)
with prod:
# conflict: create `g` in `a`, using two commits
prod.make_commits(
'a',
Commit('c0', tree={'g': '1'},
author={**author, 'date': '1932-10-18T12:00:00Z'},
committer={**committer, 'date': '1932-11-02T12:00:00Z'}),
Commit('c1', tree={'g': '2'},
author={**author, 'date': '1932-11-12T12:00:00Z'},
committer={**committer, 'date': '1932-11-13T12:00:00Z'}),
ref='heads/conflicting'
)
pr = prod.make_pr(target='a', head='conflicting')
prod.post_status('conflicting', 'success', 'legal/cla')
prod.post_status('conflicting', 'success', 'ci/runbot')
pr.post_comment('hansen r+ rebase-ff', config['role_reviewer']['token'])
env.run_crons()
pr_id = to_pr(env, pr)
assert pr_id.state == 'ready'
assert pr_id.staging_id
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
for _ in range(20):
pr_ids = env['runbot_merge.pull_requests'].search([], order='number')
if len(pr_ids) == 2:
_ , pr2_id = pr_ids
break
time.sleep(0.5)
else:
assert 0, "timed out"
c = prod.commit(pr2_id.head)
get = itemgetter('name', 'email')
assert get(c.author) == get(author)
assert get(c.committer) == get(committer)
def test_multiple_commits_different_authorship(env, config, make_repo, users, rolemap):
""" When a PR has multiple commits by different authors, the resulting
(squashed) conflict commit should have
"""
author = {'name': 'George Pearce', 'email': 'gp@example.org'}
committer = {'name': 'G. P. W. Meredith', 'email': 'gpwm@example.org'}
prod, _ = make_basic(env, config, make_repo)
with prod:
# conflict: create `g` in `a`, using two commits
# just swap author and committer in the commits
prod.make_commits(
'a',
Commit('c0', tree={'g': '1'},
author={**author, 'date': '1932-10-18T12:00:00Z'},
committer={**committer, 'date': '1932-11-02T12:00:00Z'}),
Commit('c1', tree={'g': '2'},
author={**committer, 'date': '1932-11-12T12:00:00Z'},
committer={**author, 'date': '1932-11-13T12:00:00Z'}),
ref='heads/conflicting'
)
pr = prod.make_pr(target='a', head='conflicting')
prod.post_status('conflicting', 'success', 'legal/cla')
prod.post_status('conflicting', 'success', 'ci/runbot')
pr.post_comment('hansen r+ rebase-ff', config['role_reviewer']['token'])
env.run_crons()
pr_id = to_pr(env, pr)
assert pr_id.state == 'ready'
assert pr_id.staging_id
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
for _ in range(20):
pr_ids = env['runbot_merge.pull_requests'].search([], order='number')
if len(pr_ids) == 2:
_ , pr2_id = pr_ids
break
time.sleep(0.5)
else:
assert 0, "timed out"
c = prod.commit(pr2_id.head)
assert len(c.parents) == 1
get = itemgetter('name', 'email')
rm = rolemap['user']
assert get(c.author) == (rm['login'], ''), \
"In a multi-author PR, the squashed conflict commit should have the " \
"author set to the bot but an empty email"
assert get(c.committer) == (rm['login'], '')
assert re.match(r'''<<<\x3c<<< HEAD
b
|||||||| parent of [\da-f]{7,}.*
=======
2
>>>\x3e>>> [\da-f]{7,}.*
''', prod.read_tree(c)['g'])
# I'd like to fix the conflict so everything is clean and proper *but*
# github's API apparently rejects creating commits with an empty email.
#
# So fuck that, I'll just "merge the conflict". Still works at simulating
# a resolution error as technically that's the sort of things people do.
pr2 = prod.get_pr(pr2_id.number)
with prod:
prod.post_status(pr2_id.head, 'success', 'legal/cla')
prod.post_status(pr2_id.head, 'success', 'ci/runbot')
pr2.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
assert pr2.comments == [
seen(env, pr2, users),
(users['user'], re_matches(r'@%s @%s .*CONFLICT' % (users['user'], users['reviewer']), re.DOTALL)),
(users['reviewer'], 'hansen r+'),
(users['user'], f"@{users['user']} @{users['reviewer']} unable to stage: "
"All commits must have author and committer email, "
f"missing email on {pr2_id.head} indicates the "
"authorship is most likely incorrect."),
]
assert pr2_id.state == 'error'
assert not pr2_id.staging_id, "staging should have been rejected"


@ -1,293 +0,0 @@
# -*- coding: utf-8 -*-
import collections
import time
import pytest
from utils import seen, Commit, make_basic
Description = collections.namedtuple('Restriction', 'source limit')
def test_configure(env, config, make_repo):
""" Checks that configuring an FP limit on a PR is respected
* limits to not the latest
* limits to the current target (= no FP)
* limits to an earlier branch (???)
"""
prod, other = make_basic(env, config, make_repo)
bot_name = env['runbot_merge.project'].search([]).fp_github_name
descriptions = [
Description(source='a', limit='b'),
Description(source='b', limit='b'),
Description(source='b', limit='a'),
]
originals = []
with prod:
for i, descr in enumerate(descriptions):
[c] = prod.make_commits(
descr.source, Commit('c %d' % i, tree={str(i): str(i)}),
ref='heads/branch%d' % i,
)
pr = prod.make_pr(target=descr.source, head='branch%d'%i)
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+\n%s up to %s' % (bot_name, descr.limit), config['role_reviewer']['token'])
originals.append(pr.number)
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
prod.post_status('staging.b', 'success', 'legal/cla')
prod.post_status('staging.b', 'success', 'ci/runbot')
env.run_crons()
# should have created a single FP PR for 0, none for 1 and none for 2
prs = env['runbot_merge.pull_requests'].search([], order='number')
assert len(prs) == 4
assert prs[-1].parent_id == prs[0]
assert prs[0].number == originals[0]
assert prs[1].number == originals[1]
assert prs[2].number == originals[2]
def test_self_disabled(env, config, make_repo):
""" Allow setting target as limit even if it's disabled
"""
prod, other = make_basic(env, config, make_repo)
bot_name = env['runbot_merge.project'].search([]).fp_github_name
branch_a = env['runbot_merge.branch'].search([('name', '=', 'a')])
branch_a.fp_target = False
with prod:
[c] = prod.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/mybranch')
pr = prod.make_pr(target='a', head='mybranch')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+\n%s up to a' % bot_name, config['role_reviewer']['token'])
env.run_crons()
pr_id = env['runbot_merge.pull_requests'].search([('number', '=', pr.number)])
assert pr_id.limit_id == branch_a
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
assert env['runbot_merge.pull_requests'].search([]) == pr_id,\
"should not have created a forward port"
def test_ignore(env, config, make_repo):
""" Provide an "ignore" command which is equivalent to setting the limit
to target
"""
prod, other = make_basic(env, config, make_repo)
bot_name = env['runbot_merge.project'].search([]).fp_github_name
branch_a = env['runbot_merge.branch'].search([('name', '=', 'a')])
with prod:
[c] = prod.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/mybranch')
pr = prod.make_pr(target='a', head='mybranch')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+\n%s ignore' % bot_name, config['role_reviewer']['token'])
env.run_crons()
pr_id = env['runbot_merge.pull_requests'].search([('number', '=', pr.number)])
assert pr_id.limit_id == branch_a
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
assert env['runbot_merge.pull_requests'].search([]) == pr_id,\
"should not have created a forward port"
@pytest.mark.parametrize('enabled', ['active', 'fp_target'])
def test_disable(env, config, make_repo, users, enabled):
""" Checks behaviour if the limit target is disabled:
* disable target while FP is ongoing -> skip over (and stop there so no FP)
* forward-port over a disabled branch
* request a disabled target as limit
Disabling (with respect to forward ports) can be performed by marking the
branch as !active (which also affects mergebot operations), or as
!fp_target (won't be forward-ported to).
"""
prod, other = make_basic(env, config, make_repo)
project = env['runbot_merge.project'].search([])
bot_name = project.fp_github_name
with prod:
[c] = prod.make_commits('a', Commit('c 0', tree={'0': '0'}), ref='heads/branch0')
pr = prod.make_pr(target='a', head='branch0')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+\n%s up to b' % bot_name, config['role_reviewer']['token'])
[c] = prod.make_commits('a', Commit('c 1', tree={'1': '1'}), ref='heads/branch1')
pr = prod.make_pr(target='a', head='branch1')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
# disable branch b
env['runbot_merge.branch'].search([('name', '=', 'b')]).write({enabled: False})
env.run_crons()
# should have created a single PR (to branch c, for pr 1)
_0, _1, p = env['runbot_merge.pull_requests'].search([], order='number')
assert p.parent_id == _1
assert p.target.name == 'c'
project.fp_github_token = config['role_other']['token']
bot_name = project.fp_github_name
with prod:
[c] = prod.make_commits('a', Commit('c 2', tree={'2': '2'}), ref='heads/branch2')
pr = prod.make_pr(target='a', head='branch2')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+\n%s up to' % bot_name, config['role_reviewer']['token'])
pr.post_comment('%s up to b' % bot_name, config['role_reviewer']['token'])
pr.post_comment('%s up to foo' % bot_name, config['role_reviewer']['token'])
pr.post_comment('%s up to c' % bot_name, config['role_reviewer']['token'])
env.run_crons()
# use a set because git webhooks delays might lead to mis-ordered
# responses and we don't care that much
assert set(pr.comments) == {
(users['reviewer'], "hansen r+\n%s up to" % bot_name),
(users['other'], "@%s please provide a branch to forward-port to." % users['reviewer']),
(users['reviewer'], "%s up to b" % bot_name),
(users['other'], "@%s branch 'b' is disabled, it can't be used as a forward port target." % users['reviewer']),
(users['reviewer'], "%s up to foo" % bot_name),
(users['other'], "@%s there is no branch 'foo', it can't be used as a forward port target." % users['reviewer']),
(users['reviewer'], "%s up to c" % bot_name),
(users['other'], "Forward-porting to 'c'."),
seen(env, pr, users),
}
def test_default_disabled(env, config, make_repo, users):
""" If the default limit is disabled, it should still be the default
limit but the ping message should be set on the actual last FP (to the
last non-deactivated target)
"""
prod, other = make_basic(env, config, make_repo)
branch_c = env['runbot_merge.branch'].search([('name', '=', 'c')])
branch_c.fp_target = False
with prod:
[c] = prod.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/branch0')
pr = prod.make_pr(target='a', head='branch0')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
assert env['runbot_merge.pull_requests'].search([]).limit_id == branch_c
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
p1, p2 = env['runbot_merge.pull_requests'].search([], order='number')
assert p1.number == pr.number
pr2 = prod.get_pr(p2.number)
cs = pr2.comments
assert len(cs) == 2
assert pr2.comments == [
seen(env, pr2, users),
(users['user'], """\
@%(user)s @%(reviewer)s this PR targets b and is the last of the forward-port chain.
To merge the full chain, say
> @%(user)s r+
More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
""" % users)
]
def test_limit_after_merge(env, config, make_repo, users):
""" If attempting to set a limit (<up to>) on a PR which is merged
(already forward-ported or not), or is a forward-port PR, fwbot should
just feedback that it won't do it
"""
prod, other = make_basic(env, config, make_repo)
reviewer = config['role_reviewer']['token']
branch_c = env['runbot_merge.branch'].search([('name', '=', 'c')])
bot_name = env['runbot_merge.project'].search([]).fp_github_name
with prod:
[c] = prod.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/abranch')
pr1 = prod.make_pr(target='a', head='abranch')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr1.post_comment('hansen r+', reviewer)
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
p1, p2 = env['runbot_merge.pull_requests'].search([], order='number')
assert p1.limit_id == p2.limit_id == branch_c, "check that limit is correctly set"
pr2 = prod.get_pr(p2.number)
with prod:
pr1.post_comment(bot_name + ' up to b', reviewer)
pr2.post_comment(bot_name + ' up to b', reviewer)
env.run_crons()
assert p1.limit_id == p2.limit_id == branch_c, \
"check that limit was not updated"
assert pr1.comments == [
(users['reviewer'], "hansen r+"),
seen(env, pr1, users),
(users['reviewer'], bot_name + ' up to b'),
(bot_name, "@%s forward-port limit can only be set before the PR is merged." % users['reviewer']),
]
assert pr2.comments == [
seen(env, pr2, users),
(users['user'], """\
This PR targets b and is part of the forward-port chain. Further PRs will be created up to c.
More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
"""),
(users['reviewer'], bot_name + ' up to b'),
(bot_name, "@%s forward-port limit can only be set on an origin PR"
" (%s here) before it's merged and forward-ported." % (
users['reviewer'],
p1.display_name,
)),
]
# update pr2 to detach it from pr1
with other:
other.make_commits(
p2.target.name,
Commit('updated', tree={'1': '1'}),
ref=pr2.ref,
make=False
)
env.run_crons()
assert not p2.parent_id
assert p2.source_id == p1
with prod:
pr2.post_comment(bot_name + ' up to b', reviewer)
env.run_crons()
assert pr2.comments[4:] == [
(bot_name, "@%s @%s this PR was modified / updated and has become a normal PR. "
"It should be merged the normal way (via @%s)" % (
users['user'], users['reviewer'],
p2.repository.project_id.github_prefix
)),
(users['reviewer'], bot_name + ' up to b'),
(bot_name, f"@{users['reviewer']} forward-port limit can only be set on an origin PR "
f"({p1.display_name} here) before it's merged and forward-ported."
),
]


@ -1,116 +0,0 @@
import json
from utils import Commit, make_basic
def statuses(pr):
return {
k: v['state']
for k, v in json.loads(pr.statuses_full).items()
}
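# For reference (illustrative shape, inferred from the asserts below):
# `statuses_full` is a JSON mapping of status context to its payload, e.g.
#   '{"ci/runbot": {"state": "success"}, "legal/cla": {"state": "success"}}'
# which this helper flattens to {'ci/runbot': 'success', 'legal/cla': 'success'}.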
def test_override_inherited(env, config, make_repo, users):
""" A forwardport should inherit its parents' overrides, until it's edited.
"""
repo, other = make_basic(env, config, make_repo)
project = env['runbot_merge.project'].search([])
env['res.partner'].search([('github_login', '=', users['reviewer'])])\
.write({'override_rights': [(0, 0, {
'repository_id': project.repo_ids.id,
'context': 'ci/runbot',
})]})
with repo:
repo.make_commits('a', Commit('C', tree={'a': '0'}), ref='heads/change')
pr = repo.make_pr(target='a', head='change')
repo.post_status('change', 'success', 'legal/cla')
pr.post_comment('hansen r+ override=ci/runbot', config['role_reviewer']['token'])
env.run_crons()
original = env['runbot_merge.pull_requests'].search([('repository.name', '=', repo.name), ('number', '=', pr.number)])
assert original.state == 'ready'
with repo:
repo.post_status('staging.a', 'success', 'legal/cla')
repo.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr0_id, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
assert pr0_id == original
assert pr1_id.parent_id == pr0_id
with repo:
repo.post_status(pr1_id.head, 'success', 'legal/cla')
env.run_crons()
assert pr1_id.state == 'validated'
assert statuses(pr1_id) == {'ci/runbot': 'success', 'legal/cla': 'success'}
# now we edit the child PR
pr_repo, pr_ref = repo.get_pr(pr1_id.number).branch
with pr_repo:
pr_repo.make_commits(
pr1_id.target.name,
Commit('wop wop', tree={'a': '1'}),
ref=f'heads/{pr_ref}',
make=False
)
env.run_crons()
assert pr1_id.state == 'opened'
assert not pr1_id.parent_id
assert statuses(pr1_id) == {}, "should not have any status left"
def test_override_combination(env, config, make_repo, users):
""" A forwardport should inherit its parents' overrides, until it's edited.
"""
repo, other = make_basic(env, config, make_repo)
project = env['runbot_merge.project'].search([])
env['res.partner'].search([('github_login', '=', users['reviewer'])]) \
.write({'override_rights': [
(0, 0, {
'repository_id': project.repo_ids.id,
'context': 'ci/runbot',
}),
(0, 0, {
'repository_id': project.repo_ids.id,
'context': 'legal/cla',
})
]})
with repo:
repo.make_commits('a', Commit('C', tree={'a': '0'}), ref='heads/change')
pr = repo.make_pr(target='a', head='change')
repo.post_status('change', 'success', 'legal/cla')
pr.post_comment('hansen r+ override=ci/runbot', config['role_reviewer']['token'])
env.run_crons()
pr0_id = env['runbot_merge.pull_requests'].search([('repository.name', '=', repo.name), ('number', '=', pr.number)])
assert pr0_id.state == 'ready'
assert statuses(pr0_id) == {'ci/runbot': 'success', 'legal/cla': 'success'}
with repo:
repo.post_status('staging.a', 'success', 'legal/cla')
repo.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
# check for combination: ci/runbot is overridden through parent, if we
# override legal/cla then the PR should be validated
pr1_id = env['runbot_merge.pull_requests'].search([('parent_id', '=', pr0_id.id)])
assert pr1_id.state == 'opened'
assert statuses(pr1_id) == {'ci/runbot': 'success'}
with repo:
repo.get_pr(pr1_id.number).post_comment('hansen override=legal/cla', config['role_reviewer']['token'])
env.run_crons()
assert pr1_id.state == 'validated'
# editing the child should devalidate
pr_repo, pr_ref = repo.get_pr(pr1_id.number).branch
with pr_repo:
pr_repo.make_commits(
pr1_id.target.name,
Commit('wop wop', tree={'a': '1'}),
ref=f'heads/{pr_ref}',
make=False
)
env.run_crons()
assert pr1_id.state == 'opened'
assert not pr1_id.parent_id
assert statuses(pr1_id) == {'legal/cla': 'success'}, \
"should only have its own status left"

File diff suppressed because it is too large


@ -1,414 +0,0 @@
"""
Test cases for updating PRs during the forward-porting process, after the
initial merge has succeeded (and forward-porting has started)
"""
import re
import sys
import pytest
from utils import seen, re_matches, Commit, make_basic, to_pr
def test_update_pr(env, config, make_repo, users):
""" Even for successful cherrypicks, it's possible that e.g. CI doesn't
pass or the reviewer finds out they need to update the code.
In this case, all following forward ports should... be detached? Or maybe
only this one and its dependent should be updated?
"""
prod, _ = make_basic(env, config, make_repo)
with prod:
[p_1] = prod.make_commits(
'a',
Commit('p_0', tree={'x': '0'}),
ref='heads/hugechange'
)
pr = prod.make_pr(target='a', head='hugechange')
prod.post_status(p_1, 'success', 'legal/cla')
prod.post_status(p_1, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
# should merge the staging then create the FP PR
env.run_crons()
pr0_id, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
fp_intermediate = (users['user'], '''\
This PR targets b and is part of the forward-port chain. Further PRs will be created up to c.
More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
''')
ci_warning = (users['user'], '@%(user)s @%(reviewer)s ci/runbot failed on this forward-port PR' % users)
# oh no CI of the first FP PR failed!
# simulate status being sent multiple times (e.g. on multiple repos) with
# some delivery lag allowing for the cron to run between each delivery
for st, ctx in [('failure', 'ci/runbot'), ('failure', 'ci/runbot'), ('success', 'legal/cla'), ('success', 'legal/cla')]:
with prod:
prod.post_status(pr1_id.head, st, ctx)
env.run_crons()
with prod: # should be ignored because the description doesn't matter
prod.post_status(pr1_id.head, 'failure', 'ci/runbot', description="HAHAHAHAHA")
env.run_crons()
# check that FP did not resume & we have a ping on the PR
assert env['runbot_merge.pull_requests'].search([], order='number') == pr0_id | pr1_id,\
"forward port should not continue on CI failure"
pr1_remote = prod.get_pr(pr1_id.number)
assert pr1_remote.comments == [seen(env, pr1_remote, users), fp_intermediate, ci_warning]
# it was a false positive, rebuild... it fails again!
with prod:
prod.post_status(pr1_id.head, 'failure', 'ci/runbot', target_url='http://example.org/4567890')
env.run_crons()
# check that FP did not resume & we have a ping on the PR
assert env['runbot_merge.pull_requests'].search([], order='number') == pr0_id | pr1_id,\
"ensure it still hasn't restarted"
assert pr1_remote.comments == [seen(env, pr1_remote, users), fp_intermediate, ci_warning, ci_warning]
# nb: updating the head would detach the PR and not put it in the warning
# path anymore
# rebuild again, finally passes
with prod:
prod.post_status(pr1_id.head, 'success', 'ci/runbot')
env.run_crons()
pr0_id, pr1_id, pr2_id = env['runbot_merge.pull_requests'].search([], order='number')
assert pr1_id.parent_id == pr0_id
assert pr2_id.parent_id == pr1_id
pr1_head = pr1_id.head
pr2_head = pr2_id.head
# turns out branch b is syntactically but not semantically compatible! It
# needs x to be 5!
pr_repo, pr_ref = prod.get_pr(pr1_id.number).branch
with pr_repo:
# force-push correct commit to PR's branch
[new_c] = pr_repo.make_commits(
pr1_id.target.name,
Commit('whop whop', tree={'x': '5'}),
ref='heads/%s' % pr_ref,
make=False
)
env.run_crons()
assert pr1_id.head == new_c != pr1_head, "the FP PR should be updated"
assert not pr1_id.parent_id, "the FP PR should be detached from the original"
assert pr1_remote.comments == [
seen(env, pr1_remote, users),
fp_intermediate, ci_warning, ci_warning,
(users['user'], "@%s @%s this PR was modified / updated and has become a normal PR. "
"It should be merged the normal way (via @%s)" % (
users['user'], users['reviewer'],
pr1_id.repository.project_id.github_prefix
)),
], "users should be warned that the PR has become non-FP"
# NOTE: should the followup PR wait for pr1 CI or not?
assert pr2_id.head != pr2_head
assert pr2_id.parent_id == pr1_id, "the followup PR should still be linked"
assert prod.read_tree(prod.commit(pr1_id.head)) == {
'f': 'c',
'g': 'b',
'x': '5'
}, "the FP PR should have the new code"
assert prod.read_tree(prod.commit(pr2_id.head)) == {
'f': 'c',
'g': 'a',
'h': 'a',
'x': '5'
}, "the followup FP should also have the update"
def test_update_merged(env, make_repo, config, users):
""" Strange things happen when an FP gets closed / merged but then its
parent is modified and the forwardport tries to update the (now merged)
child.
Turns out the issue is the followup: given a PR a and forward port targets
B -> C -> D. When a is merged we get b, c and d. If c gets merged *then*
b gets updated, the fwbot will update c in turn, then it will look for the
head of the updated c in order to create d.
However it *will not* find that head, as update events don't get propagated
on closed PRs (this is generally a good thing). As a result, the sanity
check when trying to port c to d will fail.
After checking with nim, the safest behaviour seems to be:
* stop at the update of the first closed or merged PR
* signal on that PR that something unexpected happened
* also maybe disable or exponentially back off the update job after some
number of attempts?
"""
prod, _ = make_basic(env, config, make_repo)
# add a 4th branch
with prod:
prod.make_ref('heads/d', prod.commit('c').id)
env['runbot_merge.project'].search([]).write({
'branch_ids': [(0, 0, {
'name': 'd', 'fp_sequence': -1, 'fp_target': True,
})]
})
with prod:
[c] = prod.make_commits('a', Commit('p_0', tree={'0': '0'}), ref='heads/hugechange')
pr = prod.make_pr(target='a', head='hugechange')
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
_, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
with prod:
prod.post_status(pr1_id.head, 'success', 'legal/cla')
prod.post_status(pr1_id.head, 'success', 'ci/runbot')
env.run_crons()
pr0_id, pr1_id, pr2_id = env['runbot_merge.pull_requests'].search([], order='number')
pr2 = prod.get_pr(pr2_id.number)
with prod:
pr2.post_comment('hansen r+', config['role_reviewer']['token'])
prod.post_status(pr2_id.head, 'success', 'legal/cla')
prod.post_status(pr2_id.head, 'success', 'ci/runbot')
env.run_crons()
assert pr2_id.staging_id
with prod:
prod.post_status('staging.c', 'success', 'legal/cla')
prod.post_status('staging.c', 'success', 'ci/runbot')
env.run_crons()
assert pr2_id.state == 'merged'
assert pr2.state == 'closed'
# now we can try updating pr1 and see what happens
repo, ref = prod.get_pr(pr1_id.number).branch
with repo:
repo.make_commits(
pr1_id.target.name,
Commit('2', tree={'0': '0', '1': '1'}),
ref='heads/%s' % ref,
make=False
)
updates = env['forwardport.updates'].search([])
assert updates
assert updates.original_root == pr0_id
assert updates.new_root == pr1_id
env.run_crons()
assert not pr1_id.parent_id
assert not env['forwardport.updates'].search([])
assert pr2.comments == [
seen(env, pr2, users),
(users['user'], '''This PR targets c and is part of the forward-port chain. Further PRs will be created up to d.
More info at https://github.com/odoo/odoo/wiki/Mergebot#forward-port
'''),
(users['reviewer'], 'hansen r+'),
(users['user'], """@%s @%s ancestor PR %s has been updated but this PR is merged and can't be updated to match.
You may want or need to manually update any followup PR.""" % (
users['user'],
users['reviewer'],
pr1_id.display_name,
))
]
def test_duplicate_fw(env, make_repo, setreviewers, config, users):
""" Test for #451
"""
# 0 - 1 - 2 - 3 - 4 master
# \ - 31 v3
# \ - 21 v2
# \ - 11 v1
repo = make_repo('proj')
with repo:
_, c1, c2, c3, _ = repo.make_commits(
None,
Commit('0', tree={'f': 'a'}),
Commit('1', tree={'f': 'b'}),
Commit('2', tree={'f': 'c'}),
Commit('3', tree={'f': 'd'}),
Commit('4', tree={'f': 'e'}),
ref='heads/master'
)
repo.make_commits(c1, Commit('11', tree={'g': 'a'}), ref='heads/v1')
repo.make_commits(c2, Commit('21', tree={'h': 'a'}), ref='heads/v2')
repo.make_commits(c3, Commit('31', tree={'i': 'a'}), ref='heads/v3')
proj = env['runbot_merge.project'].create({
'name': 'a project',
'github_token': config['github']['token'],
'github_prefix': 'hansen',
'fp_github_token': config['github']['token'],
'branch_ids': [
(0, 0, {'name': 'master', 'fp_sequence': 0, 'fp_target': True}),
(0, 0, {'name': 'v3', 'fp_sequence': 1, 'fp_target': True}),
(0, 0, {'name': 'v2', 'fp_sequence': 2, 'fp_target': True}),
(0, 0, {'name': 'v1', 'fp_sequence': 3, 'fp_target': True}),
],
'repo_ids': [
(0, 0, {
'name': repo.name,
'required_statuses': 'ci',
'fp_remote_target': repo.name,
})
]
})
setreviewers(*proj.repo_ids)
# create a PR in v1, merge it, then create all 3 ports
with repo:
repo.make_commits('v1', Commit('c0', tree={'z': 'a'}), ref='heads/hugechange')
prv1 = repo.make_pr(target='v1', head='hugechange')
repo.post_status('hugechange', 'success', 'ci')
prv1.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
PRs = env['runbot_merge.pull_requests']
prv1_id = PRs.search([
('repository.name', '=', repo.name),
('number', '=', prv1.number),
])
assert prv1_id.state == 'ready'
with repo:
repo.post_status('staging.v1', 'success', 'ci')
env.run_crons()
assert prv1_id.state == 'merged'
parent = prv1_id
while True:
child = PRs.search([('parent_id', '=', parent.id)])
if not child:
break
assert child.state == 'opened'
with repo:
repo.post_status(child.head, 'success', 'ci')
env.run_crons()
parent = child
pr_ids = _, prv2_id, prv3_id, prmaster_id = PRs.search([], order='number')
_, prv2, prv3, prmaster = [repo.get_pr(p.number) for p in pr_ids]
assert pr_ids.mapped('target.name') == ['v1', 'v2', 'v3', 'master']
assert pr_ids.mapped('state') == ['merged', 'validated', 'validated', 'validated']
assert repo.read_tree(repo.commit(prmaster_id.head)) == {'f': 'e', 'z': 'a'}
with repo:
repo.make_commits('v2', Commit('c0', tree={'z': 'b'}), ref=prv2.ref, make=False)
env.run_crons()
assert pr_ids.mapped('state') == ['merged', 'opened', 'validated', 'validated']
assert repo.read_tree(repo.commit(prv2_id.head)) == {'f': 'c', 'h': 'a', 'z': 'b'}
assert repo.read_tree(repo.commit(prv3_id.head)) == {'f': 'd', 'i': 'a', 'z': 'b'}
assert repo.read_tree(repo.commit(prmaster_id.head)) == {'f': 'e', 'z': 'b'}
assert prv2_id.source_id == prv1_id
assert not prv2_id.parent_id
env.run_crons()
assert PRs.search([], order='number') == pr_ids
with repo:
repo.post_status(prv2.head, 'success', 'ci')
prv2.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with repo:
repo.post_status('staging.v2', 'success', 'ci')
env.run_crons()
# env.run_crons()
assert PRs.search([], order='number') == pr_ids
def test_subsequent_conflict(env, make_repo, config, users):
""" Test for updating an fw PR in the case where it produces a conflict in
the followup. Cf #467.
"""
repo, fork = make_basic(env, config, make_repo)
# create a PR in branch A which adds a new file
with repo:
repo.make_commits('a', Commit('newfile', tree={'x': '0'}), ref='heads/pr1')
pr_1 = repo.make_pr(target='a', head='pr1')
repo.post_status('pr1', 'success', 'legal/cla')
repo.post_status('pr1', 'success', 'ci/runbot')
pr_1.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with repo:
repo.post_status('staging.a', 'success', 'legal/cla')
repo.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr1_id = to_pr(env, pr_1)
assert pr1_id.state == 'merged'
pr2_id = env['runbot_merge.pull_requests'].search([('source_id', '=', pr1_id.id)])
assert pr2_id
with repo:
repo.post_status(pr2_id.head, 'success', 'legal/cla')
repo.post_status(pr2_id.head, 'success', 'ci/runbot')
env.run_crons()
pr3_id = env['runbot_merge.pull_requests'].search([('parent_id', '=', pr2_id.id)])
assert pr3_id
assert repo.read_tree(repo.commit(pr3_id.head)) == {
'f': 'c',
'g': 'a',
'h': 'a',
'x': '0',
}
# update pr2: add a file "h"
pr2 = repo.get_pr(pr2_id.number)
t = {**repo.read_tree(repo.commit(pr2_id.head)), 'h': 'conflict!'}
with fork:
fork.make_commits(pr2_id.target.name, Commit('newfiles', tree=t), ref=pr2.ref, make=False)
env.run_crons()
assert repo.read_tree(repo.commit(pr3_id.head)) == {
'f': 'c',
'g': 'a',
'h': re_matches(r'''<<<\x3c<<< HEAD
a
|||||||| parent of [\da-f]{7,}.*
=======
conflict!
>>>\x3e>>> [\da-f]{7,}.*
'''),
'x': '0',
}
# skip comments:
# 1. link to mergebot status page
# 2. "forward port chain" bit
# 3. updated / modified & got detached
assert pr2.comments[3:] == [
(users['user'], f"@{users['user']} WARNING: the latest change ({pr2_id.head}) triggered "
f"a conflict when updating the next forward-port "
f"({pr3_id.display_name}), and has been ignored.\n\n"
f"You will need to update this pull request "
f"differently, or fix the issue by hand on "
f"{pr3_id.display_name}.")
]
# skip comments:
# 1. link to status page
# 2. forward-port chain thing
assert repo.get_pr(pr3_id.number).comments[2:] == [
(users['user'], re_matches(f'''\
@{users['user']} WARNING: the update of {pr2_id.display_name} to {pr2_id.head} has caused a \
conflict in this pull request, data may have been lost.
stdout:
```.*?
CONFLICT \(add/add\): Merge conflict in h.*?
```
stderr:
```
\\d{{2}}:\\d{{2}}:\\d{{2}}.\\d+ .* {pr2_id.head}
error: could not apply [0-9a-f]+\\.\\.\\. newfiles
''', re.DOTALL))
]


@ -1,814 +0,0 @@
# -*- coding: utf-8 -*-
import pytest
from utils import seen, Commit, to_pr
def make_basic(env, config, make_repo, *, fp_token, fp_remote):
""" Creates a basic repo with 3 forking branches
0 -- 1 -- 2 -- 3 -- 4 : a
|
`-- 11 -- 22 : b
|
`-- 111 : c
each branch just adds and modifies a file (resp. f, g and h) through the
contents sequence a b c d e
"""
Projects = env['runbot_merge.project']
project = Projects.search([('name', '=', 'myproject')])
if not project:
project = Projects.create({
'name': 'myproject',
'github_token': config['github']['token'],
'github_prefix': 'hansen',
'fp_github_token': fp_token and config['github']['token'],
'branch_ids': [
(0, 0, {'name': 'a', 'sequence': 2, 'fp_target': True}),
(0, 0, {'name': 'b', 'sequence': 1, 'fp_target': True}),
(0, 0, {'name': 'c', 'sequence': 0, 'fp_target': True}),
],
})
prod = make_repo('proj')
with prod:
a_0, a_1, a_2, a_3, a_4, = prod.make_commits(
None,
Commit("0", tree={'f': 'a'}),
Commit("1", tree={'f': 'b'}),
Commit("2", tree={'f': 'c'}),
Commit("3", tree={'f': 'd'}),
Commit("4", tree={'f': 'e'}),
ref='heads/a',
)
b_1, b_2 = prod.make_commits(
a_2,
Commit('11', tree={'g': 'a'}),
Commit('22', tree={'g': 'b'}),
ref='heads/b',
)
prod.make_commits(
b_1,
Commit('111', tree={'h': 'a'}),
ref='heads/c',
)
other = prod.fork()
repo = env['runbot_merge.repository'].create({
'project_id': project.id,
'name': prod.name,
'required_statuses': 'legal/cla,ci/runbot',
'fp_remote_target': fp_remote and other.name,
})
env['res.partner'].search([
('github_login', '=', config['role_reviewer']['user'])
]).write({
'review_rights': [(0, 0, {'repository_id': repo.id, 'review': True})]
})
env['res.partner'].search([
('github_login', '=', config['role_self_reviewer']['user'])
]).write({
'review_rights': [(0, 0, {'repository_id': repo.id, 'self_review': True})]
})
return project, prod, other
def test_no_token(env, config, make_repo):
""" if there's no token on the repo, nothing should break though should
log
"""
# create project configured with remotes on the repo but no token
proj, prod, _ = make_basic(env, config, make_repo, fp_token=False, fp_remote=True)
with prod:
prod.make_commits(
'a', Commit('c0', tree={'a': '0'}), ref='heads/abranch'
)
pr = prod.make_pr(target='a', head='abranch')
prod.post_status(pr.head, 'success', 'legal/cla')
prod.post_status(pr.head, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
# wanted to use capfd, however it's not compatible with the subprocess
# being created beforehand and server() depending on capfd() would remove
# all its output from the normal pytest capture (dumped on test failure)
#
# so I'd really have to hand-roll the entire thing by having server()
# pipe stdout/stderr to temp files, yield those temp files, and have the
# tests mess around with reading those files, and finally have the server
# dump the file contents back to the test runner's stdout/stderr on
# fixture teardown...
env.run_crons()
assert len(env['runbot_merge.pull_requests'].search([], order='number')) == 1,\
"should not have created forward port"
def test_remove_token(env, config, make_repo):
proj, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
proj.fp_github_token = False
with prod:
prod.make_commits(
'a', Commit('c0', tree={'a': '0'}), ref='heads/abranch'
)
pr = prod.make_pr(target='a', head='abranch')
prod.post_status(pr.head, 'success', 'legal/cla')
prod.post_status(pr.head, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
assert len(env['runbot_merge.pull_requests'].search([], order='number')) == 1,\
"should not have created forward port"
def test_no_target(env, config, make_repo):
proj, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=False)
with prod:
prod.make_commits(
'a', Commit('c0', tree={'a': '0'}), ref='heads/abranch'
)
pr = prod.make_pr(target='a', head='abranch')
prod.post_status(pr.head, 'success', 'legal/cla')
prod.post_status(pr.head, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
assert len(env['runbot_merge.pull_requests'].search([], order='number')) == 1,\
"should not have created forward port"
def test_failed_staging(env, config, make_repo):
proj, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
reviewer = config['role_reviewer']['token']
with prod:
prod.make_commits('a', Commit('c', tree={'a': '0'}), ref='heads/abranch')
pr1 = prod.make_pr(target='a', head='abranch')
prod.post_status(pr1.head, 'success', 'legal/cla')
prod.post_status(pr1.head, 'success', 'ci/runbot')
pr1.post_comment('hansen r+', reviewer)
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr1_id, pr2_id = env['runbot_merge.pull_requests'].search([], order='number')
assert pr2_id.parent_id == pr2_id.source_id == pr1_id
with prod:
prod.post_status(pr2_id.head, 'success', 'legal/cla')
prod.post_status(pr2_id.head, 'success', 'ci/runbot')
env.run_crons()
pr1_id, pr2_id, pr3_id = env['runbot_merge.pull_requests'].search([], order='number')
pr3 = prod.get_pr(pr3_id.number)
with prod:
prod.post_status(pr3_id.head, 'success', 'legal/cla')
prod.post_status(pr3_id.head, 'success', 'ci/runbot')
pr3.post_comment('%s r+' % proj.fp_github_name, reviewer)
env.run_crons()
prod.commit('staging.c')
with prod:
prod.post_status('staging.b', 'success', 'legal/cla')
prod.post_status('staging.b', 'success', 'ci/runbot')
prod.post_status('staging.c', 'failure', 'ci/runbot')
env.run_crons()
pr3_head = env['runbot_merge.commit'].search([
('sha', '=', pr3_id.head),
])
assert len(pr3_head) == 1
assert not pr3_id.batch_id, "check that the PR indeed has no batch anymore"
assert not pr3_id.batch_ids.filtered(lambda b: b.active)
assert len(env['runbot_merge.batch'].search([
('prs', 'in', pr3_id.id),
'|', ('active', '=', True),
('active', '=', False),
])) == 2, "check that there do exist batches"
# send a new status to the PR, as if somebody had rebuilt it or something
with prod:
pr3.post_comment('hansen retry', reviewer)
prod.post_status(pr3_id.head, 'success', 'foo/bar')
prod.post_status(pr3_id.head, 'success', 'legal/cla')
assert pr3_head.to_check, "check that the commit was updated as to process"
env.run_crons()
assert not pr3_head.to_check, "check that the commit was processed"
class TestNotAllBranches:
""" Check that forward-ports don't behave completely insanely when not all
branches are supported on all repositories.
repo A branches a -> b -> c
a0 -> a1 -> a2 branch a
`-> a11 -> a22 branch b
`-> a111 branch c
repo B branches a -> c
b0 -> b1 -> b2 branch a
|
`-> b000 branch c
"""
@pytest.fixture
def repos(self, env, config, make_repo, setreviewers):
a = make_repo('A')
with a:
_, a_, _ = a.make_commits(
None,
Commit('a0', tree={'a': '0'}),
Commit('a1', tree={'a': '1'}),
Commit('a2', tree={'a': '2'}),
ref='heads/a'
)
b_, _ = a.make_commits(
a_,
Commit('a11', tree={'b': '11'}),
Commit('a22', tree={'b': '22'}),
ref='heads/b'
)
a.make_commits(b_, Commit('a111', tree={'c': '111'}), ref='heads/c')
a_dev = a.fork()
b = make_repo('B')
with b:
_, _a, _ = b.make_commits(
None,
Commit('b0', tree={'a': 'x'}),
Commit('b1', tree={'a': 'y'}),
Commit('b2', tree={'a': 'z'}),
ref='heads/a'
)
b.make_commits(_a, Commit('b000', tree={'c': 'x'}), ref='heads/c')
b_dev = b.fork()
project = env['runbot_merge.project'].create({
'name': 'proj',
'github_token': config['github']['token'],
'github_prefix': 'hansen',
'fp_github_token': config['github']['token'],
'branch_ids': [
(0, 0, {'name': 'a', 'fp_sequence': 2, 'fp_target': True}),
(0, 0, {'name': 'b', 'fp_sequence': 1, 'fp_target': True}),
(0, 0, {'name': 'c', 'fp_sequence': 0, 'fp_target': True}),
]
})
repo_a = env['runbot_merge.repository'].create({
'project_id': project.id,
'name': a.name,
'required_statuses': 'ci/runbot',
'fp_remote_target': a_dev.name,
})
repo_b = env['runbot_merge.repository'].create({
'project_id': project.id,
'name': b.name,
'required_statuses': 'ci/runbot',
'fp_remote_target': b_dev.name,
'branch_filter': '[("name", "in", ["a", "c"])]',
})
setreviewers(repo_a, repo_b)
return project, a, a_dev, b, b_dev
def test_single_first(self, env, repos, config):
""" A merge in A.a should be forward-ported to A.b and A.c
"""
project, a, a_dev, b, _ = repos
with a, a_dev:
[c] = a_dev.make_commits('a', Commit('pr', tree={'pr': '1'}), ref='heads/change')
pr = a.make_pr(target='a', title="a pr", head=a_dev.owner + ':change')
a.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
p = env['runbot_merge.pull_requests'].search([('repository.name', '=', a.name), ('number', '=', pr.number)])
env.run_crons()
assert p.staging_id
with a, b:
for repo in a, b:
repo.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
a_head = a.commit('a')
assert a_head.message.startswith('pr\n\n')
assert a.read_tree(a_head) == {'a': '2', 'pr': '1'}
pr0, pr1 = env['runbot_merge.pull_requests'].search([], order='number')
with a:
a.post_status(pr1.head, 'success', 'ci/runbot')
env.run_crons()
pr0, pr1, pr2 = env['runbot_merge.pull_requests'].search([], order='number')
with a:
a.post_status(pr2.head, 'success', 'ci/runbot')
a.get_pr(pr2.number).post_comment(
'%s r+' % project.fp_github_name,
config['role_reviewer']['token'])
env.run_crons()
assert pr1.staging_id
assert pr2.staging_id
with a, b:
a.post_status('staging.b', 'success', 'ci/runbot')
a.post_status('staging.c', 'success', 'ci/runbot')
b.post_status('staging.c', 'success', 'ci/runbot')
env.run_crons()
assert pr0.state == 'merged'
assert pr1.state == 'merged'
assert pr2.state == 'merged'
assert a.read_tree(a.commit('b')) == {'a': '1', 'b': '22', 'pr': '1'}
assert a.read_tree(a.commit('c')) == {'a': '1', 'b': '11', 'c': '111', 'pr': '1'}
def test_single_second(self, env, repos, config):
""" A merge in B.a should "skip ahead" to B.c
"""
project, a, _, b, b_dev = repos
with b, b_dev:
[c] = b_dev.make_commits('a', Commit('pr', tree={'pr': '1'}), ref='heads/change')
pr = b.make_pr(target='a', title="a pr", head=b_dev.owner + ':change')
b.post_status(c, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with a, b:
a.post_status('staging.a', 'success', 'ci/runbot')
b.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
assert b.read_tree(b.commit('a')) == {'a': 'z', 'pr': '1'}
pr0, pr1 = env['runbot_merge.pull_requests'].search([], order='number')
with b:
b.post_status(pr1.head, 'success', 'ci/runbot')
b.get_pr(pr1.number).post_comment(
'%s r+' % project.fp_github_name,
config['role_reviewer']['token'])
env.run_crons()
with a, b:
a.post_status('staging.c', 'success', 'ci/runbot')
b.post_status('staging.c', 'success', 'ci/runbot')
env.run_crons()
assert pr0.state == 'merged'
assert pr1.state == 'merged'
assert b.read_tree(b.commit('c')) == {'a': 'y', 'c': 'x', 'pr': '1'}
def test_both_first(self, env, repos, config, users):
""" A merge in A.a, B.a should... not be forward-ported at all?
"""
project, a, a_dev, b, b_dev = repos
with a, a_dev:
[c_a] = a_dev.make_commits('a', Commit('pr a', tree={'pr': 'a'}), ref='heads/change')
pr_a = a.make_pr(target='a', title='a pr', head=a_dev.owner + ':change')
a.post_status(c_a, 'success', 'ci/runbot')
pr_a.post_comment('hansen r+', config['role_reviewer']['token'])
with b, b_dev:
[c_b] = b_dev.make_commits('a', Commit('pr b', tree={'pr': 'b'}), ref='heads/change')
pr_b = b.make_pr(target='a', title='b pr', head=b_dev.owner + ':change')
b.post_status(c_b, 'success', 'ci/runbot')
pr_b.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with a, b:
for repo in a, b:
repo.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr_a_id = env['runbot_merge.pull_requests'].search([
('repository.name', '=', a.name),
('number', '=', pr_a.number),
])
pr_b_id = env['runbot_merge.pull_requests'].search([
('repository.name', '=', b.name),
('number', '=', pr_b.number)
])
assert pr_a_id.state == pr_b_id.state == 'merged'
assert env['runbot_merge.pull_requests'].search([]) == pr_a_id | pr_b_id
# should have refused to create a forward port because the PRs have
# different next target
assert pr_a.comments == [
(users['reviewer'], 'hansen r+'),
seen(env, pr_a, users),
(users['user'], "@%s @%s this pull request can not be forward ported:"
" next branch is 'b' but linked pull request %s "
"has a next branch 'c'." % (
users['user'], users['reviewer'], pr_b_id.display_name,
)),
]
assert pr_b.comments == [
(users['reviewer'], 'hansen r+'),
seen(env, pr_b, users),
(users['user'], "@%s @%s this pull request can not be forward ported:"
" next branch is 'c' but linked pull request %s "
"has a next branch 'b'." % (
users['user'], users['reviewer'], pr_a_id.display_name,
)),
]
def test_new_intermediate_branch(env, config, make_repo):
""" In the case of a freeze / release a new intermediate branch appears in
the sequence. New or ongoing forward ports should pick it up just fine (as
the "next target" is decided when a PR is ported forward); however, this is
an issue for existing yet-to-be-merged sequences, e.g. given the branches
1.0, 2.0 and master, if a branch 3.0 is forked off from master and inserted
before it, we need to create a new *intermediate* forward port PR
"""
def validate(commit):
prod.post_status(commit, 'success', 'ci/runbot')
prod.post_status(commit, 'success', 'legal/cla')
project, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
original_c_tree = prod.read_tree(prod.commit('c'))
prs = []
with prod:
for i in ['0', '1', '2']:
prod.make_commits('a', Commit(i, tree={i:i}), ref='heads/branch%s' % i)
pr = prod.make_pr(target='a', head='branch%s' % i)
prs.append(pr)
validate(pr.head)
pr.post_comment('hansen r+', config['role_reviewer']['token'])
# cancel validation of PR2
prod.post_status(prs[2].head, 'failure', 'ci/runbot')
# also add a PR targeting b forward-ported to c, in order to check
# for an insertion right after the source
prod.make_commits('b', Commit('x', tree={'x': 'x'}), ref='heads/branchx')
prx = prod.make_pr(target='b', head='branchx')
validate(prx.head)
prx.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
validate('staging.a')
validate('staging.b')
env.run_crons()
# should have merged pr1, pr2 and prx and created their forward ports, now
# validate pr0's FP so the c-targeted FP is created
PRs = env['runbot_merge.pull_requests']
pr0_id = PRs.search([
('repository.name', '=', prod.name),
('number', '=', prs[0].number),
])
pr0_fp_id = PRs.search([
('source_id', '=', pr0_id.id),
])
assert pr0_fp_id
assert pr0_fp_id.target.name == 'b'
with prod:
validate(pr0_fp_id.head)
env.run_crons()
original0 = PRs.search([('parent_id', '=', pr0_fp_id.id)])
assert original0, "Could not find FP of PR0 to C"
assert original0.target.name == 'c'
# also check prx's fp
prx_id = PRs.search([('repository.name', '=', prod.name), ('number', '=', prx.number)])
prx_fp_id = PRs.search([('source_id', '=', prx_id.id)])
assert prx_fp_id
assert prx_fp_id.target.name == 'c'
# NOTE: the branch must be created on git(hub) first, probably
# create new branch forked from the "current master" (c)
c = prod.commit('c').id
with prod:
prod.make_ref('heads/new', c)
currents = {branch.name: branch.id for branch in project.branch_ids}
# insert a branch between "b" and "c"
project.write({
'branch_ids': [
(1, currents['a'], {'fp_sequence': 3}),
(1, currents['b'], {'fp_sequence': 2}),
(0, False, {'name': 'new', 'fp_sequence': 1, 'fp_target': True}),
(1, currents['c'], {'fp_sequence': 0})
]
})
env.run_crons()
descendants = PRs.search([('source_id', '=', pr0_id.id)])
new0 = descendants - pr0_fp_id - original0
assert len(new0) == 1
assert new0.parent_id == pr0_fp_id
assert original0.parent_id == new0
descx = PRs.search([('source_id', '=', prx_id.id)])
newx = descx - prx_fp_id
assert len(newx) == 1
assert newx.parent_id == prx_id
assert prx_fp_id.parent_id == newx
# finish up: merge pr1 and pr2, ensure all the content is present in both
# "new" (the newly inserted branch) and "c" (the tippity tip)
with prod: # validate pr2
prod.post_status(prs[2].head, 'success', 'ci/runbot')
env.run_crons()
# merge pr2
with prod:
validate('staging.a')
env.run_crons()
# ci on pr1/pr2 fp to b
sources = [
env['runbot_merge.pull_requests'].search([
('repository.name', '=', prod.name),
('number', '=', pr.number),
]).id
for pr in prs
]
sources.append(prx_id.id)
# CI all the forward port PRs (shouldn't hurt to re-ci the forward port of
# prs[0] to b aka pr0_fp_id
for target in ['b', 'new', 'c']:
fps = PRs.search([('source_id', 'in', sources), ('target.name', '=', target)])
with prod:
for fp in fps:
validate(fp.head)
env.run_crons()
# now fps should be the last PR of each sequence, and thus r+-able
with prod:
for pr in fps:
assert pr.target.name == 'c'
prod.get_pr(pr.number).post_comment(
'%s r+' % project.fp_github_name,
config['role_reviewer']['token'])
assert all(p.state == 'merged' for p in PRs.browse(sources)), \
"all sources should be merged"
assert all(p.state == 'ready' for p in PRs.search([('id', 'not in', sources)])),\
"All PRs except sources should be ready"
env.run_crons()
with prod:
for target in ['b', 'new', 'c']:
validate('staging.' + target)
env.run_crons()
assert all(p.state == 'merged' for p in PRs.search([])), \
"All PRs should be merged now"
assert prod.read_tree(prod.commit('c')) == {
**original_c_tree,
'0': '0', '1': '1', '2': '2', # updates from PRs
'x': 'x',
}, "check that C got all the updates"
assert prod.read_tree(prod.commit('new')) == {
**original_c_tree,
'0': '0', '1': '1', '2': '2', # updates from PRs
'x': 'x',
}, "check that new got all the updates (should be in the same state as c really)"
def test_author_can_close_via_fwbot(env, config, make_repo):
project, prod, xxx = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
other_user = config['role_other']
other_token = other_user['token']
other = prod.fork(token=other_token)
with prod, other:
[c] = other.make_commits('a', Commit('c', tree={'0': '0'}), ref='heads/change')
pr = prod.make_pr(
target='a', title='my change',
head=other_user['user'] + ':change',
token=other_token
)
# should be able to close and open own PR
pr.close(other_token)
pr.open(other_token)
prod.post_status(c, 'success', 'legal/cla')
prod.post_status(c, 'success', 'ci/runbot')
pr.post_comment('%s close' % project.fp_github_name, other_token)
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
assert pr.state == 'open'
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr0_id, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
assert pr0_id.number == pr.number
pr1 = prod.get_pr(pr1_id.number)
# `other` can't close fw PR directly, because that requires triage (and even
# write depending on account type) access to the repo, which an external
# contributor probably does not have
with prod, pytest.raises(Exception):
pr1.close(other_token)
# the user can close via the fwbot
with prod:
pr1.post_comment('%s close' % project.fp_github_name, other_token)
env.run_crons()
assert pr1.state == 'closed'
assert pr1_id.state == 'closed'
def test_skip_ci_all(env, config, make_repo):
project, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
with prod:
prod.make_commits('a', Commit('x', tree={'x': '0'}), ref='heads/change')
pr = prod.make_pr(target='a', head='change')
prod.post_status(pr.head, 'success', 'legal/cla')
prod.post_status(pr.head, 'success', 'ci/runbot')
pr.post_comment('%s skipci' % project.fp_github_name, config['role_reviewer']['token'])
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
assert env['runbot_merge.pull_requests'].search([
('repository.name', '=', prod.name),
('number', '=', pr.number)
]).fw_policy == 'skipci'
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
# run cron a few more times for the fps
env.run_crons()
env.run_crons()
env.run_crons()
pr0_id, pr1_id, pr2_id = env['runbot_merge.pull_requests'].search([], order='number')
assert pr1_id.state == 'opened'
assert pr1_id.source_id == pr0_id
assert pr2_id.state == 'opened'
assert pr2_id.source_id == pr0_id
def test_skip_ci_next(env, config, make_repo):
project, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
with prod:
prod.make_commits('a', Commit('x', tree={'x': '0'}), ref='heads/change')
pr = prod.make_pr(target='a', head='change')
prod.post_status(pr.head, 'success', 'legal/cla')
prod.post_status(pr.head, 'success', 'ci/runbot')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.a', 'success', 'legal/cla')
prod.post_status('staging.a', 'success', 'ci/runbot')
env.run_crons()
pr0_id, pr1_id = env['runbot_merge.pull_requests'].search([], order='number')
with prod:
prod.get_pr(pr1_id.number).post_comment(
'%s skipci' % project.fp_github_name,
config['role_user']['token']
)
assert pr0_id.fw_policy == 'skipci'
env.run_crons()
_, _, pr2_id = env['runbot_merge.pull_requests'].search([], order='number')
assert pr1_id.state == 'opened'
assert pr2_id.state == 'opened'
def test_retarget_after_freeze(env, config, make_repo, users):
"""Turns out it was possible to trip the forwardbot if you're a bit of a
dick: the forward port cron was not resilient to forward port failure in
case of filling in new branches (forward ports existing across a branch
insertion so the fwbot would have to "fill in" for the new branch).
But it turns out causing such failure is possible by e.g. regargeting the
latter port. In that case the reinsertion task should just do nothing, and
the retargeted PR should be forward-ported normally once merged.
"""
project, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
with prod:
[c] = prod.make_commits('b', Commit('thing', tree={'x': '1'}), ref='heads/mypr')
pr = prod.make_pr(target='b', head='mypr')
prod.post_status(c, 'success', 'ci/runbot')
prod.post_status(c, 'success', 'legal/cla')
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
original_pr_id = to_pr(env, pr)
assert original_pr_id.state == 'ready'
assert original_pr_id.staging_id
with prod:
prod.post_status('staging.b', 'success', 'ci/runbot')
prod.post_status('staging.b', 'success', 'legal/cla')
env.run_crons()
# should have created a pr targeted to C
port_id = env['runbot_merge.pull_requests'].search([('state', 'not in', ('merged', 'closed'))])
assert len(port_id) == 1
assert port_id.target.name == 'c'
assert port_id.source_id == original_pr_id
assert port_id.parent_id == original_pr_id
# the module doesn't update the ordering of `branch_ids` to take
# `fp_sequence` into account, so the raw ordering is misleading
branch_c, branch_b, branch_a = branches_before = project.branch_ids.sorted('fp_sequence')
assert [branch_a.name, branch_b.name, branch_c.name] == ['a', 'b', 'c']
# create branch so cron runs correctly
with prod: prod.make_ref('heads/bprime', prod.get_ref('c'))
project.write({
'branch_ids': [
(1, branch_c.id, {'sequence': 1, 'fp_sequence': 20}),
(0, 0, {'name': 'bprime', 'sequence': 2, 'fp_sequence': 20, 'fp_target': True}),
(1, branch_b.id, {'sequence': 3, 'fp_sequence': 20}),
(1, branch_a.id, {'sequence': 4, 'fp_sequence': 20}),
]
})
new_branch = project.branch_ids - branches_before
assert new_branch.name == 'bprime'
# should have added a job for the new fp
job = env['forwardport.batches'].search([])
assert job
# now break things: retarget the existing FP PR to the new branch
port_id.target = new_branch.id
env.run_crons('forwardport.port_forward')
assert not job.exists(), "job should have succeeded and apoptosed"
# since the PR was "already forward-ported" to the new branch it should not
# be touched
assert env['runbot_merge.pull_requests'].search([('state', 'not in', ('merged', 'closed'))]) == port_id
# merge the retargeted PR
port_pr = prod.get_pr(port_id.number)
with prod:
prod.post_status(port_pr.head, 'success', 'ci/runbot')
prod.post_status(port_pr.head, 'success', 'legal/cla')
port_pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
with prod:
prod.post_status('staging.bprime', 'success', 'ci/runbot')
prod.post_status('staging.bprime', 'success', 'legal/cla')
env.run_crons()
new_pr_id = env['runbot_merge.pull_requests'].search([('state', 'not in', ('merged', 'closed'))])
assert len(new_pr_id) == 1
assert new_pr_id.parent_id == port_id
assert new_pr_id.target == branch_c
def test_approve_draft(env, config, make_repo, users):
_, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
with prod:
prod.make_commits('a', Commit('x', tree={'x': '0'}), ref='heads/change')
pr = prod.make_pr(target='a', head='change', draft=True)
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
pr_id = to_pr(env, pr)
assert pr_id.state == 'opened'
assert pr.comments == [
(users['reviewer'], 'hansen r+'),
seen(env, pr, users),
(users['user'], f"I'm sorry, @{users['reviewer']}: draft PRs can not be approved."),
]
with prod:
pr.draft = False
assert pr.draft is False
with prod:
pr.post_comment('hansen r+', config['role_reviewer']['token'])
env.run_crons()
assert pr_id.state == 'approved'
def test_freeze(env, config, make_repo, users):
"""Freeze:
- should not forward-port the freeze PRs themselves
"""
project, prod, _ = make_basic(env, config, make_repo, fp_token=True, fp_remote=True)
# branches here are "a" (older), "b", and "c" (master)
with prod:
[root, _] = prod.make_commits(
None,
Commit('base', tree={'version': '', 'f': '0'}),
Commit('release 1.0', tree={'version': '1.0'}),
ref='heads/b'
)
prod.make_commits(root, Commit('other', tree={'f': '1'}), ref='heads/c')
with prod:
prod.make_commits(
'c',
Commit('Release 1.1', tree={'version': '1.1'}),
ref='heads/release-1.1'
)
release = prod.make_pr(target='c', head='release-1.1')
env.run_crons()
w = project.action_prepare_freeze()
assert w['res_model'] == 'runbot_merge.project.freeze'
w_id = env[w['res_model']].browse([w['res_id']])
assert w_id.release_pr_ids.repository_id.name == prod.name
release_id = to_pr(env, release)
w_id.release_pr_ids.pr_id = release_id.id
assert not w_id.errors
w_id.action_freeze()
# run crons to process the feedback, run a second time in case of e.g.
# forward porting
env.run_crons()
env.run_crons()
assert release_id.state == 'merged'
assert not env['runbot_merge.pull_requests'].search([
('state', '!=', 'merged')
]), "the release PRs should not be forward-ported"


@ -1,139 +0,0 @@
# -*- coding: utf-8 -*-
import itertools
import re
from lxml import html
MESSAGE_TEMPLATE = """{message}
closes {repo}#{number}
{headers}Signed-off-by: {name} <{email}>"""
# target branch '-' source branch '-' base64 unique '-fw'
REF_PATTERN = r'{target}-{source}-[a-zA-Z0-9_-]{{4}}-fw'
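# Illustrative usage (hypothetical values, shown for documentation only):
#   msg = MESSAGE_TEMPLATE.format(
#       message='Fix the thing', repo='owner/proj', number=42,
#       headers='', name='A Reviewer', email='reviewer@example.com')
#   # -> the original message, a "closes owner/proj#42" line, then the sign-off
#   fw_ref = re.compile(REF_PATTERN.format(target='b', source='a'))
#   # -> matches forward-port refs shaped like "b-a-XY9_-fw"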
class Commit:
def __init__(self, message, *, author=None, committer=None, tree, reset=False):
self.id = None
self.message = message
self.author = author
self.committer = committer
self.tree = tree
self.reset = reset
def validate_all(repos, refs, contexts=('ci/runbot', 'legal/cla')):
""" Post a "success" status for each context on each ref of each repo
"""
for repo, branch, context in itertools.product(repos, refs, contexts):
repo.post_status(branch, 'success', context)
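# Illustrative call (hypothetical arguments): validate_all([prod], ['staging.a', 'staging.b'])
# posts a "success" for both default contexts on both refs of the `prod` repo.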
def get_partner(env, gh_login):
return env['res.partner'].search([('github_login', '=', gh_login)])
def _simple_init(repo):
""" Creates a very simple initialisation: a master branch with a commit,
and a PR by 'user' with two commits, targeted to the master branch
"""
m = repo.make_commit(None, 'initial', None, tree={'m': 'm'})
repo.make_ref('heads/master', m)
c1 = repo.make_commit(m, 'first', None, tree={'m': 'c1'})
c2 = repo.make_commit(c1, 'second', None, tree={'m': 'c2'})
prx = repo.make_pr(title='title', body='body', target='master', head=c2)
return prx
class re_matches:
def __init__(self, pattern, flags=0):
self._r = re.compile(pattern, flags)
def __eq__(self, text):
return self._r.match(text)
def __repr__(self):
return self._r.pattern + '...'
def seen(env, pr, users):
return users['user'], f'[Pull request status dashboard]({to_pr(env, pr).url}).'
def make_basic(env, config, make_repo, *, reponame='proj', project_name='myproject'):
""" Creates a basic repo with 3 forking branches
f = 0 -- 1 -- 2 -- 3 -- 4 : a
|
g = `-- 11 -- 22 : b
|
h = `-- 111 : c
each branch just adds and modifies a file (resp. f, g and h) through the
contents sequence a b c d e
"""
Projects = env['runbot_merge.project']
project = Projects.search([('name', '=', project_name)])
if not project:
project = env['runbot_merge.project'].create({
'name': project_name,
'github_token': config['github']['token'],
'github_prefix': 'hansen',
'fp_github_token': config['github']['token'],
'branch_ids': [
(0, 0, {'name': 'a', 'fp_sequence': 10, 'fp_target': True}),
(0, 0, {'name': 'b', 'fp_sequence': 8, 'fp_target': True}),
(0, 0, {'name': 'c', 'fp_sequence': 6, 'fp_target': True}),
],
})
prod = make_repo(reponame)
with prod:
a_0, a_1, a_2, a_3, a_4, = prod.make_commits(
None,
Commit("0", tree={'f': 'a'}),
Commit("1", tree={'f': 'b'}),
Commit("2", tree={'f': 'c'}),
Commit("3", tree={'f': 'd'}),
Commit("4", tree={'f': 'e'}),
ref='heads/a',
)
b_1, b_2 = prod.make_commits(
a_2,
Commit('11', tree={'g': 'a'}),
Commit('22', tree={'g': 'b'}),
ref='heads/b',
)
prod.make_commits(
b_1,
Commit('111', tree={'h': 'a'}),
ref='heads/c',
)
other = prod.fork()
repo = env['runbot_merge.repository'].create({
'project_id': project.id,
'name': prod.name,
'required_statuses': 'legal/cla,ci/runbot',
'fp_remote_target': other.name,
})
env['res.partner'].search([
('github_login', '=', config['role_reviewer']['user'])
]).write({
'review_rights': [(0, 0, {'repository_id': repo.id, 'review': True})]
})
env['res.partner'].search([
('github_login', '=', config['role_self_reviewer']['user'])
]).write({
'review_rights': [(0, 0, {'repository_id': repo.id, 'self_review': True})]
})
return prod, other
def pr_page(page, pr):
return html.fromstring(page(f'/{pr.repo.name}/pull/{pr.number}'))
def to_pr(env, pr):
pr = env['runbot_merge.pull_requests'].search([
('repository.name', '=', pr.repo.name),
('number', '=', pr.number),
])
assert len(pr) == 1, f"Expected to find {pr.repo.name}#{pr.number}, got {pr}."
return pr
def part_of(label, pr_id, *, separator='\n\n'):
""" Adds the "part-of" pseudo-header in the footer.
"""
return f'{label}{separator}Part-of: {pr_id.display_name}'

View File

@ -1,4 +0,0 @@
matplotlib==3.5.0
unidiff
docker==4.1.0; python_version < '3.10'
docker==5.0.3; python_version >= '3.10' # (Jammy)

View File

@ -1,7 +0,0 @@
# -*- coding: utf-8 -*-
from . import controllers
from . import models
from . import common
from . import container
from . import wizards

View File

@ -1,68 +0,0 @@
# -*- coding: utf-8 -*-
{
'name': "runbot",
'summary': "Runbot",
'description': "Runbot for Odoo 15.0",
'author': "Odoo SA",
'website': "http://runbot.odoo.com",
'category': 'Website',
'version': '5.1',
'application': True,
'depends': ['base', 'base_automation', 'website'],
'data': [
'templates/dockerfile.xml',
'data/dockerfile_data.xml',
'data/build_parse.xml',
'data/error_link.xml',
'data/runbot_build_config_data.xml',
'data/runbot_data.xml',
'data/runbot_error_regex_data.xml',
'data/website_data.xml',
'security/runbot_security.xml',
'security/ir.model.access.csv',
'security/ir.rule.csv',
'templates/utils.xml',
'templates/badge.xml',
'templates/batch.xml',
'templates/branch.xml',
'templates/build.xml',
'templates/build_stats.xml',
'templates/bundle.xml',
'templates/commit.xml',
'templates/dashboard.xml',
'templates/frontend.xml',
'templates/git.xml',
'templates/nginx.xml',
'templates/build_error.xml',
'views/branch_views.xml',
'views/build_error_views.xml',
'views/build_views.xml',
'views/bundle_views.xml',
'views/codeowner_views.xml',
'views/commit_views.xml',
'views/config_views.xml',
'views/dashboard_views.xml',
'views/dockerfile_views.xml',
'views/error_log_views.xml',
'views/host_views.xml',
'views/repo_views.xml',
'views/res_config_settings_views.xml',
'views/stat_views.xml',
'views/upgrade.xml',
'views/warning_views.xml',
'views/custom_trigger_wizard_views.xml',
'wizards/stat_regex_wizard_views.xml',
'views/menus.xml',
],
'license': 'LGPL-3',
'assets': {
'web.assets_backend': [
'runbot/static/src/js/json_field.js',
],
}
}

View File

@ -1,158 +0,0 @@
# -*- coding: utf-8 -*-
import contextlib
import itertools
import logging
import psycopg2
import re
import socket
import time
import os
from collections import OrderedDict
from datetime import timedelta
from babel.dates import format_timedelta
from markupsafe import Markup
from odoo.tools.misc import DEFAULT_SERVER_DATETIME_FORMAT, html_escape
_logger = logging.getLogger(__name__)
dest_reg = re.compile(r'^\d{5,}-.+$')
class RunbotException(Exception):
pass
def fqdn():
return socket.getfqdn()
def time2str(t):
return time.strftime(DEFAULT_SERVER_DATETIME_FORMAT, t)
def dt2time(datetime):
"""Convert datetime to time"""
return time.mktime(datetime.timetuple())
def now():
return time.strftime(DEFAULT_SERVER_DATETIME_FORMAT)
def findall(filename, pattern):
return set(re.findall(pattern, open(filename).read()))
def grep(filename, string):
if os.path.isfile(filename):
return find(filename, string) != -1
return False
def find(filename, string):
return open(filename).read().find(string)
def uniq_list(l):
return OrderedDict.fromkeys(l).keys()
def flatten(list_of_lists):
return list(itertools.chain.from_iterable(list_of_lists))
def rfind(filename, pattern):
"""Determine in something in filename matches the pattern"""
if os.path.isfile(filename):
regexp = re.compile(pattern, re.M)
with open(filename, 'r') as f:
if regexp.findall(f.read()):
return True
return False
def time_delta(time):
if isinstance(time, timedelta):
return time
return timedelta(seconds=-time)
def s2human(time):
"""Convert a time in second into an human readable string"""
return format_timedelta(
time_delta(time),
format="narrow",
threshold=2.1,
)
def s2human_long(time):
return format_timedelta(
time_delta(time),
threshold=2.1,
add_direction=True, locale='en'
)
@contextlib.contextmanager
def local_pgadmin_cursor():
cnx = None
try:
cnx = psycopg2.connect("dbname=postgres")
cnx.autocommit = True # required for admin commands
yield cnx.cursor()
finally:
if cnx:
cnx.close()
def list_local_dbs(additionnal_conditions=None):
additionnal_condition_str = ''
if additionnal_conditions:
additionnal_condition_str = 'AND (%s)' % ' OR '.join(additionnal_conditions)
with local_pgadmin_cursor() as local_cr:
local_cr.execute("""
SELECT datname
FROM pg_database
WHERE pg_get_userbyid(datdba) = current_user
%s
""" % additionnal_condition_str)
return [d[0] for d in local_cr.fetchall()]
def pseudo_markdown(text):
text = html_escape(text)
# first, extract code blocks:
codes = []
def code_remove(match):
codes.append(match.group(1))
return f'<code>{len(codes)-1}</code>'
patterns = {
r'`(.+?)`': code_remove,
r'\*\*(.+?)\*\*': '<strong>\\g<1></strong>',
r'~~(.+?)~~': '<del>\\g<1></del>', # it's not official markdown but who cares
r'__(.+?)__': '<ins>\\g<1></ins>', # same here, maybe we should change the method name
r'\r?\n': '<br/>',
}
for p, b in patterns.items():
text = re.sub(p, b, text, flags=re.DOTALL)
# icons
re_icon = re.compile(r'@icon-([a-z0-9-]+)')
text = re_icon.sub('<i class="fa fa-\\g<1>"></i>', text)
# links
re_links = re.compile(r'\[(.+?)\]\((.+?)\)')
text = re_links.sub('<a href="\\g<2>">\\g<1></a>', text)
def code_replace(match):
return f'<code>{codes[int(match.group(1))]}</code>'
text = Markup(re.sub(r'<code>(\d+)</code>', code_replace, text, flags=re.DOTALL))
return text
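# Illustrative examples (not in the original file), based on the patterns above:
#   pseudo_markdown('**bold** and `x = 1`')       -> '<strong>bold</strong> and <code>x = 1</code>'
#   pseudo_markdown('[doc](https://example.com)') -> '<a href="https://example.com">doc</a>'
#   pseudo_markdown('@icon-check done')           -> '<i class="fa fa-check"></i> done'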

View File

@ -1,303 +0,0 @@
# -*- coding: utf-8 -*-
"""Containerize builds
The docker image used for the build is always tagged like this:
odoo:runbot_tests
This file contains helpers to containerize builds with Docker.
When testing this file:
the first parameter should be a directory containing Odoo.
The second parameter is the exposed port
"""
import configparser
import io
import logging
import os
import re
import subprocess
import warnings
# unsolved issue https://github.com/docker/docker-py/issues/2928
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore",
message="The distutils package is deprecated.*",
category=DeprecationWarning
)
import docker
_logger = logging.getLogger(__name__)
class Command():
def __init__(self, pres, cmd, posts, finals=None, config_tuples=None, cmd_checker=None):
""" Command object that represent commands to run in Docker container
:param pres: list of pre-commands
:param cmd: list of main command only run if the pres commands succeed (&&)
:param posts: list of post commands posts only run if the cmd command succedd (&&)
:param finals: list of finals commands always executed
:param config_tuples: list of key,value tuples to write in config file
:param cmd_checker: a checker object that must have a `_cmd_check` method that will be called at build
returns a string of the full command line to run
"""
self.pres = pres or []
self.cmd = cmd
self.posts = posts or []
self.finals = finals or []
self.config_tuples = config_tuples or []
self.cmd_checker = cmd_checker
def __getattr__(self, name):
return getattr(self.cmd, name)
def __getitem__(self, key):
return self.cmd[key]
def __add__(self, l):
return Command(self.pres, self.cmd + l, self.posts, self.finals, self.config_tuples, self.cmd_checker)
def __str__(self):
return ' '.join(self)
def __repr__(self):
return self.build().replace('&& ', '&&\n').replace('|| ', '||\n\t').replace(';', ';\n')
def build(self):
if self.cmd_checker:
self.cmd_checker._cmd_check(self)
cmd_chain = []
cmd_chain += [' '.join(pre) for pre in self.pres if pre]
cmd_chain.append(' '.join(self))
cmd_chain += [' '.join(post) for post in self.posts if post]
cmd_chain = [' && '.join(cmd_chain)]
cmd_chain += [' '.join(final) for final in self.finals if final]
return ' ; '.join(cmd_chain)
def add_config_tuple(self, option, value):
assert '-' not in option
self.config_tuples.append((option, value))
def get_config(self, starting_config=''):
""" returns a config file content based on config tuples and
and eventually update the starting config
"""
config = configparser.ConfigParser()
config.read_string(starting_config)
if self.config_tuples and not config.has_section('options'):
config.add_section('options')
for option, value in self.config_tuples:
config.set('options', option, value)
res = io.StringIO()
config.write(res)
res.seek(0)
return res.read()
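# Illustrative sketch (not in the original file): for a command such as
#   Command(pres=[['pip3', 'install', 'requests']],
#           cmd=['python3', 'odoo-bin', '-d', 'mydb'],
#           posts=[['echo', 'done']],
#           finals=[['cat', 'odoo.log']])
# build() is expected to produce
#   'pip3 install requests && python3 odoo-bin -d mydb && echo done ; cat odoo.log'
# i.e. pres/cmd/posts are chained with '&&' while finals always run after ';'.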
def docker_build(build_dir, image_tag):
return _docker_build(build_dir, image_tag)
def _docker_build(build_dir, image_tag):
"""Build the docker image
:param build_dir: the build directory that contains Dockerfile.
:param image_tag: name used to tag the resulting docker image
:return: tuple(success, msg) where success is a boolean and msg is the error message or None
"""
docker_client = docker.from_env()
try:
docker_client.images.build(path=build_dir, tag=image_tag, rm=True)
except docker.errors.APIError as e:
_logger.error('Build of image %s failed with this API error:', image_tag)
return (False, e.explanation)
except docker.errors.BuildError as e:
_logger.error('Build of image %s failed with this BUILD error:', image_tag)
msg = f"{e.msg}\n{''.join(l.get('stream') or '' for l in e.build_log)}"
return (False, msg)
_logger.info('Dockerfile %s finished build', image_tag)
return (True, None)
def docker_run(*args, **kwargs):
return _docker_run(*args, **kwargs)
def _docker_run(cmd=False, log_path=False, build_dir=False, container_name=False, image_tag=False, exposed_ports=None, cpu_limit=None, memory=None, preexec_fn=None, ro_volumes=None, env_variables=None):
"""Run tests in a docker container
:param cmd: command to run in the container (a string or a Command object)
:param log_path: path to the logfile that will contain odoo stdout and stderr
:param build_dir: the build directory that contains the Odoo sources to build.
This directory is shared as a volume with the container
:param container_name: used to give a name to the container for later reference
:param image_tag: Docker image tag name to select which docker image to use
:param exposed_ports: if not None, container ports starting at 8069 are bound to the given host port numbers
:param memory: memory limit in bytes for the container
:param ro_volumes: dict of dest:source volumes to mount read-only in the build dir
:param env_variables: list of environment variables
"""
assert cmd and log_path and build_dir and container_name
run_cmd = cmd
image_tag = image_tag or 'odoo:DockerDefault'
container_name = sanitize_container_name(container_name)
if isinstance(run_cmd, Command):
cmd_object = run_cmd
run_cmd = cmd_object.build()
else:
cmd_object = Command([], run_cmd.split(' '), [])
_logger.info('Docker run command: %s', run_cmd)
run_cmd = 'cd /data/build;touch start-%s;%s;cd /data/build;touch end-%s' % (container_name, run_cmd, container_name)
docker_clear_state(container_name, build_dir) # ensure that no state remains
open(os.path.join(build_dir, 'exist-%s' % container_name), 'w+').close()
logs = open(log_path, 'w')
logs.write("Docker command:\n%s\n=================================================\n" % cmd_object)
# create start script
volumes = {
'/var/run/postgresql': {'bind': '/var/run/postgresql', 'mode': 'rw'},
f'{build_dir}': {'bind': '/data/build', 'mode': 'rw'},
f'{log_path}': {'bind': '/data/buildlogs.txt', 'mode': 'rw'}
}
if ro_volumes:
for dest, source in ro_volumes.items():
logs.write("Adding readonly volume '%s' pointing to %s \n" % (dest, source))
volumes[source] = {'bind': dest, 'mode': 'ro'}
logs.close()
ports = {}
if exposed_ports:
for dp, hp in enumerate(exposed_ports, start=8069):
ports[f'{dp}/tcp'] = ('127.0.0.1', hp)
ulimits = [docker.types.Ulimit(name='core', soft=0, hard=0)] # avoid core dump in containers
if cpu_limit:
ulimits.append(docker.types.Ulimit(name='cpu', soft=cpu_limit, hard=cpu_limit))
docker_client = docker.from_env()
container = docker_client.containers.run(
image_tag,
name=container_name,
volumes=volumes,
shm_size='128m',
mem_limit=memory,
ports=ports,
ulimits=ulimits,
environment=env_variables,
init=True,
command=['/bin/bash', '-c',
f'exec &>> /data/buildlogs.txt ;{run_cmd}'],
auto_remove=True,
detach=True
)
if container.status not in ('running', 'created') :
_logger.error('Container %s started but status is not running or created: %s', container_name, container.status) # TODO cleanup
else:
_logger.info('Started Docker container %s', container_name)
return
def docker_stop(container_name, build_dir=None):
return _docker_stop(container_name, build_dir)
def _docker_stop(container_name, build_dir):
"""Stops the container named container_name"""
container_name = sanitize_container_name(container_name)
_logger.info('Stopping container %s', container_name)
docker_client = docker.from_env()
if build_dir:
end_file = os.path.join(build_dir, 'end-%s' % container_name)
subprocess.run(['touch', end_file])
else:
_logger.info('Stopping docker without defined build_dir')
try:
container = docker_client.containers.get(container_name)
container.stop(timeout=1)
except docker.errors.NotFound:
_logger.error('Cannot stop container %s. Container not found', container_name)
except docker.errors.APIError as e:
_logger.error('Cannot stop container %s. API Error "%s"', container_name, e)
def docker_state(container_name, build_dir):
container_name = sanitize_container_name(container_name)
exist = os.path.exists(os.path.join(build_dir, 'exist-%s' % container_name))
started = os.path.exists(os.path.join(build_dir, 'start-%s' % container_name))
if not exist:
return 'VOID'
if os.path.exists(os.path.join(build_dir, f'end-{container_name}')):
return 'END'
state = 'UNKNOWN'
if started:
docker_client = docker.from_env()
try:
container = docker_client.containers.get(container_name)
# possible statuses: created, restarting, running, removing, paused, exited, or dead
state = 'RUNNING' if container.status in ('created', 'running', 'paused') else 'GHOST'
except docker.errors.NotFound:
state = 'GHOST'
# check if the end- file has been written in between time
if state == 'GHOST' and os.path.exists(os.path.join(build_dir, f'end-{container_name}')):
state = 'END'
return state
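# Illustrative summary (not in the original file) of the docker_state() results:
#   'VOID'    -> no 'exist-<name>' marker file, the container was never prepared
#   'END'     -> an 'end-<name>' marker file exists, the run has finished
#   'RUNNING' -> started and Docker reports the container as created/running/paused
#   'GHOST'   -> started but the container is gone from Docker without an end marker
#   'UNKNOWN' -> the exist marker is present but the start marker is not (yet)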
def docker_clear_state(container_name, build_dir):
"""Return True if container is still running"""
container_name = sanitize_container_name(container_name)
if os.path.exists(os.path.join(build_dir, 'start-%s' % container_name)):
os.remove(os.path.join(build_dir, 'start-%s' % container_name))
if os.path.exists(os.path.join(build_dir, 'end-%s' % container_name)):
os.remove(os.path.join(build_dir, 'end-%s' % container_name))
if os.path.exists(os.path.join(build_dir, 'exist-%s' % container_name)):
os.remove(os.path.join(build_dir, 'exist-%s' % container_name))
def docker_get_gateway_ip():
"""Return the host ip of the docker default bridge gateway"""
docker_client = docker.from_env()
try:
bridge_net = docker_client.networks.get([n.id for n in docker_client.networks.list('bridge')][0])
return bridge_net.attrs['IPAM']['Config'][0]['Gateway']
except (KeyError, IndexError):
return None
def docker_ps():
return _docker_ps()
def _docker_ps():
"""Return a list of running containers names"""
docker_client = docker.client.from_env()
return [ c.name for c in docker_client.containers.list()]
def sanitize_container_name(name):
"""Returns a container name with unallowed characters removed"""
name = re.sub('^[^a-zA-Z0-9]+', '', name)
return re.sub('[^a-zA-Z0-9_.-]', '', name)
##############################################################################
# Ugly monkey patch to set runbot in testing mode
# No Docker will be started, instead a fake docker_run function will be used
##############################################################################
if os.environ.get('RUNBOT_MODE') == 'test':
_logger.warning('Using Fake Docker')
def fake_docker_run(run_cmd, log_path, build_dir, container_name, exposed_ports=None, cpu_limit=None, preexec_fn=None, ro_volumes=None, env_variables=None, *args, **kwargs):
_logger.info('Docker Fake Run: %s', run_cmd)
open(os.path.join(build_dir, 'exist-%s' % container_name), 'w').write('fake end')
open(os.path.join(build_dir, 'start-%s' % container_name), 'w').write('fake start\n')
open(os.path.join(build_dir, 'end-%s' % container_name), 'w').write('fake end')
with open(log_path, 'w') as log_file:
log_file.write('Fake docker_run started\n')
log_file.write('run_cmd: %s\n' % run_cmd)
log_file.write('build_dir: %s\n' % build_dir)
log_file.write('container_name: %s\n' % container_name)
log_file.write('.modules.loading: Modules loaded.\n')
log_file.write('Initiating shutdown\n')
docker_run = fake_docker_run

View File

@ -1,5 +0,0 @@
# -*- coding: utf-8 -*-
from . import frontend
from . import hook
from . import badge

View File

@ -1,90 +0,0 @@
# -*- coding: utf-8 -*-
import hashlib
import werkzeug
from matplotlib.font_manager import FontProperties
from matplotlib.textpath import TextToPath
from odoo.http import request, route, Controller
class RunbotBadge(Controller):
@route([
'/runbot/badge/<int:repo_id>/<name>.svg',
'/runbot/badge/trigger/<int:trigger_id>/<name>.svg',
'/runbot/badge/<any(default,flat):theme>/<int:repo_id>/<name>.svg',
'/runbot/badge/trigger/<any(default,flat):theme>/<int:trigger_id>/<name>.svg',
], type="http", auth="public", methods=['GET', 'HEAD'], sitemap=False)
def badge(self, name, repo_id=False, trigger_id=False, theme='default'):
# Sudo is used here to allow the badge to be returned for projects
# which have restricted permissions.
Trigger = request.env['runbot.trigger'].sudo()
Repo = request.env['runbot.repo'].sudo()
Batch = request.env['runbot.batch'].sudo()
Bundle = request.env['runbot.bundle'].sudo()
if trigger_id:
triggers = Trigger.browse(trigger_id)
project = triggers.project_id
else:
triggers = Trigger.search([('repo_ids', 'in', repo_id)])
project = Repo.browse(repo_id).project_id
# -> hack to use repo. Would be better to change logic and use a trigger_id in params
bundle = Bundle.search([('name', '=', name),
('project_id', '=', project.id)])
if not bundle or not triggers:
return request.not_found()
batch = Batch.search([
('bundle_id', '=', bundle.id),
('state', '=', 'done'),
('category_id', '=', request.env.ref('runbot.default_category').id)
], order='id desc', limit=1)
builds = batch.slot_ids.filtered(lambda s: s.trigger_id in triggers).mapped('build_id')
if not builds:
state = 'testing'
else:
result = builds.result_multi()
if result == 'ok':
state = 'success'
elif result == 'warn':
state = 'warning'
else:
state = 'failed'
etag = request.httprequest.headers.get('If-None-Match')
retag = hashlib.md5(state.encode()).hexdigest()
if etag == retag:
return werkzeug.wrappers.Response(status=304)
# from https://github.com/badges/shields/blob/master/colorscheme.json
color = {
'testing': "#dfb317",
'success': "#4c1",
'failed': "#e05d44",
'warning': "#fe7d37",
}[state]
def text_width(s):
fp = FontProperties(family='DejaVu Sans', size=11)
w, h, d = TextToPath().get_text_width_height_descent(s, fp, False)
return int(w + 1)
class Text(object):
__slots__ = ['text', 'color', 'width']
def __init__(self, text, color):
self.text = text
self.color = color
self.width = text_width(text) + 10
data = {
'left': Text(name, '#555'),
'right': Text(state, color),
}
headers = [
('Content-Type', 'image/svg+xml'),
('Cache-Control', 'max-age=%d' % (10*60,)),
('ETag', retag),
]
return request.render("runbot.badge_" + theme, data, headers=headers)

View File

@ -1,570 +0,0 @@
# -*- coding: utf-8 -*-
import datetime
import werkzeug
import logging
import functools
import werkzeug.utils
import werkzeug.urls
from collections import defaultdict, OrderedDict
from werkzeug.exceptions import NotFound, Forbidden
from odoo.addons.http_routing.models.ir_http import slug
from odoo.addons.website.controllers.main import QueryURL
from odoo.http import Controller, Response, request, route as o_route
from odoo.osv import expression
_logger = logging.getLogger(__name__)
def route(routes, **kw):
def decorator(f):
@o_route(routes, **kw)
@functools.wraps(f)
def response_wrap(*args, **kwargs):
projects = request.env['runbot.project'].search([])
more = request.httprequest.cookies.get('more', False) == '1'
filter_mode = request.httprequest.cookies.get('filter_mode', 'all')
keep_search = request.httprequest.cookies.get('keep_search', False) == '1'
cookie_search = request.httprequest.cookies.get('search', '')
refresh = kwargs.get('refresh', False)
nb_build_errors = request.env['runbot.build.error'].search_count([('random', '=', True), ('parent_id', '=', False)])
nb_assigned_errors = request.env['runbot.build.error'].search_count([('responsible', '=', request.env.user.id)])
kwargs['more'] = more
kwargs['projects'] = projects
response = f(*args, **kwargs)
if isinstance(response, Response):
if keep_search and cookie_search and 'search' not in kwargs:
search = cookie_search
else:
search = kwargs.get('search', '')
if keep_search and cookie_search != search:
response.set_cookie('search', search)
project = response.qcontext.get('project') or projects and projects[0]
response.qcontext['projects'] = projects
response.qcontext['more'] = more
response.qcontext['keep_search'] = keep_search
response.qcontext['search'] = search
response.qcontext['current_path'] = request.httprequest.full_path
response.qcontext['refresh'] = refresh
response.qcontext['filter_mode'] = filter_mode
response.qcontext['default_category'] = request.env['ir.model.data']._xmlid_to_res_id('runbot.default_category')
response.qcontext['qu'] = QueryURL('/runbot/%s' % (slug(project) if project else ''), path_args=['search'], search=search, refresh=refresh)
if 'title' not in response.qcontext:
response.qcontext['title'] = 'Runbot %s' % project.name or ''
response.qcontext['nb_build_errors'] = nb_build_errors
response.qcontext['nb_assigned_errors'] = nb_assigned_errors
return response
return response_wrap
return decorator
class Runbot(Controller):
def _pending(self):
ICP = request.env['ir.config_parameter'].sudo().get_param
warn = int(ICP('runbot.pending.warning', 5))
crit = int(ICP('runbot.pending.critical', 12))
pending_count = request.env['runbot.build'].search_count([('local_state', '=', 'pending'), ('build_type', '!=', 'scheduled')])
scheduled_count = request.env['runbot.build'].search_count([('local_state', '=', 'pending'), ('build_type', '=', 'scheduled')])
level = ['info', 'warning', 'danger'][int(pending_count > warn) + int(pending_count > crit)]
return pending_count, level, scheduled_count
@o_route([
'/runbot/submit'
], type='http', auth="public", methods=['GET', 'POST'], csrf=False)
def submit(self, more=False, redirect='/', keep_search=False, category=False, filter_mode=False, update_triggers=False, **kwargs):
response = werkzeug.utils.redirect(redirect)
response.set_cookie('more', '1' if more else '0')
response.set_cookie('keep_search', '1' if keep_search else '0')
response.set_cookie('filter_mode', filter_mode or 'all')
response.set_cookie('category', category or '0')
if update_triggers:
enabled_triggers = []
project_id = int(update_triggers)
for key in kwargs.keys():
if key.startswith('trigger_'):
enabled_triggers.append(key.replace('trigger_', ''))
key = 'trigger_display_%s' % project_id
if len(request.env['runbot.trigger'].search([('project_id', '=', project_id)])) == len(enabled_triggers):
response.delete_cookie(key)
else:
response.set_cookie(key, '-'.join(enabled_triggers))
return response
@route(['/',
'/runbot',
'/runbot/<model("runbot.project"):project>',
'/runbot/<model("runbot.project"):project>/search/<search>'], website=True, auth='public', type='http')
def bundles(self, project=None, search='', projects=False, refresh=False, **kwargs):
search = search if len(search) < 60 else search[:60]
env = request.env
categories = env['runbot.category'].search([])
if not project and projects:
project = projects[0]
pending_count, level, scheduled_count = self._pending()
context = {
'categories': categories,
'search': search,
'message': request.env['ir.config_parameter'].sudo().get_param('runbot.runbot_message'),
'pending_total': pending_count,
'pending_level': level,
'scheduled_count': scheduled_count,
'hosts_data': request.env['runbot.host'].search([('assigned_only', '=', False)]),
}
if project:
domain = [('last_batch', '!=', False), ('project_id', '=', project.id), ('no_build', '=', False)]
filter_mode = request.httprequest.cookies.get('filter_mode', False)
if filter_mode == 'sticky':
domain.append(('sticky', '=', True))
elif filter_mode == 'nosticky':
domain.append(('sticky', '=', False))
if search:
search_domains = []
pr_numbers = []
for search_elem in search.split("|"):
if search_elem.isnumeric():
pr_numbers.append(int(search_elem))
search_domains.append([('name', 'like', search_elem)])
if pr_numbers:
res = request.env['runbot.branch'].search([('name', 'in', pr_numbers)])
if res:
search_domains.append([('id', 'in', res.mapped('bundle_id').ids)])
search_domain = expression.OR(search_domains)
domain = expression.AND([domain, search_domain])
e = expression.expression(domain, request.env['runbot.bundle'])
query = e.query
query.order = """
(case when "runbot_bundle".sticky then 1 when "runbot_bundle".sticky is null then 2 else 2 end),
case when "runbot_bundle".sticky then "runbot_bundle".version_number end collate "C" desc,
"runbot_bundle".last_batch desc
"""
query.limit=40
bundles = env['runbot.bundle'].browse(query)
category_id = int(request.httprequest.cookies.get('category') or 0) or request.env['ir.model.data']._xmlid_to_res_id('runbot.default_category')
trigger_display = request.httprequest.cookies.get('trigger_display_%s' % project.id, None)
if trigger_display is not None:
trigger_display = [int(td) for td in trigger_display.split('-') if td]
bundles = bundles.with_context(category_id=category_id)
triggers = env['runbot.trigger'].search([('project_id', '=', project.id)])
context.update({
'active_category_id': category_id,
'bundles': bundles,
'project': project,
'triggers': triggers,
'trigger_display': trigger_display,
})
context.update({'message': request.env['ir.config_parameter'].sudo().get_param('runbot.runbot_message')})
res = request.render('runbot.bundles', context)
return res
@route([
'/runbot/bundle/<model("runbot.bundle"):bundle>',
'/runbot/bundle/<model("runbot.bundle"):bundle>/page/<int:page>'
], website=True, auth='public', type='http', sitemap=False)
def bundle(self, bundle=None, page=1, limit=50, **kwargs):
domain = [('bundle_id', '=', bundle.id), ('hidden', '=', False)]
batch_count = request.env['runbot.batch'].search_count(domain)
pager = request.website.pager(
url='/runbot/bundle/%s' % bundle.id,
total=batch_count,
page=page,
step=50,
)
batchs = request.env['runbot.batch'].search(domain, limit=limit, offset=pager.get('offset', 0), order='id desc')
context = {
'bundle': bundle,
'batchs': batchs,
'pager': pager,
'project': bundle.project_id,
'title': 'Bundle %s' % bundle.name
}
return request.render('runbot.bundle', context)
@o_route([
'/runbot/bundle/<model("runbot.bundle"):bundle>/force',
'/runbot/bundle/<model("runbot.bundle"):bundle>/force/<int:auto_rebase>',
], type='http', auth="user", methods=['GET', 'POST'], csrf=False)
def force_bundle(self, bundle, auto_rebase=False, **_post):
_logger.info('user %s forcing bundle %s', request.env.user.name, bundle.name) # user must be able to read bundle
batch = bundle.sudo()._force()
batch._log('Batch forced by %s', request.env.user.name)
batch._prepare(auto_rebase)
return werkzeug.utils.redirect('/runbot/batch/%s' % batch.id)
@route(['/runbot/batch/<int:batch_id>'], website=True, auth='public', type='http', sitemap=False)
def batch(self, batch_id=None, **kwargs):
batch = request.env['runbot.batch'].browse(batch_id)
context = {
'batch': batch,
'project': batch.bundle_id.project_id,
'title': 'Batch %s (%s)' % (batch.id, batch.bundle_id.name)
}
return request.render('runbot.batch', context)
@o_route(['/runbot/batch/slot/<model("runbot.batch.slot"):slot>/build'], auth='user', type='http')
def slot_create_build(self, slot=None, **kwargs):
build = slot.sudo()._create_missing_build()
return werkzeug.utils.redirect('/runbot/build/%s' % build.id)
@route(['/runbot/commit/<model("runbot.commit"):commit>'], website=True, auth='public', type='http', sitemap=False)
def commit(self, commit=None, **kwargs):
status_list = request.env['runbot.commit.status'].search([('commit_id', '=', commit.id)], order='id desc')
last_status_by_context = dict()
for status in status_list:
if status.context in last_status_by_context:
continue
last_status_by_context[status.context] = status
context = {
'commit': commit,
'project': commit.repo_id.project_id,
'reflogs': request.env['runbot.ref.log'].search([('commit_id', '=', commit.id)]),
'status_list': status_list,
'last_status_by_context': last_status_by_context,
'title': 'Commit %s' % commit.name[:8]
}
return request.render('runbot.commit', context)
@o_route(['/runbot/commit/resend/<int:status_id>'], website=True, auth='user', type='http')
def resend_status(self, status_id=None, **kwargs):
CommitStatus = request.env['runbot.commit.status']
status = CommitStatus.browse(status_id)
if not status.exists():
raise NotFound()
last_status = CommitStatus.search([('commit_id', '=', status.commit_id.id), ('context', '=', status.context)], order='id desc', limit=1)
if status != last_status:
raise Forbidden("Only the last status can be resent")
if not last_status.sent_date or (datetime.datetime.now() - last_status.sent_date).seconds > 60: # ensure at least 60sec between two resend
new_status = status.sudo().copy()
new_status.description = 'Status resent by %s' % request.env.user.name
new_status._send()
_logger.info('github status %s resent by %s', status_id, request.env.user.name)
return werkzeug.utils.redirect('/runbot/commit/%s' % status.commit_id.id)
@o_route([
'/runbot/build/<int:build_id>/<operation>',
], type='http', auth="public", methods=['POST'], csrf=False)
def build_operations(self, build_id, operation, **post):
build = request.env['runbot.build'].sudo().browse(build_id)
if operation == 'rebuild':
build = build._rebuild()
elif operation == 'kill':
build._ask_kill()
elif operation == 'wakeup':
build._wake_up()
return str(build.id)
@route([
'/runbot/build/<int:build_id>',
'/runbot/batch/<int:from_batch>/build/<int:build_id>'
], type='http', auth="public", website=True, sitemap=False)
def build(self, build_id, search=None, from_batch=None, **post):
"""Events/Logs"""
if from_batch:
from_batch = request.env['runbot.batch'].browse(int(from_batch))
if build_id not in from_batch.with_context(active_test=False).slot_ids.build_id.ids:
# the url may have been forged replacing the build id, redirect to hide the batch
return werkzeug.utils.redirect('/runbot/build/%s' % build_id)
from_batch = from_batch.with_context(batch=from_batch)
Build = request.env['runbot.build'].with_context(batch=from_batch)
build = Build.browse([build_id])[0]
if not build.exists():
return request.not_found()
siblings = (build.parent_id.children_ids if build.parent_id else from_batch.slot_ids.build_id if from_batch else build).sorted('id')
context = {
'build': build,
'from_batch': from_batch,
'project': build.params_id.trigger_id.project_id,
'title': 'Build %s' % build.id,
'siblings': siblings,
# following logic is not the most efficient but good enough
'prev_ko': next((b for b in reversed(siblings) if b.id < build.id and b.global_result != 'ok'), Build),
'prev_bu': next((b for b in reversed(siblings) if b.id < build.id), Build),
'next_bu': next((b for b in siblings if b.id > build.id), Build),
'next_ko': next((b for b in siblings if b.id > build.id and b.global_result != 'ok'), Build),
}
return request.render("runbot.build", context)
@route([
'/runbot/build/search',
], website=True, auth='public', type='http', sitemap=False)
def builds(self, **kwargs):
domain = []
for key in ('config_id', 'version_id', 'project_id', 'trigger_id', 'create_batch_id.bundle_id', 'create_batch_id'): # allowed params
value = kwargs.get(key)
if value:
domain.append((f'params_id.{key}', '=', int(value)))
for key in ('global_state', 'local_state', 'global_result', 'local_result'):
value = kwargs.get(key)
if value:
domain.append((f'{key}', '=', value))
for key in ('description',):
if key in kwargs:
domain.append((f'{key}', 'ilike', kwargs.get(key)))
context = {
'builds': request.env['runbot.build'].search(domain, limit=100),
}
return request.render('runbot.build_search', context)
@route([
'/runbot/branch/<model("runbot.branch"):branch>',
], website=True, auth='public', type='http', sitemap=False)
def branch(self, branch=None, **kwargs):
pr_branch = branch.bundle_id.branch_ids.filtered(lambda rec: not rec.is_pr and rec.id != branch.id and rec.remote_id.repo_id == branch.remote_id.repo_id)[:1]
branch_pr = branch.bundle_id.branch_ids.filtered(lambda rec: rec.is_pr and rec.id != branch.id and rec.remote_id.repo_id == branch.remote_id.repo_id)[:1]
context = {
'branch': branch,
'project': branch.remote_id.repo_id.project_id,
'title': 'Branch %s' % branch.name,
'pr_branch': pr_branch,
'branch_pr': branch_pr
}
return request.render('runbot.branch', context)
@route([
'/runbot/glances',
'/runbot/glances/<int:project_id>'
], type='http', auth='public', website=True, sitemap=False)
def glances(self, project_id=None, **kwargs):
project_ids = [project_id] if project_id else request.env['runbot.project'].search([]).ids # search for access rights
bundles = request.env['runbot.bundle'].search([('sticky', '=', True), ('project_id', 'in', project_ids)])
pending = self._pending()
qctx = {
'pending_total': pending[0],
'pending_level': pending[1],
'bundles': bundles,
'title': 'Glances'
}
return request.render("runbot.glances", qctx)
@route(['/runbot/monitoring',
'/runbot/monitoring/<int:category_id>',
'/runbot/monitoring/<int:category_id>/<int:view_id>'], type='http', auth='user', website=True, sitemap=False)
def monitoring(self, category_id=None, view_id=None, **kwargs):
pending = self._pending()
hosts_data = request.env['runbot.host'].search([])
if category_id:
category = request.env['runbot.category'].browse(category_id)
assert category.exists()
else:
category = request.env.ref('runbot.nightly_category')
category_id = category.id
bundles = request.env['runbot.bundle'].search([('sticky', '=', True)]) # NOTE we don't filter on project
qctx = {
'category': category,
'pending_total': pending[0],
'pending_level': pending[1],
'scheduled_count': pending[2],
'bundles': bundles,
'hosts_data': hosts_data,
'auto_tags': request.env['runbot.build.error'].disabling_tags(),
'build_errors': request.env['runbot.build.error'].search([('random', '=', True)]),
'kwargs': kwargs,
'title': 'monitoring'
}
return request.render(view_id if view_id else "runbot.monitoring", qctx)
@route(['/runbot/errors',
'/runbot/errors/page/<int:page>'
], type='http', auth='user', website=True, sitemap=False)
def build_errors(self, sort=None, page=1, limit=20, **kwargs):
sort_order_choices = {
'last_seen_date desc': 'Last seen date: Newer First',
'last_seen_date asc': 'Last seen date: Older First',
'build_count desc': 'Number seen: High to Low',
'build_count asc': 'Number seen: Low to High',
'responsible asc': 'Assignee: A - Z',
'responsible desc': 'Assignee: Z - A',
'module_name asc': 'Module name: A - Z',
'module_name desc': 'Module name: Z - A'
}
sort_order = sort if sort in sort_order_choices else 'last_seen_date desc'
current_user_errors = request.env['runbot.build.error'].search([
('responsible', '=', request.env.user.id),
('parent_id', '=', False),
], order='last_seen_date desc, build_count desc')
domain = [('parent_id', '=', False), ('responsible', '!=', request.env.user.id), ('build_count', '>', 1)]
build_errors_count = request.env['runbot.build.error'].search_count(domain)
url_args = {}
url_args['sort'] = sort
pager = request.website.pager(url='/runbot/errors/', url_args=url_args, total=build_errors_count, page=page, step=limit)
build_errors = request.env['runbot.build.error'].search(domain, order=sort_order, limit=limit, offset=pager.get('offset', 0))
qctx = {
'current_user_errors': current_user_errors,
'build_errors': build_errors,
'title': 'Build Errors',
'sort_order_choices': sort_order_choices,
'pager': pager
}
return request.render('runbot.build_error', qctx)
@route(['/runbot/teams', '/runbot/teams/<model("runbot.team"):team>',], type='http', auth='user', website=True, sitemap=False)
def team_dashboards(self, team=None, hide_empty=False, **kwargs):
teams = request.env['runbot.team'].search([]) if not team else None
domain = [('id', 'in', team.build_error_ids.ids)] if team else []
# Sort & Filter
sortby = kwargs.get('sortby', 'count')
filterby = kwargs.get('filterby', 'not_one')
searchbar_sortings = {
'date': {'label': 'Recently Seen', 'order': 'last_seen_date desc'},
'count': {'label': 'Nb Seen', 'order': 'build_count desc'},
}
order = searchbar_sortings[sortby]['order']
searchbar_filters = {
'all': {'label': 'All', 'domain': []},
'unassigned': {'label': 'Unassigned', 'domain': [('responsible', '=', False)]},
'not_one': {'label': 'Seen more than once', 'domain': [('build_count', '>', 1)]},
}
domain = expression.AND([domain, searchbar_filters[filterby]['domain']])
qctx = {
'team': team,
'teams': teams,
'build_error_ids': request.env['runbot.build.error'].search(domain, order=order),
'hide_empty': bool(hide_empty),
'searchbar_sortings': searchbar_sortings,
'sortby': sortby,
'searchbar_filters': OrderedDict(sorted(searchbar_filters.items())),
'filterby': filterby,
'default_url': request.httprequest.path,
}
return request.render('runbot.team', qctx)
@route(['/runbot/dashboards/<model("runbot.dashboard"):dashboard>',], type='http', auth='user', website=True, sitemap=False)
def dashboards(self, dashboard=None, hide_empty=False, **kwargs):
qctx = {
'dashboard': dashboard,
'hide_empty': bool(hide_empty),
}
return request.render('runbot.dashboard_page', qctx)
@route(['/runbot/build/stats/<int:build_id>'], type='http', auth="public", website=True, sitemap=False)
def build_stats(self, build_id, search=None, **post):
"""Build statistics"""
Build = request.env['runbot.build']
build = Build.browse([build_id])[0]
if not build.exists():
return request.not_found()
build_stats = defaultdict(dict)
for stat in build.stat_ids:
for module, value in sorted(stat.values.items(), key=lambda item: item[1], reverse=True):
build_stats[stat.category][module] = value
context = {
'build': build,
'build_stats': build_stats,
'project': build.params_id.trigger_id.project_id,
'title': 'Build %s statistics' % build.id
}
return request.render("runbot.build_stats", context)
@route(['/runbot/stats/'], type='json', auth="public", website=False, sitemap=False)
def stats_json(self, bundle_id=False, trigger_id=False, key_category='', center_build_id=False, limit=100, search=None, **post):
""" Json stats """
trigger_id = trigger_id and int(trigger_id)
bundle_id = bundle_id and int(bundle_id)
center_build_id = center_build_id and int(center_build_id)
limit = min(int(limit), 1000)
trigger = request.env['runbot.trigger'].browse(trigger_id)
bundle = request.env['runbot.bundle'].browse(bundle_id)
if not trigger_id or not bundle_id or not trigger.exists() or not bundle.exists():
return request.not_found()
builds_domain = [
('global_state', 'in', ('running', 'done')),
('slot_ids.batch_id.bundle_id', '=', bundle_id),
('params_id.trigger_id', '=', trigger.id),
]
builds = request.env['runbot.build'].with_context(active_test=False)
if center_build_id:
builds = builds.search(
expression.AND([builds_domain, [('id', '>=', center_build_id)]]),
order='id', limit=limit/2)
builds_domain = expression.AND([builds_domain, [('id', '<=', center_build_id)]])
limit -= len(builds)
builds |= builds.search(builds_domain, order='id desc', limit=limit)
if not builds:
return {}
builds = builds.search([('id', 'child_of', builds.ids)])
parents = {b.id: b.top_parent.id for b in builds.with_context(prefetch_fields=False)}
request.env.cr.execute("SELECT build_id, values FROM runbot_build_stat WHERE build_id IN %s AND category = %s", [tuple(builds.ids), key_category]) # read manually is way faster than using orm
res = {}
for (build_id, values) in request.env.cr.fetchall():
if values:
res.setdefault(parents[build_id], {}).update(values)
# we need to update here to manage the post install case: we want to combine stats from all post_install children.
return res
@route(['/runbot/stats/<model("runbot.bundle"):bundle>/<model("runbot.trigger"):trigger>'], type='http', auth="public", website=True, sitemap=False)
def modules_stats(self, bundle, trigger, search=None, **post):
"""Modules statistics"""
categories = request.env['runbot.build.stat.regex'].search([]).mapped('name')
context = {
'stats_categories': categories,
'bundle': bundle,
'trigger': trigger,
}
return request.render("runbot.modules_stats", context)
@route(['/runbot/load_info'], type='http', auth="user", website=True, sitemap=False)
def load_infos(self, **post):
build_by_bundle = {}
for build in request.env['runbot.build'].search([('local_state', 'in', ('pending', 'testing'))], order='id'):
build_by_bundle.setdefault(build.params_id.create_batch_id.bundle_id, []).append(build)
build_by_bundle = list(build_by_bundle.items())
build_by_bundle.sort(key=lambda x: -len(x[1]))
pending_count, level, scheduled_count = self._pending()
context = {
'build_by_bundle': build_by_bundle,
'pending_total': pending_count,
'pending_level': level,
'scheduled_count': scheduled_count,
'hosts_data': request.env['runbot.host'].search([('assigned_only', '=', False)]),
}
return request.render("runbot.load_info", context)

View File

@ -1,53 +0,0 @@
# -*- coding: utf-8 -*-
import time
import json
import logging
from odoo import http
from odoo.http import request
_logger = logging.getLogger(__name__)
class Hook(http.Controller):
@http.route(['/runbot/hook', '/runbot/hook/<int:remote_id>'], type='http', auth="public", website=True, csrf=False)
def hook(self, remote_id=None, **_post):
event = request.httprequest.headers.get("X-Github-Event")
payload = json.loads(request.params.get('payload', '{}'))
if remote_id is None:
repo_data = payload.get('repository')
if repo_data:
remote_domain = [
'|', '|', '|',
('name', '=', repo_data['ssh_url']),
('name', '=', repo_data['ssh_url'].replace('.git', '')),
('name', '=', repo_data['clone_url']),
('name', '=', repo_data['clone_url'].replace('.git', '')),
]
remote = request.env['runbot.remote'].sudo().search(
remote_domain, limit=1)
remote_id = remote.id
if not remote_id:
_logger.error("Remote %s not found", repo_data['ssh_url'])
remote = request.env['runbot.remote'].sudo().browse(remote_id)
_logger.info('Remote found %s', remote)
# force update of dependencies too in case a hook is lost
if not payload or event == 'push':
remote.repo_id.set_hook_time(time.time())
elif event == 'pull_request':
pr_number = payload.get('pull_request', {}).get('number', '')
branch = request.env['runbot.branch'].sudo().search([('remote_id', '=', remote.id), ('name', '=', pr_number)])
branch.recompute_infos(payload.get('pull_request', {}))
if payload.get('action') in ('synchronize', 'opened', 'reopened'):
remote.repo_id.set_hook_time(time.time())
# remaining recurrent actions: labeled, review_requested, review_request_removed
elif event == 'delete':
if payload.get('ref_type') == 'branch':
branch_ref = payload.get('ref')
_logger.info('Branch %s in repo %s was deleted', branch_ref, remote.repo_id.name)
branch = request.env['runbot.branch'].sudo().search([('remote_id', '=', remote.id), ('name', '=', branch_ref)])
branch.alive = False
return ""

View File

@ -1,22 +0,0 @@
<odoo>
<record model="ir.actions.server" id="action_parse_build_logs">
<field name="name">Parse build logs</field>
<field name="model_id" ref="runbot.model_runbot_build" />
<field name="binding_model_id" ref="runbot.model_runbot_build" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
action = records._parse_logs()
</field>
</record>
<record model="ir.actions.server" id="action_parse_log">
<field name="name">Parse log entry</field>
<field name="model_id" ref="runbot.model_runbot_error_log" />
<field name="binding_model_id" ref="runbot.model_runbot_error_log" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
action = records._parse_logs()
</field>
</record>
</odoo>

View File

@ -1,9 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<record model="runbot.dockerfile" id="runbot.docker_default">
<field name="name">Docker Default</field>
<field name="template_id" ref="runbot.docker_base"/>
<field name="to_build">True</field>
<field name="description">Default Dockerfile for latest Odoo versions.</field>
</record>
</odoo>

View File

@ -1,22 +0,0 @@
<odoo>
<record model="ir.actions.server" id="action_link_build_errors">
<field name="name">Link build errors</field>
<field name="model_id" ref="runbot.model_runbot_build_error" />
<field name="binding_model_id" ref="runbot.model_runbot_build_error" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
records.link_errors()
</field>
</record>
<record model="ir.actions.server" id="action_clean_build_errors">
<field name="name">Re-clean build errors</field>
<field name="model_id" ref="runbot.model_runbot_build_error" />
<field name="binding_model_id" ref="runbot.model_runbot_build_error" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
records.clean_content()
</field>
</record>
</odoo>

View File

@ -1,160 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<data noupdate="1">
<record id="runbot_build_config_step_test_base" model="runbot.build.config.step">
<field name="name">base</field>
<field name="install_modules">-*,base</field>
<field name="cpu_limit">600</field>
<field name="test_enable" eval="False"/>
<field name="protected" eval="True"/>
<field name="default_sequence">10</field>
</record>
<record id="runbot_build_config_step_test_all" model="runbot.build.config.step">
<field name="name">all</field>
<field name="install_modules"></field>
<field name="test_enable" eval="True"/>
<field name="protected" eval="True"/>
<field name="default_sequence">20</field>
</record>
<record id="runbot_build_config_step_run" model="runbot.build.config.step">
<field name="name">run</field>
<field name="job_type">run_odoo</field>
<field name="protected" eval="True"/>
<field name="default_sequence">1000</field>
</record>
<record id="runbot_build_config_default" model="runbot.build.config">
<field name="name">Default</field>
<field name="step_order_ids" eval="[(5,0,0),
(0, 0, {'step_id': ref('runbot_build_config_step_test_base')}),
(0, 0, {'step_id': ref('runbot_build_config_step_test_all')}),
(0, 0, {'step_id': ref('runbot_build_config_step_run')})]"/>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_default_no_run" model="runbot.build.config">
<field name="name">Default no run</field>
<field name="step_order_ids" eval="[(5,0,0),
(0, 0, {'step_id': ref('runbot_build_config_step_test_base')}),
(0, 0, {'step_id': ref('runbot_build_config_step_test_all')})]"/>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_light_test" model="runbot.build.config">
<field name="name">All only</field>
<field name="description">Test all only, usefull for multibuild</field>
<field name="step_order_ids" eval="[(5,0,0), (0, 0, {'step_id': ref('runbot_build_config_step_test_all')})]"/>
<field name="protected" eval="True"/>
</record>
<!-- Coverage-->
<record id="runbot_build_config_step_test_coverage" model="runbot.build.config.step">
<field name="name">coverage</field>
<field name="install_modules"></field>
<field name="cpu_limit">7000</field>
<field name="test_enable" eval="True"/>
<field name="coverage" eval="True"/>
<field name="protected" eval="True"/>
<field name="default_sequence">30</field>
</record>
<record id="runbot_build_config_test_coverage" model="runbot.build.config">
<field name="name">Coverage</field>
<field name="step_order_ids" eval="[(5,0,0), (0, 0, {'step_id': ref('runbot_build_config_step_test_coverage')})]"/>
<field name="protected" eval="True"/>
</record>
<!-- Multi build-->
<record id="runbot_build_config_step_create_light_multi" model="runbot.build.config.step">
<field name="name">create_light_multi</field>
<field name="job_type">create_build</field>
<field name="create_config_ids" eval="[(4, ref('runbot_build_config_light_test'))]"/>
<field name="number_builds">20</field>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_multibuild" model="runbot.build.config">
<field name="name">Multi build</field>
<field name="description">Run 20 children build with the same hash and dependencies. Use to detect undeterministic issues</field>
<field name="step_order_ids" eval="[(5,0,0), (0, 0, {'step_id': ref('runbot_build_config_step_create_light_multi')})]"/>
<field name="protected" eval="True"/>
</record>
<!-- l10n -->
<record id="runbot_build_config_step_test_l10n" model="runbot.build.config.step">
<field name="name">l10n</field>
<field name="install_modules"></field>
<field name="test_enable" eval="True"/>
<field name="protected" eval="True"/>
<field name="default_sequence">30</field>
<field name="test_tags">l10nall</field>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_l10n" model="runbot.build.config">
<field name="name">L10n</field>
<field name="description">A simple test_all with a l10n test_tags</field>
<field name="step_order_ids" eval="[(5,0,0), (0, 0, {'step_id': ref('runbot_build_config_step_test_l10n')})]"/>
<field name="protected" eval="True"/>
</record>
<!-- Click all-->
<record id="runbot_build_config_step_test_click_all" model="runbot.build.config.step">
<field name="name">clickall</field>
<field name="install_modules"></field>
<field name="cpu_limit">5400</field>
<field name="test_enable" eval="True"/>
<field name="protected" eval="True"/>
<field name="default_sequence">40</field>
<field name="test_tags">click_all</field>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_click_all" model="runbot.build.config">
<field name="name">Click All</field>
<field name="description">Used for nightly click all, test all filters and menus.</field>
<field name="step_order_ids" eval="[(5,0,0), (0, 0, {'step_id': ref('runbot_build_config_step_test_click_all')})]"/>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_step_restore" model="runbot.build.config.step">
<field name="name">restore</field>
<field name="job_type">restore</field>
<field name="default_sequence">2</field>
</record>
<record id="runbot_build_config_step_test_only" model="runbot.build.config.step">
<field name="name">test_only</field>
<field name="custom_db_name">all</field>
<field name="create_db" eval="False"/>
<field name="install_modules">-*</field>
<field name="test_enable" eval="True"/>
<field name="protected" eval="True"/>
<field name="default_sequence">30</field>
</record>
<record id="runbot_build_config_restore_and_test" model="runbot.build.config">
<field name="name">Restore and Test</field>
<field name="step_order_ids" eval="[(5,0,0),
(0, 0, {'step_id': ref('runbot_build_config_step_restore')}),
(0, 0, {'step_id': ref('runbot_build_config_step_test_only')})]"/>
<field name="protected" eval="True"/>
</record>
<!-- Multi build custom-->
<record id="runbot_build_config_step_custom_multi_create" model="runbot.build.config.step">
<field name="name">custom_create_multi</field>
<field name="job_type">create_build</field>
<field name="create_config_ids" eval="[(4, ref('runbot_build_config_restore_and_test'))]"/>
<field name="number_builds">1</field>
<field name="protected" eval="True"/>
</record>
<record id="runbot_build_config_custom_multi" model="runbot.build.config">
<field name="name">Custom Multi</field>
<field name="description">Generic multibuild to use with custom trigger wizard</field>
<field name="step_order_ids" eval="[(5,0,0), (0, 0, {'step_id': ref('runbot_build_config_step_create_light_multi')})]"/>
<field name="protected" eval="True"/>
</record>
</data>
</odoo>

View File

@ -1,111 +0,0 @@
<odoo>
<record model="runbot.category" id="runbot.default_category">
<field name="name">Default</field>
<field name="icon">gear</field>
</record>
<record model="runbot.category" id="runbot.nightly_category">
<field name="name">Nightly</field>
<field name="icon">moon-o</field>
</record>
<record model="runbot.category" id="runbot.weekly_category">
<field name="name">Weekly</field>
<field name="icon">tasks</field>
</record>
<record model="runbot.project" id="runbot.main_project">
<field name="name">R&amp;D</field>
</record>
<data noupdate="1">
<record model="runbot.bundle" id="runbot.bundle_master" >
<field name="name">master</field>
<field name="is_base">True</field>
<field name="project_id" ref="runbot.main_project"/>
</record>
<record model="runbot.bundle" id="runbot.bundle_dummy">
<field name="name">Dummy</field>
<field name="no_build">True</field>
<field name="project_id" ref="runbot.main_project"/>
</record>
<record model="ir.config_parameter" id="runbot.runbot_upgrade_exception_message">
<field name="key">runbot.runbot_upgrade_exception_message</field>
<field name="value">Upgrade exception [#{exception.id}]({base_url}/web/#id={exception.id}&amp;view_type=form&amp;model=runbot.upgrade.exception) added\
{exception.elements}
</field>
</record>
<record model="ir.config_parameter" id="runbot.runbot_default_odoorc">
<field name="key">runbot.runbot_default_odoorc</field>
<field name="value">[options]\nadmin_passwd=running_master_password</field>
</record>
</data>
<record model="ir.config_parameter" id="runbot.runbot_is_base_regex">
<field name="key">runbot.runbot_is_base_regex</field>
<field name="value">^((master)|(saas-)?\d+\.\d+)$</field>
</record>
<record model="ir.actions.server" id="action_toggle_is_base">
<field name="name">Mark is base</field>
<field name="model_id" ref="runbot.model_runbot_bundle" />
<field name="binding_model_id" ref="runbot.model_runbot_bundle" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
records.write({'is_base': True})
</field>
</record>
<record model="ir.actions.server" id="action_mark_no_build">
<field name="name">Mark no build</field>
<field name="model_id" ref="runbot.model_runbot_bundle" />
<field name="binding_model_id" ref="runbot.model_runbot_bundle" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
records.write({'no_build': True})
</field>
</record>
<record model="ir.actions.server" id="action_mark_build">
<field name="name">Mark build</field>
<field name="model_id" ref="runbot.model_runbot_bundle" />
<field name="binding_model_id" ref="runbot.model_runbot_bundle" />
<field name="type">ir.actions.server</field>
<field name="state">code</field>
<field name="code">
records.write({'no_build': False})
</field>
</record>
<record id="ir_cron_runbot" model="ir.cron">
<field name="name">Runbot</field>
<field name="active" eval="False"/>
<field name="interval_number">10</field>
<field name="interval_type">seconds</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="model_id" ref="model_runbot_runbot"/>
<field name="code">model._cron()</field>
<field name="state">code</field>
</record>
<record id="bundle_create" model="base.automation">
<field name="name">Base, staging and tmp management</field>
<field name="model_id" ref="runbot.model_runbot_bundle"/>
<field name="trigger">on_create</field>
<field name="active" eval="True"/>
<field name="state">code</field>
<field name="code">
if record.name.startswith('tmp.'):
record['no_build'] = True
elif record.name.startswith('staging.'):
name = record.name.replace('staging.', '')
base = record.env['runbot.bundle'].search([('name', '=', name), ('project_id', '=', record.project_id.id), ('is_base', '=', True)], limit=1)
record['build_all'] = True
if base:
record['defined_base_id'] = base
</field>
</record>
</odoo>
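As a quick illustration of the `runbot.runbot_is_base_regex` parameter defined above, here is a minimal standalone sketch of how that default value classifies branch names (the names below are made up):

```
import re

IS_BASE_REGEX = r'^((master)|(saas-)?\d+\.\d+)$'  # default value from the record above

for name in ('master', '17.0', 'saas-17.2', 'master-fix-foo', '17.0-my-dev'):
    print(name, bool(re.match(IS_BASE_REGEX, name)))
# master True, 17.0 True, saas-17.2 True, master-fix-foo False, 17.0-my-dev False
```

Names matching the regex are treated as base bundles; anything else is attached to an existing base.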


@ -1,17 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<data noupdate="1">
<record id="runbot_error_regex_clean_numbers" model="runbot.error.regex">
<field name="regex">, line \d+,</field>
<field name="re_type">cleaning</field>
</record>
<record id="runbot_error_regex_filter_failures" model="runbot.error.regex">
<field name="regex">Module .+: \d+ failures, \d+ errors</field>
<field name="re_type">filter</field>
</record>
<record id="runbot_error_regex_filter_failed" model="runbot.error.regex">
<field name="regex">At least one test failed when loading the modules.</field>
<field name="re_type">filter</field>
</record>
</data>
</odoo>
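The cleaning regex above strips line numbers from error messages before they are fingerprinted (see `ErrorRegex.r_sub` and `BuildError._digest` further down). A minimal sketch of that idea, with invented log messages:

```
import hashlib
import re

CLEANING_REGEXES = [r', line \d+,']  # from runbot_error_regex_clean_numbers above

def fingerprint(message):
    for pattern in CLEANING_REGEXES:
        message = re.sub(pattern, '%', message)
    return hashlib.sha256(message.encode()).hexdigest()

a = fingerprint('File "test_a.py", line 42, in test_foo AssertionError')
b = fingerprint('File "test_a.py", line 57, in test_foo AssertionError')
assert a == b  # the same error at different line numbers gets the same fingerprint
```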


@ -1,5 +0,0 @@
<odoo>
<record id="website.homepage_page" model="website.page">
<field name="url">/home</field>
</record>
</odoo>

Binary files not shown (4 deleted images: 22 KiB, 32 KiB, 31 KiB and 16 KiB).


@ -1,75 +0,0 @@
# only needed if not defined yet
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
proxy_read_timeout 600;
proxy_connect_timeout 600;
proxy_set_header X-Forwarded-Host $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
server {
# runbot frontend
listen 80;
listen [::]:80;
server_name runbot.domain.com;
location / {
proxy_pass http://127.0.0.1:8069;
}
# runbot frontend notifications: optional
location /longpolling {
proxy_pass http://127.0.0.1:8070;
}
# not tested yet, replacement of longpolling with websocket for odoo 16.0
# location /websocket {
# proxy_set_header X-Forwarded-Host $remote_addr;
# proxy_set_header X-Forwarded-For $remote_addr;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header Host $host;
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection $connection_upgrade;
# proxy_pass http://127.0.0.1:8080;
# }
# serve text log, zip, other docker outputs ...
# server_name should be the same as the local builder (forced-host-name)
location /runbot/static/ {
alias /home/runbot_user/odoo/runbot/runbot/static/;
autoindex off;
location ~ /runbot/static/build/[^/]+/(logs|tests)/ {
autoindex on;
add_header 'Access-Control-Allow-Origin' 'http://runbot.domain.com';
}
}
}
server {
# config for running builds
# subdomain redirect to the local runbot nginx with dynamic config
# another nginx layer will listen on port 8080 and redirect to the correct instance
server_name *.runbot.domain.com;
location / {
proxy_set_header Host $host:$proxy_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_pass http://127.0.0.1:8080;
}
# needed for v16.0 websockets
location /websocket {
proxy_set_header Host $host:$proxy_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_pass http://127.0.0.1:8080;
}
}


@ -1,3 +0,0 @@
#!/bin/bash
workdir=/home/$USER/odoo
exec python3 $workdir/runbot/runbot_builder/builder.py --odoo-path $workdir/odoo -d runbot --logfile $workdir/logs/runbot_builder.txt --forced-host-name runbot.domain.com


@ -1,3 +0,0 @@
#!/bin/bash
workdir=/home/$USER/odoo/
exec python3 $workdir/runbot/runbot_builder/leader.py --odoo-path $workdir/odoo -d runbot --logfile $workdir/logs/runbot_leader.txt --forced-host-name=leader


@ -1,3 +0,0 @@
#!/bin/bash
workdir=/home/$USER/odoo
exec python3 $workdir/odoo/odoo-bin --workers=2 --without-demo=1 --max-cron-threads=1 --addons-path $workdir/odoo/addons,$workdir/runbot -d runbot --logfile $workdir/logs/runbot.txt


@ -1,15 +0,0 @@
[Unit]
Description=runbot
[Service]
PassEnvironment=LANG
Type=simple
User=runbot_user
WorkingDirectory=/home/runbot_user/odoo
ExecStart=/home/runbot_user/bin/runbot/builder.sh
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target


@ -1,15 +0,0 @@
[Unit]
Description=runbot
[Service]
PassEnvironment=LANG
Type=simple
User=runbot_user
WorkingDirectory=/home/runbot_user/odoo
ExecStart=/home/runbot_user/bin/runbot/leader.sh
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target


@ -1,15 +0,0 @@
[Unit]
Description=runbot
[Service]
PassEnvironment=LANG
Type=simple
User=runbot_user
WorkingDirectory=/home/runbot_user/odoo
ExecStart=/home/runbot_user/bin/runbot/runbot.sh
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target


@ -1,52 +0,0 @@
from odoo.fields import Field
from collections.abc import MutableMapping
from psycopg2.extras import Json
class JsonDictField(Field):
type = 'jsonb'
column_type = ('jsonb', 'jsonb')
column_cast_from = ('varchar',)
def convert_to_write(self, value, record):
return value
def convert_to_column(self, value, record, values=None, validate=True):
val = self.convert_to_cache(value, record, validate=validate)
return Json(val) if val else None
def convert_to_cache(self, value, record, validate=True):
return value.dict if isinstance(value, FieldDict) else value if isinstance(value, dict) else None
def convert_to_record(self, value, record):
return FieldDict(value or {}, self, record)
def convert_to_read(self, value, record, use_name_get=True):
return self.convert_to_cache(value, record)
class FieldDict(MutableMapping):
def __init__(self, init_dict, field, record):
self.field = field
self.record = record
self.dict = init_dict
def __setitem__(self, key, value):
new = self.dict.copy()
new[key] = value
self.record[self.field.name] = new
def __getitem__(self, key):
return self.dict[key]
def __delitem__(self, key):
new = self.dict.copy()
del new[key]
self.record[self.field.name] = new
def __iter__(self):
return iter(self.dict)
def __len__(self):
return len(self.dict)


@ -1,29 +0,0 @@
# -*- coding: utf-8 -*-
from . import batch
from . import branch
from . import build
from . import build_config
from . import build_error
from . import bundle
from . import codeowner
from . import commit
from . import custom_trigger
from . import database
from . import dockerfile
from . import event
from . import host
from . import ir_cron
from . import ir_ui_view
from . import project
from . import repo
from . import res_config_settings
from . import res_users
from . import runbot
from . import upgrade
from . import user
from . import version
# those imports have to be at the end otherwise the sql view cannot be initialised
from . import build_stat
from . import build_stat_regex


@ -1,462 +0,0 @@
import time
import logging
import datetime
import subprocess
from odoo import models, fields, api
from ..common import dt2time, s2human_long, pseudo_markdown
_logger = logging.getLogger(__name__)
class Batch(models.Model):
_name = 'runbot.batch'
_description = "Bundle batch"
last_update = fields.Datetime('Last ref update')
bundle_id = fields.Many2one('runbot.bundle', required=True, index=True, ondelete='cascade')
commit_link_ids = fields.Many2many('runbot.commit.link')
commit_ids = fields.Many2many('runbot.commit', compute='_compute_commit_ids')
slot_ids = fields.One2many('runbot.batch.slot', 'batch_id')
all_build_ids = fields.Many2many('runbot.build', compute='_compute_all_build_ids', help="Recursive builds")
state = fields.Selection([('preparing', 'Preparing'), ('ready', 'Ready'), ('done', 'Done'), ('skipped', 'Skipped')])
hidden = fields.Boolean('Hidden', default=False)
age = fields.Integer(compute='_compute_age', string='Build age')
category_id = fields.Many2one('runbot.category', default=lambda self: self.env.ref('runbot.default_category', raise_if_not_found=False))
log_ids = fields.One2many('runbot.batch.log', 'batch_id')
has_warning = fields.Boolean("Has warning")
base_reference_batch_id = fields.Many2one('runbot.batch')
@api.depends('slot_ids.build_id')
def _compute_all_build_ids(self):
all_builds = self.env['runbot.build'].search([('id', 'child_of', self.slot_ids.build_id.ids)])
for batch in self:
batch.all_build_ids = all_builds.filtered_domain([('id', 'child_of', batch.slot_ids.build_id.ids)])
@api.depends('commit_link_ids')
def _compute_commit_ids(self):
for batch in self:
batch.commit_ids = batch.commit_link_ids.commit_id
@api.depends('create_date')
def _compute_age(self):
"""Return the time between job start and now"""
for batch in self:
if batch.create_date:
batch.age = int(time.time() - dt2time(batch.create_date))
else:
batch.age = 0
def get_formated_age(self):
return s2human_long(self.age)
def _url(self):
self.ensure_one()
return "/runbot/batch/%s" % self.id
def _new_commit(self, branch, match_type='new'):
# if not the same hash for repo:
commit = branch.head
self.last_update = fields.Datetime.now()
for commit_link in self.commit_link_ids:
# case 1: a commit already exists for the repo (pr+branch, or fast push)
if commit_link.commit_id.repo_id == commit.repo_id:
if commit_link.commit_id.id != commit.id:
self._log('New head on branch %s during throttle phase: Replacing commit %s with %s', branch.name, commit_link.commit_id.name, commit.name)
commit_link.write({'commit_id': commit.id, 'branch_id': branch.id})
elif not commit_link.branch_id.is_pr and branch.is_pr:
commit_link.branch_id = branch # Try to have a pr instead of branch on commit if possible ?
break
else:
self.write({'commit_link_ids': [(0, 0, {
'commit_id': commit.id,
'match_type': match_type,
'branch_id': branch.id
})]})
def _skip(self):
for batch in self:
if batch.bundle_id.is_base or batch.state == 'done':
continue
batch.state = 'skipped' # done?
batch._log('Skipping batch')
for slot in batch.slot_ids:
slot.skipped = True
build = slot.build_id
if build.global_state in ('running', 'done'):
continue
testing_slots = build.slot_ids.filtered(lambda s: not s.skipped)
if not testing_slots:
if build.global_state == 'pending':
build._skip('Newer build found')
elif build.global_state in ('waiting', 'testing'):
if not build.killable:
build.killable = True
elif slot.link_type == 'created':
batches = testing_slots.mapped('batch_id')
_logger.info('Cannot skip build %s build is still in use in batches %s', build.id, batches.ids)
bundles = batches.mapped('bundle_id') - batch.bundle_id
if bundles:
batch._log('Cannot kill or skip build %s, build is used in another bundle: %s', build.id, bundles.mapped('name'))
def _process(self):
processed = self.browse()
for batch in self:
if batch.state == 'preparing' and batch.last_update < fields.Datetime.now() - datetime.timedelta(seconds=60):
batch._prepare()
processed |= batch
elif batch.state == 'ready' and all(slot.build_id.global_state in (False, 'running', 'done') for slot in batch.slot_ids):
_logger.info('Batch %s is done', batch.id)
batch._log('Batch done')
batch.state = 'done'
processed |= batch
return processed
def _create_build(self, params):
"""
Create a build with the given params_id if it does not already exist.
If an identical build already exists, that build is returned instead.
"""
domain = [('params_id', '=', params.id), ('parent_id', '=', False)]
if self.bundle_id.host_id:
domain += [('host', '=', self.bundle_id.host_id.name), ('keep_host', '=', True)]
build = self.env['runbot.build'].search(domain, limit=1, order='id desc')
link_type = 'matched'
if build:
if build.killable:
build.killable = False
else:
description = params.trigger_id.description if params.trigger_id.description else False
link_type = 'created'
build = self.env['runbot.build'].create({
'params_id': params.id,
'description': description,
'build_type': 'normal' if self.category_id == self.env.ref('runbot.default_category') else 'scheduled',
'no_auto_run': self.bundle_id.no_auto_run,
})
if self.bundle_id.host_id:
build.host = self.bundle_id.host_id.name
build.keep_host = True
build._github_status()
return link_type, build
def _prepare(self, auto_rebase=False):
_logger.info('Preparing batch %s', self.id)
if not self.bundle_id.base_id:
# in some cases the base can be detected late. If a bundle has no base, recompute the base before preparing
self.bundle_id._compute_base_id()
for level, message in self.bundle_id.consistency_warning():
if level == "warning":
self.warning("Bundle warning: %s" % message)
self.state = 'ready'
bundle = self.bundle_id
project = bundle.project_id
if not bundle.version_id:
_logger.error('No version found on bundle %s in project %s', bundle.name, project.name)
dockerfile_id = bundle.dockerfile_id or bundle.base_id.dockerfile_id or bundle.version_id.dockerfile_id or bundle.project_id.dockerfile_id
if not dockerfile_id:
_logger.error('No dockerfile found !')
triggers = self.env['runbot.trigger'].search([ # could be optimised for multiple batches. Ormcached method?
('project_id', '=', project.id),
('category_id', '=', self.category_id.id)
]).filtered(
lambda t: not t.version_domain or \
self.bundle_id.version_id.filtered_domain(t.get_version_domain())
)
pushed_repo = self.commit_link_ids.mapped('commit_id.repo_id')
dependency_repos = triggers.mapped('dependency_ids')
all_repos = triggers.mapped('repo_ids') | dependency_repos
missing_repos = all_repos - pushed_repo
######################################
# Find missing commits
######################################
def fill_missing(branch_commits, match_type):
if branch_commits:
for branch, commit in branch_commits.items(): # branch first in case pr is closed.
nonlocal missing_repos
if commit.repo_id in missing_repos:
if not branch.alive:
self._log("Skipping dead branch %s" % branch.name)
continue
values = {
'commit_id': commit.id,
'match_type': match_type,
'branch_id': branch.id,
}
if match_type.startswith('base'):
values['base_commit_id'] = commit.id
values['merge_base_commit_id'] = commit.id
self.write({'commit_link_ids': [(0, 0, values)]})
missing_repos -= commit.repo_id
# CHECK branch heads consistency
branch_per_repo = {}
for branch in bundle.branch_ids.sorted(lambda b: (b.head.id, b.is_pr), reverse=True):
if branch.alive:
commit = branch.head
repo = commit.repo_id
if repo not in branch_per_repo:
branch_per_repo[repo] = branch
elif branch_per_repo[repo].head != branch.head and branch.alive:
obranch = branch_per_repo[repo]
self._log("Branch %s and branch %s in repo %s don't have the same head: %s%s", branch.dname, obranch.dname, repo.name, branch.head.name, obranch.head.name)
# 1.1 FIND missing commit in bundle heads
if missing_repos:
fill_missing({branch: branch.head for branch in bundle.branch_ids.sorted(lambda b: (b.head.id, b.is_pr), reverse=True)}, 'head')
# 1.2 FIND merge_base info for those commits
# use last not preparing batch to define previous repos_heads instead of branches heads:
# Will allow to have a diff info on base bundle, compare with previous bundle
last_base_batch = self.env['runbot.batch'].search([('bundle_id', '=', bundle.base_id.id), ('state', '!=', 'preparing'), ('category_id', '=', self.category_id.id), ('id', '!=', self.id)], order='id desc', limit=1)
base_head_per_repo = {commit.repo_id.id: commit for commit in last_base_batch.commit_ids}
self._update_commits_infos(base_head_per_repo) # set base_commit, diff infos, ...
# 2. FIND missing commit in a compatible base bundle
if not bundle.is_base:
merge_base_commits = self.commit_link_ids.mapped('merge_base_commit_id')
if auto_rebase:
self.base_reference_batch_id = last_base_batch
else:
self.base_reference_batch_id = False
link_commit = self.env['runbot.commit.link'].search([
('commit_id', 'in', merge_base_commits.ids),
('match_type', 'in', ('new', 'head'))
])
batches = self.env['runbot.batch'].search([
('bundle_id', '=', bundle.base_id.id),
('commit_link_ids', 'in', link_commit.ids),
('state', '!=', 'preparing'),
('category_id', '=', self.category_id.id)
]).sorted(lambda b: (len(b.commit_ids & merge_base_commits), b.id), reverse=True)
if batches:
self.base_reference_batch_id = batches[0]
batch = self.base_reference_batch_id
if batch:
if missing_repos:
self._log('Using batch [%s](%s) to define missing commits', batch.id, batch._url())
fill_missing({link.branch_id: link.commit_id for link in batch.commit_link_ids}, 'base_match')
# check if all mergebase match reference batch
batch_existing_commit = batch.commit_ids.filtered(lambda c: c.repo_id in merge_base_commits.repo_id)
not_matching = (batch_existing_commit - merge_base_commits)
if not_matching and not auto_rebase:
message = 'Only %s out of %s merge base matched. You may want to rebase your branches to ensure compatibility' % (len(merge_base_commits)-len(not_matching), len(merge_base_commits))
suggestions = [('Tip: rebase %s to %s' % (commit.repo_id.name, commit.name)) for commit in not_matching]
self.warning('%s\n%s' % (message, '\n'.join(suggestions)))
else:
self._log('No reference batch found to fill missing commits')
# 3.1 FIND missing commit in base heads
if missing_repos:
if not bundle.is_base:
self._log('Not all commits found in bundle branches and base batch. Falling back on base branch heads.')
fill_missing({branch: branch.head for branch in self.bundle_id.base_id.branch_ids}, 'base_head')
# 3.2 FIND missing commit in master base heads
if missing_repos: # this is to get an upgrade branch.
if not bundle.is_base:
self._log('Not all commits found in current version. Falling back on master branch heads.')
master_bundle = self.env['runbot.version']._get('master').with_context(project_id=self.bundle_id.project_id.id).base_bundle_id
fill_missing({branch: branch.head for branch in master_bundle.branch_ids}, 'base_head')
# 4. FIND missing commit in foreign project
if missing_repos:
foreign_projects = dependency_repos.mapped('project_id') - project
if foreign_projects:
self._log('Not all commits found. Falling back on foreign base branch heads.')
foreign_bundles = bundle.search([('name', '=', bundle.name), ('project_id', 'in', foreign_projects.ids)])
fill_missing({branch: branch.head for branch in foreign_bundles.mapped('branch_ids').sorted('is_pr', reverse=True)}, 'head')
if missing_repos:
foreign_bundles = bundle.search([('name', '=', bundle.base_id.name), ('project_id', 'in', foreign_projects.ids)])
fill_missing({branch: branch.head for branch in foreign_bundles.mapped('branch_ids')}, 'base_head')
# CHECK missing commit
if missing_repos:
_logger.warning('Missing repo %s for batch %s', missing_repos.mapped('name'), self.id)
######################################
# Generate build params
######################################
if auto_rebase:
for commit_link in self.commit_link_ids:
commit_link.commit_id = commit_link.commit_id._rebase_on(commit_link.base_commit_id)
commit_link_by_repos = {commit_link.commit_id.repo_id.id: commit_link for commit_link in self.commit_link_ids}
bundle_repos = bundle.branch_ids.mapped('remote_id.repo_id')
version_id = self.bundle_id.version_id.id
project_id = self.bundle_id.project_id.id
trigger_customs = {}
for trigger_custom in self.bundle_id.trigger_custom_ids:
trigger_customs[trigger_custom.trigger_id] = trigger_custom
for trigger in triggers:
trigger_custom = trigger_customs.get(trigger)
trigger_repos = trigger.repo_ids | trigger.dependency_ids
if trigger_repos & missing_repos:
self.warning('Missing commit for repo %s for trigger %s', (trigger_repos & missing_repos).mapped('name'), trigger.name)
continue
# in any case, search for an existing build
config = trigger_custom.config_id if trigger_custom else trigger.config_id
extra_params = trigger_custom.extra_params if trigger_custom else ''
config_data = trigger_custom.config_data if trigger_custom else {}
params_value = {
'version_id': version_id,
'extra_params': extra_params,
'config_id': config.id,
'project_id': project_id,
'trigger_id': trigger.id, # for future reference and access rights
'config_data': config_data,
'commit_link_ids': [(6, 0, [commit_link_by_repos[repo.id].id for repo in trigger_repos])],
'modules': bundle.modules,
'dockerfile_id': dockerfile_id,
'create_batch_id': self.id,
'used_custom_trigger': bool(trigger_custom),
}
params_value['builds_reference_ids'] = trigger._reference_builds(bundle)
params = self.env['runbot.build.params'].create(params_value)
build = self.env['runbot.build']
link_type = 'created'
force_trigger = trigger_custom and trigger_custom.start_mode == 'force'
skip_trigger = (trigger_custom and trigger_custom.start_mode == 'disable') or trigger.manual
should_start = ((trigger.repo_ids & bundle_repos) or bundle.build_all or bundle.sticky)
if force_trigger or (should_start and not skip_trigger): # only auto link build if bundle has a branch for this trigger
link_type, build = self._create_build(params)
self.env['runbot.batch.slot'].create({
'batch_id': self.id,
'trigger_id': trigger.id,
'build_id': build.id,
'params_id': params.id,
'link_type': link_type,
})
######################################
# SKIP older batches
######################################
default_category = self.env.ref('runbot.default_category')
if not bundle.sticky and self.category_id == default_category:
skippable = self.env['runbot.batch'].search([
('bundle_id', '=', bundle.id),
('state', 'not in', ('done', 'skipped')),
('id', '<', self.id),
('category_id', '=', default_category.id)
])
skippable._skip()
def _update_commits_infos(self, base_head_per_repo):
for link_commit in self.commit_link_ids:
commit = link_commit.commit_id
base_head = base_head_per_repo.get(commit.repo_id.id)
if not base_head:
self.warning('No base head found for repo %s', commit.repo_id.name)
continue
link_commit.base_commit_id = base_head
merge_base_sha = False
try:
link_commit.base_ahead = link_commit.base_behind = 0
link_commit.file_changed = link_commit.diff_add = link_commit.diff_remove = 0
link_commit.merge_base_commit_id = commit.id
if commit.name == base_head.name:
continue
merge_base_sha = commit.repo_id._git(['merge-base', commit.name, base_head.name]).strip()
merge_base_commit = self.env['runbot.commit']._get(merge_base_sha, commit.repo_id.id)
link_commit.merge_base_commit_id = merge_base_commit.id
ahead, behind = commit.repo_id._git(['rev-list', '--left-right', '--count', '%s...%s' % (commit.name, base_head.name)]).strip().split('\t')
link_commit.base_ahead = int(ahead)
link_commit.base_behind = int(behind)
if merge_base_sha == commit.name:
continue
# diff. Iter on --numstat, easier to parse than --shortstat summary
diff = commit.repo_id._git(['diff', '--numstat', merge_base_sha, commit.name]).strip()
if diff:
for line in diff.split('\n'):
link_commit.file_changed += 1
add, remove, _ = line.split(None, 2)
try:
link_commit.diff_add += int(add)
link_commit.diff_remove += int(remove)
except ValueError: # binary files
pass
except subprocess.CalledProcessError:
self.warning('Commit info failed between %s and %s', commit.name, base_head.name)
def warning(self, message, *args):
self.has_warning = True
_logger.warning('batch %s: ' + message, self.id, *args)
self._log(message, *args, level='WARNING')
def _log(self, message, *args, level='INFO'):
message = message % args if args else message
self.env['runbot.batch.log'].create({
'batch_id': self.id,
'message': message,
'level': level,
})
class BatchLog(models.Model):
_name = 'runbot.batch.log'
_description = 'Batch log'
batch_id = fields.Many2one('runbot.batch', index=True)
message = fields.Text('Message')
level = fields.Char()
def _markdown(self):
""" Apply pseudo markdown parser for message.
"""
self.ensure_one()
return pseudo_markdown(self.message)
class BatchSlot(models.Model):
_name = 'runbot.batch.slot'
_description = 'Link between a bundle batch and a build'
_order = 'trigger_id,id'
_fa_link_type = {'created': 'hashtag', 'matched': 'link', 'rebuild': 'refresh'}
batch_id = fields.Many2one('runbot.batch', index=True)
trigger_id = fields.Many2one('runbot.trigger', index=True)
build_id = fields.Many2one('runbot.build', index=True)
all_build_ids = fields.Many2many('runbot.build', compute='_compute_all_build_ids')
params_id = fields.Many2one('runbot.build.params', index=True, required=True)
link_type = fields.Selection([('created', 'Build created'), ('matched', 'Existing build matched'), ('rebuild', 'Rebuild')], required=True) # rebuild type?
active = fields.Boolean('Attached', default=True)
skipped = fields.Boolean('Skipped', default=False)
# rebuild, what to do: since build can be in multiple batch:
# - replace for all batch?
# - only available on batch and replace for batch only?
# - create a new bundle batch will new linked build?
@api.depends('build_id')
def _compute_all_build_ids(self):
all_builds = self.env['runbot.build'].search([('id', 'child_of', self.build_id.ids)])
for slot in self:
slot.all_build_ids = all_builds.filtered_domain([('id', 'child_of', slot.build_id.ids)])
def fa_link_type(self):
return self._fa_link_type.get(self.link_type, 'exclamation-triangle')
def _create_missing_build(self):
"""Create a build when the slot does not have one"""
self.ensure_one()
if self.build_id:
return self.build_id
self.batch_id._log(f'Trigger {self.trigger_id.name} was started by {self.env.user.name}')
self.link_type, self.build_id = self.batch_id._create_build(self.params_id)
return self.build_id
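For reference, a standalone sketch (not runbot code) of the git plumbing that `_update_commits_infos` above relies on to compute ahead/behind counts and diff stats; the repository path and ref names are placeholders:

```
import subprocess

def git(repo_path, *args):
    return subprocess.check_output(['git', '-C', repo_path, *args], text=True).strip()

repo, head, base = '/tmp/some_repo', 'my-dev-branch', 'master'  # assumed to exist

merge_base = git(repo, 'merge-base', head, base)
ahead, behind = git(repo, 'rev-list', '--left-right', '--count', f'{head}...{base}').split('\t')
print(f'{head}: {ahead} ahead, {behind} behind {base} (merge base {merge_base[:8]})')

# --numstat prints "added<TAB>removed<TAB>file" per line, easier to parse than --shortstat
for line in git(repo, 'diff', '--numstat', merge_base, head).splitlines():
    added, removed, path = line.split(None, 2)
    print(path, added, removed)
```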


@ -1,249 +0,0 @@
# -*- coding: utf-8 -*-
import logging
import re
from collections import defaultdict
from odoo import models, fields, api
_logger = logging.getLogger(__name__)
class Branch(models.Model):
_name = 'runbot.branch'
_description = "Branch"
_order = 'name'
_sql_constraints = [('branch_repo_uniq', 'unique (name,remote_id)', 'The branch must be unique per repository !')]
name = fields.Char('Name', required=True)
remote_id = fields.Many2one('runbot.remote', 'Remote', required=True, ondelete='cascade')
head = fields.Many2one('runbot.commit', 'Head Commit', index=True)
head_name = fields.Char('Head name', related='head.name', store=True)
reference_name = fields.Char(compute='_compute_reference_name', string='Bundle name', store=True)
bundle_id = fields.Many2one('runbot.bundle', 'Bundle', compute='_compute_bundle_id', store=True, ondelete='cascade', index=True)
is_pr = fields.Boolean('IS a pr', required=True)
pull_head_name = fields.Char(compute='_compute_branch_infos', string='PR HEAD name', readonly=1, store=True)
pull_head_remote_id = fields.Many2one('runbot.remote', 'Pull head repository', compute='_compute_branch_infos', store=True, index=True)
target_branch_name = fields.Char(compute='_compute_branch_infos', string='PR target branch', store=True)
reviewers = fields.Char('Reviewers')
reflog_ids = fields.One2many('runbot.ref.log', 'branch_id')
branch_url = fields.Char(compute='_compute_branch_url', string='Branch url', readonly=1)
dname = fields.Char('Display name', compute='_compute_dname', search='_search_dname')
alive = fields.Boolean('Alive', default=True)
draft = fields.Boolean('Draft', compute='_compute_branch_infos', store=True)
@api.depends('name', 'remote_id.short_name')
def _compute_dname(self):
for branch in self:
branch.dname = '%s:%s' % (branch.remote_id.short_name, branch.name)
def _search_dname(self, operator, value):
if ':' not in value:
return [('name', operator, value)]
repo_short_name, branch_name = value.split(':')
owner, repo_name = repo_short_name.split('/')
return ['&', ('remote_id', '=', self.env['runbot.remote'].search([('owner', '=', owner), ('repo_name', '=', repo_name)]).id), ('name', operator, branch_name)]
@api.depends('name', 'is_pr', 'target_branch_name', 'pull_head_name', 'pull_head_remote_id')
def _compute_reference_name(self):
"""
Unique reference for a branch inside a bundle.
- branch_name for branches
- branch name part of pull_head_name for pr if remote is known
- pull_head_name (organisation:branch_name) for external pr
"""
for branch in self:
if branch.is_pr:
_, name = branch.pull_head_name.split(':')
if branch.pull_head_remote_id:
reference_name = name
else:
reference_name = branch.pull_head_name # repo is not known (not in the repo list), must be an external pr, so use the complete label
#if ':patch-' in branch.pull_head_name:
# branch.reference_name = '%s~%s' % (branch.pull_head_name, branch.name)
else:
reference_name = branch.name
forced_version = branch.remote_id.repo_id.single_version # we don't add a depend on repo.single_version to avoid mass recompute of existing branches
if forced_version and not reference_name.startswith(f'{forced_version.name}-'):
reference_name = f'{forced_version.name}---{reference_name}'
branch.reference_name = reference_name
@api.depends('name')
def _compute_branch_infos(self, pull_info=None):
"""compute branch_url, pull_head_name and target_branch_name based on name"""
name_to_remote = {}
prs = self.filtered(lambda branch: branch.is_pr)
pull_info_dict = {}
if not pull_info and len(prs) > 30: # this is arbitrary, we should store the number of pages on the remote
pr_per_remote = defaultdict(list)
for pr in prs:
pr_per_remote[pr.remote_id].append(pr)
for remote, prs in pr_per_remote.items():
_logger.info('Getting info in %s for %s pr using page scan', remote.name, len(prs))
pr_names = set([pr.name for pr in prs])
count = 0
for result in remote._github('/repos/:owner/:repo/pulls?state=all&sort=updated&direction=desc', ignore_errors=True, recursive=True):
for info in result:
number = str(info.get('number'))
pr_names.discard(number)
pull_info_dict[(remote, number)] = info
count += 1
if not pr_names:
break
if count > 100:
_logger.info('Not all pr found after 100 pages: remaining: %s', pr_names)
break
for branch in self:
branch.target_branch_name = False
branch.pull_head_name = False
branch.pull_head_remote_id = False
if branch.name:
pi = branch.is_pr and (pull_info or pull_info_dict.get((branch.remote_id, branch.name)) or branch._get_pull_info())
if pi:
try:
branch.draft = pi.get('draft', False)
branch.alive = pi.get('state', False) != 'closed'
branch.target_branch_name = pi['base']['ref']
branch.pull_head_name = pi['head']['label']
pull_head_repo_name = False
if pi['head'].get('repo'):
pull_head_repo_name = pi['head']['repo'].get('full_name')
if pull_head_repo_name not in name_to_remote:
owner, repo_name = pull_head_repo_name.split('/')
name_to_remote[pull_head_repo_name] = self.env['runbot.remote'].search([('owner', '=', owner), ('repo_name', '=', repo_name)], limit=1)
branch.pull_head_remote_id = name_to_remote[pull_head_repo_name]
except (TypeError, AttributeError):
_logger.exception('Error for pr %s using pull_info %s', branch.name, pi)
raise
@api.depends('name', 'remote_id.base_url', 'is_pr')
def _compute_branch_url(self):
"""compute the branch url based on name"""
for branch in self:
if branch.name:
if branch.is_pr:
branch.branch_url = "https://%s/pull/%s" % (branch.remote_id.base_url, branch.name)
else:
branch.branch_url = "https://%s/tree/%s" % (branch.remote_id.base_url, branch.name)
else:
branch.branch_url = ''
@api.depends('reference_name', 'remote_id.repo_id.project_id')
def _compute_bundle_id(self):
dummy = self.env.ref('runbot.bundle_dummy')
for branch in self:
if branch.bundle_id == dummy:
continue
name = branch.reference_name
project = branch.remote_id.repo_id.project_id or self.env.ref('runbot.main_project')
project.ensure_one()
bundle = self.env['runbot.bundle'].search([('name', '=', name), ('project_id', '=', project.id)])
need_new_base = not bundle and branch.match_is_base(name)
if (bundle.is_base or need_new_base) and branch.remote_id != branch.remote_id.repo_id.main_remote_id:
_logger.warning('Trying to add a dev branch to base bundle, falling back on dummy bundle')
bundle = dummy
elif name and branch.remote_id and branch.remote_id.repo_id._is_branch_forbidden(name):
_logger.warning('Trying to add a forbidden branch, falling back on dummy bundle')
bundle = dummy
elif bundle.is_base and branch.is_pr:
_logger.warning('Trying to add pr to base bundle, falling back on dummy bundle')
bundle = dummy
elif not bundle:
values = {
'name': name,
'project_id': project.id,
}
if need_new_base:
values['is_base'] = True
if branch.is_pr and branch.target_branch_name: # most likely external_pr, use target as version
base = self.env['runbot.bundle'].search([
('name', '=', branch.target_branch_name),
('is_base', '=', True),
('project_id', '=', project.id)
])
if base:
values['defined_base_id'] = base.id
if name:
bundle = self.env['runbot.bundle'].create(values) # this prevent creating a branch in UI
branch.bundle_id = bundle
@api.model_create_multi
def create(self, value_list):
branches = super().create(value_list)
for branch in branches:
if branch.head:
self.env['runbot.ref.log'].create({'commit_id': branch.head.id, 'branch_id': branch.id})
return branches
def write(self, values):
if 'head' in values:
head = self.head
super().write(values)
if 'head' in values and head != self.head:
self.env['runbot.ref.log'].create({'commit_id': self.head.id, 'branch_id': self.id})
def _get_pull_info(self):
self.ensure_one()
remote = self.remote_id
if self.is_pr:
_logger.info('Getting info for %s', self.name)
return remote._github('/repos/:owner/:repo/pulls/%s' % self.name, ignore_errors=False) or {} # TODO catch and send a manageable exception
return {}
def ref(self):
return 'refs/%s/%s/%s' % (
self.remote_id.remote_name,
'pull' if self.is_pr else 'heads',
self.name
)
def recompute_infos(self, payload=None):
""" public method to recompute infos on demand """
was_draft = self.draft
was_alive = self.alive
init_target_branch_name = self.target_branch_name
self._compute_branch_infos(payload)
if self.target_branch_name != init_target_branch_name:
_logger.info('retargeting %s to %s', self.name, self.target_branch_name)
base = self.env['runbot.bundle'].search([
('name', '=', self.target_branch_name),
('is_base', '=', True),
('project_id', '=', self.remote_id.repo_id.project_id.id)
])
if base and self.bundle_id.defined_base_id != base:
_logger.info('Changing base of bundle %s to %s(%s)', self.bundle_id, base.name, base.id)
self.bundle_id.defined_base_id = base.id
self.bundle_id._force()
if self.draft:
self.reviewers = '' # reset reviewers on draft
if (not self.draft and was_draft) or (self.alive and not was_alive) or (self.target_branch_name != init_target_branch_name and self.alive):
self.bundle_id._force()
@api.model
def match_is_base(self, name):
"""match against is_base_regex ir.config_parameter"""
if not name:
return False
icp = self.env['ir.config_parameter'].sudo()
regex = icp.get_param('runbot.runbot_is_base_regex', False)
if regex:
return re.match(regex, name)
class RefLog(models.Model):
_name = 'runbot.ref.log'
_description = 'Ref log'
_log_access = False
commit_id = fields.Many2one('runbot.commit', index=True)
branch_id = fields.Many2one('runbot.branch', index=True)
date = fields.Datetime(default=fields.Datetime.now)
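A simplified standalone sketch of the `reference_name` rule documented in `_compute_reference_name` above (the forced single-version prefix is omitted); all labels are illustrative:

```
def reference_name(name, is_pr, pull_head_name, pull_head_remote_known):
    if is_pr:
        _, branch_part = pull_head_name.split(':')
        return branch_part if pull_head_remote_known else pull_head_name
    return name

# a PR whose head lives on a known remote groups with the dev branch of the same name
print(reference_name('12345', True, 'odoo-dev:17.0-fix-invoice', True))   # 17.0-fix-invoice
# an external PR keeps its full label and therefore gets its own bundle
print(reference_name('12346', True, 'someone:17.0-fix-invoice', False))   # someone:17.0-fix-invoice
# a plain branch simply uses its name
print(reference_name('17.0-fix-invoice', False, None, False))             # 17.0-fix-invoice
```

This is what groups a dev branch and its pull requests into the same bundle.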

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,334 +0,0 @@
# -*- coding: utf-8 -*-
import ast
import hashlib
import logging
import re
from collections import defaultdict
from fnmatch import fnmatch
from odoo import models, fields, api
from odoo.exceptions import ValidationError
_logger = logging.getLogger(__name__)
class BuildError(models.Model):
_name = "runbot.build.error"
_description = "Build error"
_inherit = "mail.thread"
_rec_name = "id"
content = fields.Text('Error message', required=True)
cleaned_content = fields.Text('Cleaned error message')
summary = fields.Char('Content summary', compute='_compute_summary', store=False)
module_name = fields.Char('Module name') # name in ir_logging
file_path = fields.Char('File Path') # path in ir logging
function = fields.Char('Function name') # func name in ir logging
fingerprint = fields.Char('Error fingerprint', index=True)
random = fields.Boolean('Non-deterministic error', tracking=True)
responsible = fields.Many2one('res.users', 'Assigned fixer', tracking=True)
team_id = fields.Many2one('runbot.team', 'Assigned team')
fixing_commit = fields.Char('Fixing commit', tracking=True)
fixing_pr_id = fields.Many2one('runbot.branch', 'Fixing PR', tracking=True)
build_ids = fields.Many2many('runbot.build', 'runbot_build_error_ids_runbot_build_rel', string='Affected builds')
bundle_ids = fields.One2many('runbot.bundle', compute='_compute_bundle_ids')
version_ids = fields.One2many('runbot.version', compute='_compute_version_ids', string='Versions', search='_search_version')
trigger_ids = fields.Many2many('runbot.trigger', compute='_compute_trigger_ids', string='Triggers', search='_search_trigger_ids')
active = fields.Boolean('Error is not fixed', default=True, tracking=True)
tag_ids = fields.Many2many('runbot.build.error.tag', string='Tags')
build_count = fields.Integer(compute='_compute_build_counts', string='Nb seen', store=True)
parent_id = fields.Many2one('runbot.build.error', 'Linked to', index=True)
child_ids = fields.One2many('runbot.build.error', 'parent_id', string='Child Errors', context={'active_test': False})
children_build_ids = fields.Many2many('runbot.build', compute='_compute_children_build_ids', string='Children builds')
error_history_ids = fields.Many2many('runbot.build.error', compute='_compute_error_history_ids', string='Old errors', context={'active_test': False})
first_seen_build_id = fields.Many2one('runbot.build', compute='_compute_first_seen_build_id', string='First Seen build')
first_seen_date = fields.Datetime(string='First Seen Date', related='first_seen_build_id.create_date')
last_seen_build_id = fields.Many2one('runbot.build', compute='_compute_last_seen_build_id', string='Last Seen build', store=True)
last_seen_date = fields.Datetime(string='Last Seen Date', related='last_seen_build_id.create_date', store=True)
test_tags = fields.Char(string='Test tags', help="Comma separated list of test_tags to use to reproduce/remove this error", tracking=True)
@api.constrains('test_tags')
def _check_test_tags(self):
for build_error in self:
if build_error.test_tags and '-' in build_error.test_tags:
raise ValidationError('Build error test_tags should not be negated')
@api.model_create_single
def create(self, vals):
cleaners = self.env['runbot.error.regex'].search([('re_type', '=', 'cleaning')])
content = vals.get('content')
cleaned_content = cleaners.r_sub('%', content)
vals.update({'cleaned_content': cleaned_content,
'fingerprint': self._digest(cleaned_content)
})
if 'team_id' not in vals and 'module_name' in vals:
vals.update({'team_id': self.env['runbot.team']._get_team(vals['module_name'])})
return super().create(vals)
def write(self, vals):
if 'active' in vals:
for build_error in self:
(build_error.child_ids - self).write({'active': vals['active']})
return super(BuildError, self).write(vals)
@api.depends('build_ids', 'child_ids.build_ids')
def _compute_build_counts(self):
for build_error in self:
build_error.build_count = len(build_error.build_ids | build_error.mapped('child_ids.build_ids'))
@api.depends('build_ids')
def _compute_bundle_ids(self):
for build_error in self:
top_parent_builds = build_error.build_ids.mapped(lambda rec: rec and rec.top_parent)
build_error.bundle_ids = top_parent_builds.mapped('slot_ids').mapped('batch_id.bundle_id')
@api.depends('build_ids', 'child_ids.build_ids')
def _compute_version_ids(self):
for build_error in self:
build_error.version_ids = build_error.build_ids.version_id
@api.depends('build_ids')
def _compute_trigger_ids(self):
for build_error in self:
build_error.trigger_ids = build_error.build_ids.trigger_id
@api.depends('content')
def _compute_summary(self):
for build_error in self:
build_error.summary = build_error.content[:50]
@api.depends('build_ids', 'child_ids.build_ids')
def _compute_children_build_ids(self):
for build_error in self:
all_builds = build_error.build_ids | build_error.mapped('child_ids.build_ids')
build_error.children_build_ids = all_builds.sorted(key=lambda rec: rec.id, reverse=True)
@api.depends('children_build_ids')
def _compute_last_seen_build_id(self):
for build_error in self:
build_error.last_seen_build_id = build_error.children_build_ids and build_error.children_build_ids[0] or False
@api.depends('children_build_ids')
def _compute_first_seen_build_id(self):
for build_error in self:
build_error.first_seen_build_id = build_error.children_build_ids and build_error.children_build_ids[-1] or False
@api.depends('fingerprint', 'child_ids.fingerprint')
def _compute_error_history_ids(self):
for error in self:
fingerprints = [error.fingerprint] + [rec.fingerprint for rec in error.child_ids]
error.error_history_ids = self.search([('fingerprint', 'in', fingerprints), ('active', '=', False), ('id', '!=', error.id or False)])
@api.model
def _digest(self, s):
"""
return a hash 256 digest of the string s
"""
return hashlib.sha256(s.encode()).hexdigest()
@api.model
def _parse_logs(self, ir_logs):
regexes = self.env['runbot.error.regex'].search([])
search_regs = regexes.filtered(lambda r: r.re_type == 'filter')
cleaning_regs = regexes.filtered(lambda r: r.re_type == 'cleaning')
hash_dict = defaultdict(list)
for log in ir_logs:
if search_regs.r_search(log.message):
continue
fingerprint = self._digest(cleaning_regs.r_sub('%', log.message))
hash_dict[fingerprint].append(log)
build_errors = self.env['runbot.build.error']
# add build ids to already detected errors
existing_errors = self.env['runbot.build.error'].search([('fingerprint', 'in', list(hash_dict.keys())), ('active', '=', True)])
build_errors |= existing_errors
for build_error in existing_errors:
for build in {rec.build_id for rec in hash_dict[build_error.fingerprint]}:
build.build_error_ids += build_error
del hash_dict[build_error.fingerprint]
# create an error for the remaining entries
for fingerprint, logs in hash_dict.items():
build_errors |= self.env['runbot.build.error'].create({
'content': logs[0].message,
'module_name': logs[0].name,
'file_path': logs[0].path,
'function': logs[0].func,
'build_ids': [(6, False, [r.build_id.id for r in logs])],
})
if build_errors:
window_action = {
"type": "ir.actions.act_window",
"res_model": "runbot.build.error",
"views": [[False, "tree"]],
"domain": [('id', 'in', build_errors.ids)]
}
if len(build_errors) == 1:
window_action["views"] = [[False, "form"]]
window_action["res_id"] = build_errors.id
return window_action
def link_errors(self):
""" Link errors with the first one of the recordset
choosing as parent the error with a responsible, then a random bug, and finally the first seen
"""
if len(self) < 2:
return
self = self.with_context(active_test=False)
build_errors = self.search([('id', 'in', self.ids)], order='responsible asc, random desc, id asc')
build_errors[1:].write({'parent_id': build_errors[0].id})
def clean_content(self):
cleaning_regs = self.env['runbot.error.regex'].search([('re_type', '=', 'cleaning')])
for build_error in self:
build_error.cleaned_content = cleaning_regs.r_sub('%', build_error.content)
@api.model
def test_tags_list(self):
active_errors = self.search([('test_tags', '!=', False)])
test_tag_list = active_errors.mapped('test_tags')
return [test_tag for error_tags in test_tag_list for test_tag in (error_tags).split(',')]
@api.model
def disabling_tags(self):
return ['-%s' % tag for tag in self.test_tags_list()]
def _search_version(self, operator, value):
return [('build_ids.version_id', operator, value)]
def _search_trigger_ids(self, operator, value):
return [('build_ids.trigger_id', operator, value)]
class BuildErrorTag(models.Model):
_name = "runbot.build.error.tag"
_description = "Build error tag"
name = fields.Char('Tag')
error_ids = fields.Many2many('runbot.build.error', string='Errors')
class ErrorRegex(models.Model):
_name = "runbot.error.regex"
_description = "Build error regex"
_inherit = "mail.thread"
_rec_name = 'id'
_order = 'sequence, id'
regex = fields.Char('Regular expression')
re_type = fields.Selection([('filter', 'Filter out'), ('cleaning', 'Cleaning')], string="Regex type")
sequence = fields.Integer('Sequence', default=100)
def r_sub(self, replace, s):
""" replaces patterns from the recordset by replace in the given string """
for c in self:
s = re.sub(c.regex, replace, s)
return s
def r_search(self, s):
""" Return True if one of the regex is found in s """
for filter in self:
if re.search(filter.regex, s):
return True
return False
class RunbotTeam(models.Model):
_name = 'runbot.team'
_description = "Runbot Team"
_order = 'name, id'
name = fields.Char('Team', required=True)
user_ids = fields.Many2many('res.users', string='Team Members', domain=[('share', '=', False)])
dashboard_id = fields.Many2one('runbot.dashboard', string='Dashboard')
build_error_ids = fields.One2many('runbot.build.error', 'team_id', string='Team Errors', domain=[('parent_id', '=', False)])
path_glob = fields.Char('Module Wildcards',
help='Comma separated list of `fnmatch` wildcards used to assign errors automatically\n'
'Negative wildcards starting with a `-` can be used to discard some paths\n'
'e.g.: `*website*,-*website_sale*`')
upgrade_exception_ids = fields.One2many('runbot.upgrade.exception', 'team_id', string='Team Upgrade Exceptions')
@api.model_create_single
def create(self, values):
if 'dashboard_id' not in values or values['dashboard_id'] == False:
dashboard = self.env['runbot.dashboard'].search([('name', '=', values['name'])])
if not dashboard:
dashboard = dashboard.create({'name': values['name']})
values['dashboard_id'] = dashboard.id
return super().create(values)
@api.model
def _get_team(self, module_name):
for team in self.env['runbot.team'].search([('path_glob', '!=', False)]):
if any([fnmatch(module_name, pattern.strip().strip('-')) for pattern in team.path_glob.split(',') if pattern.strip().startswith('-')]):
continue
if any([fnmatch(module_name, pattern.strip()) for pattern in team.path_glob.split(',') if not pattern.strip().startswith('-')]):
return team.id
return False
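A standalone sketch of the path_glob matching rule implemented in `_get_team` above, reusing the example wildcards from the field help; the module names are illustrative:

```
from fnmatch import fnmatch

def matches_team(module_name, path_glob):
    patterns = [p.strip() for p in path_glob.split(',')]
    # a negative wildcard (leading '-') discards the module for this team
    if any(fnmatch(module_name, p.strip('-')) for p in patterns if p.startswith('-')):
        return False
    return any(fnmatch(module_name, p) for p in patterns if not p.startswith('-'))

glob = '*website*,-*website_sale*'
print(matches_team('website_blog', glob))        # True
print(matches_team('website_sale_stock', glob))  # False
```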
class RunbotDashboard(models.Model):
_name = 'runbot.dashboard'
_description = "Runbot Dashboard"
_order = 'name, id'
name = fields.Char('Team', required=True)
team_ids = fields.One2many('runbot.team', 'dashboard_id', string='Teams')
dashboard_tile_ids = fields.Many2many('runbot.dashboard.tile', string='Dashboards tiles')
class RunbotDashboardTile(models.Model):
_name = 'runbot.dashboard.tile'
_description = "Runbot Dashboard Tile"
_order = 'sequence, id'
sequence = fields.Integer('Sequence')
name = fields.Char('Name')
dashboard_ids = fields.Many2many('runbot.dashboard', string='Dashboards')
display_name = fields.Char(compute='_compute_display_name')
project_id = fields.Many2one('runbot.project', 'Project', help='Project to monitor', required=True,
default=lambda self: self.env.ref('runbot.main_project'))
category_id = fields.Many2one('runbot.category', 'Category', help='Trigger Category to monitor', required=True,
default=lambda self: self.env.ref('runbot.default_category'))
trigger_id = fields.Many2one('runbot.trigger', 'Trigger', help='Trigger to monitor in chosen category')
config_id = fields.Many2one('runbot.build.config', 'Config', help='Select a sub_build with this config')
domain_filter = fields.Char('Domain Filter', help='If present, will be applied on builds', default="[('global_result', '=', 'ko')]")
custom_template_id = fields.Many2one('ir.ui.view', help='Change for a custom Dashboard card template',
domain=[('type', '=', 'qweb')], default=lambda self: self.env.ref('runbot.default_dashboard_tile_view'))
sticky_bundle_ids = fields.Many2many('runbot.bundle', compute='_compute_sticky_bundle_ids', string='Sticky Bundles')
build_ids = fields.Many2many('runbot.build', compute='_compute_build_ids', string='Builds')
@api.depends('project_id', 'category_id', 'trigger_id', 'config_id')
def _compute_display_name(self):
for board in self:
names = [board.project_id.name, board.category_id.name, board.trigger_id.name, board.config_id.name, board.name]
board.display_name = ' / '.join([n for n in names if n])
@api.depends('project_id')
def _compute_sticky_bundle_ids(self):
sticky_bundles = self.env['runbot.bundle'].search([('sticky', '=', True)])
for dashboard in self:
dashboard.sticky_bundle_ids = sticky_bundles.filtered(lambda b: b.project_id == dashboard.project_id)
@api.depends('project_id', 'category_id', 'trigger_id', 'config_id', 'domain_filter')
def _compute_build_ids(self):
for dashboard in self:
last_done_batch_ids = dashboard.sticky_bundle_ids.with_context(category_id=dashboard.category_id.id).last_done_batch
if dashboard.trigger_id:
all_build_ids = last_done_batch_ids.slot_ids.filtered(lambda s: s.trigger_id == dashboard.trigger_id).all_build_ids
else:
all_build_ids = last_done_batch_ids.all_build_ids
domain = ast.literal_eval(dashboard.domain_filter) if dashboard.domain_filter else []
if dashboard.config_id:
domain.append(('config_id', '=', dashboard.config_id.id))
dashboard.build_ids = all_build_ids.filtered_domain(domain)


@ -1,27 +0,0 @@
import logging
from odoo import models, fields, api, tools
from ..fields import JsonDictField
_logger = logging.getLogger(__name__)
class BuildStat(models.Model):
_name = "runbot.build.stat"
_description = "Statistics"
_log_access = False
_sql_constraints = [
(
"build_config_key_unique",
"unique (build_id, config_step_id, category)",
"Build stats must be unique for the same build step",
)
]
build_id = fields.Many2one("runbot.build", "Build", index=True, ondelete="cascade")
config_step_id = fields.Many2one(
"runbot.build.config.step", "Step", ondelete="cascade"
)
category = fields.Char("Category", index=True)
values = JsonDictField("Value")


@ -1,72 +0,0 @@
# -*- coding: utf-8 -*-
import logging
from ..common import os
import re
from odoo import models, fields, api
from odoo.exceptions import ValidationError
VALUE_PATTERN = r"\(\?P\<value\>.+\)" # used to verify value group pattern
_logger = logging.getLogger(__name__)
class BuildStatRegex(models.Model):
""" A regular expression to extract a float/int value from a log file
The regular expression should contain a named group like '(?P<value>.+)'.
The result will be a key/value like {name: value}
A second named group '(?P<key>.+)' can be used to augment the key name
like {name.key_result: value}
A 'generic' regex will be used when no regexes are defined on a make_stat
step.
"""
_name = "runbot.build.stat.regex"
_description = "Statistics regex"
_order = 'sequence,id'
name = fields.Char("Key Name")
regex = fields.Char("Regular Expression")
description = fields.Char("Description")
generic = fields.Boolean('Generic', help='Executed when no regex is defined on the step', default=True)
config_step_ids = fields.Many2many('runbot.build.config.step', string='Config Steps')
sequence = fields.Integer('Sequence')
@api.constrains("name", "regex")
def _check_regex(self):
for rec in self:
try:
r = re.compile(rec.regex)
except re.error as e:
raise ValidationError("Unable to compile regular expression: %s" % e)
# verify that a named group exist in the pattern
if not re.search(VALUE_PATTERN, r.pattern):
raise ValidationError(
"The regular expresion should contain the name group pattern 'value' e.g: '(?P<value>.+)'"
)
def _find_in_file(self, file_path):
""" Search file regexes and write stats
returns a dict of key:values
"""
if not os.path.exists(file_path):
return {}
stats_matches = {}
with open(file_path, "r") as log_file:
data = log_file.read()
for build_stat_regex in self:
current_stat_matches = {}
for match in re.finditer(build_stat_regex.regex, data):
group_dict = match.groupdict()
try:
value = float(group_dict.get("value"))
except ValueError:
_logger.warning(
'The matched value (%s) of "%s" cannot be converted into float',
group_dict.get("value"), build_stat_regex.regex
)
continue
current_stat_matches[group_dict.get('key', 'value')] = value
stats_matches[build_stat_regex.name] = current_stat_matches
return stats_matches
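A standalone sketch of the value/key named-group convention described in the docstring above; the regex and log lines are illustrative only:

```
import re

STAT_REGEX = r'odoo\.tests\.runner: (?P<key>\w+) took (?P<value>\d+\.\d+)s'

log = """\
odoo.tests.runner: test_account took 12.5s
odoo.tests.runner: test_stock took 3.25s
"""

stats = {}
for match in re.finditer(STAT_REGEX, log):
    groups = match.groupdict()
    stats[groups.get('key', 'value')] = float(groups['value'])

print(stats)  # {'test_account': 12.5, 'test_stock': 3.25}
```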


@ -1,243 +0,0 @@
import time
import logging
import datetime
import subprocess
from collections import defaultdict
from odoo import models, fields, api, tools
from ..common import dt2time, s2human_long
_logger = logging.getLogger(__name__)
class Bundle(models.Model):
_name = 'runbot.bundle'
_description = "Bundle"
name = fields.Char('Bundle name', required=True, help="Name of the base branch")
project_id = fields.Many2one('runbot.project', required=True, index=True)
branch_ids = fields.One2many('runbot.branch', 'bundle_id')
# custom behaviour
no_build = fields.Boolean('No build')
no_auto_run = fields.Boolean('No run')
build_all = fields.Boolean('Force all triggers')
modules = fields.Char("Modules to install", help="Comma-separated list of modules to install and test.")
batch_ids = fields.One2many('runbot.batch', 'bundle_id')
last_batch = fields.Many2one('runbot.batch', index=True, domain=lambda self: [('category_id', '=', self.env.ref('runbot.default_category').id)])
last_batchs = fields.Many2many('runbot.batch', string='Last batches', compute='_compute_last_batchs')
last_done_batch = fields.Many2many('runbot.batch', string='Last done batch', compute='_compute_last_done_batch')
sticky = fields.Boolean('Sticky', compute='_compute_sticky', store=True, index=True)
is_base = fields.Boolean('Is base', index=True)
defined_base_id = fields.Many2one('runbot.bundle', 'Forced base bundle', domain="[('project_id', '=', project_id), ('is_base', '=', True)]")
base_id = fields.Many2one('runbot.bundle', 'Base bundle', compute='_compute_base_id', store=True)
to_upgrade = fields.Boolean('To upgrade', compute='_compute_to_upgrade', store=True, index=False)
version_id = fields.Many2one('runbot.version', 'Version', compute='_compute_version_id', store=True, recursive=True)
version_number = fields.Char(related='version_id.number', store=True, index=True)
previous_major_version_base_id = fields.Many2one('runbot.bundle', 'Previous base bundle', compute='_compute_relations_base_id')
intermediate_version_base_ids = fields.Many2many('runbot.bundle', string='Intermediate base bundles', compute='_compute_relations_base_id')
priority = fields.Boolean('Build priority', default=False)
# Custom parameters
trigger_custom_ids = fields.One2many('runbot.bundle.trigger.custom', 'bundle_id')
host_id = fields.Many2one('runbot.host', compute="_compute_host_id", store=True)
dockerfile_id = fields.Many2one('runbot.dockerfile', index=True, help="Use a custom Dockerfile")
commit_limit = fields.Integer("Commit limit")
file_limit = fields.Integer("File limit")
@api.depends('name')
def _compute_host_id(self):
assigned_only = None
runbots = {}
for bundle in self:
bundle.host_id = False
elems = (bundle.name or '').split('-')
for elem in elems:
if elem.startswith('runbot'):
if elem.replace('runbot', '') == '_x':
if assigned_only is None:
assigned_only = self.env['runbot.host'].search([('assigned_only', '=', True)], limit=1)
bundle.host_id = assigned_only or False
elif elem.replace('runbot', '').isdigit():
if elem not in runbots:
runbots[elem] = self.env['runbot.host'].search([('name', 'like', '%s%%' % elem)], limit=1)
bundle.host_id = runbots[elem] or False
@api.depends('sticky')
def _compute_make_stats(self):
for bundle in self:
bundle.make_stats = bundle.sticky
@api.depends('is_base')
def _compute_sticky(self):
for bundle in self:
bundle.sticky = bundle.is_base
@api.depends('is_base')
def _compute_to_upgrade(self):
for bundle in self:
bundle.to_upgrade = bundle.is_base
@api.depends('name', 'is_base', 'defined_base_id', 'base_id.is_base', 'project_id')
def _compute_base_id(self):
for bundle in self:
if bundle.is_base:
bundle.base_id = bundle
continue
if bundle.defined_base_id:
bundle.base_id = bundle.defined_base_id
continue
project_id = bundle.project_id.id
master_base = False
fallback = False
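# pick the base bundle whose name prefixes the bundle name (e.g. a hypothetical '16.0-fix'
# would match the '16.0' base); otherwise fall back to the master base or the newest base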
for bid, bname in self._get_base_ids(project_id):
if bundle.name.startswith('%s-' % bname):
bundle.base_id = self.browse(bid)
break
elif bname == 'master':
master_base = self.browse(bid)
elif not fallback or fallback.id < bid:
fallback = self.browse(bid)
else:
bundle.base_id = master_base or fallback
@tools.ormcache('project_id')
def _get_base_ids(self, project_id):
return [(b.id, b.name) for b in self.search([('is_base', '=', True), ('project_id', '=', project_id)])]
@api.depends('is_base', 'base_id.version_id')
def _compute_version_id(self):
for bundle in self.sorted(key='is_base', reverse=True):
if not bundle.is_base:
bundle.version_id = bundle.base_id.version_id
continue
bundle.version_id = self.env['runbot.version']._get(bundle.name)
@api.depends('version_id')
def _compute_relations_base_id(self):
for bundle in self:
bundle = bundle.with_context(project_id=bundle.project_id.id)
bundle.previous_major_version_base_id = bundle.version_id.previous_major_version_id.base_bundle_id
bundle.intermediate_version_base_ids = bundle.version_id.intermediate_version_ids.mapped('base_bundle_id')
@api.depends_context('category_id')
def _compute_last_batchs(self):
batch_ids = defaultdict(list)
if self.ids:
category_id = self.env.context.get('category_id', self.env['ir.model.data']._xmlid_to_res_id('runbot.default_category'))
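# window function: rank the batches of each bundle by descending id and keep the 4 most
# recent ones for the requested category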
self.env.cr.execute("""
SELECT
id
FROM (
SELECT
batch.id AS id,
row_number() OVER (PARTITION BY batch.bundle_id order by batch.id desc) AS row
FROM
runbot_bundle bundle INNER JOIN runbot_batch batch ON bundle.id=batch.bundle_id
WHERE
bundle.id in %s
AND batch.category_id = %s
) AS bundle_batch
WHERE
row <= 4
ORDER BY row, id desc
""", [tuple(self.ids), category_id]
)
batchs = self.env['runbot.batch'].browse([r[0] for r in self.env.cr.fetchall()])
for batch in batchs:
batch_ids[batch.bundle_id.id].append(batch.id)
for bundle in self:
bundle.last_batchs = [(6, 0, batch_ids[bundle.id])] if bundle.id in batch_ids else False
@api.depends_context('category_id')
def _compute_last_done_batch(self):
if self:
# self.env['runbot.batch'].flush()
for bundle in self:
bundle.last_done_batch = False
category_id = self.env.context.get('category_id', self.env['ir.model.data']._xmlid_to_res_id('runbot.default_category'))
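# same ranking as in _compute_last_batchs, but only the most recent 'done' batch of
# each bundle is kept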
self.env.cr.execute("""
SELECT
id
FROM (
SELECT
batch.id AS id,
row_number() OVER (PARTITION BY batch.bundle_id order by batch.id desc) AS row
FROM
runbot_bundle bundle INNER JOIN runbot_batch batch ON bundle.id=batch.bundle_id
WHERE
bundle.id in %s
AND batch.state = 'done'
AND batch.category_id = %s
) AS bundle_batch
WHERE
row = 1
ORDER BY row, id desc
""", [tuple(self.ids), category_id]
)
batchs = self.env['runbot.batch'].browse([r[0] for r in self.env.cr.fetchall()])
for batch in batchs:
batch.bundle_id.last_done_batch = batch
def _url(self):
self.ensure_one()
return "/runbot/bundle/%s" % self.id
def create(self, values_list):
res = super().create(values_list)
if res.is_base:
model = self.browse()
model._get_base_ids.clear_cache(model)
return res
def write(self, values):
super().write(values)
if 'is_base' in values:
model = self.browse()
model._get_base_ids.clear_cache(model)
def _force(self, category_id=None):
self.ensure_one()
if self.last_batch.state == 'preparing':
return
values = {
'last_update': fields.Datetime.now(),
'bundle_id': self.id,
'state': 'preparing',
}
if category_id:
values['category_id'] = category_id
new = self.env['runbot.batch'].create(values)
self.last_batch = new
return new
def consistency_warning(self):
if self.defined_base_id:
return [('info', 'This bundle has a forced base: %s' % self.defined_base_id.name)]
warnings = []
if not self.base_id:
warnings.append(('warning', 'No base defined on this bundle'))
else:
for branch in self.branch_ids:
if branch.is_pr and branch.target_branch_name != self.base_id.name:
if branch.target_branch_name.startswith(self.base_id.name):
warnings.append(('info', 'PR %s targeting a non base branch: %s' % (branch.dname, branch.target_branch_name)))
else:
warnings.append(('warning' if branch.alive else 'info', 'PR %s targeting wrong version: %s (expecting %s)' % (branch.dname, branch.target_branch_name, self.base_id.name)))
elif not branch.is_pr and not branch.name.startswith(self.base_id.name) and not self.defined_base_id:
warnings.append(('warning', 'Branch %s not starting with version name (%s)' % (branch.dname, self.base_id.name)))
return warnings
def branch_groups(self):
self.branch_ids.sorted(key=lambda b: (b.remote_id.repo_id.sequence, b.remote_id.repo_id.id, b.is_pr))
branch_groups = {repo: [] for repo in self.branch_ids.mapped('remote_id.repo_id').sorted('sequence')}
for branch in self.branch_ids.sorted(key=lambda b: (b.is_pr)):
branch_groups[branch.remote_id.repo_id].append(branch)
return branch_groups


@ -1,41 +0,0 @@
import ast
import re
from odoo import models, fields, api
from odoo.exceptions import ValidationError
class Codeowner(models.Model):
_name = 'runbot.codeowner'
_description = "Notify github teams based on filenames regex"
_inherit = "mail.thread"
project_id = fields.Many2one('runbot.project', required=True)
regex = fields.Char('Regular Expression', help='Regex to match full file paths', required=True, tracking=True)
github_teams = fields.Char(help='Comma separated list of github teams to notify', required=True, tracking=True)
team_id = fields.Many2one('runbot.team', help='Not mandatory runbot team')
version_domain = fields.Char('Version Domain', help='Codeowner only applies to the filtered versions')
@api.constrains('regex')
def _validate_regex(self):
for rec in self:
try:
r = re.compile(rec.regex)
except re.error as e:
raise ValidationError("Unable to compile regular expression: %s" % e)
@api.constrains('version_domain')
def _validate_version_domain(self):
for rec in self:
try:
self._match_version(self.env.ref('runbot.bundle_master').version_id)
except Exception as e:
raise ValidationError("Unable to validate version_domain: %s" % e)
def _get_version_domain(self):
""" Helper to get the evaluated version domain """
self.ensure_one()
return ast.literal_eval(self.version_domain) if self.version_domain else []
def _match_version(self, version):
return version.filtered_domain(self._get_version_domain())


@ -1,238 +0,0 @@
import subprocess
from ..common import os, RunbotException
import glob
import shutil
from odoo import models, fields, api, registry
import logging
_logger = logging.getLogger(__name__)
class Commit(models.Model):
_name = 'runbot.commit'
_description = "Commit"
_sql_constraints = [
(
"commit_unique",
"unique (name, repo_id, rebase_on_id)",
"Commit must be unique to ensure correct duplicate matching",
)
]
name = fields.Char('SHA')
repo_id = fields.Many2one('runbot.repo', string='Repo group')
date = fields.Datetime('Commit date')
author = fields.Char('Author')
author_email = fields.Char('Author Email')
committer = fields.Char('Committer')
committer_email = fields.Char('Committer Email')
subject = fields.Text('Subject')
dname = fields.Char('Display name', compute='_compute_dname')
rebase_on_id = fields.Many2one('runbot.commit', 'Rebase on commit')
def _get(self, name, repo_id, vals=None, rebase_on_id=False):
commit = self.search([('name', '=', name), ('repo_id', '=', repo_id), ('rebase_on_id', '=', rebase_on_id)])
if not commit:
commit = self.env['runbot.commit'].create({**(vals or {}), 'name': name, 'repo_id': repo_id, 'rebase_on_id': rebase_on_id})
return commit
def _rebase_on(self, commit):
if self == commit:
return self
return self._get(self.name, self.repo_id.id, self.read()[0], commit.id)
def _get_available_modules(self):
for manifest_file_name in self.repo_id.manifest_files.split(','): # '__manifest__.py' '__openerp__.py'
for addons_path in (self.repo_id.addons_paths or '').split(','): # '' 'addons' 'odoo/addons'
sep = os.path.join(addons_path, '*')
for manifest_path in glob.glob(self._source_path(sep, manifest_file_name)):
module = os.path.basename(os.path.dirname(manifest_path))
yield (addons_path, module, manifest_file_name)
def export(self, build):
"""Export a git repo into a sources"""
# TODO add automated tests
self.ensure_one()
if not self.env['runbot.commit.export'].search([('build_id', '=', build.id), ('commit_id', '=', self.id)]):
self.env['runbot.commit.export'].create({'commit_id': self.id, 'build_id': build.id})
export_path = self._source_path()
if os.path.isdir(export_path):
_logger.info('git export: exporting to %s (already exists)', export_path)
return export_path
_logger.info('git export: exporting to %s (new)', export_path)
os.makedirs(export_path)
self.repo_id._fetch(self.name)
export_sha = self.name
if self.rebase_on_id:
export_sha = self.rebase_on_id.name
self.rebase_on_id.repo_id._fetch(export_sha)
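# materialize the commit on disk without a working copy: pipe 'git archive' into
# 'tar -x', using the commit date as file mtime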
p1 = subprocess.Popen(['git', '--git-dir=%s' % self.repo_id.path, 'archive', export_sha], stderr=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['tar', '--mtime', self.date.strftime('%Y-%m-%d %H:%M:%S'), '-xC', export_path], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
(_, err) = p2.communicate()
p1.poll() # fill the returncode
if p1.returncode:
_logger.info("git export: removing corrupted export %r", export_path)
shutil.rmtree(export_path)
raise RunbotException("Git archive failed for %s with error code %s. (%s)" % (self.name, p1.returncode, p1.stderr.read().decode()))
if err:
_logger.info("git export: removing corrupted export %r", export_path)
shutil.rmtree(export_path)
raise RunbotException("Export for %s failed. (%s)" % (self.name, err))
if self.rebase_on_id:
# we could be smart here and detect if merge_base == commit, in which case checking out base_commit is enough. Since we don't have this info
# and we are exporting in a custom folder anyway, let's
_logger.info('Applying patch for %s', self.name)
p1 = subprocess.Popen(['git', '--git-dir=%s' % self.repo_id.path, 'diff', '%s...%s' % (export_sha, self.name)], stderr=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['patch', '-p0', '-d', export_path], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
(message, err) = p2.communicate()
p1.poll()
if err:
shutil.rmtree(export_path)
raise RunbotException("Apply patch failed for %s...%s. (%s)" % (export_sha, self.name, err))
if p1.returncode or p2.returncode:
shutil.rmtree(export_path)
raise RunbotException("Apply patch failed for %s...%s with error code %s+%s. (%s)" % (export_sha, self.name, p1.returncode, p2.returncode, message))
# migration scripts link if necessary
icp = self.env['ir.config_parameter']
ln_param = icp.get_param('runbot_migration_ln', default='')
migration_repo_id = int(icp.get_param('runbot_migration_repo_id', default=0))
if ln_param and migration_repo_id and self.repo_id.server_files:
scripts_dir = self.env['runbot.repo'].browse(migration_repo_id).name
try:
os.symlink('/data/build/%s' % scripts_dir, self._source_path(ln_param))
except FileNotFoundError:
_logger.warning('Impossible to create migration symlink')
return export_path
def read_source(self, file, mode='r'):
file_path = self._source_path(file)
try:
with open(file_path, mode) as f:
return f.read()
except:
return False
def _source_path(self, *path):
export_name = self.name
if self.rebase_on_id:
export_name = '%s_%s' % (self.name, self.rebase_on_id.name)
return os.path.join(self.env['runbot.runbot']._root(), 'sources', self.repo_id.name, export_name, *path)
@api.depends('name', 'repo_id.name')
def _compute_dname(self):
for commit in self:
commit.dname = '%s:%s' % (commit.repo_id.name, commit.name[:8])
def _github_status(self, build, context, state, target_url, description=None):
self.ensure_one()
Status = self.env['runbot.commit.status']
last_status = Status.search([('commit_id', '=', self.id), ('context', '=', context)], order='id desc', limit=1)
if last_status and last_status.state == state:
_logger.info('Skipping already sent status %s:%s for %s', context, state, self.name)
return
last_status = Status.create({
'build_id': build.id if build else False,
'commit_id': self.id,
'context': context,
'state': state,
'target_url': target_url,
'description': description or context,
'to_process': True,
})
class CommitLink(models.Model):
_name = 'runbot.commit.link'
_description = "Build commit"
commit_id = fields.Many2one('runbot.commit', 'Commit', required=True, index=True)
# Link info
match_type = fields.Selection([('new', 'New head of branch'), ('head', 'Head of branch'), ('base_head', 'Found on base branch'), ('base_match', 'Found on base branch')]) # HEAD, DEFAULT
branch_id = fields.Many2one('runbot.branch', string='Found in branch') # Shouldn't be use for anything else than display
base_commit_id = fields.Many2one('runbot.commit', 'Base head commit', index=True)
merge_base_commit_id = fields.Many2one('runbot.commit', 'Merge Base commit', index=True)
base_behind = fields.Integer('# commits behind base')
base_ahead = fields.Integer('# commits ahead base')
file_changed = fields.Integer('# file changed')
diff_add = fields.Integer('# line added')
diff_remove = fields.Integer('# line removed')
class CommitStatus(models.Model):
_name = 'runbot.commit.status'
_description = 'Commit status'
_order = 'id desc'
commit_id = fields.Many2one('runbot.commit', string='Commit', required=True, index=True)
context = fields.Char('Context', required=True)
state = fields.Char('State', required=True, copy=True)
build_id = fields.Many2one('runbot.build', string='Build', index=True)
target_url = fields.Char('Url')
description = fields.Char('Description')
sent_date = fields.Datetime('Sent Date')
to_process = fields.Boolean('Status was not processed yet', index=True)
def _send_to_process(self):
commits_status = self.search([('to_process', '=', True)], order='create_date DESC, id DESC')
if commits_status:
_logger.info('Sending %s commit status', len(commits_status))
commits_status._send()
def _send(self):
session_cache = {}
processed = set()
for commit_status in self.sorted(lambda cs: (cs.create_date, cs.id), reverse=True): # ensure most recent are processed first
commit_status.to_process = False
# only send the last status for each commit+context
key = (commit_status.context, commit_status.commit_id.name)
if key not in processed:
processed.add(key)
status = {
'context': commit_status.context,
'state': commit_status.state,
'target_url': commit_status.target_url,
'description': commit_status.description,
}
for remote in commit_status.commit_id.repo_id.remote_ids.filtered('send_status'):
if not remote.token:
_logger.warning('No token on remote %s, skipping status', remote.mapped("name"))
else:
if remote.token not in session_cache:
session_cache[remote.token] = remote._make_github_session()
session = session_cache[remote.token]
_logger.info(
"github updating %s status %s to %s in repo %s",
status['context'], commit_status.commit_id.name, status['state'], remote.name)
remote._github('/repos/:owner/:repo/statuses/%s' % commit_status.commit_id.name,
status,
ignore_errors=True,
session=session
)
commit_status.sent_date = fields.Datetime.now()
else:
_logger.info('Skipping outdated status for %s %s', commit_status.context, commit_status.commit_id.name)
class CommitExport(models.Model):
_name = 'runbot.commit.export'
_description = 'Commit export'
build_id = fields.Many2one('runbot.build', index=True)
commit_id = fields.Many2one('runbot.commit')
host = fields.Char(related='build_id.host', store=True)


@ -1,97 +0,0 @@
import json
from odoo import models, fields, api
from ..fields import JsonDictField
class BundleTriggerCustomization(models.Model):
_name = 'runbot.bundle.trigger.custom'
_description = 'Custom trigger'
trigger_id = fields.Many2one('runbot.trigger')
start_mode = fields.Selection([('disabled', 'Disabled'), ('auto', 'Auto'), ('force', 'Force')], required=True, default='auto')
bundle_id = fields.Many2one('runbot.bundle')
config_id = fields.Many2one('runbot.build.config')
extra_params = fields.Char("Custom parameters")
config_data = JsonDictField("Config data")
_sql_constraints = [
(
"bundle_custom_trigger_unique",
"unique (bundle_id, trigger_id)",
"Only one custom trigger per trigger per bundle is allowed",
)
]
class CustomTriggerWizard(models.TransientModel):
_name = 'runbot.trigger.custom.wizard'
_description = 'Custom trigger Wizard'
bundle_id = fields.Many2one('runbot.bundle', "Bundle")
project_id = fields.Many2one(related='bundle_id.project_id', string='Project')
trigger_id = fields.Many2one('runbot.trigger', domain="[('project_id', '=', project_id)]")
config_id = fields.Many2one('runbot.build.config', string="Config id", default=lambda self: self.env.ref('runbot.runbot_build_config_custom_multi'))
config_data = JsonDictField("Config data")
number_build = fields.Integer('Number builds for config multi', default=10)
child_extra_params = fields.Char('Extra params for children', default='--test-tags /module.test_method')
child_dump_url = fields.Char('Dump url for children')
child_config_id = fields.Many2one('runbot.build.config', 'Config for children', default=lambda self: self.env.ref('runbot.runbot_build_config_restore_and_test'))
warnings = fields.Text('Warnings', readonly=True)
@api.onchange('child_extra_params', 'child_dump_url', 'child_config_id', 'number_build', 'config_id', 'trigger_id')
def _onchange_warnings(self):
for wizard in self:
_warnings = []
if wizard._get_existing_trigger():
_warnings.append(f'A custom trigger already exists for trigger {wizard.trigger_id.name} and will be unlinked')
if wizard.child_dump_url or wizard.child_extra_params or wizard.child_config_id or wizard.number_build:
if not any(step.job_type == 'create_build' for step in wizard.config_id.step_ids()):
_warnings.append('Some multi build params are given but the config has no create step')
if wizard.child_dump_url and not any(step.job_type == 'restore' for step in wizard.child_config_id.step_ids()):
_warnings.append('A dump_url is defined but child config has no restore step')
if not wizard.child_dump_url and any(step.job_type == 'restore' for step in wizard.child_config_id.step_ids()):
_warnings.append('Child config has a restore step but no dump_url is given')
if not wizard.trigger_id.manual:
_warnings.append("This custom trigger will replace an existing non manual trigger. The ci won't be sent anymore")
wizard.warnings = '\n'.join(_warnings)
@api.onchange('number_build', 'child_extra_params', 'child_dump_url', 'child_config_id')
def _onchange_config_data(self):
for wizard in self:
wizard.config_data = self._get_config_data()
def _get_config_data(self):
config_data = {}
if self.number_build:
config_data['number_build'] = self.number_build
child_data = {}
if self.child_extra_params:
child_data['extra_params'] = self.child_extra_params
if self.child_dump_url:
child_data['config_data'] = {'dump_url': self.child_dump_url}
if self.child_config_id:
child_data['config_id'] = self.child_config_id.id
if child_data:
config_data['child_data'] = child_data
return config_data
def _get_existing_trigger(self):
return self.env['runbot.bundle.trigger.custom'].search([('bundle_id', '=', self.bundle_id.id), ('trigger_id', '=', self.trigger_id.id)])
def submit(self):
self.ensure_one()
self._get_existing_trigger().unlink()
self.env['runbot.bundle.trigger.custom'].create({
'bundle_id': self.bundle_id.id,
'trigger_id': self.trigger_id.id,
'config_id': self.config_id.id,
'config_data': self.config_data,
})


@ -1,23 +0,0 @@
import logging
from odoo import models, fields, api
_logger = logging.getLogger(__name__)
class Database(models.Model):
_name = 'runbot.database'
_description = "Database"
name = fields.Char('Host name', required=True)
build_id = fields.Many2one('runbot.build', index=True, required=True)
db_suffix = fields.Char(compute='_compute_db_suffix')
def _compute_db_suffix(self):
for record in self:
record.db_suffix = record.name.replace('%s-' % record.build_id.dest, '')
@api.model_create_single
def create(self, values):
res = self.search([('name', '=', values['name']), ('build_id', '=', values['build_id'])])
if res:
return res
return super().create(values)


@ -1,55 +0,0 @@
import logging
import re
from odoo import models, fields, api
from odoo.addons.base.models.qweb import QWebException
_logger = logging.getLogger(__name__)
class Dockerfile(models.Model):
_name = 'runbot.dockerfile'
_inherit = [ 'mail.thread' ]
_description = "Dockerfile"
name = fields.Char('Dockerfile name', required=True, help="Name of Dockerfile")
image_tag = fields.Char(compute='_compute_image_tag', store=True)
template_id = fields.Many2one('ir.ui.view', string='Docker Template', domain=[('type', '=', 'qweb')], context={'default_type': 'qweb', 'default_arch_base': '<t></t>'})
arch_base = fields.Text(related='template_id.arch_base', readonly=False)
dockerfile = fields.Text(compute='_compute_dockerfile', tracking=True)
to_build = fields.Boolean('To Build', help='Build Dockerfile. Check this when the Dockerfile is ready.', default=False)
version_ids = fields.One2many('runbot.version', 'dockerfile_id', string='Versions')
description = fields.Text('Description')
view_ids = fields.Many2many('ir.ui.view', compute='_compute_view_ids')
project_ids = fields.One2many('runbot.project', 'dockerfile_id', string='Default for Projects')
bundle_ids = fields.One2many('runbot.bundle', 'dockerfile_id', string='Used in Bundles')
_sql_constraints = [('runbot_dockerfile_name_unique', 'unique(name)', 'A Dockerfile with this name already exists')]
@api.returns('self', lambda value: value.id)
def copy(self, default=None):
copied_record = super().copy(default={'name': '%s (copy)' % self.name, 'to_build': False})
copied_record.template_id = self.template_id.copy()
copied_record.template_id.name = '%s (copy)' % copied_record.template_id.name
copied_record.template_id.key = '%s (copy)' % copied_record.template_id.key
return copied_record
@api.depends('template_id.arch_base')
def _compute_dockerfile(self):
for rec in self:
try:
res = rec.template_id._render() if rec.template_id else ''
rec.dockerfile = re.sub(r'^\s*$', '', res, flags=re.M).strip()
except QWebException:
rec.dockerfile = ''
@api.depends('name')
def _compute_image_tag(self):
for rec in self:
if rec.name:
rec.image_tag = 'odoo:%s' % re.sub(r'[ /:\(\)\[\]]', '', rec.name)
@api.depends('template_id')
def _compute_view_ids(self):
for rec in self:
keys = re.findall(r'<t.+t-call="(.+)".+', rec.arch_base or '')
rec.view_ids = self.env['ir.ui.view'].search([('type', '=', 'qweb'), ('key', 'in', keys)]).ids


@ -1,230 +0,0 @@
# -*- coding: utf-8 -*-
import logging
from collections import defaultdict
from ..common import pseudo_markdown
from odoo import models, fields, tools
from odoo.exceptions import UserError
_logger = logging.getLogger(__name__)
TYPES = [(t, t.capitalize()) for t in 'client server runbot subbuild link markdown'.split()]
class runbot_event(models.Model):
_inherit = "ir.logging"
_order = 'id'
build_id = fields.Many2one('runbot.build', 'Build', index=True, ondelete='cascade')
active_step_id = fields.Many2one('runbot.build.config.step', 'Active step', index=True)
type = fields.Selection(selection_add=TYPES, string='Type', required=True, index=True, ondelete={t[0]: 'cascade' for t in TYPES})
error_id = fields.Many2one('runbot.build.error', compute='_compute_known_error') # remember to never store this field
dbname = fields.Char(string='Database Name', index=False)
def init(self):
parent_class = super(runbot_event, self)
if hasattr(parent_class, 'init'):
parent_class.init()
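# PL/pgSQL trigger on ir_logging: infers build_id from the logging database name,
# decrements the per-build log counter (truncating logs once the limit is reached) and
# downgrades the build result to 'warn'/'ko' on WARNING/ERROR records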
self._cr.execute("""
CREATE OR REPLACE FUNCTION runbot_set_logging_build() RETURNS TRIGGER AS $runbot_set_logging_build$
BEGIN
IF (NEW.build_id IS NULL AND NEW.dbname IS NOT NULL AND NEW.dbname != current_database()) THEN
NEW.build_id := split_part(NEW.dbname, '-', 1)::integer;
SELECT active_step INTO NEW.active_step_id FROM runbot_build WHERE runbot_build.id = NEW.build_id;
END IF;
IF (NEW.build_id IS NOT NULL) AND (NEW.type = 'server') THEN
DECLARE
counter INTEGER;
BEGIN
UPDATE runbot_build b
SET log_counter = log_counter - 1
WHERE b.id = NEW.build_id;
SELECT log_counter
INTO counter
FROM runbot_build
WHERE runbot_build.id = NEW.build_id;
IF (counter = 0) THEN
NEW.message = 'Log limit reached (full logs are still available in the log file)';
NEW.level = 'SEPARATOR';
NEW.func = '';
NEW.type = 'runbot';
RETURN NEW;
ELSIF (counter < 0) THEN
RETURN NULL;
END IF;
END;
END IF;
IF (NEW.build_id IS NOT NULL AND UPPER(NEW.level) NOT IN ('INFO', 'SEPARATOR')) THEN
BEGIN
UPDATE runbot_build b
SET triggered_result = CASE WHEN UPPER(NEW.level) = 'WARNING' THEN 'warn'
ELSE 'ko'
END
WHERE b.id = NEW.build_id;
END;
END IF;
RETURN NEW;
END;
$runbot_set_logging_build$ language plpgsql;
DROP TRIGGER IF EXISTS runbot_new_logging ON ir_logging;
CREATE TRIGGER runbot_new_logging BEFORE INSERT ON ir_logging
FOR EACH ROW EXECUTE PROCEDURE runbot_set_logging_build();
""")
def _markdown(self):
""" Apply pseudo markdown parser for message.
"""
self.ensure_one()
return pseudo_markdown(self.message)
def _compute_known_error(self):
cleaning_regexes = self.env['runbot.error.regex'].search([('re_type', '=', 'cleaning')])
fingerprints = defaultdict(list)
for ir_logging in self:
ir_logging.error_id = False
if ir_logging.level == 'ERROR' and ir_logging.type == 'server':
fingerprints[self.env['runbot.build.error']._digest(cleaning_regexes.r_sub('%', ir_logging.message))].append(ir_logging)
for build_error in self.env['runbot.build.error'].search([('fingerprint', 'in', list(fingerprints.keys()))]):
for ir_logging in fingerprints[build_error.fingerprint]:
ir_logging.error_id = build_error.id
class RunbotErrorLog(models.Model):
_name = 'runbot.error.log'
_description = "Error log"
_auto = False
_order = 'id desc'
id = fields.Many2one('ir.logging', string='Log', readonly=True)
name = fields.Char(string='Module', readonly=True)
message = fields.Text(string='Message', readonly=True)
summary = fields.Text(string='Summary', readonly=True)
log_type = fields.Char(string='Type', readonly=True)
log_create_date = fields.Datetime(string='Log create date', readonly=True)
func = fields.Char(string='Method', readonly=True)
path = fields.Char(string='Path', readonly=True)
line = fields.Char(string='Line', readonly=True)
build_id = fields.Many2one('runbot.build', string='Build', readonly=True)
dest = fields.Char(string='Build dest', readonly=True)
local_state = fields.Char(string='Local state', readonly=True)
local_result = fields.Char(string='Local result', readonly=True)
global_state = fields.Char(string='Global state', readonly=True)
global_result = fields.Char(string='Global result', readonly=True)
bu_create_date = fields.Datetime(string='Build create date', readonly=True)
host = fields.Char(string='Host', readonly=True)
parent_id = fields.Many2one('runbot.build', string='Parent build', readonly=True)
top_parent_id = fields.Many2one('runbot.build', string="Top parent", readonly=True)
bundle_ids = fields.Many2many('runbot.bundle', compute='_compute_bundle_id', search='_search_bundle', string='Bundle', readonly=True)
sticky = fields.Boolean(string='Bundle Sticky', compute='_compute_bundle_id', search='_search_sticky', readonly=True)
build_url = fields.Char(compute='_compute_build_url', readonly=True)
def _compute_repo_short_name(self):
for l in self:
l.repo_short_name = '%s/%s' % (l.repo_id.owner, l.repo_id.repo_name)
def _compute_build_url(self):
for l in self:
l.build_url = '/runbot/build/%s' % l.build_id.id
def action_goto_build(self):
self.ensure_one()
return {
"type": "ir.actions.act_url",
"url": "runbot/build/%s" % self.build_id.id,
"target": "new",
}
def _compute_bundle_id(self):
slots = self.env['runbot.batch.slot'].search([('build_id', 'in', self.mapped('top_parent_id').ids)])
for l in self:
l.bundle_ids = slots.filtered(lambda rec: rec.build_id.id == l.top_parent_id.id).batch_id.bundle_id
l.sticky = any(l.bundle_ids.filtered('sticky'))
def _search_bundle(self, operator, value):
query = """
SELECT id
FROM runbot_build as build
WHERE EXISTS(
SELECT * FROM runbot_batch_slot as slot
JOIN
runbot_batch batch ON batch.id = slot.batch_id
JOIN
runbot_bundle bundle ON bundle.id = batch.bundle_id
%s
"""
if operator in ('ilike', '=', 'in'):
value = '%%%s%%' % value if operator == 'ilike' else value
col_name = 'id' if operator == 'in' else 'name'
where_condition = "WHERE slot.build_id = build.id AND bundle.%s %s any(%%s));" if operator == 'in' else "WHERE slot.build_id = build.id AND bundle.%s %s %%s);"
operator = '=' if operator == 'in' else operator
where_condition = where_condition % (col_name, operator)
query = query % where_condition
self.env.cr.execute(query, (value,))
build_ids = [t[0] for t in self.env.cr.fetchall()]
return [('top_parent_id', 'in', build_ids)]
raise UserError('Operator `%s` not implemented for bundle search' % operator)
def search_count(self, args):
return 4242 # hack to speed up the view
def _search_sticky(self, operator, value):
if operator == '=':
self.env.cr.execute("""
SELECT id
FROM runbot_build as build
WHERE EXISTS(
SELECT * FROM runbot_batch_slot as slot
JOIN
runbot_batch batch ON batch.id = slot.batch_id
JOIN
runbot_bundle bundle ON bundle.id = batch.bundle_id
WHERE
bundle.sticky = %s AND slot.build_id = build.id);
""", (value,))
build_ids = [t[0] for t in self.env.cr.fetchall()]
return [('top_parent_id', 'in', build_ids)]
return []
def _parse_logs(self):
BuildError = self.env['runbot.build.error']
return BuildError._parse_logs(self)
def init(self):
""" Create an SQL view for ir.logging """
tools.drop_view_if_exists(self._cr, 'runbot_error_log')
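# SQL view exposing ERROR-level ir_logging records joined to their runbot_build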
self._cr.execute(""" CREATE VIEW runbot_error_log AS (
SELECT
l.id AS id,
l.name AS name,
l.message AS message,
left(l.message, 50) as summary,
l.type AS log_type,
l.create_date AS log_create_date,
l.func AS func,
l.path AS path,
l.line AS line,
bu.id AS build_id,
bu.dest AS dest,
bu.local_state AS local_state,
bu.local_result AS local_result,
bu.global_state AS global_state,
bu.global_result AS global_result,
bu.create_date AS bu_create_date,
bu.host AS host,
bu.parent_id AS parent_id,
split_part(bu.parent_path, '/',1)::int AS top_parent_id
FROM
ir_logging AS l
JOIN
runbot_build bu ON l.build_id = bu.id
WHERE
l.level = 'ERROR'
)""")


@ -1,146 +0,0 @@
import logging
import getpass
from odoo import models, fields, api
from odoo.tools import config
from ..common import fqdn, local_pgadmin_cursor, os
from ..container import docker_build
_logger = logging.getLogger(__name__)
forced_host_name = None
class Host(models.Model):
_name = 'runbot.host'
_description = "Host"
_order = 'id'
_inherit = 'mail.thread'
name = fields.Char('Host name', required=True)
disp_name = fields.Char('Display name')
active = fields.Boolean('Active', default=True, tracking=True)
last_start_loop = fields.Datetime('Last start')
last_end_loop = fields.Datetime('Last end')
last_success = fields.Datetime('Last success')
assigned_only = fields.Boolean('Only accept assigned build', default=False, tracking=True)
nb_worker = fields.Integer(
'Maximum number of parallel builds',
default=lambda self: self.env['ir.config_parameter'].sudo().get_param('runbot.runbot_workers', default=2),
tracking=True
)
nb_testing = fields.Integer(compute='_compute_nb')
nb_running = fields.Integer(compute='_compute_nb')
last_exception = fields.Char('Last exception')
exception_count = fields.Integer('Exception count')
psql_conn_count = fields.Integer('SQL connections count', default=0)
def _compute_nb(self):
groups = self.env['runbot.build'].read_group(
[('host', 'in', self.mapped('name')), ('local_state', 'in', ('testing', 'running'))],
['host', 'local_state'],
['host', 'local_state'],
lazy=False
)
count_by_host_state = {host.name: {} for host in self}
for group in groups:
count_by_host_state[group['host']][group['local_state']] = group['__count']
for host in self:
host.nb_testing = count_by_host_state[host.name].get('testing', 0)
host.nb_running = count_by_host_state[host.name].get('running', 0)
@api.model_create_single
def create(self, values):
if 'disp_name' not in values:
values['disp_name'] = values['name']
return super().create(values)
def _bootstrap_db_template(self):
""" boostrap template database if needed """
icp = self.env['ir.config_parameter']
db_template = icp.get_param('runbot.runbot_db_template', default='template0')
if db_template and db_template != 'template0':
with local_pgadmin_cursor() as local_cr:
local_cr.execute("""SELECT datname FROM pg_catalog.pg_database WHERE datname = '%s';""" % db_template)
res = local_cr.fetchone()
if not res:
local_cr.execute("""CREATE DATABASE "%s" TEMPLATE template0 LC_COLLATE 'C' ENCODING 'unicode'""" % db_template)
# TODO UPDATE pg_database set datallowconn = false, datistemplate = true (but not enough privileges)
def _bootstrap(self):
""" Create needed directories in static """
dirs = ['build', 'nginx', 'repo', 'sources', 'src', 'docker']
static_path = self._get_work_path()
static_dirs = {d: os.path.join(static_path, d) for d in dirs}
for dir, path in static_dirs.items():
os.makedirs(path, exist_ok=True)
self._bootstrap_db_template()
def _docker_build(self):
""" build docker images needed by locally pending builds"""
_logger.info('Building docker images...')
self.ensure_one()
static_path = self._get_work_path()
self.clear_caches() # needed to ensure that content is updated on all hosts
for dockerfile in self.env['runbot.dockerfile'].search([('to_build', '=', True)]):
self._docker_build_dockerfile(dockerfile, static_path)
def _docker_build_dockerfile(self, dockerfile, workdir):
_logger.info('Building %s, %s', dockerfile.name, hash(str(dockerfile.dockerfile)))
docker_build_path = os.path.join(workdir, 'docker', dockerfile.image_tag)
os.makedirs(docker_build_path, exist_ok=True)
user = getpass.getuser()
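# append a user matching the host uid/gid so that files created inside the container
# (e.g. under /data/build) remain owned by the runbot user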
docker_append = f"""
RUN groupadd -g {os.getgid()} {user} \\
&& useradd -u {os.getuid()} -g {user} -G audio,video {user} \\
&& mkdir /home/{user} \\
&& chown -R {user}:{user} /home/{user}
USER {user}
ENV COVERAGE_FILE /data/build/.coverage
"""
with open(os.path.join(docker_build_path, 'Dockerfile'), 'w') as Dockerfile:
Dockerfile.write(dockerfile.dockerfile + docker_append)
docker_build_success, msg = docker_build(docker_build_path, dockerfile.image_tag)
if not docker_build_success:
dockerfile.to_build = False
dockerfile.message_post(body=f'Build failure:\n{msg}')
self.env['runbot.runbot'].warning(f'Dockerfile build "{dockerfile.image_tag}" failed on host {self.name}')
def _get_work_path(self):
return os.path.abspath(os.path.join(os.path.dirname(__file__), '../static'))
@api.model
def _get_current(self):
name = self._get_current_name()
return self.search([('name', '=', name)]) or self.create({'name': name})
@api.model
def _get_current_name(self):
return config.get('forced_host_name') or fqdn()
def get_running_max(self):
icp = self.env['ir.config_parameter']
return int(icp.get_param('runbot.runbot_running_max', default=5))
def set_psql_conn_count(self):
_logger.info('Updating psql connection count...')
self.ensure_one()
with local_pgadmin_cursor() as local_cr:
local_cr.execute("SELECT sum(numbackends) FROM pg_stat_database;")
res = local_cr.fetchone()
self.psql_conn_count = res and res[0] or 0
def _total_testing(self):
return sum(host.nb_testing for host in self)
def _total_workers(self):
return sum(host.nb_worker for host in self)
def disable(self):
""" Reserve host if possible """
self.ensure_one()
nb_hosts = self.env['runbot.host'].search_count([])
nb_reserved = self.env['runbot.host'].search_count([('assigned_only', '=', True)])
if nb_reserved < (nb_hosts / 2):
self.assigned_only = True


@ -1,13 +0,0 @@
import odoo
from dateutil.relativedelta import relativedelta
from odoo import models, fields
odoo.service.server.SLEEP_INTERVAL = 5
odoo.addons.base.models.ir_cron._intervalTypes['seconds'] = lambda interval: relativedelta(seconds=interval)
class ir_cron(models.Model):
_inherit = "ir.cron"
interval_type = fields.Selection(selection_add=[('seconds', 'Seconds')])


@ -1,15 +0,0 @@
from ..common import s2human, s2human_long
from odoo import models
from odoo.http import request
class IrUiView(models.Model):
_inherit = ["ir.ui.view"]
def _prepare_qcontext(self):
qcontext = super(IrUiView, self)._prepare_qcontext()
if request and getattr(request, 'is_frontend', False):
qcontext['s2human'] = s2human
qcontext['s2human_long'] = s2human_long
return qcontext


@ -1,24 +0,0 @@
from odoo import models, fields
class Project(models.Model):
_name = 'runbot.project'
_description = 'Project'
_order = 'sequence, id'
name = fields.Char('Project name', required=True)
group_ids = fields.Many2many('res.groups', string='Required groups')
keep_sticky_running = fields.Boolean('Keep last sticky builds running')
trigger_ids = fields.One2many('runbot.trigger', 'project_id', string='Triggers')
dockerfile_id = fields.Many2one('runbot.dockerfile', index=True, help="Project Default Dockerfile")
repo_ids = fields.One2many('runbot.repo', 'project_id', string='Repos')
sequence = fields.Integer('Sequence')
class Category(models.Model):
_name = 'runbot.category'
_description = 'Trigger category'
name = fields.Char("Name")
icon = fields.Char("Font awesome icon")
view_id = fields.Many2one('ir.ui.view', "Link template")


@ -1,579 +0,0 @@
# -*- coding: utf-8 -*-
import datetime
import json
import logging
import re
import subprocess
import time
import requests
from pathlib import Path
from odoo import models, fields, api
from ..common import os, RunbotException
from odoo.exceptions import UserError
from odoo.tools.safe_eval import safe_eval
_logger = logging.getLogger(__name__)
def _sanitize(name):
for i in '@:/':
name = name.replace(i, '_')
return name
class Trigger(models.Model):
"""
List of repo parts that must be part of the same bundle
"""
_name = 'runbot.trigger'
_inherit = 'mail.thread'
_description = 'Triggers'
_order = 'sequence, id'
sequence = fields.Integer('Sequence')
name = fields.Char("Name")
description = fields.Char("Description", help="Informative description")
project_id = fields.Many2one('runbot.project', string="Project id", required=True) # main/security/runbot
repo_ids = fields.Many2many('runbot.repo', relation='runbot_trigger_triggers', string="Triggers", domain="[('project_id', '=', project_id)]")
dependency_ids = fields.Many2many('runbot.repo', relation='runbot_trigger_dependencies', string="Dependencies")
config_id = fields.Many2one('runbot.build.config', string="Config", required=True)
batch_dependent = fields.Boolean('Batch Dependent', help="Force adding batch in build parameters to make it unique and give access to bundle")
ci_context = fields.Char("Ci context", default='ci/runbot', tracking=True)
category_id = fields.Many2one('runbot.category', default=lambda self: self.env.ref('runbot.default_category', raise_if_not_found=False))
version_domain = fields.Char(string="Version domain")
hide = fields.Boolean('Hide trigger on main page')
manual = fields.Boolean('Only start trigger manually', default=False)
upgrade_dumps_trigger_id = fields.Many2one('runbot.trigger', string='Template/complement trigger', tracking=True)
upgrade_step_id = fields.Many2one('runbot.build.config.step', compute="_compute_upgrade_step_id", store=True)
ci_url = fields.Char("ci url")
ci_description = fields.Char("ci description")
has_stats = fields.Boolean('Has a make_stats config step', compute="_compute_has_stats", store=True)
team_ids = fields.Many2many('runbot.team', string="Runbot Teams", help="Teams responsible for this trigger, mainly useful for nightly")
active = fields.Boolean("Active", default=True)
@api.depends('config_id.step_order_ids.step_id.make_stats')
def _compute_has_stats(self):
for trigger in self:
trigger.has_stats = any(trigger.config_id.step_order_ids.step_id.mapped('make_stats'))
@api.depends('upgrade_dumps_trigger_id', 'config_id', 'config_id.step_order_ids.step_id.job_type')
def _compute_upgrade_step_id(self):
for trigger in self:
trigger.upgrade_step_id = False
if trigger.upgrade_dumps_trigger_id:
trigger.upgrade_step_id = self._upgrade_step_from_config(trigger.config_id)
def _upgrade_step_from_config(self, config):
upgrade_step = next((step_order.step_id for step_order in config.step_order_ids if step_order.step_id._is_upgrade_step()), False)
if not upgrade_step:
raise UserError('Upgrade trigger should have a config with step of type Configure Upgrade')
return upgrade_step
def _reference_builds(self, bundle):
self.ensure_one()
if self.upgrade_step_id: # this is an upgrade trigger, add corresponding builds
custom_config = next((trigger_custom.config_id for trigger_custom in bundle.trigger_custom_ids if trigger_custom.trigger_id == self), False)
step = self._upgrade_step_from_config(custom_config) if custom_config else self.upgrade_step_id
refs_builds = step._reference_builds(bundle, self)
return [(4, b.id) for b in refs_builds]
return []
def get_version_domain(self):
if self.version_domain:
return safe_eval(self.version_domain)
return []
class Remote(models.Model):
"""
Regroups a repo and its duplicates (forks): odoo+odoo-dev for each repo
"""
_name = 'runbot.remote'
_description = 'Remote'
_order = 'sequence, id'
_inherit = 'mail.thread'
name = fields.Char('Url', required=True, tracking=True)
repo_id = fields.Many2one('runbot.repo', required=True, tracking=True)
owner = fields.Char(compute='_compute_base_infos', string='Repo Owner', store=True, readonly=True, tracking=True)
repo_name = fields.Char(compute='_compute_base_infos', string='Repo Name', store=True, readonly=True, tracking=True)
repo_domain = fields.Char(compute='_compute_base_infos', string='Repo domain', store=True, readonly=True, tracking=True)
base_url = fields.Char(compute='_compute_base_url', string='Base URL', readonly=True, tracking=True)
short_name = fields.Char('Short name', compute='_compute_short_name', tracking=True)
remote_name = fields.Char('Remote name', compute='_compute_remote_name', tracking=True)
sequence = fields.Integer('Sequence', tracking=True)
fetch_heads = fields.Boolean('Fetch branches', default=True, tracking=True)
fetch_pull = fields.Boolean('Fetch PR', default=False, tracking=True)
send_status = fields.Boolean('Send status', default=False, tracking=True)
token = fields.Char("Github token", groups="runbot.group_runbot_admin")
@api.depends('name')
def _compute_base_infos(self):
for remote in self:
name = re.sub('.+@', '', remote.name)
name = re.sub('^https://', '', name) # support https repo style
name = re.sub('.git$', '', name)
name = name.replace(':', '/')
s = name.split('/')
remote.repo_domain = s[-3]
remote.owner = s[-2]
remote.repo_name = s[-1]
@api.depends('repo_domain', 'owner', 'repo_name')
def _compute_base_url(self):
for remote in self:
remote.base_url = '%s/%s/%s' % (remote.repo_domain, remote.owner, remote.repo_name)
@api.depends('name', 'base_url')
def _compute_short_name(self):
for remote in self:
remote.short_name = '/'.join(remote.base_url.split('/')[-2:])
def _compute_remote_name(self):
for remote in self:
remote.remote_name = _sanitize(remote.short_name)
def create(self, values_list):
remote = super().create(values_list)
if not remote.repo_id.main_remote_id:
remote.repo_id.main_remote_id = remote
remote._cr.postcommit.add(remote.repo_id._update_git_config)
return remote
def write(self, values):
res = super().write(values)
self._cr.postcommit.add(self.repo_id._update_git_config)
return res
def _make_github_session(self):
session = requests.Session()
if self.token:
session.auth = (self.token, 'x-oauth-basic')
session.headers.update({'Accept': 'application/vnd.github.she-hulk-preview+json'})
return session
def _github(self, url, payload=None, ignore_errors=False, nb_tries=2, recursive=False, session=None):
generator = self.sudo()._github_generator(url, payload=payload, ignore_errors=ignore_errors, nb_tries=nb_tries, recursive=recursive, session=session)
if recursive:
return generator
result = list(generator)
return result[0] if result else False
def _github_generator(self, url, payload=None, ignore_errors=False, nb_tries=2, recursive=False, session=None):
"""Return a http request to be sent to github"""
for remote in self:
if remote.owner and remote.repo_name and remote.repo_domain:
url = url.replace(':owner', remote.owner)
url = url.replace(':repo', remote.repo_name)
url = 'https://api.%s%s' % (remote.repo_domain, url)
session = session or remote._make_github_session()
while url:
if recursive:
_logger.info('Getting page %s', url)
try_count = 0
while try_count < nb_tries:
try:
if payload:
response = session.post(url, data=json.dumps(payload))
else:
response = session.get(url)
response.raise_for_status()
if try_count > 0:
_logger.info('Success after %s tries', (try_count + 1))
if recursive:
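# pagination: follow the rel="next" url advertised in the github Link response header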
link = response.headers.get('link')
url = False
if link:
url = {link.split(';')[1]: link.split(';')[0] for link in link.split(',')}.get(' rel="next"')
if url:
url = url.strip('<> ')
yield response.json()
break
else:
yield response.json()
return
except requests.HTTPError:
try_count += 1
if try_count < nb_tries:
time.sleep(2)
else:
if ignore_errors:
_logger.exception('Ignored github error %s %r (try %s/%s)', url, payload, try_count, nb_tries)
url = False
else:
raise
class Repo(models.Model):
_name = 'runbot.repo'
_description = "Repo"
_order = 'sequence, id'
_inherit = 'mail.thread'
name = fields.Char("Name", tracking=True) # odoo/enterprise/upgrade/security/runbot/design_theme
identity_file = fields.Char("Identity File", help="Identity file to use with git/ssh", groups="runbot.group_runbot_admin")
main_remote_id = fields.Many2one('runbot.remote', "Main remote", tracking=True)
remote_ids = fields.One2many('runbot.remote', 'repo_id', "Remotes")
project_id = fields.Many2one('runbot.project', required=True, tracking=True,
help="Default bundle project to use when pushing on this repos",
default=lambda self: self.env.ref('runbot.main_project', raise_if_not_found=False))
# -> not very useful, remove it? (iterate on projects or constrain triggers:
# all triggers where a repo is used must be in the same project)
modules = fields.Char("Modules to install", help="Comma-separated list of modules to install and test.", tracking=True)
server_files = fields.Char('Server files', help='Comma separated list of possible server files', tracking=True) # odoo-bin,openerp-server,openerp-server.py
manifest_files = fields.Char('Manifest files', help='Comma separated list of possible manifest files', default='__manifest__.py', tracking=True)
addons_paths = fields.Char('Addons paths', help='Comma separated list of possible addons path', default='', tracking=True)
sequence = fields.Integer('Sequence', tracking=True)
path = fields.Char(compute='_get_path', string='Directory', readonly=True)
mode = fields.Selection([('disabled', 'Disabled'),
('poll', 'Poll'),
('hook', 'Hook')],
default='poll',
string="Mode", required=True, help="hook: Wait for webhook on /runbot/hook/<id> i.e. github push event", tracking=True)
hook_time = fields.Float('Last hook time', compute='_compute_hook_time')
last_processed_hook_time = fields.Float('Last processed hook time')
get_ref_time = fields.Float('Last refs db update', compute='_compute_get_ref_time')
trigger_ids = fields.Many2many('runbot.trigger', relation='runbot_trigger_triggers', readonly=True)
single_version = fields.Many2one('runbot.version', "Single version", help="Limit the repo to a single version for non-versioned repos")
forbidden_regex = fields.Char('Forbidden regex', help="Regex that forbids bundle creation if the branch name matches", tracking=True)
invalid_branch_message = fields.Char('Forbidden branch message', tracking=True)
def _compute_get_ref_time(self):
self.env.cr.execute("""
SELECT repo_id, time FROM runbot_repo_reftime
WHERE id IN (
SELECT max(id) FROM runbot_repo_reftime
WHERE repo_id = any(%s) GROUP BY repo_id
)
""", [self.ids])
times = dict(self.env.cr.fetchall())
for repo in self:
repo.get_ref_time = times.get(repo.id, 0)
def _compute_hook_time(self):
self.env.cr.execute("""
SELECT repo_id, time FROM runbot_repo_hooktime
WHERE id IN (
SELECT max(id) FROM runbot_repo_hooktime
WHERE repo_id = any(%s) GROUP BY repo_id
)
""", [self.ids])
times = dict(self.env.cr.fetchall())
for repo in self:
repo.hook_time = times.get(repo.id, 0)
def set_hook_time(self, value):
for repo in self:
self.env['runbot.repo.hooktime'].create({'time': value, 'repo_id': repo.id})
self.invalidate_cache()
def set_ref_time(self, value):
for repo in self:
self.env['runbot.repo.reftime'].create({'time': value, 'repo_id': repo.id})
self.invalidate_cache()
def _gc_times(self):
self.env.cr.execute("""
DELETE from runbot_repo_reftime WHERE id NOT IN (
SELECT max(id) FROM runbot_repo_reftime GROUP BY repo_id
)
""")
self.env.cr.execute("""
DELETE from runbot_repo_hooktime WHERE id NOT IN (
SELECT max(id) FROM runbot_repo_hooktime GROUP BY repo_id
)
""")
@api.depends('name')
def _get_path(self):
"""compute the server path of repo from the name"""
root = self.env['runbot.runbot']._root()
for repo in self:
repo.path = os.path.join(root, 'repo', _sanitize(repo.name))
def _git(self, cmd, errors='strict'):
"""Execute a git command 'cmd'"""
self.ensure_one()
config_args = []
if self.identity_file:
config_args = ['-c', 'core.sshCommand=ssh -i %s/.ssh/%s' % (str(Path.home()), self.identity_file)]
cmd = ['git', '-C', self.path] + config_args + cmd
_logger.info("git command: %s", ' '.join(cmd))
return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode(errors=errors)
def _fetch(self, sha):
if not self._hash_exists(sha):
self._update(force=True)
if not self._hash_exists(sha):
for remote in self.remote_ids:
try:
self._git(['fetch', remote.remote_name, sha])
_logger.info('Success fetching specific head %s on %s', sha, remote)
break
except subprocess.CalledProcessError:
pass
if not self._hash_exists(sha):
raise RunbotException("Commit %s is unreachable. Did you force push the branch?" % sha)
def _hash_exists(self, commit_hash):
""" Verify that a commit hash exists in the repo """
self.ensure_one()
try:
self._git(['cat-file', '-e', commit_hash])
except subprocess.CalledProcessError:
return False
return True
def _is_branch_forbidden(self, branch_name):
self.ensure_one()
if self.forbidden_regex:
return re.match(self.forbidden_regex, branch_name)
return False
def _get_fetch_head_time(self):
self.ensure_one()
fname_fetch_head = os.path.join(self.path, 'FETCH_HEAD')
if os.path.exists(fname_fetch_head):
return os.path.getmtime(fname_fetch_head)
return 0
def _get_refs(self, max_age=30, ignore=None):
"""Find new refs
:return: list of tuples with the following ref information:
name, sha, date, author, author_email, subject, committer, committer_email
"""
self.ensure_one()
get_ref_time = round(self._get_fetch_head_time(), 4)
commit_limit = time.time() - 60*60*24*max_age
if not self.get_ref_time or get_ref_time > self.get_ref_time:
try:
self.set_ref_time(get_ref_time)
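# list all heads (and pull refs when a remote fetches PRs) as NUL-separated fields,
# most recent committer date first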
fields = ['refname', 'objectname', 'committerdate:unix', 'authorname', 'authoremail', 'subject', 'committername', 'committeremail']
fmt = "%00".join(["%(" + field + ")" for field in fields])
cmd = ['for-each-ref', '--format', fmt, '--sort=-committerdate', 'refs/*/heads/*']
if any(remote.fetch_pull for remote in self.remote_ids):
cmd.append('refs/*/pull/*')
git_refs = self._git(cmd)
git_refs = git_refs.strip()
if not git_refs:
return []
refs = [tuple(field for field in line.split('\x00')) for line in git_refs.split('\n')]
refs = [r for r in refs if int(r[2]) > commit_limit or self.env['runbot.branch'].match_is_base(r[0].split('/')[-1])]
if ignore:
refs = [r for r in refs if r[0].split('/')[-1] not in ignore]
return refs
except Exception:
_logger.exception('Failed to get refs for repo %s', self.name)
self.env['runbot.runbot'].warning('Failed to get refs for repo %s', self.name)
return []
def _find_or_create_branches(self, refs):
"""Parse refs and create branches that does not exists yet
:param refs: list of tuples returned by _get_refs()
:return: dict {branch.ref(): branch}
The returned structure contains all the branches from refs, whether newly
created or pre-existing.
"""
# FIXME WIP
names = [r[0].split('/')[-1] for r in refs]
branches = self.env['runbot.branch'].search([('name', 'in', names), ('remote_id', 'in', self.remote_ids.ids)])
ref_branches = {branch.ref(): branch for branch in branches}
new_branch_values = []
for ref_name, sha, date, author, author_email, subject, committer, committer_email in refs:
if not ref_branches.get(ref_name):
# format example:
# refs/ruodoo-dev/heads/12.0-must-fail
# refs/ruodoo/pull/1
_, remote_name, branch_type, name = ref_name.split('/')
remote_id = self.remote_ids.filtered(lambda r: r.remote_name == remote_name).id
if not remote_id:
_logger.warning('Remote %s not found', remote_name)
continue
new_branch_values.append({'remote_id': remote_id, 'name': name, 'is_pr': branch_type == 'pull'})
# TODO catch error for pr info. It may fail for multiple raison. closed? external? check corner cases
_logger.info('new branch %s found in %s', name, self.name)
if new_branch_values:
_logger.info('Creating new branches')
new_branches = self.env['runbot.branch'].create(new_branch_values)
for branch in new_branches:
ref_branches[branch.ref()] = branch
return ref_branches
def _find_new_commits(self, refs, ref_branches):
"""Find new commits in bare repo
:param refs: list of tuples returned by _get_refs()
:param ref_branches: dict structure {branch.ref(): branch}
described in _find_or_create_branches
"""
self.ensure_one()
for ref_name, sha, date, author, author_email, subject, committer, committer_email in refs:
branch = ref_branches[ref_name]
if branch.head_name != sha: # new push on branch
_logger.info('repo %s branch %s new commit found: %s', self.name, branch.name, sha)
commit = self.env['runbot.commit']._get(sha, self.id, {
'author': author,
'author_email': author_email,
'committer': committer,
'committer_email': committer_email,
'subject': subject,
'date': datetime.datetime.fromtimestamp(int(date)),
})
branch.head = commit
if not branch.alive:
if branch.is_pr:
_logger.info('Recomputing infos of dead pr %s', branch.name)
branch._compute_branch_infos()
else:
branch.alive = True
if branch.reference_name and branch.remote_id and branch.remote_id.repo_id._is_branch_forbidden(branch.reference_name):
message = "This branch name is incorrect. Branch name should be prefixed with a valid version"
message = branch.remote_id.repo_id.invalid_branch_message or message
branch.head._github_status(False, "Branch naming", 'failure', False, message)
bundle = branch.bundle_id
if bundle.no_build:
continue
if bundle.last_batch.state != 'preparing':
preparing = self.env['runbot.batch'].create({
'last_update': fields.Datetime.now(),
'bundle_id': bundle.id,
'state': 'preparing',
})
bundle.last_batch = preparing
if bundle.last_batch.state == 'preparing':
bundle.last_batch._new_commit(branch)
def _update_batches(self, force=False, ignore=None):
""" Find new commits in physical repos"""
updated = False
for repo in self:
if repo.remote_ids and self._update(poll_delay=30 if force else 60*5):
max_age = int(self.env['ir.config_parameter'].get_param('runbot.runbot_max_age', default=30))
ref = repo._get_refs(max_age, ignore=ignore)
ref_branches = repo._find_or_create_branches(ref)
repo._find_new_commits(ref, ref_branches)
updated = True
return updated
def _update_git_config(self):
""" Update repo git config file """
for repo in self:
if repo.mode == 'disabled':
_logger.info(f'skipping disabled repo {repo.name}')
continue
if os.path.isdir(os.path.join(repo.path, 'refs')):
git_config_path = os.path.join(repo.path, 'config')
template_params = {'repo': repo}
git_config = self.env['ir.ui.view']._render_template("runbot.git_config", template_params)
with open(git_config_path, 'w') as config_file:
config_file.write(str(git_config))
_logger.info('Config updated for repo %s' % repo.name)
else:
_logger.info('Repo not cloned, skipping config update for %s' % repo.name)
def _git_init(self):
""" Clone the remote repo if needed """
self.ensure_one()
repo = self
if not os.path.isdir(os.path.join(repo.path, 'refs')):
_logger.info("Initiating repository '%s' in '%s'" % (repo.name, repo.path))
git_init = subprocess.run(['git', 'init', '--bare', repo.path], stderr=subprocess.PIPE)
if git_init.returncode:
_logger.warning('Git init failed with code %s and message: "%s"', git_init.returncode, git_init.stderr)
return
self._update_git_config()
return True
def _update_git(self, force=False, poll_delay=5*60):
""" Update the git repo on FS """
self.ensure_one()
repo = self
if not repo.remote_ids:
return False
if not os.path.isdir(os.path.join(repo.path)):
os.makedirs(repo.path)
force = self._git_init() or force
fname_fetch_head = os.path.join(repo.path, 'FETCH_HEAD')
if not force and os.path.isfile(fname_fetch_head):
fetch_time = os.path.getmtime(fname_fetch_head)
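# hook mode: fetch only when a new webhook arrived since the last processed one;
# poll mode: fetch only when the last fetch is older than poll_delay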
if repo.mode == 'hook':
if not repo.hook_time or (repo.last_processed_hook_time and repo.hook_time <= repo.last_processed_hook_time):
return False
repo.last_processed_hook_time = repo.hook_time
if repo.mode == 'poll':
if (time.time() < fetch_time + poll_delay):
return False
_logger.info('Updating repo %s', repo.name)
return self._update_fetch_cmd()
def _update_fetch_cmd(self):
# Extracted from _update_git to be easily overridden in an external module
self.ensure_one()
try_count = 0
success = False
delay = 0
while not success and try_count < 5:
time.sleep(delay)
try:
self._git(['fetch', '-p', '--all', ])
success = True
except subprocess.CalledProcessError as e:
try_count += 1
delay = delay * 1.5 if delay else 0.5
if try_count > 4:
message = 'Failed to fetch repo %s: %s' % (self.name, e.output.decode())
host = self.env['runbot.host']._get_current()
host.message_post(body=message)
self.env['runbot.runbot'].warning('Host %s got reserved because of fetch failure' % host.name)
_logger.exception(message)
host.disable()
return success
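# Note on the retry schedule above (added for clarity, not part of the original
# module): with delay = delay * 1.5 if delay else 0.5, the five fetch attempts
# sleep 0s, 0.5s, 0.75s, 1.125s and ~1.69s respectively, so transient network
# errors are retried within a few seconds before the host gets disabled.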
def _update(self, force=False, poll_delay=5*60):
""" Update the physical git reposotories on FS"""
self.ensure_one()
try:
return self._update_git(force, poll_delay)
except Exception:
_logger.exception('Failed to update repo %s', self.name)
class RefTime(models.Model):
_name = 'runbot.repo.reftime'
_description = "Repo reftime"
_log_access = False
time = fields.Float('Time', index=True, required=True)
repo_id = fields.Many2one('runbot.repo', 'Repository', required=True, ondelete='cascade')
class HookTime(models.Model):
_name = 'runbot.repo.hooktime'
_description = "Repo hooktime"
_log_access = False
time = fields.Float('Time')
repo_id = fields.Many2one('runbot.repo', 'Repository', required=True, ondelete='cascade')


@ -1,111 +0,0 @@
# -*- coding: utf-8 -*-
import re
from .. import common
from odoo import api, fields, models
from odoo.exceptions import UserError
class ResConfigSettings(models.TransientModel):
_inherit = 'res.config.settings'
runbot_workers = fields.Integer('Default number of workers')
runbot_containers_memory = fields.Float('Memory limit for containers (in GiB)')
runbot_memory_bytes = fields.Float('Bytes', compute='_compute_memory_bytes')
runbot_running_max = fields.Integer('Maximum number of running builds')
runbot_timeout = fields.Integer('Max allowed step timeout (in seconds)')
runbot_starting_port = fields.Integer('Starting port for running builds')
runbot_max_age = fields.Integer('Max commit age (in days)')
runbot_logdb_uri = fields.Char('Runbot URI for build logs',
help='postgres://user:password@host/db formatted URI given to a build to log into the database. Should be a user with limited access rights (ir_logging, runbot_build)')
runbot_update_frequency = fields.Integer('Update frequency (in seconds)')
runbot_template = fields.Char('Postgresql template', help="PostgreSQL template to use when creating DBs")
runbot_message = fields.Text('Frontend warning message', help="Will be displayed on the frontend when not empty")
runbot_default_odoorc = fields.Text('Default odoorc for builds')
runbot_upgrade_exception_message = fields.Text('Upgrade exception message', help='Template to auto-generate a github message when creating an upgrade exception')
runbot_do_fetch = fields.Boolean('Discover new commits')
runbot_do_schedule = fields.Boolean('Schedule builds')
runbot_is_base_regex = fields.Char('Regex is_base')
runbot_db_gc_days = fields.Integer(
'Days before gc',
default=30,
config_parameter='runbot.db_gc_days',
help="Time after the build finished (running time included) to wait before droping db and non log files")
runbot_db_gc_days_child = fields.Integer(
'Days before gc of child',
default=15,
config_parameter='runbot.db_gc_days_child',
help='Children should have a lower gc delay since the database usually comes from the parent or a multibuild')
runbot_full_gc_days = fields.Integer(
'Days before directory removal',
default=365,
config_parameter='runbot.full_gc_days',
help='Number of days to wait after the first gc to completely remove the build directory (remaining test/log files)')
runbot_pending_warning = fields.Integer('Pending warning limit', default=5, config_parameter='runbot.pending.warning')
runbot_pending_critical = fields.Integer('Pending critical limit', default=5, config_parameter='runbot.pending.critical')
# TODO other icp
# runbot.runbot_maxlogs 100
# migration db
# ln path
@api.model
def get_values(self):
res = super(ResConfigSettings, self).get_values()
get_param = self.env['ir.config_parameter'].sudo().get_param
res.update(runbot_workers=int(get_param('runbot.runbot_workers', default=2)),
runbot_containers_memory=float(get_param('runbot.runbot_containers_memory', default=0)),
runbot_running_max=int(get_param('runbot.runbot_running_max', default=5)),
runbot_timeout=int(get_param('runbot.runbot_timeout', default=10000)),
runbot_starting_port=int(get_param('runbot.runbot_starting_port', default=2000)),
runbot_max_age=int(get_param('runbot.runbot_max_age', default=30)),
runbot_logdb_uri=get_param('runbot.runbot_logdb_uri', default=False),
runbot_update_frequency=int(get_param('runbot.runbot_update_frequency', default=10)),
runbot_template=get_param('runbot.runbot_db_template'),
runbot_message=get_param('runbot.runbot_message', default=''),
runbot_default_odoorc=get_param('runbot.runbot_default_odoorc'),
runbot_upgrade_exception_message=get_param('runbot.runbot_upgrade_exception_message'),
runbot_do_fetch=get_param('runbot.runbot_do_fetch', default=False),
runbot_do_schedule=get_param('runbot.runbot_do_schedule', default=False),
runbot_is_base_regex=get_param('runbot.runbot_is_base_regex', default='')
)
return res
def set_values(self):
super(ResConfigSettings, self).set_values()
set_param = self.env['ir.config_parameter'].sudo().set_param
set_param("runbot.runbot_workers", self.runbot_workers)
set_param("runbot.runbot_containers_memory", self.runbot_containers_memory)
set_param("runbot.runbot_running_max", self.runbot_running_max)
set_param("runbot.runbot_timeout", self.runbot_timeout)
set_param("runbot.runbot_starting_port", self.runbot_starting_port)
set_param("runbot.runbot_max_age", self.runbot_max_age)
set_param("runbot.runbot_logdb_uri", self.runbot_logdb_uri)
set_param('runbot.runbot_update_frequency', self.runbot_update_frequency)
set_param('runbot.runbot_db_template', self.runbot_template)
set_param('runbot.runbot_message', self.runbot_message)
set_param('runbot.runbot_default_odoorc', self.runbot_default_odoorc)
set_param('runbot.runbot_upgrade_exception_message', self.runbot_upgrade_exception_message)
set_param('runbot.runbot_do_fetch', self.runbot_do_fetch)
set_param('runbot.runbot_do_schedule', self.runbot_do_schedule)
set_param('runbot.runbot_is_base_regex', self.runbot_is_base_regex)
@api.onchange('runbot_is_base_regex')
def _on_change_is_base_regex(self):
""" verify that the base_regex is valid
"""
if self.runbot_is_base_regex:
try:
re.compile(self.runbot_is_base_regex)
except re.error:
raise UserError("The regex is invalid")
@api.depends('runbot_containers_memory')
def _compute_memory_bytes(self):
for rec in self:
if rec.runbot_containers_memory > 0:
rec.runbot_memory_bytes = rec.runbot_containers_memory * 1024 ** 3
else:
rec.runbot_memory_bytes = 0
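# Worked example for the conversion above (illustrative only): a limit of 4 GiB
# yields runbot_memory_bytes = 4 * 1024 ** 3 = 4294967296 bytes, while a value
# of 0 (or less) keeps the computed field at 0, i.e. no container memory limit.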


@ -1,10 +0,0 @@
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from odoo import fields, models
class ResUsers(models.Model):
_inherit = 'res.users'
runbot_team_ids = fields.Many2many('runbot.team', string="Runbot Teams")


@ -1,404 +0,0 @@
import time
import logging
import glob
import random
import re
import signal
import subprocess
import shutil
from contextlib import contextmanager
from requests.exceptions import HTTPError
from subprocess import CalledProcessError
from ..common import fqdn, dest_reg, os
from ..container import docker_ps, docker_stop
from odoo import models, fields
from odoo.osv import expression
from odoo.tools import config
from odoo.modules.module import get_module_resource
_logger = logging.getLogger(__name__)
# after this point, not really repo business
class Runbot(models.AbstractModel):
_name = 'runbot.runbot'
_description = 'Base runbot model'
def _commit(self):
self.env.cr.commit()
self.env.cache.invalidate()
self.env.clear()
def _root(self):
"""Return root directory of repository"""
default = os.path.join(os.path.dirname(__file__), '../static')
return os.path.abspath(default)
def _scheduler(self, host):
self._gc_testing(host)
self._commit()
for build in self._get_builds_with_requested_actions(host):
build._process_requested_actions()
self._commit()
for build in self._get_builds_to_schedule(host):
build._schedule()
self._commit()
self._assign_pending_builds(host, host.nb_worker, [('build_type', '!=', 'scheduled')])
self._commit()
self._assign_pending_builds(host, host.nb_worker-1 or host.nb_worker)
self._commit()
for build in self._get_builds_to_init(host):
build._init_pendings(host)
self._commit()
self._gc_running(host)
self._commit()
self._reload_nginx()
def build_domain_host(self, host, domain=None):
domain = domain or []
return [('host', '=', host.name)] + domain
def _get_builds_with_requested_actions(self, host):
return self.env['runbot.build'].search(self.build_domain_host(host, [('requested_action', 'in', ['wake_up', 'deathrow'])]))
def _get_builds_to_schedule(self, host):
return self.env['runbot.build'].search(self.build_domain_host(host, [('local_state', 'in', ['testing', 'running'])]))
def _assign_pending_builds(self, host, nb_worker, domain=None):
if host.assigned_only or nb_worker <= 0:
return
domain_host = self.build_domain_host(host)
reserved_slots = self.env['runbot.build'].search_count(domain_host + [('local_state', 'in', ('testing', 'pending'))])
assignable_slots = (nb_worker - reserved_slots)
if assignable_slots > 0:
allocated = self._allocate_builds(host, assignable_slots, domain)
if allocated:
_logger.info('Builds %s were allocated to runbot', allocated)
def _get_builds_to_init(self, host):
domain_host = self.build_domain_host(host)
used_slots = self.env['runbot.build'].search_count(domain_host + [('local_state', '=', 'testing')])
available_slots = host.nb_worker - used_slots
if available_slots <= 0:
return self.env['runbot.build']
return self.env['runbot.build'].search(domain_host + [('local_state', '=', 'pending')], limit=available_slots)
def _gc_running(self, host):
running_max = host.get_running_max()
domain_host = self.build_domain_host(host)
Build = self.env['runbot.build']
cannot_be_killed_ids = Build.search(domain_host + [('keep_running', '=', True)]).ids
sticky_bundles = self.env['runbot.bundle'].search([('sticky', '=', True), ('project_id.keep_sticky_running', '=', True)])
cannot_be_killed_ids += [
build.id
for build in sticky_bundles.mapped('last_batchs.slot_ids.build_id')
if build.host == host.name
][:running_max]
build_ids = Build.search(domain_host + [('local_state', '=', 'running'), ('id', 'not in', cannot_be_killed_ids)], order='job_start desc').ids
Build.browse(build_ids)[running_max:]._kill()
def _gc_testing(self, host):
"""garbage collect builds that could be killed"""
# decide if we need room
Build = self.env['runbot.build']
domain_host = self.build_domain_host(host)
testing_builds = Build.search(domain_host + [('local_state', 'in', ['testing', 'pending']), ('requested_action', '!=', 'deathrow')])
used_slots = len(testing_builds)
available_slots = host.nb_worker - used_slots
nb_pending = Build.search_count([('local_state', '=', 'pending'), ('host', '=', False)])
if available_slots > 0 or nb_pending == 0:
return
for build in testing_builds:
if build.killable:
build.top_parent._ask_kill(message='Build automatically killed, new build found.')
def _allocate_builds(self, host, nb_slots, domain=None):
if nb_slots <= 0:
return []
non_allocated_domain = [('local_state', '=', 'pending'), ('host', '=', False)]
if domain:
non_allocated_domain = expression.AND([non_allocated_domain, domain])
e = expression.expression(non_allocated_domain, self.env['runbot.build'])
query = e.query
query.order = '"runbot_build".parent_path'
select_query, select_params = query.select()
# self-assign to be sure that another runbot batch cannot self assign the same builds
query = """UPDATE
runbot_build
SET
host = %%s
WHERE
runbot_build.id IN (
%s
FOR UPDATE OF runbot_build SKIP LOCKED
LIMIT %%s
)
RETURNING id""" % select_query
self.env.cr.execute(query, [host.name] + select_params + [nb_slots])
return self.env.cr.fetchall()
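# Illustrative sketch (an assumption, not part of the original code): the
# self-assignment above relies on PostgreSQL's FOR UPDATE SKIP LOCKED pattern.
# Reduced to a hypothetical table `job` with a nullable `host` column, it is:
#
#   UPDATE job SET host = 'builder-1'
#   WHERE id IN (
#       SELECT id FROM job
#       WHERE host IS NULL
#       LIMIT 4
#       FOR UPDATE SKIP LOCKED
#   )
#   RETURNING id;
#
# Rows already locked by a concurrent builder are skipped instead of blocking,
# so two hosts can never claim the same pending build.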
def _reload_nginx(self):
env = self.env
settings = {}
settings['port'] = config.get('http_port')
settings['runbot_static'] = os.path.join(get_module_resource('runbot', 'static'), '')
settings['base_url'] = self.get_base_url()
nginx_dir = os.path.join(self._root(), 'nginx')
settings['nginx_dir'] = nginx_dir
settings['re_escape'] = re.escape
host_name = self.env['runbot.host']._get_current_name()
settings['host_name'] = self.env['runbot.host']._get_current_name()
settings['builds'] = env['runbot.build'].search([('local_state', '=', 'running'), ('host', '=', host_name)])
nginx_config = env['ir.ui.view']._render_template("runbot.nginx_config", settings)
os.makedirs(nginx_dir, exist_ok=True)
content = None
nginx_conf_path = os.path.join(nginx_dir, 'nginx.conf')
content = ''
if os.path.isfile(nginx_conf_path):
with open(nginx_conf_path, 'r') as f:
content = f.read()
if content != nginx_config:
_logger.info('reload nginx')
with open(nginx_conf_path, 'w') as f:
f.write(str(nginx_config))
try:
pid = int(open(os.path.join(nginx_dir, 'nginx.pid')).read().strip(' \n'))
os.kill(pid, signal.SIGHUP)
except Exception:
_logger.info('start nginx')
if subprocess.call(['/usr/sbin/nginx', '-p', nginx_dir, '-c', 'nginx.conf']):
# obscure nginx bug leaving orphan worker listening on nginx port
if not subprocess.call(['pkill', '-f', '-P1', 'nginx: worker']):
_logger.warning('failed to start nginx - orphan worker killed, retrying')
subprocess.call(['/usr/sbin/nginx', '-p', nginx_dir, '-c', 'nginx.conf'])
else:
_logger.warning('failed to start nginx - failed to kill orphan worker - oh well')
def _get_cron_period(self):
""" Compute a randomized cron period with a 2 min margin below
real cron timeout from config.
"""
cron_limit = config.get('limit_time_real_cron')
req_limit = config.get('limit_time_real')
cron_timeout = cron_limit if cron_limit > -1 else req_limit
return cron_timeout / 2
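# Example (illustrative): with limit_time_real_cron = 3600 in the Odoo config,
# the cron loop below gets a budget of 1800 seconds, keeping the other half as
# a margin before the worker would hit the real cron time limit.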
def _cron(self):
"""
This method is the default cron for new commit discovery and build scheduling.
The cron runs for a long time to avoid spamming the logs.
"""
pull_info_failures = {}
start_time = time.time()
timeout = self._get_cron_period()
get_param = self.env['ir.config_parameter'].get_param
update_frequency = int(get_param('runbot.runbot_update_frequency', default=10))
runbot_do_fetch = get_param('runbot.runbot_do_fetch')
runbot_do_schedule = get_param('runbot.runbot_do_schedule')
host = self.env['runbot.host']._get_current()
host.set_psql_conn_count()
host.last_start_loop = fields.Datetime.now()
self._commit()
# Bootstrap
host._bootstrap()
if runbot_do_schedule:
host._docker_build()
self._source_cleanup()
self.env['runbot.build']._local_cleanup()
self._docker_cleanup()
_logger.info('Starting loop')
if runbot_do_schedule or runbot_do_fetch:
while time.time() - start_time < timeout:
if runbot_do_fetch:
self._fetch_loop_turn(host, pull_info_failures)
if runbot_do_schedule:
sleep_time = self._scheduler_loop_turn(host, update_frequency)
self.sleep(sleep_time)
else:
self.sleep(update_frequency)
self._commit()
host.last_end_loop = fields.Datetime.now()
def sleep(self, t):
time.sleep(t)
def _fetch_loop_turn(self, host, pull_info_failures, default_sleep=1):
with self.manage_host_exception(host) as manager:
repos = self.env['runbot.repo'].search([('mode', '!=', 'disabled')])
processing_batch = self.env['runbot.batch'].search([('state', 'in', ('preparing', 'ready'))], order='id asc')
preparing_batch = processing_batch.filtered(lambda b: b.state == 'preparing')
self._commit()
for repo in repos:
try:
repo._update_batches(force=bool(preparing_batch), ignore=pull_info_failures)
self._commit() # commit is mainly here to avoid losing progression in case of fetch failure or concurrent update
except HTTPError as e:
# Sometimes fetching the pull info of a pr can fail.
# - Most of the time it is only temporary and it will be successful on the next try.
# - In some rare cases the pr will always fail (github inconsistency): the pr exists in git (for-each-ref) but not on the github api.
# For this rare case, we store the pr in memory in order to unstuck the update of other prs/branches.
# We consider that this error should not persist; github needs to fix the inconsistency.
# Another solution would be to create the pr with fake pull info. This idea is not the best one
# since we want to avoid having many prs with fake pull_info in case of a temporary failure of github services.
# With the current solution, the pr will be retried once every cron loop (~10 minutes).
# We don't expect to see prs with this kind of persistent failure more than every few months/years.
self.env.cr.rollback()
self.env.clear()
pull_number = e.response.url.split('/')[-1]
pull_info_failures[pull_number] = time.time()
self.warning('Pr pull info failed for %s', pull_number)
self._commit()
if processing_batch:
for batch in processing_batch:
if batch._process():
self._commit()
self._commit()
self.env['runbot.commit.status']._send_to_process()
self._commit()
# cleanup old pull_info_failures
for pr_number, t in list(pull_info_failures.items()):  # iterate over a copy so entries can be deleted safely
if t + 15*60 < time.time():
_logger.warning('Removing %s from pull_info_failures', pr_number)
del pull_info_failures[pr_number]
return manager.get('sleep', default_sleep)
def _scheduler_loop_turn(self, host, default_sleep=5):
_logger.info('Scheduling...')
with self.manage_host_exception(host) as manager:
self._scheduler(host)
return manager.get('sleep', default_sleep)
@contextmanager
def manage_host_exception(self, host):
res = {}
try:
yield res
host.last_success = fields.Datetime.now()
self._commit()
except Exception as e:
self.env.cr.rollback()
self.env.clear()
_logger.exception(e)
message = str(e)
if host.last_exception == message:
host.exception_count += 1
else:
host.last_exception = str(e)
host.exception_count = 1
self._commit()
res['sleep'] = random.uniform(0, 3)
else:
if host.last_exception:
host.last_exception = ""
host.exception_count = 0
def _source_cleanup(self):
try:
if self.pool._init:
return
_logger.info('Source cleaning')
host_name = self.env['runbot.host']._get_current_name()
cannot_be_deleted_path = set()
for commit in self.env['runbot.commit.export'].search([('host', '=', host_name)]).mapped('commit_id'):
cannot_be_deleted_path.add(commit._source_path())
# the following part won't be useful anymore once runbot.commit.export is populated
cannot_be_deleted_builds = self.env['runbot.build'].search([('host', '=', host_name), ('local_state', '!=', 'done')])
cannot_be_deleted_builds |= cannot_be_deleted_builds.mapped('params_id.builds_reference_ids')
for build in cannot_be_deleted_builds:
for build_commit in build.params_id.commit_link_ids:
cannot_be_deleted_path.add(build_commit.commit_id._source_path())
to_delete = set()
to_keep = set()
repos = self.env['runbot.repo'].search([('mode', '!=', 'disabled')])
for repo in repos:
repo_source = os.path.join(self._root(), 'sources', repo.name, '*')
for source_dir in glob.glob(repo_source):
if source_dir not in cannot_be_deleted_path:
to_delete.add(source_dir)
else:
to_keep.add(source_dir)
# we compare cannot_be_deleted_path with to_keep to ensure that the algorithm is working; we want to avoid erasing files by mistake
# note: it is possible that a parent_build is in testing without having checked out sources, but that should be exceptional
if to_delete:
if cannot_be_deleted_path != to_keep:
_logger.warning('Inconsistency between sources and database: \n%s \n%s' % (cannot_be_deleted_path-to_keep, to_keep-cannot_be_deleted_path))
to_delete = list(to_delete)
to_keep = list(to_keep)
cannot_be_deleted_path = list(cannot_be_deleted_path)
for source_dir in to_delete:
_logger.info('Deleting source: %s' % source_dir)
assert 'static' in source_dir
shutil.rmtree(source_dir)
_logger.info('%s/%s source folders were deleted (%s kept)' % (len(to_delete), len(to_delete + to_keep), len(to_keep)))
except Exception:
_logger.exception('An exception occurred while cleaning sources')
def _docker_cleanup(self):
_logger.info('Docker cleaning')
docker_ps_result = docker_ps()
containers = {}
ignored = []
for dc in docker_ps_result:
build = self.env['runbot.build']._build_from_dest(dc)
if build:
containers[build.id] = dc
if containers:
candidates = self.env['runbot.build'].search([('id', 'in', list(containers.keys())), ('local_state', '=', 'done')])
for c in candidates:
_logger.info('container %s found running with build state done', containers[c.id])
docker_stop(containers[c.id], c._path())
ignored = {dc for dc in docker_ps_result if not dest_reg.match(dc)}
if ignored:
_logger.info('docker container(s) %s ignored because they do not match the dest format', list(ignored))
def _git_gc(self, host):
"""
cleanup and optimize git repositories on the host
"""
for repo in self.env['runbot.repo'].search([]):
try:
repo._git(['gc', '--prune=all', '--quiet'])
except CalledProcessError as e:
message = f'git gc failed for {repo.name} on {host.name} with exit status {e.returncode} and message "{e.output[:60]} ..."'
self.warning(message)
def warning(self, message, *args):
if args:
message = message % args
existing = self.env['runbot.warning'].search([('message', '=', message)], limit=1)
if existing:
existing.count += 1
else:
return self.env['runbot.warning'].create({'message': message})
class RunbotWarning(models.Model):
"""
Generic Warnings for runbot
"""
_order = 'write_date desc, id desc'
_name = 'runbot.warning'
_description = 'Generic Runbot Warning'
message = fields.Char("Warning", index=True)
count = fields.Integer("Count", default=1)


@ -1,70 +0,0 @@
import re
from odoo import models, fields
from odoo.exceptions import UserError
class UpgradeExceptions(models.Model):
_name = 'runbot.upgrade.exception'
_description = 'Upgrade exception'
active = fields.Boolean('Active', default=True)
elements = fields.Text('Elements')
bundle_id = fields.Many2one('runbot.bundle', index=True)
info = fields.Text('Info')
team_id = fields.Many2one('runbot.team', 'Assigned team', index=True)
message = fields.Text('Upgrade exception message', compute="_compute_message")
def _compute_message(self):
message_layout = self.env['ir.config_parameter'].sudo().get_param('runbot.runbot_upgrade_exception_message')
for exception in self:
exception.message = message_layout.format(exception=exception, base_url=exception.get_base_url())
def _generate(self):
exceptions = self.search([])
if exceptions:
return 'suppress_upgrade_warnings=%s' % (','.join(exceptions.mapped('elements'))).replace(' ', '').replace('\n', ',')
return False
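# Illustrative example (values are hypothetical): with two active exceptions
# whose elements are "account.move" and "sale.order\nsale.order.line",
# _generate() returns
# 'suppress_upgrade_warnings=account.move,sale.order,sale.order.line'.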
class UpgradeRegex(models.Model):
_name = 'runbot.upgrade.regex'
_description = 'Upgrade regex'
active = fields.Boolean('Active', default=True)
prefix = fields.Char('Type')
regex = fields.Char('Regex')
class BuildResult(models.Model):
_inherit = 'runbot.build'
def _parse_upgrade_errors(self):
ir_logs = self.env['ir.logging'].search([('level', 'in', ('ERROR', 'WARNING', 'CRITICAL')), ('type', '=', 'server'), ('build_id', 'in', self.ids)])
upgrade_regexes = self.env['runbot.upgrade.regex'].search([])
exception = {}
for log in ir_logs:
for upgrade_regex in upgrade_regexes:
m = re.search(upgrade_regex.regex, log.message)
if m:
exception['%s:%s' % (upgrade_regex.prefix, m.groups()[0])] = None
exception = list(exception)
if exception:
bundle = False
batches = self.top_parent.slot_ids.mapped('batch_id')
if batches:
bundle = batches[0].bundle_id.id
res = {
'name': 'Upgrade Exception',
'type': 'ir.actions.act_window',
'res_model': 'runbot.upgrade.exception',
'view_mode': 'form',
'context': {
'default_elements': '\n'.join(exception),
'default_bundle_id': bundle,
'default_info': 'Automatically generated from build %s' % self.id
}
}
return res
else:
raise UserError('Nothing found here')


@ -1,10 +0,0 @@
from odoo import models, fields
class User(models.Model):
_inherit = 'res.users'
# Add default action_id
action_id = fields.Many2one('ir.actions.actions',
default=lambda self: self.env.ref('runbot.open_view_warning_tree', raise_if_not_found=False))


@ -1,105 +0,0 @@
import logging
import re
from odoo import models, fields, api, tools
_logger = logging.getLogger(__name__)
class Version(models.Model):
_name = 'runbot.version'
_description = "Version"
_order = 'sequence desc, number desc,id'
name = fields.Char('Version name')
number = fields.Char('Version number', compute='_compute_version_number', store=True, help="Useful to sort by version")
sequence = fields.Integer('sequence')
is_major = fields.Char('Is major version', compute='_compute_version_number', store=True)
base_bundle_id = fields.Many2one('runbot.bundle', compute='_compute_base_bundle_id')
previous_major_version_id = fields.Many2one('runbot.version', compute='_compute_version_relations')
intermediate_version_ids = fields.Many2many('runbot.version', compute='_compute_version_relations')
next_major_version_id = fields.Many2one('runbot.version', compute='_compute_version_relations')
next_intermediate_version_ids = fields.Many2many('runbot.version', compute='_compute_version_relations')
dockerfile_id = fields.Many2one('runbot.dockerfile', default=lambda self: self.env.ref('runbot.docker_default', raise_if_not_found=False))
@api.depends('name')
def _compute_version_number(self):
for version in self:
if version.name == 'master':
version.number = '~'
version.is_major = False
else:
# max version number with this format: 99.99
version.number = '.'.join([elem.zfill(2) for elem in re.sub(r'[^0-9\.]', '', version.name or '').split('.')])
version.is_major = all(elem == '00' for elem in version.number.split('.')[1:])
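# Worked examples of the computation above (illustrative): '17.0' gives number
# '17.00' with is_major True, '16.3' gives '16.03' with is_major False,
# 'saas-16.4' is stripped of its prefix and gives '16.04', and 'master' gets
# the sentinel '~', which compares greater than any numbered version.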
@api.model_create_multi
def create(self, vals_list):
model = self.browse()
model._get_id.clear_cache(model)
return super().create(vals_list)
def _get(self, name):
return self.browse(self._get_id(name))
@tools.ormcache('name')
def _get_id(self, name):
version = self.search([('name', '=', name)])
if not version:
version = self.create({
'name': name,
})
return version.id
@api.depends('is_major', 'number')
def _compute_version_relations(self):
all_versions = self.search([], order='sequence, number')
for version in self:
version.previous_major_version_id = next(
(
v
for v in reversed(all_versions)
if v.is_major and v.number < version.number and v.sequence <= version.sequence # TODO FIXME, make version comparable?
), self.browse())
if version.previous_major_version_id:
version.intermediate_version_ids = all_versions.filtered(
lambda v, current=version: v.number > current.previous_major_version_id.number and v.number < current.number and v.sequence <= current.sequence and v.sequence >= current.previous_major_version_id.sequence
)
else:
version.intermediate_version_ids = all_versions.filtered(
lambda v, current=version: v.number < current.number and v.sequence <= current.sequence
)
version.next_major_version_id = next(
(
v
for v in all_versions
if (v.is_major or v.name == 'master') and v.number > version.number and v.sequence >= version.sequence
), self.browse())
if version.next_major_version_id:
version.next_intermediate_version_ids = all_versions.filtered(
lambda v, current=version: v.number < current.next_major_version_id.number and v.number > current.number and v.sequence <= current.next_major_version_id.sequence and v.sequence >= current.sequence
)
else:
version.next_intermediate_version_ids = all_versions.filtered(
lambda v, current=version: v.number > current.number and v.sequence >= current.sequence
)
# @api.depends('base_bundle_id.is_base', 'base_bundle_id.version_id', 'base_bundle_id.project_id')
@api.depends_context('project_id')
def _compute_base_bundle_id(self):
project_id = self.env.context.get('project_id')
if not project_id:
_logger.warning("_compute_base_bundle_id: no project_id in context")
project_id = self.env.ref('runbot.main_project').id
bundles = self.env['runbot.bundle'].search([
('version_id', 'in', self.ids),
('is_base', '=', True),
('project_id', '=', project_id)
])
bundle_by_version = {bundle.version_id.id: bundle for bundle in bundles}
for version in self:
version.base_bundle_id = bundle_by_version.get(version.id)


@ -1,118 +0,0 @@
id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_runbot_remote,runbot_remote,runbot.model_runbot_remote,group_user,1,0,0,0
access_runbot_branch,runbot_branch,runbot.model_runbot_branch,group_user,1,0,0,0
access_runbot_build,runbot_build,runbot.model_runbot_build,group_user,1,0,0,0
access_runbot_remote_admin,runbot_remote_admin,runbot.model_runbot_remote,runbot.group_runbot_admin,1,1,1,1
access_runbot_branch_admin,runbot_branch_admin,runbot.model_runbot_branch,runbot.group_runbot_admin,1,1,1,1
access_runbot_build_admin,runbot_build_admin,runbot.model_runbot_build,runbot.group_runbot_admin,1,1,1,1
access_irlogging,log by runbot users,base.model_ir_logging,group_user,0,0,1,0
access_runbot_build_config_step_user,runbot_build_config_step_user,runbot.model_runbot_build_config_step,group_user,1,0,0,0
access_runbot_build_config_step_manager,runbot_build_config_step_manager,runbot.model_runbot_build_config_step,runbot.group_build_config_user,1,1,1,1
access_runbot_build_config_user,runbot_build_config_user,runbot.model_runbot_build_config,group_user,1,0,0,0
access_runbot_build_config_manager,runbot_build_config_manager,runbot.model_runbot_build_config,runbot.group_build_config_user,1,1,1,1
access_runbot_build_config_step_order_user,runbot_build_config_step_order_user,runbot.model_runbot_build_config_step_order,group_user,1,0,0,0
access_runbot_build_config_step_order_manager,runbot_build_config_step_order_manager,runbot.model_runbot_build_config_step_order,runbot.group_build_config_user,1,1,1,1
access_runbot_config_step_upgrade_db_user,runbot_config_step_upgrade_db_user,runbot.model_runbot_config_step_upgrade_db,group_user,1,0,0,0
access_runbot_config_step_upgrade_db_manager,runbot_config_step_upgrade_db_manager,runbot.model_runbot_config_step_upgrade_db,runbot.group_build_config_user,1,1,1,1
access_runbot_build_error_user,runbot_build_error_user,runbot.model_runbot_build_error,group_user,1,0,0,0
access_runbot_build_error_admin,runbot_build_error_admin,runbot.model_runbot_build_error,runbot.group_runbot_admin,1,1,1,1
access_runbot_build_error_manager,runbot_build_error_manager,runbot.model_runbot_build_error,runbot.group_runbot_error_manager,1,1,1,1
access_runbot_build_error_tag_user,runbot_build_error_tag_user,runbot.model_runbot_build_error_tag,group_user,1,0,0,0
access_runbot_build_error_tag_admin,runbot_build_error_tag_admin,runbot.model_runbot_build_error_tag,runbot.group_runbot_admin,1,1,1,1
access_runbot_build_error_tag_manager,runbot_build_error_tag_manager,runbot.model_runbot_build_error_tag,runbot.group_runbot_error_manager,1,1,1,1
access_runbot_team_admin,runbot_team_admin,runbot.model_runbot_team,runbot.group_runbot_admin,1,1,1,1
access_runbot_team_user,runbot_team_user,runbot.model_runbot_team,group_user,1,0,0,0
access_runbot_dashboard_admin,runbot_dashboard_admin,runbot.model_runbot_dashboard,runbot.group_runbot_admin,1,1,1,1
access_runbot_dashboard_user,runbot_dashboard_user,runbot.model_runbot_dashboard,group_user,1,0,0,0
access_runbot_dashboard_tile_admin,runbot_dashboard_tile_admin,runbot.model_runbot_dashboard_tile,runbot.group_runbot_admin,1,1,1,1
access_runbot_dashboard_tile_user,runbot_dashboard_tile_user,runbot.model_runbot_dashboard_tile,group_user,1,0,0,0
access_runbot_error_regex_user,runbot_error_regex_user,runbot.model_runbot_error_regex,group_user,1,0,0,0
access_runbot_error_regex_manager,runbot_error_regex_manager,runbot.model_runbot_error_regex,runbot.group_runbot_admin,1,1,1,1
access_runbot_host_user,runbot_host_user,runbot.model_runbot_host,group_user,1,0,0,0
access_runbot_host_manager,runbot_host_manager,runbot.model_runbot_host,runbot.group_runbot_admin,1,1,1,1
access_runbot_error_log_user,runbot_error_log_user,runbot.model_runbot_error_log,group_user,1,0,0,0
access_runbot_error_log_manager,runbot_error_log_manager,runbot.model_runbot_error_log,runbot.group_runbot_admin,1,1,1,1
access_runbot_repo_hooktime,runbot_repo_hooktime,runbot.model_runbot_repo_hooktime,group_user,1,0,0,0
access_runbot_repo_referencetime,runbot_repo_referencetime,runbot.model_runbot_repo_reftime,group_user,1,0,0,0
access_runbot_build_stat_user,runbot_build_stat_user,runbot.model_runbot_build_stat,group_user,1,0,0,0
access_runbot_build_stat_admin,runbot_build_stat_admin,runbot.model_runbot_build_stat,runbot.group_runbot_admin,1,1,1,1
access_runbot_build_stat_regex_user,access_runbot_build_stat_regex_user,runbot.model_runbot_build_stat_regex,runbot.group_user,1,0,0,0
access_runbot_build_stat_regex_admin,access_runbot_build_stat_regex_admin,runbot.model_runbot_build_stat_regex,runbot.group_runbot_admin,1,1,1,1
access_runbot_trigger_user,access_runbot_trigger_user,runbot.model_runbot_trigger,runbot.group_user,1,0,0,0
access_runbot_trigger_runbot_admin,access_runbot_trigger_runbot_admin,runbot.model_runbot_trigger,runbot.group_runbot_admin,1,1,1,1
access_runbot_repo_user,access_runbot_repo_user,runbot.model_runbot_repo,runbot.group_user,1,0,0,0
access_runbot_repo_runbot_admin,access_runbot_repo_runbot_admin,runbot.model_runbot_repo,runbot.group_runbot_admin,1,1,1,1
access_runbot_commit_user,access_runbot_commit_user,runbot.model_runbot_commit,runbot.group_user,1,0,0,0
access_runbot_build_params_user,access_runbot_build_params_user,runbot.model_runbot_build_params,runbot.group_user,1,0,0,0
access_runbot_build_params_runbot_admin,access_runbot_build_params_runbot_admin,runbot.model_runbot_build_params,runbot.group_runbot_admin,1,1,1,1
access_runbot_commit_link_user,access_runbot_commit_link_user,runbot.model_runbot_commit_link,runbot.group_user,1,0,0,0
access_runbot_commit_link_runbot_admin,access_runbot_commit_link_runbot_admin,runbot.model_runbot_commit_link,runbot.group_runbot_admin,1,1,1,1
access_runbot_version_user,access_runbot_version_user,runbot.model_runbot_version,runbot.group_user,1,0,0,0
access_runbot_version_runbot_admin,access_runbot_version_runbot_admin,runbot.model_runbot_version,runbot.group_runbot_admin,1,1,1,1
access_runbot_project_user,access_runbot_project_user,runbot.model_runbot_project,runbot.group_user,1,0,0,0
access_runbot_project_runbot_admin,access_runbot_project_runbot_admin,runbot.model_runbot_project,runbot.group_runbot_admin,1,1,1,1
access_runbot_bundle_user,access_runbot_bundle_user,runbot.model_runbot_bundle,runbot.group_user,1,0,0,0
access_runbot_bundle_runbot_admin,access_runbot_bundle_runbot_admin,runbot.model_runbot_bundle,runbot.group_runbot_admin,1,1,1,1
access_runbot_batch_user,access_runbot_batch_user,runbot.model_runbot_batch,runbot.group_user,1,0,0,0
access_runbot_batch_runbot_admin,access_runbot_batch_runbot_admin,runbot.model_runbot_batch,runbot.group_runbot_admin,1,1,1,1
access_runbot_batch_slot_user,access_runbot_batch_slot_user,runbot.model_runbot_batch_slot,runbot.group_user,1,0,0,0
access_runbot_batch_slot_runbot_admin,access_runbot_batch_slot_runbot_admin,runbot.model_runbot_batch_slot,runbot.group_runbot_admin,1,1,1,1
access_runbot_ref_log_runbot_user,access_runbot_ref_log_runbot_user,runbot.model_runbot_ref_log,runbot.group_user,1,0,0,0
access_runbot_ref_log_runbot_admin,access_runbot_ref_log_runbot_admin,runbot.model_runbot_ref_log,runbot.group_runbot_admin,1,1,1,1
access_runbot_commit_status_runbot_user,access_runbot_commit_status_runbot_user,runbot.model_runbot_commit_status,runbot.group_user,1,0,0,0
access_runbot_commit_status_runbot_admin,access_runbot_commit_status_runbot_admin,runbot.model_runbot_commit_status,runbot.group_runbot_admin,1,1,1,1
access_runbot_bundle_trigger_custom_runbot_user,access_runbot_bundle_trigger_custom_runbot_user,runbot.model_runbot_bundle_trigger_custom,runbot.group_user,1,0,0,0
access_runbot_bundle_trigger_custom_runbot_admin,access_runbot_bundle_trigger_custom_runbot_admin,runbot.model_runbot_bundle_trigger_custom,runbot.group_runbot_admin,1,1,1,1
access_runbot_category_runbot_user,access_runbot_category_runbot_user,runbot.model_runbot_category,runbot.group_user,1,0,0,0
access_runbot_category_runbot_admin,access_runbot_category_runbot_admin,runbot.model_runbot_category,runbot.group_runbot_admin,1,1,1,1
access_runbot_batch_log_runbot_user,access_runbot_batch_log_runbot_user,runbot.model_runbot_batch_log,runbot.group_user,1,0,0,0
access_runbot_warning_user,access_runbot_warning_user,runbot.model_runbot_warning,runbot.group_user,1,0,0,0
access_runbot_warning_admin,access_runbot_warning_admin,runbot.model_runbot_warning,runbot.group_runbot_admin,1,1,1,1
access_runbot_database_user,access_runbot_database_user,runbot.model_runbot_database,runbot.group_user,1,0,0,0
access_runbot_database_admin,access_runbot_database_admin,runbot.model_runbot_database,runbot.group_runbot_admin,1,1,1,1
access_runbot_upgrade_regex_user,access_runbot_upgrade_regex_user,runbot.model_runbot_upgrade_regex,runbot.group_user,1,0,0,0
access_runbot_upgrade_regex_admin,access_runbot_upgrade_regex_admin,runbot.model_runbot_upgrade_regex,runbot.group_runbot_admin,1,1,1,1
access_runbot_upgrade_exception_user,access_runbot_upgrade_exception_user,runbot.model_runbot_upgrade_exception,runbot.group_user,1,0,0,0
access_runbot_upgrade_exception_admin,access_runbot_upgrade_exception_admin,runbot.model_runbot_upgrade_exception,runbot.group_runbot_admin,1,1,1,1
access_runbot_dockerfile_user,access_runbot_dockerfile_user,runbot.model_runbot_dockerfile,runbot.group_user,1,0,0,0
access_runbot_dockerfile_admin,access_runbot_dockerfile_admin,runbot.model_runbot_dockerfile,runbot.group_runbot_admin,1,1,1,1
access_runbot_codeowner_admin,runbot_codeowner_admin,runbot.model_runbot_codeowner,runbot.group_runbot_admin,1,1,1,1
access_runbot_codeowner_user,runbot_codeowner_user,runbot.model_runbot_codeowner,group_user,1,0,0,0
access_runbot_commit_export_admin,runbot_commit_export_admin,runbot.model_runbot_commit_export,runbot.group_runbot_admin,1,1,1,1
access_runbot_trigger_custom_wizard,access_runbot_trigger_custom_wizard,model_runbot_trigger_custom_wizard,runbot.group_runbot_admin,1,1,1,1
access_runbot_build_stat_regex_wizard,access_runbot_build_stat_regex_wizard,model_runbot_build_stat_regex_wizard,runbot.group_runbot_admin,1,1,1,1


@ -1,14 +0,0 @@
id,name,model_id/id,groups/id,domain_force,perm_read,perm_create,perm_write,perm_unlink
rule_project,"limited to groups",model_runbot_project,group_user,"['|', ('group_ids', '=', False), ('group_ids', 'in', [g.id for g in user.groups_id])]",1,1,1,1
rule_project_mgmt,"manager can see all",model_runbot_project,group_runbot_admin,"[(1, '=', 1)]",1,1,1,1
rule_repo,"limited to groups",model_runbot_repo,group_user,"['|', ('project_id.group_ids', '=', False), ('project_id.group_ids', 'in', [g.id for g in user.groups_id])]",1,1,1,1
rule_repo_mgmt,"manager can see all",model_runbot_repo,group_runbot_admin,"[(1, '=', 1)]",1,1,1,1
rule_branch,"limited to groups",model_runbot_branch,group_user,"['|', ('remote_id.repo_id.project_id.group_ids', '=', False), ('remote_id.repo_id.project_id.group_ids', 'in', [g.id for g in user.groups_id])]",1,1,1,1
rule_branch_mgmt,"manager can see all",model_runbot_branch,group_runbot_admin,"[(1, '=', 1)]",1,1,1,1
rule_commit,"limited to groups",model_runbot_commit,group_user,"['|', ('repo_id.project_id.group_ids', '=', False), ('repo_id.project_id.group_ids', 'in', [g.id for g in user.groups_id])]",1,1,1,1
rule_commit_mgmt,"manager can see all",model_runbot_commit,group_runbot_admin,"[(1, '=', 1)]",1,1,1,1
rule_build,"limited to groups",model_runbot_build,group_user,"['|', ('params_id.project_id.group_ids', '=', False), ('params_id.project_id.group_ids', 'in', [g.id for g in user.groups_id])]",1,1,1,1
rule_build_mgmt,"manager can see all",model_runbot_build,group_runbot_admin,"[(1, '=', 1)]",1,1,1,1


@ -1,123 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<data>
<record model="ir.module.category" id="module_project">
<field name="name">Runbot</field>
</record>
<record id="group_user" model="res.groups">
<field name="name">User</field>
<field name="category_id" ref="module_project"/>
<!-- as public user is inactive, it wont be automatically added
to this group via implied groups. add it manually -->
<field name="users" eval="[(4, ref('base.public_user'))]"/>
</record>
<record id="base.group_public" model="res.groups">
<field name="implied_ids" eval="[(4, ref('runbot.group_user'))]"/>
</record>
<record id="base.group_user" model="res.groups">
<field name="implied_ids" eval="[(4, ref('runbot.group_user'))]"/>
</record>
<record id="base.group_portal" model="res.groups">
<field name="implied_ids" eval="[(4, ref('runbot.group_user'))]"/>
</record>
<record model="ir.module.category" id="build_config_project">
<field name="name">Build Config</field>
</record>
<record id="group_build_config_user" model="res.groups">
<field name="name">Build config user</field>
<field name="category_id" ref="build_config_project"/>
</record>
<record id="group_build_config_manager" model="res.groups">
<field name="name">Build config manager</field>
<field name="category_id" ref="build_config_project"/>
<field name="implied_ids" eval="[(4, ref('runbot.group_build_config_user'))]"/>
</record>
<record id="group_build_config_administrator" model="res.groups">
<field name="name">Build config administrator</field>
<field name="category_id" ref="build_config_project"/>
<field name="implied_ids" eval="[(4, ref('runbot.group_build_config_manager'))]"/>
<field name="users" eval="[(4, ref('base.user_root'))]"/>
</record>
<record id="group_runbot_error_manager" model="res.groups">
<field name="name">Build error manager</field>
<field name="category_id" ref="module_project"/>
</record>
<record id="group_runbot_admin" model="res.groups">
<field name="name">Runbot administrator</field>
<field name="category_id" ref="module_project"/>
<field name="users" eval="[(4, ref('base.user_root')), (4, ref('base.user_admin'))]"/>
<field name="implied_ids" eval="[(4, ref('runbot.group_user')), (4, ref('runbot.group_build_config_administrator'))]"/>
</record>
<!-- config access rules-->
<record id="runbot_build_config_access_administrator" model="ir.rule">
<field name="name">All config can be edited by config admin</field>
<field name="groups" eval="[(4, ref('group_build_config_administrator'))]"/>
<field name="model_id" ref="model_runbot_build_config"/>
<field name="domain_force">[(1, '=', 1)]</field>
<field name="perm_write" eval="True"/>
<field name="perm_unlink" eval="True"/>
<field name="perm_read" eval="False"/>
<field name="perm_create" eval="False"/>
</record>
<record id="runbot_build_config_access_manager" model="ir.rule">
<field name="name">Own config can be edited by user</field>
<field name="groups" eval="[(4, ref('group_build_config_manager'))]"/>
<field name="model_id" ref="model_runbot_build_config"/>
<field name="domain_force">[('protected', '=', False)]</field>
<field name="perm_write" eval="True"/>
<field name="perm_unlink" eval="True"/>
<field name="perm_read" eval="False"/>
<field name="perm_create" eval="True"/>
</record>
<record id="runbot_build_config_access_user" model="ir.rule">
<field name="name">All config can be edited by config admin</field>
<field name="groups" eval="[(4, ref('group_build_config_user'))]"/>
<field name="model_id" ref="model_runbot_build_config"/>
<field name="domain_force">[('create_uid', '=', user.id)]</field>
<field name="perm_write" eval="True"/>
<field name="perm_unlink" eval="True"/>
<field name="perm_read" eval="False"/>
<field name="perm_create" eval="True"/>
</record>
<!-- step access rules-->
<record id="runbot_build_config_step_access_administrator" model="ir.rule">
<field name="name">All config step can be edited by config admin</field>
<field name="groups" eval="[(4, ref('group_build_config_administrator'))]"/>
<field name="model_id" ref="model_runbot_build_config_step"/>
<field name="domain_force">[(1, '=', 1)]</field>
<field name="perm_read" eval="False"/>
</record>
<record id="runbot_build_config_step_access_manager" model="ir.rule">
<field name="name">Unprotected config step can be edited by manager</field>
<field name="groups" eval="[(4, ref('group_build_config_manager'))]"/>
<field name="model_id" ref="model_runbot_build_config_step"/>
<field name="domain_force">[('protected', '=', False)]</field>
<field name="perm_read" eval="False"/>
</record>
<record id="runbot_build_config_step_access_user" model="ir.rule">
<field name="name">Own config step can be edited by user</field>
<field name="groups" eval="[(4, ref('group_build_config_user'))]"/>
<field name="model_id" ref="model_runbot_build_config_step"/>
<field name="domain_force">[('protected', '=', False), ('create_uid', '=', user.id)]</field>
<field name="perm_read" eval="False"/>
</record>
</data>
</odoo>


@ -1,329 +0,0 @@
body {
margin: 0;
font-size: 0.875rem;
font-weight: 400;
line-height: 1.5;
color: #212529;
text-align: left;
background-color: white;
}
form {
margin: 0;
}
table {
font-size: 0.875rem;
}
.fa {
line-height: inherit; /* reset fa icon line height to body height*/
}
a {
color: #00A09D;
text-decoration: none;
}
a:hover {
color: #005452;
}
a.slots_infos:hover {
text-decoration: none;
}
.breadcrumb-item.active a {
color: #6c757d;
}
.breadcrumb {
background-color: inherit;
margin-bottom: 0;
}
.build_details {
padding: 5px;
}
.separator {
border-top: 2px solid #666;
}
[data-toggle="collapse"] .fa:before {
content: "\f139";
}
[data-toggle="collapse"].collapsed .fa:before {
content: "\f13a";
}
body, .table {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
color: #444;
}
.btn-default {
background-color: #fff;
color: #444;
border-color: #ccc;
}
.btn-default:hover {
background-color: #ccc;
color: #444;
border-color: #ccc;
}
.btn-sm, .btn-group-sm > .btn {
padding: 0.25rem 0.5rem;
font-size: 0.89rem;
line-height: 1.5;
border-radius: 0.2rem;
}
.btn-ssm, .btn-group-ssm > .btn {
padding: 0.22rem 0.4rem;
font-size: 0.82rem;
line-height: 1;
border-radius: 0.2rem;
}
.killed, .bg-killed, .bg-killed-light {
background-color: #aaa;
}
.dropdown-toggle:after {
content: none;
}
.one_line {
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.batch_tile {
padding: 6px;
}
.branch_time {
float: right;
margin-left: 10px;
}
:root {
--info-light: #d9edf7;
}
.bg-success-light {
background-color: #dff0d8;
}
.bg-danger-light {
background-color: #f2dede;
}
.bg-info-light {
background-color: var(--info-light);
}
.bg-warning-light {
background-color: #fff9e6;
}
.text-info {
color: #096b72 !important;
}
.build_subject_buttons {
display: flex;
}
.build_buttons {
margin-left: auto;
}
.bg-killed,
.badge-killed {
background-color: #aaa;
}
.table-condensed td {
padding: 0.25rem;
}
.line-through {
text-decoration: line-through;
}
.badge-light {
border: 1px solid #AAA;
}
.slot_button_group {
display: flex;
padding: 0 1px;
}
.slot_button_group .btn {
flex: 0 0 25px;
}
.slot_button_group .btn.slot_name {
width: 40px;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
flex: 1 1 auto;
text-align: left;
}
.batch_header {
padding: 6px;
}
.batch_header:hover {
background-color: rgba(0, 0, 0, 0.1);
}
.header_hover {
visibility: hidden;
}
.batch_header:hover .header_hover {
visibility: visible;
}
.batch_slots {
display: flex;
flex-wrap: wrap;
padding: 6px;
}
.batch_commits {
background-color: white;
padding: 2px;
}
.match_type_new {
background-color: var(--info-light);
}
.batch_row .slot_container {
flex: 1 0 200px;
padding: 0 4px;
}
.batch_row .slot_filler {
width: 100px;
height: 0px;
flex: 1 0 200px;
padding: 0 4px;
}
.bundle_row {
border-bottom: 1px solid var(--gray);
}
.bundle_row .batch_commits {
font-size: 80%;
}
.bundle_row .slot_container {
flex: 1 0 50%;
}
.bundle_row .slot_filler {
flex: 1 0 50%;
}
.bundle_row .more .batch_commits {
display: block;
}
/*.bundle_row .nomore .batch_commits {
display: none;
padding: 8px;
}
.bundle_row .nomore.batch_tile:hover .batch_commits {
display: block;
position: absolute;
bottom: 1px;
transform: translateY(100%);
z-index: 100;
border: 1px solid rgba(0, 0, 0, 0.125);
border-radius: 0.2rem;
box-sizing: border-box;
margin-left: -1px;
}*/
.chart-legend {
max-height: calc(100vh - 160px);
overflow-y: scroll;
overflow-x: hidden;
cursor: pointer;
padding: 5px;
}
.chart-legend .label {
margin-left: 5px;
font-weight: bold;
}
.chart-legend .disabled .color {
visibility: hidden;
}
.chart-legend .disabled .label {
font-weight: normal;
text-decoration: line-through;
margin-left: 5px;
}
.chart-legend ul {
list-style-type: none;
margin: 0;
padding: 0;
}
.limited-height {
max-height: 180px;
overflow: scroll;
-ms-overflow-style: none;
scrollbar-width: none;
}
.limited-height > hr {
margin: 2px 0px;
}
.limited-height:before {
content: '';
width: 100%;
height: 30px;
position: absolute;
left: 0;
bottom: 0;
background: linear-gradient(transparent 0px, white 27px);
}
.limited-height::-webkit-scrollbar {
display: none;
}
.limited-height-toggle:hover {
background-color: #DDD;
}
.o_runbot_team_searchbar .nav {
margin-left: 0px !important;
}


@ -1,72 +0,0 @@
odoo.define('runbot.json_field', function (require) {
"use strict";
var basic_fields = require('web.basic_fields');
var relational_fields = require('web.relational_fields');
var registry = require('web.field_registry');
var field_utils = require('web.field_utils');
var dom = require('web.dom');
var FieldJson = basic_fields.FieldChar.extend({
init: function () {
this._super.apply(this, arguments);
if (this.mode === 'edit') {
this.tagName = 'textarea';
}
this.autoResizeOptions = {parent: this};
},
start: function () {
if (this.mode === 'edit') {
dom.autoresize(this.$el, this.autoResizeOptions);
}
return this._super();
},
_onKeydown: function (ev) {
if (ev.which === $.ui.keyCode.ENTER) {
ev.stopPropagation();
return;
}
this._super.apply(this, arguments);
},
});
registry.add('jsonb', FieldJson);
var FrontendUrl = relational_fields.FieldMany2One.extend({
isQuickEditable: false,
events: _.extend({'click .external_link': '_stopPropagation'}, relational_fields.FieldMany2One.prototype.events),
init() {
this._super.apply(this, arguments);
if (this.value) {
const model = this.value.model.split('.').slice(1).join('_');
const res_id = this.value.res_id;
this.route = '/runbot/' + model+ '/' + res_id;
} else {
this.route = false;
}
},
_renderReadonly: function () {
this._super.apply(this, arguments);
var link = '';
if (this.route) {
link = ' <a href="' + this.route + '" ><i class="external_link fa fa-fw o_button_icon fa-external-link "/></a>';
}
this.$el.html('<span>' + this.$el.html() + link + '</span>');
},
_stopPropagation: function(event) {
event.stopPropagation();
}
});
registry.add('frontend_url', FrontendUrl);
function stringify(obj) {
return JSON.stringify(obj, null, '\t');
}
field_utils.format.jsonb = stringify;
field_utils.parse.jsonb = JSON.parse;
});


@ -1,32 +0,0 @@
(function($) {
"use strict";
$(function () {
$(document).on('click', '[data-runbot]', function (e) {
e.preventDefault();
var data = $(this).data();
var operation = data.runbot;
if (!operation) {
return;
}
var xhr = new XMLHttpRequest();
xhr.addEventListener('load', function () {
if (operation == 'rebuild' && window.location.href.split('?')[0].endsWith('/build/' + data.runbotBuild)){
window.location.href = window.location.href.replace('/build/' + data.runbotBuild, '/build/' + xhr.responseText);
} else {
window.location.reload();
}
});
xhr.open('POST', '/runbot/build/' + data.runbotBuild + '/' + operation);
xhr.send();
});
});
})(jQuery);
function copyToClipboard(text) {
if (!navigator.clipboard) {
console.error('Clipboard not supported');
return;
}
navigator.clipboard.writeText(text);
}

Some files were not shown because too many files have changed in this diff.