[ADD] runbot_merge
Does not change anything in runbot itself, so if anything doesn't work or needs changing we can probably fix it in master directly.
commit 5c4018b91e
runbot_merge/README.rst (new file, 175 lines)
@@ -0,0 +1,175 @@
Merge Bot
=========

Setup
-----

* Set up a project with the relevant repositories and branches the bot
  should manage (e.g. odoo/odoo and 10.0).
* Set up reviewers (github_login + boolean flag on partners).
* Add "Issue comments", "Pull request reviews", "Pull requests" and
  "Statuses" webhooks to managed repositories.
* If applicable, add "Statuses" webhook to the *source* repositories.

  Github does not seem to send statuses cross-repository when commits
  get transmigrated, so if a user creates a branch in odoo-dev/odoo,
  waits for CI to run, then creates a PR targeting odoo/odoo, the PR
  will never get status-checked (unless we modify runbot to re-send
  statuses on the pull_request webhook).

Working Principles
------------------

Useful information (new PRs, CI, comments, ...) is pushed to the MB
via webhooks. Most of the staging work is performed via a cron job
(a rough sketch in code follows the list):

1. for each active staging, check whether they are done

   1. if successful

      * ``push --ff`` to target branches
      * close PRs

   2. if only one batch, mark it as failed

      for batches of multiple PRs, the MB attempts to infer which
      specific PR failed

   3. otherwise split the staging in 2 (bisection search of the
      problematic batch)

2. for each branch with no active staging

   * if there are inactive stagings, stage one of them
   * otherwise look for batches targeted to that branch (PRs grouped
     by label with the branch as target)
   * attempt staging

     1. reset temp branches (one per repo) to the corresponding targets
     2. merge each of the batch's PRs into the relevant temp branch

        * on merge failure, mark the PRs as failed

     3. once there are no more batches or the limit is reached, reset
        the staging branches to tmp
     4. mark the staging as active
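
A rough, non-normative sketch of one cron pass in plain Python — every
name in it is a hypothetical stand-in, not the actual model API::

    BATCH_LIMIT = 8  # default documented under Structure

    def cron_pass(branch):
        st = branch.get('active_staging')
        if st:
            if st['state'] == 'pending':
                return 'wait'                 # CI still running
            if st['state'] == 'success':
                return 'ff-and-close'         # push --ff targets, close PRs
            if len(st['batches']) == 1:
                return 'mark-failed'          # lone batch is the culprit
            return 'split-in-two'             # bisect the bad batch
        # nothing staged: prefer re-staging a split, else ready batches
        todo = branch['splits'][0] if branch['splits'] else branch['ready_batches']
        return ('stage', todo[:BATCH_LIMIT])

    # e.g. a successful staging gets fast-forwarded and its PRs closed:
    assert cron_pass({'active_staging': {'state': 'success', 'batches': [1]},
                      'splits': [], 'ready_batches': []}) == 'ff-and-close'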

Commands
--------

A command string is a line starting with the mergebot's name and
followed by various commands. Self-reviewers count as reviewers for
the purpose of their own PRs, but delegate reviewers don't.
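
For example, assuming the default ``hanson`` prefix, a reviewer could
approve a PR and bump its priority to "pressing" with the comment::

    hanson r+ p=1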

retry
  resets a PR in error mode to ready for staging

  can be used by a reviewer or the PR author to re-stage the PR after
  it's been updated or the target has been updated & fixed.

r(eview)+
  approves a PR, can be used by a reviewer or delegate reviewer

  submitting an "approve" review implicitly r+'s the PR

r(eview)-
  removes approval from a PR, allows un-reviewing a PR in error (staging
  failed) so it can be updated and re-submitted

.. squash+/squash-
..   marks the PR as squash or merge, can override squash inference or a
..   previous squash command, can only be used by reviewers

delegate+/delegate=<users>
  adds either the PR author or the specified (github) users as authorised
  reviewers for this PR. ``<users>`` is a comma-separated list of
  github usernames (no @), can be used by reviewers

p(riority)=2|1|0
  sets the priority to normal (2), pressing (1) or urgent (0),
  lower-priority PRs are selected first and batched together, can be
  used by reviewers

rebase-
  the default merge mode is to rebase and merge the PR into the
  target, however for some situations this is not suitable and
  a regular merge is necessary; this command toggles rebasing
  mode off (and thus back to a regular merge)

Structure
---------

A *project* is used to manage multiple *repositories* across many
*branches*.

Each *PR* targets a specific branch in a specific repository.

A *batch* is a number of co-dependent PRs, PRs which are assumed to
depend on one another (the exact relationship is irrelevant) and thus
always need to be batched together. Batches are normally created on
the fly during staging.

A *staging* is a number of batches (up to 8 by default) which will be
tested together, and split if CI fails. Each staging applies to a
single *branch* (the target) across all managed repositories. Stagings
can be active (currently live on the various staging branches) or
inactive (to be staged later, generally as a result of splitting a
failed staging).

Notes
-----

* When looking for stageable batches, priority is taken into account
  and is isolating: e.g. if there's a single high-priority PR,
  low-priority PRs are ignored completely and only that one will be
  staged, on its own.
* Reviewers are set up on partners so we can e.g. have author-tracking
  & delegate reviewers without needing to create proper users for
  every contributor.
* The MB collates statuses on commits independently from other objects,
  so a commit getting CI'd in odoo-dev/odoo then made into a PR on
  odoo/odoo should be correctly interpreted, assuming odoo-dev/odoo
  sent its statuses to the MB.
* Github does not support transactional sequences of API calls, so
  it's possible that "intermediate" staging states are visible & have
  to be rolled back, e.g. a staging succeeds in a 2-repo scenario,
  A.{target} is ff-d to A.{staging}, then B.{target}'s ff to
  B.{staging} fails, and we have to roll back A.{target}.
* Co-dependence is currently inferred through *labels*, which are
  ``{owner}:{branchname}`` pairs e.g. odoo-dev:11.0-pr-flanker-jke.
  If this label is present on a PR to A and a PR to B, these two
  PRs will be collected into a single batch to ensure they always
  get batched (and failed) together.

Previous Work
-------------

bors-ng
~~~~~~~

* r+: accept (only for trusted reviewers)
* r-: unaccept
* r=users...: accept on behalf of users
* delegate+: allows author to self-review
* delegate=users...: allow non-reviewer users to review
* try: stage build (to separate branch) but don't merge on success

Why not bors-ng
###############

* no concurrent staging (can only stage one target at a time)
* can't do co-dependent repositories/multi-repo staging
* cancels/forgets r+'d branches on FF failure (emergency pushes)
  instead of re-staging

homu
~~~~

In addition to bors-ng's:

* SHA option on r+/r=, guards
* p=NUMBER: set priority (unclear if best = low/high)
* rollup/rollup-: should be default
* retry: re-attempt PR (flaky?)
* delegate-: remove delegate+/delegate=
* force: ???
* clean: ???
runbot_merge/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
from . import models, controllers
runbot_merge/__manifest__.py (new file, 13 lines)
@@ -0,0 +1,13 @@
{
    'name': 'merge bot',
    'depends': ['contacts', 'website'],
    'data': [
        'security/security.xml',
        'security/ir.model.access.csv',

        'data/merge_cron.xml',
        'views/res_partner.xml',
        'views/mergebot.xml',
        'views/templates.xml',
    ]
}
runbot_merge/controllers/__init__.py (new file, 262 lines)
@@ -0,0 +1,262 @@
import hashlib
import hmac
import logging
import json

import werkzeug.exceptions

from odoo.http import Controller, request, route

from . import dashboard

_logger = logging.getLogger(__name__)

class MergebotController(Controller):
    @route('/runbot_merge/hooks', auth='none', type='json', csrf=False, methods=['POST'])
    def index(self):
        req = request.httprequest
        event = req.headers['X-Github-Event']

        c = EVENTS.get(event)
        if not c:
            _logger.warn('Unknown event %s', event)
            return 'Unknown event {}'.format(event)

        repo = request.jsonrequest['repository']['full_name']
        env = request.env(user=1)

        secret = env['runbot_merge.repository'].search([
            ('name', '=', repo),
        ]).project_id.secret
        if secret:
            signature = 'sha1=' + hmac.new(secret.encode('ascii'), req.get_data(), hashlib.sha1).hexdigest()
            if not hmac.compare_digest(signature, req.headers.get('X-Hub-Signature', '')):
                _logger.warn("Ignored hook with incorrect signature %s",
                             req.headers.get('X-Hub-Signature'))
                return werkzeug.exceptions.Forbidden()

        return c(env, request.jsonrequest)
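
# For reference, a caller (e.g. a test) can produce a matching signature
# the same way github does -- payload, secret and hook_url below are
# whatever the caller uses, not names from this module:
#
#   body = json.dumps(payload).encode()
#   sig = 'sha1=' + hmac.new(secret.encode('ascii'), body, hashlib.sha1).hexdigest()
#   requests.post(hook_url, data=body, headers={
#       'X-Github-Event': 'ping',
#       'Content-Type': 'application/json',
#       'X-Hub-Signature': sig,
#   })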
def handle_pr(env, event):
    if event['action'] in [
        'assigned', 'unassigned', 'review_requested', 'review_request_removed',
        'labeled', 'unlabeled'
    ]:
        _logger.debug(
            'Ignoring pull_request[%s] on %s:%s',
            event['action'],
            event['pull_request']['base']['repo']['full_name'],
            event['pull_request']['number'],
        )
        return 'Ignoring'

    pr = event['pull_request']
    r = pr['base']['repo']['full_name']
    b = pr['base']['ref']

    repo = env['runbot_merge.repository'].search([('name', '=', r)])
    if not repo:
        _logger.warning("Received a PR for %s but not configured to handle that repo", r)
        # odoo "json" routes are actually JSON-RPC, so it's not possible
        # to reply with a raw HTTP response, and thus not possible to
        # report actual errors to the webhook deliveries listing on
        # github (not that we'd be looking at them, but it'd be useful
        # for tests)
        return "Not configured to handle {}".format(r)

    # PRs to unmanaged branches are not necessarily abnormal and
    # we don't care
    branch = env['runbot_merge.branch'].search([
        ('name', '=', b),
        ('project_id', '=', repo.project_id.id),
    ])

    def find(target):
        return env['runbot_merge.pull_requests'].search([
            ('repository', '=', repo.id),
            ('number', '=', pr['number']),
            ('target', '=', target.id),
        ])
    # edition difficulty: pr['base']['ref'] is the *new* target, the old one
    # is at event['changes']['base']['ref']['from'] (if the target changed),
    # so edition handling must occur before the rest of the steps
    if event['action'] == 'edited':
        source = event['changes'].get('base', {'ref': {'from': b}})['ref']['from']
        source_branch = env['runbot_merge.branch'].search([
            ('name', '=', source),
            ('project_id', '=', repo.project_id.id),
        ])
        # retargeting to un-managed => delete
        if not branch:
            pr = find(source_branch)
            pr.unlink()
            return 'Retargeted {} to un-managed branch {}, deleted'.format(pr.id, b)

        # retargeting from un-managed => create
        if not source_branch:
            return handle_pr(env, dict(event, action='opened'))

        updates = {}
        if source_branch != branch:
            updates['target'] = branch.id
        if event['changes'].keys() & {'title', 'body'}:
            updates['message'] = "{}\n\n{}".format(pr['title'].strip(), pr['body'].strip())
        if updates:
            pr_obj = find(source_branch)
            pr_obj.write(updates)
            return 'Updated {}'.format(pr_obj.id)
        return "Nothing to update ({})".format(event['changes'].keys())

    if not branch:
        _logger.info("Ignoring PR for un-managed branch %s:%s", r, b)
        return "Not set up to care about {}:{}".format(r, b)

    author_name = pr['user']['login']
    author = env['res.partner'].search([('github_login', '=', author_name)], limit=1)
    if not author:
        author = env['res.partner'].create({
            'name': author_name,
            'github_login': author_name,
        })

    _logger.info("%s: %s:%s (%s) (%s)", event['action'], repo.name, pr['number'], pr['title'].strip(), author.github_login)
    if event['action'] == 'opened':
        # some PRs have leading/trailing newlines in body/title (resp)
        title = pr['title'].strip()
        body = pr['body'].strip()
        pr_obj = env['runbot_merge.pull_requests'].create({
            'number': pr['number'],
            'label': pr['head']['label'],
            'author': author.id,
            'target': branch.id,
            'repository': repo.id,
            'head': pr['head']['sha'],
            'squash': pr['commits'] == 1,
            'message': '{}\n\n{}'.format(title, body),
        })
        return "Tracking PR as {}".format(pr_obj.id)

    pr_obj = env['runbot_merge.pull_requests']._get_or_schedule(r, pr['number'])
    if not pr_obj:
        _logger.warn("webhook %s on unknown PR %s:%s, scheduled fetch", event['action'], repo.name, pr['number'])
        return "Unknown PR {}:{}, scheduling fetch".format(repo.name, pr['number'])
    if event['action'] == 'synchronize':
        if pr_obj.head == pr['head']['sha']:
            return 'No update to pr head'

        if pr_obj.state in ('closed', 'merged'):
            _logger.error("Tentative sync to closed PR %s:%s", repo.name, pr['number'])
            return "It's my understanding that closed/merged PRs don't get sync'd"

        if pr_obj.state == 'validated':
            pr_obj.state = 'opened'
        elif pr_obj.state == 'ready':
            pr_obj.state = 'approved'
            pr_obj.staging_id.cancel(
                "Updated PR %s:%s, removing staging %s",
                pr_obj.repository.name, pr_obj.number,
                pr_obj.staging_id,
            )

        pr_obj.head = pr['head']['sha']
        pr_obj.squash = pr['commits'] == 1
        return 'Updated {} to {}'.format(pr_obj.id, pr_obj.head)

    # don't mark merged PRs as closed (!!!)
    if event['action'] == 'closed' and pr_obj.state != 'merged':
        pr_obj.state = 'closed'
        pr_obj.staging_id.cancel(
            "Closed PR %s:%s, removing staging %s",
            pr_obj.repository.name, pr_obj.number,
            pr_obj.staging_id
        )
        return 'Closed {}'.format(pr_obj.id)

    if event['action'] == 'reopened' and pr_obj.state == 'closed':
        pr_obj.state = 'opened'
        return 'Reopened {}'.format(pr_obj.id)

    _logger.info("Ignoring event %s on PR %s", event['action'], pr['number'])
    return "Not handling {} yet".format(event['action'])

def handle_status(env, event):
    _logger.info(
        'status %s:%s on commit %s',
        event['context'], event['state'],
        event['sha'],
    )
    Commits = env['runbot_merge.commit']
    c = Commits.search([('sha', '=', event['sha'])])
    if c:
        c.statuses = json.dumps({
            **json.loads(c.statuses),
            event['context']: event['state']
        })
    else:
        Commits.create({
            'sha': event['sha'],
            'statuses': json.dumps({event['context']: event['state']})
        })

    return 'ok'
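
# The `status` event payloads handled above carry (among other things)
# the commit sha, the status context and its state, e.g. (illustrative
# sample, heavily trimmed):
#
#   {"sha": "0123abc...", "context": "ci/runbot", "state": "success"}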

def handle_comment(env, event):
    if 'pull_request' not in event['issue']:
        return "issue comment, ignoring"

    repo = event['repository']['full_name']
    issue = event['issue']['number']
    author = event['sender']['login']
    comment = event['comment']['body']
    _logger.info('comment: %s %s:%s "%s"', author, repo, issue, comment)

    partner = env['res.partner'].search([('github_login', '=', author)])
    if not partner:
        _logger.info("ignoring comment from %s: not in system", author)
        return 'ignored'

    repository = env['runbot_merge.repository'].search([('name', '=', repo)])
    if not repository.project_id._find_commands(comment):
        return "No commands, ignoring"

    pr = env['runbot_merge.pull_requests']._get_or_schedule(repo, issue)
    if not pr:
        return "Unknown PR, scheduling fetch"

    return pr._parse_commands(partner, comment)

def handle_review(env, event):
    partner = env['res.partner'].search([('github_login', '=', event['review']['user']['login'])])
    if not partner:
        _logger.info('ignoring comment from %s: not in system', event['review']['user']['login'])
        return 'ignored'

    pr = env['runbot_merge.pull_requests']._get_or_schedule(
        event['repository']['full_name'],
        event['pull_request']['number'],
        event['pull_request']['base']['ref']
    )
    if not pr:
        return "Unknown PR, scheduling fetch"

    firstline = ''
    # github reports 'changes_requested' for request-changes reviews
    state = event['review']['state'].lower()
    if state == 'approved':
        firstline = pr.repository.project_id.github_prefix + ' r+\n'
    elif state == 'changes_requested':
        firstline = pr.repository.project_id.github_prefix + ' r-\n'

    return pr._parse_commands(partner, firstline + event['review']['body'])

def handle_ping(env, event):
    print("Got ping! {}".format(event['zen']))
    return "pong"

EVENTS = {
    'pull_request': handle_pr,
    'status': handle_status,
    'issue_comment': handle_comment,
    'pull_request_review': handle_review,
    'ping': handle_ping,
}
runbot_merge/controllers/dashboard.py (new file, 10 lines)
@@ -0,0 +1,10 @@
# -*- coding: utf-8 -*-
from odoo.http import Controller, route, request


class MergebotDashboard(Controller):
    @route('/runbot_merge', auth="public", type="http", website=True)
    def dashboard(self):
        return request.render('runbot_merge.dashboard', {
            'projects': request.env['runbot_merge.project'].sudo().search([]),
        })
runbot_merge/data/merge_cron.xml (new file, 22 lines)
@@ -0,0 +1,22 @@
<odoo>
    <!-- main loop: advances active stagings and creates new ones -->
    <record model="ir.cron" id="merge_cron">
        <field name="name">Check for progress of PRs &amp; Stagings</field>
        <field name="model_id" ref="model_runbot_merge_project"/>
        <field name="state">code</field>
        <field name="code">model._check_progress()</field>
        <field name="interval_number">1</field>
        <field name="interval_type">minutes</field>
        <field name="numbercall">-1</field>
        <field name="doall" eval="False"/>
    </record>
    <!-- processes queued fetch jobs for PRs the bot doesn't know yet -->
    <record model="ir.cron" id="fetch_prs_cron">
        <field name="name">Check for PRs to fetch</field>
        <field name="model_id" ref="model_runbot_merge_project"/>
        <field name="state">code</field>
        <field name="code">model._check_fetch(True)</field>
        <field name="interval_number">1</field>
        <field name="interval_type">minutes</field>
        <field name="numbercall">-1</field>
        <field name="doall" eval="False"/>
    </record>
</odoo>
runbot_merge/exceptions.py (new file, 4 lines)
@@ -0,0 +1,4 @@
class MergeError(Exception):
    pass
class FastForwardError(Exception):
    pass
runbot_merge/github.py (new file, 173 lines)
@@ -0,0 +1,173 @@
import collections
import functools
import itertools
import logging

import requests

from . import exceptions

_logger = logging.getLogger(__name__)
class GH(object):
    def __init__(self, token, repo):
        self._url = 'https://api.github.com'
        self._repo = repo
        session = self._session = requests.Session()
        session.headers['Authorization'] = 'token {}'.format(token)

    def __call__(self, method, path, params=None, json=None, check=True):
        """
        :type check: bool | dict[int:Exception]
        """
        r = self._session.request(
            method,
            '{}/repos/{}/{}'.format(self._url, self._repo, path),
            params=params,
            json=json
        )
        if check:
            if isinstance(check, collections.Mapping):
                exc = check.get(r.status_code)
                if exc:
                    raise exc(r.content)
            r.raise_for_status()
        return r
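
    # Illustrative usage (the token and repo name are just examples):
    #   gh = GH(token, 'odoo/odoo')
    #   gh('get', 'pulls/1').json()  # GET https://api.github.com/repos/odoo/odoo/pulls/1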

    def head(self, branch):
        d = self('get', 'git/refs/heads/{}'.format(branch)).json()

        assert d['ref'] == 'refs/heads/{}'.format(branch)
        assert d['object']['type'] == 'commit'
        return d['object']['sha']

    def commit(self, sha):
        return self('GET', 'git/commits/{}'.format(sha)).json()

    def comment(self, pr, message):
        self('POST', 'issues/{}/comments'.format(pr), json={'body': message})

    def close(self, pr, message):
        self.comment(pr, message)
        self('PATCH', 'pulls/{}'.format(pr), json={'state': 'closed'})

    def change_tags(self, pr, from_, to_):
        to_add, to_remove = to_ - from_, from_ - to_
        for t in to_remove:
            r = self('DELETE', 'issues/{}/labels/{}'.format(pr, t), check=False)
            # successful deletion or attempt to delete a tag which isn't there
            # is fine, otherwise trigger an error
            if r.status_code not in (200, 404):
                r.raise_for_status()

        if to_add:
            self('POST', 'issues/{}/labels'.format(pr), json=list(to_add))

    def fast_forward(self, branch, sha):
        try:
            self('patch', 'git/refs/heads/{}'.format(branch), json={'sha': sha})
        except requests.HTTPError:
            raise exceptions.FastForwardError()

    def set_ref(self, branch, sha):
        # force-update ref
        r = self('patch', 'git/refs/heads/{}'.format(branch), json={
            'sha': sha,
            'force': True,
        }, check=False)
        if r.status_code == 200:
            return

        # 422 makes no sense but that's what github returns, leaving 404 just
        # in case
        if r.status_code in (404, 422):
            # fallback: create ref
            r = self('post', 'git/refs', json={
                'ref': 'refs/heads/{}'.format(branch),
                'sha': sha,
            }, check=False)
            if r.status_code == 201:
                return
        raise AssertionError("{}: {}".format(r.status_code, r.json()))

    def merge(self, sha, dest, message):
        r = self('post', 'merges', json={
            'base': dest,
            'head': sha,
            'commit_message': message,
        }, check={409: exceptions.MergeError})
        r = r.json()
        return dict(r['commit'], sha=r['sha'])

    def rebase(self, pr, dest, reset=False, commits=None):
        """ Rebase pr's commits on top of dest, updates dest unless ``reset``
        is set.

        Returns the hash of the rebased head.
        """
        original_head = self.head(dest)
        if commits is None:
            commits = self.commits(pr)

        assert commits, "can't rebase a PR with no commits"
        # first pass: merge each PR commit into dest to compute the rebased
        # trees, then put dest back where it started
        for c in commits:
            assert len(c['parents']) == 1, "can't rebase commits with more than one parent"
            tmp_msg = 'temp rebasing PR %s (%s)' % (pr, c['sha'])
            c['new_tree'] = self.merge(c['sha'], dest, tmp_msg)['tree']['sha']
        self.set_ref(dest, original_head)

        # second pass: create the actual rebased commits from the computed
        # trees, chained on top of the original head
        prev = original_head
        for c in commits:
            copy = self('post', 'git/commits', json={
                'message': c['commit']['message'],
                'tree': c['new_tree'],
                'parents': [prev],
                'author': c['commit']['author'],
                'committer': c['commit']['committer'],
            }, check={409: exceptions.MergeError}).json()
            prev = copy['sha']

        if not reset:
            self.set_ref(dest, prev)

        # prev is updated after each copy so it's the rebased PR head
        return prev

    # fetch various bits of issues / prs to load them
    def pr(self, number):
        return (
            self('get', 'issues/{}'.format(number)).json(),
            self('get', 'pulls/{}'.format(number)).json()
        )

    def comments(self, number):
        for page in itertools.count(1):
            r = self('get', 'issues/{}/comments'.format(number), params={'page': page})
            yield from r.json()
            if not r.links.get('next'):
                return

    def reviews(self, number):
        for page in itertools.count(1):
            r = self('get', 'pulls/{}/reviews'.format(number), params={'page': page})
            yield from r.json()
            if not r.links.get('next'):
                return

    def commits(self, pr):
        """ Returns a PR's commits oldest first (that's what GH does &
        is what we want)
        """
        r = self('get', 'pulls/{}/commits'.format(pr), params={'per_page': PR_COMMITS_MAX})
        assert not r.links.get('next'), "more than {} commits".format(PR_COMMITS_MAX)
        return r.json()

    def statuses(self, h):
        r = self('get', 'commits/{}/status'.format(h)).json()
        return [{
            'sha': r['sha'],
            'context': s['context'],
            'state': s['state'],
        } for s in r['statuses']]

PR_COMMITS_MAX = 50
runbot_merge/models/__init__.py (new file, 2 lines)
@@ -0,0 +1,2 @@
from . import res_partner
from . import pull_requests
runbot_merge/models/pull_requests.py (new file, 974 lines)
@@ -0,0 +1,974 @@
import collections
import datetime
import json
import logging
import pprint
import re

from itertools import takewhile

from odoo import api, fields, models, tools
from odoo.exceptions import ValidationError

from .. import github, exceptions, controllers

_logger = logging.getLogger(__name__)
class Project(models.Model):
    _name = 'runbot_merge.project'

    name = fields.Char(required=True, index=True)
    repo_ids = fields.One2many(
        'runbot_merge.repository', 'project_id',
        help="Repos included in that project, they'll be staged together. "\
             "*Not* to be used for cross-repo dependencies (that is to be handled by the CI)"
    )
    branch_ids = fields.One2many(
        'runbot_merge.branch', 'project_id',
        help="Branches of all project's repos which are managed by the merge bot. Also "\
             "target branches of PRs this project handles."
    )

    required_statuses = fields.Char(
        help="Comma-separated list of status contexts which must be "\
             "`success` for a PR or staging to be valid",
        default='legal/cla,ci/runbot'
    )
    ci_timeout = fields.Integer(
        default=60, required=True,
        help="Delay (in minutes) before a staging is considered timed out and failed"
    )

    github_token = fields.Char("Github Token", required=True)
    github_prefix = fields.Char(
        required=True,
        default="hanson", # mergebot du bot du bot du~
        help="Prefix (~bot name) used when sending commands from PR "
             "comments e.g. [hanson retry] or [hanson r+ p=1]"
    )

    batch_limit = fields.Integer(
        default=8, help="Maximum number of PRs staged together")

    secret = fields.Char(
        help="Webhook secret. If set, the signatures of incoming webhooks "
             "will be checked against it; failing signatures will lead to "
             "webhook rejection. Should only use ASCII."
    )

    def _check_progress(self):
        logger = _logger.getChild('cron')
        Batch = self.env['runbot_merge.batch']
        PRs = self.env['runbot_merge.pull_requests']
        for project in self.search([]):
            gh = {repo.name: repo.github() for repo in project.repo_ids}
            # check status of staged PRs
            for staging in project.mapped('branch_ids.active_staging_id'):
                logger.info(
                    "Checking active staging %s (state=%s)",
                    staging, staging.state
                )
                if staging.state == 'success':
                    old_heads = {
                        n: g.head(staging.target.name)
                        for n, g in gh.items()
                    }
                    repo_name = None
                    staging_heads = json.loads(staging.heads)
                    updated = []
                    try:
                        for repo_name, head in staging_heads.items():
                            gh[repo_name].fast_forward(
                                staging.target.name,
                                head
                            )
                            updated.append(repo_name)
                    except exceptions.FastForwardError:
                        logger.warning(
                            "Could not fast-forward successful staging on %s:%s, reverting updated repos %s and re-staging",
                            repo_name, staging.target.name,
                            ', '.join(updated),
                            exc_info=True
                        )
                        for name in reversed(updated):
                            gh[name].set_ref(staging.target.name, old_heads[name])
                    else:
                        prs = staging.mapped('batch_ids.prs')
                        logger.info(
                            "%s FF successful, marking %s as merged",
                            staging, prs
                        )
                        prs.write({'state': 'merged'})
                        for pr in prs:
                            # FIXME: this is the staging head rather than the actual merge commit for the PR
                            gh[pr.repository.name].close(pr.number, 'Merged in {}'.format(staging_heads[pr.repository.name]))
                    finally:
                        staging.batch_ids.write({'active': False})
                        staging.write({'active': False})
                elif staging.state == 'failure' or project.is_timed_out(staging):
                    staging.try_splitting()
                # else let flow

            # check for stageable branches/prs
            for branch in project.branch_ids:
                logger.info(
                    "Checking %s (%s) for staging: %s, skip? %s",
                    branch, branch.name,
                    branch.active_staging_id,
                    bool(branch.active_staging_id)
                )
                if branch.active_staging_id:
                    continue

                # noinspection SqlResolve
                self.env.cr.execute("""
                SELECT
                    min(pr.priority) as priority,
                    array_agg(pr.id) AS match
                FROM runbot_merge_pull_requests pr
                LEFT JOIN runbot_merge_batch batch ON pr.batch_id = batch.id AND batch.active
                WHERE pr.target = %s
                  -- exclude terminal states (so there's no issue when
                  -- deleting branches & reusing labels)
                  AND pr.state != 'merged'
                  AND pr.state != 'closed'
                GROUP BY pr.label
                HAVING (bool_or(pr.priority = 0) AND NOT bool_or(pr.state = 'error'))
                    OR bool_and(pr.state = 'ready')
                ORDER BY min(pr.priority), min(pr.id)
                """, [branch.id])
                # result rows: [(priority, [pr_id, ...])], one row per label
                rows = self.env.cr.fetchall()
                priority = rows[0][0] if rows else -1
                if priority == 0:
                    # p=0 takes precedence over all else
                    batched_prs = [
                        PRs.browse(pr_ids)
                        for _, pr_ids in takewhile(lambda r: r[0] == priority, rows)
                    ]
                elif branch.split_ids:
                    split_ids = branch.split_ids[0]
                    logger.info("Found split of PRs %s, re-staging", split_ids.mapped('batch_ids.prs'))
                    batched_prs = [batch.prs for batch in split_ids.batch_ids]
                    split_ids.unlink()
                elif rows:
                    # p=1 or p=2
                    batched_prs = [PRs.browse(pr_ids) for _, pr_ids in takewhile(lambda r: r[0] == priority, rows)]
                else:
                    continue

                staged = Batch
                meta = {repo: {} for repo in project.repo_ids}
                for repo, it in meta.items():
                    gh = it['gh'] = repo.github()
                    it['head'] = gh.head(branch.name)
                    # create tmp staging branch
                    gh.set_ref('tmp.{}'.format(branch.name), it['head'])

                batch_limit = project.batch_limit
                for batch in batched_prs:
                    if len(staged) >= batch_limit:
                        break
                    staged |= Batch.stage(meta, batch)

                if staged:
                    # create actual staging object
                    st = self.env['runbot_merge.stagings'].create({
                        'target': branch.id,
                        'batch_ids': [(4, batch.id, 0) for batch in staged],
                        'heads': json.dumps({
                            repo.name: it['head']
                            for repo, it in meta.items()
                        })
                    })
                    # create staging branch from tmp
                    for r, it in meta.items():
                        it['gh'].set_ref('staging.{}'.format(branch.name), it['head'])

                    # creating the staging doesn't trigger a write on the prs
                    # and thus the ->staging taggings, so do that by hand
                    Tagging = self.env['runbot_merge.pull_requests.tagging']
                    for pr in st.mapped('batch_ids.prs'):
                        Tagging.create({
                            'pull_request': pr.number,
                            'repository': pr.repository.id,
                            'state_from': pr._tagstate,
                            'state_to': 'staged',
                        })

                    logger.info("Created staging %s (%s)", st, staged)

        Repos = self.env['runbot_merge.repository']
        ghs = {}
        # noinspection SqlResolve
        self.env.cr.execute("""
        SELECT
            t.repository as repo_id,
            t.pull_request as pr_number,
            array_agg(t.id) as ids,
            (array_agg(t.state_from ORDER BY t.id))[1] as state_from,
            (array_agg(t.state_to ORDER BY t.id DESC))[1] as state_to
        FROM runbot_merge_pull_requests_tagging t
        GROUP BY t.repository, t.pull_request
        """)
        to_remove = []
        for repo_id, pr, ids, from_, to_ in self.env.cr.fetchall():
            repo = Repos.browse(repo_id)
            from_tags = _TAGS[from_ or False]
            to_tags = _TAGS[to_ or False]

            gh = ghs.get(repo)
            if not gh:
                gh = ghs[repo] = repo.github()

            try:
                gh.change_tags(pr, from_tags, to_tags)
            except Exception:
                _logger.exception(
                    "Error while trying to change the tags of %s:%s from %s to %s",
                    repo.name, pr, from_tags, to_tags,
                )
            else:
                to_remove.extend(ids)
        self.env['runbot_merge.pull_requests.tagging'].browse(to_remove).unlink()

    def is_timed_out(self, staging):
        return fields.Datetime.from_string(staging.staged_at) + datetime.timedelta(minutes=self.ci_timeout) < datetime.datetime.now()

    def _check_fetch(self, commit=False):
        """
        :param bool commit: commit after each fetch has been executed
        """
        while True:
            f = self.env['runbot_merge.fetch_job'].search([], limit=1)
            if not f:
                return

            f.repository._load_pr(f.number)

            # commit after each fetched PR
            f.active = False
            if commit:
                self.env.cr.commit()

    def _find_commands(self, comment):
        return re.findall(
            '^{}:? (.*)$'.format(self.github_prefix),
            comment, re.MULTILINE)
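
    # e.g. with the default 'hanson' prefix, a comment body of
    # "hanson r+ p=1" (or "hanson: r+ p=1") yields ['r+ p=1']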

    def _has_branch(self, name):
        self.env.cr.execute("""
        SELECT 1 FROM runbot_merge_branch
        WHERE project_id = %s AND name = %s
        LIMIT 1
        """, (self.id, name))
        return bool(self.env.cr.rowcount)

class Repository(models.Model):
    _name = 'runbot_merge.repository'

    name = fields.Char(required=True)
    project_id = fields.Many2one('runbot_merge.project', required=True)

    def github(self):
        return github.GH(self.project_id.github_token, self.name)

    def _auto_init(self):
        res = super(Repository, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_repo', self._table, ['name'])
        return res

    def _load_pr(self, number):
        gh = self.github()

        # fetch PR object and handle as *opened*
        issue, pr = gh.pr(number)

        if not self.project_id._has_branch(pr['base']['ref']):
            _logger.info("Tasked with loading PR %d for un-managed branch %s, ignoring", pr['number'], pr['base']['ref'])
            return

        controllers.handle_pr(self.env, {
            'action': 'opened',
            'pull_request': pr,
        })
        for st in gh.statuses(pr['head']['sha']):
            controllers.handle_status(self.env, st)
        # get and handle all comments
        for comment in gh.comments(number):
            controllers.handle_comment(self.env, {
                'issue': issue,
                'sender': comment['user'],
                'comment': comment,
                'repository': {'full_name': self.name},
            })
        # get and handle all reviews
        for review in gh.reviews(number):
            controllers.handle_review(self.env, {
                'review': review,
                'pull_request': pr,
                'repository': {'full_name': self.name},
            })

class Branch(models.Model):
    _name = 'runbot_merge.branch'

    name = fields.Char(required=True)
    project_id = fields.Many2one('runbot_merge.project', required=True)

    active_staging_id = fields.Many2one(
        'runbot_merge.stagings', compute='_compute_active_staging', store=True,
        help="Currently running staging for the branch."
    )
    staging_ids = fields.One2many('runbot_merge.stagings', 'target')
    split_ids = fields.One2many('runbot_merge.split', 'target')

    prs = fields.One2many('runbot_merge.pull_requests', 'target', domain=[
        ('state', '!=', 'closed'),
        ('state', '!=', 'merged'),
    ])

    def _auto_init(self):
        res = super(Branch, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_branch_per_repo',
            self._table, ['name', 'project_id'])
        return res

    @api.depends('staging_ids.active')
    def _compute_active_staging(self):
        for b in self:
            b.active_staging_id = b.staging_ids

class PullRequests(models.Model):
    _name = 'runbot_merge.pull_requests'
    _order = 'number desc'

    target = fields.Many2one('runbot_merge.branch', required=True)
    repository = fields.Many2one('runbot_merge.repository', required=True)
    # NB: check that target & repo have same project & provide project related?

    state = fields.Selection([
        ('opened', 'Opened'),
        ('closed', 'Closed'),
        ('validated', 'Validated'),
        ('approved', 'Approved'),
        ('ready', 'Ready'),
        # staged?
        ('merged', 'Merged'),
        ('error', 'Error'),
    ], default='opened')

    number = fields.Integer(required=True, index=True)
    author = fields.Many2one('res.partner')
    head = fields.Char(required=True)
    label = fields.Char(
        required=True, index=True,
        help="Label of the source branch (owner:branchname), used for "
             "cross-repository branch-matching"
    )
    message = fields.Text(required=True)
    squash = fields.Boolean(default=False)
    rebase = fields.Boolean(default=True)

    delegates = fields.Many2many('res.partner', help="Delegate reviewers, not intrinsically reviewers but can review this PR")
    priority = fields.Selection([
        (0, 'Urgent'),
        (1, 'Pressing'),
        (2, 'Normal'),
    ], default=2, index=True)

    statuses = fields.Text(compute='_compute_statuses')

    batch_id = fields.Many2one('runbot_merge.batch', compute='_compute_active_batch', store=True)
    batch_ids = fields.Many2many('runbot_merge.batch')
    staging_id = fields.Many2one(related='batch_id.staging_id', store=True)

    @api.depends('head')
    def _compute_statuses(self):
        Commits = self.env['runbot_merge.commit']
        for s in self:
            c = Commits.search([('sha', '=', s.head)])
            if c and c.statuses:
                s.statuses = pprint.pformat(json.loads(c.statuses))

    @api.depends('batch_ids.active')
    def _compute_active_batch(self):
        for r in self:
            r.batch_id = r.batch_ids.filtered(lambda b: b.active)[:1]

    def _get_or_schedule(self, repo_name, number, target=None):
        repo = self.env['runbot_merge.repository'].search([('name', '=', repo_name)])
        if not repo:
            return

        if target and not repo.project_id._has_branch(target):
            return

        pr = self.search([
            ('repository', '=', repo.id),
            ('number', '=', number)
        ])
        if pr:
            return pr

        Fetch = self.env['runbot_merge.fetch_job']
        if Fetch.search([('repository', '=', repo.id), ('number', '=', number)]):
            return
        Fetch.create({
            'repository': repo.id,
            'number': number,
        })

    def _parse_command(self, commandstring):
        m = re.match(r'(\w+)(?:([+-])|=(.*))?', commandstring)
        if not m:
            return None

        name, flag, param = m.groups()
        if name == 'retry':
            return ('retry', True)
        elif name in ('r', 'review'):
            if flag == '+':
                return ('review', True)
            elif flag == '-':
                return ('review', False)
        elif name == 'delegate':
            if flag == '+':
                return ('delegate', True)
            elif param:
                return ('delegate', param.split(','))
        elif name in ('p', 'priority'):
            if param in ('0', '1', '2'):
                return ('priority', int(param))
        elif name == 'rebase':
            return ('rebase', flag != '-')

        return None
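
    # A few illustrative inputs/outputs for the parser above:
    #   _parse_command('retry')        == ('retry', True)
    #   _parse_command('r+')           == ('review', True)
    #   _parse_command('delegate=a,b') == ('delegate', ['a', 'b'])
    #   _parse_command('p=0')          == ('priority', 0)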
||||
def _parse_commands(self, author, comment):
|
||||
"""Parses a command string prefixed by Project::github_prefix.
|
||||
|
||||
A command string can contain any number of space-separated commands:
|
||||
|
||||
retry
|
||||
resets a PR in error mode to ready for staging
|
||||
r(eview)+/-
|
||||
approves or disapproves a PR (disapproving just cancels an approval)
|
||||
delegate+/delegate=<users>
|
||||
adds either PR author or the specified (github) users as
|
||||
authorised reviewers for this PR. ``<users>`` is a
|
||||
comma-separated list of github usernames (no @)
|
||||
p(riority)=2|1|0
|
||||
sets the priority to normal (2), pressing (1) or urgent (0).
|
||||
Lower-priority PRs are selected first and batched together.
|
||||
rebase+/-
|
||||
Whether the PR should be rebased-and-merged (the default) or just
|
||||
merged normally.
|
||||
"""
|
||||
assert self, "parsing commands must be executed in an actual PR"
|
||||
|
||||
is_admin = (author.reviewer and self.author != author) or (author.self_reviewer and self.author == author)
|
||||
is_reviewer = is_admin or self in author.delegate_reviewer
|
||||
# TODO: should delegate reviewers be able to retry PRs?
|
||||
is_author = is_reviewer or self.author == author
|
||||
|
||||
if not is_author:
|
||||
# no point even parsing commands
|
||||
_logger.info("ignoring comment of %s (%s): no ACL to %s:%s",
|
||||
author.github_login, author.display_name,
|
||||
self.repository.name, self.number)
|
||||
return 'ignored'
|
||||
|
||||
commands = dict(
|
||||
ps
|
||||
for m in self.repository.project_id._find_commands(comment)
|
||||
for c in m.strip().split()
|
||||
for ps in [self._parse_command(c)]
|
||||
if ps is not None
|
||||
)
|
||||
|
||||
if not commands:
|
||||
_logger.info("found no commands in comment of %s (%s) (%s%s)", author.github_login, author.display_name,
|
||||
comment[:50], '...' if len(comment) > 50 else ''
|
||||
)
|
||||
return 'ok'
|
||||
|
||||
applied, ignored = [], []
|
||||
for command, param in commands.items():
|
||||
ok = False
|
||||
if command == 'retry':
|
||||
if is_author and self.state == 'error':
|
||||
ok = True
|
||||
self.state = 'ready'
|
||||
elif command == 'review':
|
||||
if param and is_reviewer:
|
||||
if self.state == 'opened':
|
||||
ok = True
|
||||
self.state = 'approved'
|
||||
elif self.state == 'validated':
|
||||
ok = True
|
||||
self.state = 'ready'
|
||||
elif not param and is_author and self.state == 'error':
|
||||
# TODO: r- on something which isn't in error?
|
||||
ok = True
|
||||
self.state = 'validated'
|
||||
elif command == 'delegate':
|
||||
if is_reviewer:
|
||||
ok = True
|
||||
Partners = delegates = self.env['res.partner']
|
||||
if param is True:
|
||||
delegates |= self.author
|
||||
else:
|
||||
for login in param:
|
||||
delegates |= Partners.search([('github_login', '=', login)]) or Partners.create({
|
||||
'name': login,
|
||||
'github_login': login,
|
||||
})
|
||||
delegates.write({'delegate_reviewer': [(4, self.id, 0)]})
|
||||
elif command == 'priority':
|
||||
if is_admin:
|
||||
ok = True
|
||||
self.priority = param
|
||||
if param == 0:
|
||||
self.target.active_staging_id.cancel(
|
||||
"P=0 on %s:%s by %s, unstaging %s",
|
||||
self.repository.name, self.number,
|
||||
author.github_login, self.target.name,
|
||||
)
|
||||
elif command == 'rebase':
|
||||
# anyone can rebase- their PR I guess?
|
||||
self.rebase = param
|
||||
|
||||
_logger.info(
|
||||
"%s %s(%s) on %s:%s by %s (%s)",
|
||||
"applied" if ok else "ignored",
|
||||
command, param,
|
||||
self.repository.name, self.number,
|
||||
author.github_login, author.display_name,
|
||||
)
|
||||
if ok:
|
||||
applied.append('{}({})'.format(command, param))
|
||||
else:
|
||||
ignored.append('{}({})'.format(command, param))
|
||||
msg = []
|
||||
if applied:
|
||||
msg.append('applied ' + ' '.join(applied))
|
||||
if ignored:
|
||||
msg.append('ignored ' + ' '.join(ignored))
|
||||
return '\n'.join(msg)
|
||||
|
||||
def _validate(self, statuses):
|
||||
# could have two PRs (e.g. one open and one closed) at least
|
||||
# temporarily on the same head, or on the same head with different
|
||||
# targets
|
||||
for pr in self:
|
||||
required = pr.repository.project_id.required_statuses.split(',')
|
||||
if all(statuses.get(r.strip()) == 'success' for r in required):
|
||||
oldstate = pr.state
|
||||
if oldstate == 'opened':
|
||||
pr.state = 'validated'
|
||||
elif oldstate == 'approved':
|
||||
pr.state = 'ready'
|
||||
|
||||
# _logger.info("CI+ (%s) for PR %s:%s: %s -> %s",
|
||||
# statuses, pr.repository.name, pr.number, oldstate, pr.state)
|
||||
# else:
|
||||
# _logger.info("CI- (%s) for PR %s:%s", statuses, pr.repository.name, pr.number)
|
||||
|
||||
def _auto_init(self):
|
||||
res = super(PullRequests, self)._auto_init()
|
||||
tools.create_unique_index(
|
||||
self._cr, 'runbot_merge_unique_pr_per_target', self._table, ['number', 'target', 'repository'])
|
||||
self._cr.execute("CREATE INDEX IF NOT EXISTS runbot_merge_pr_head "
|
||||
"ON runbot_merge_pull_requests "
|
||||
"USING hash (head)")
|
||||
return res
|
||||
|
||||
@property
|
||||
def _tagstate(self):
|
||||
if self.state == 'ready' and self.staging_id.heads:
|
||||
return 'staged'
|
||||
return self.state
|
||||
|
||||
@api.model
|
||||
def create(self, vals):
|
||||
pr = super().create(vals)
|
||||
c = self.env['runbot_merge.commit'].search([('sha', '=', pr.head)])
|
||||
if c and c.statuses:
|
||||
pr._validate(json.loads(c.statuses))
|
||||
|
||||
if pr.state not in ('closed', 'merged'):
|
||||
self.env['runbot_merge.pull_requests.tagging'].create({
|
||||
'pull_request': pr.number,
|
||||
'repository': pr.repository.id,
|
||||
'state_from': False,
|
||||
'state_to': pr._tagstate,
|
||||
})
|
||||
return pr
|
||||
|
||||
@api.multi
|
||||
def write(self, vals):
|
||||
oldstate = { pr: pr._tagstate for pr in self }
|
||||
w = super().write(vals)
|
||||
for pr in self:
|
||||
before, after = oldstate[pr], pr._tagstate
|
||||
if after != before:
|
||||
self.env['runbot_merge.pull_requests.tagging'].create({
|
||||
'pull_request': pr.number,
|
||||
'repository': pr.repository.id,
|
||||
'state_from': oldstate[pr],
|
||||
'state_to': pr._tagstate,
|
||||
})
|
||||
return w
|
||||
|
||||
@api.multi
|
||||
def unlink(self):
|
||||
for pr in self:
|
||||
self.env['runbot_merge.pull_requests.tagging'].create({
|
||||
'pull_request': pr.number,
|
||||
'repository': pr.repository.id,
|
||||
'state_from': pr._tagstate,
|
||||
'state_to': False,
|
||||
})
|
||||
return super().unlink()
|
||||
|
||||
|
||||
_TAGS = {
|
||||
False: set(),
|
||||
'opened': {'seen 🙂'},
|
||||
}
|
||||
_TAGS['validated'] = _TAGS['opened'] | {'CI 🤖'}
|
||||
_TAGS['approved'] = _TAGS['opened'] | {'r+ 👌'}
|
||||
_TAGS['ready'] = _TAGS['validated'] | _TAGS['approved']
|
||||
_TAGS['staged'] = _TAGS['ready'] | {'merging 👷'}
|
||||
_TAGS['merged'] = _TAGS['ready'] | {'merged 🎉'}
|
||||
_TAGS['error'] = _TAGS['opened'] | {'error 🙅'}
|
||||
_TAGS['closed'] = _TAGS['opened'] | {'closed 💔'}
|
||||
|
||||
class Tagging(models.Model):
|
||||
"""
|
||||
Queue of tag changes to make on PRs.
|
||||
|
||||
Several PR state changes are driven by webhooks, webhooks should return
|
||||
quickly, performing calls to the Github API would *probably* get in the
|
||||
way of that. Instead, queue tagging changes into this table whose
|
||||
execution can be cron-driven.
|
||||
"""
|
||||
_name = 'runbot_merge.pull_requests.tagging'
|
||||
|
||||
repository = fields.Many2one('runbot_merge.repository', required=True)
|
||||
# store the PR number (not id) as we need a Tagging for PR objects
|
||||
# being deleted (retargeted to non-managed branches)
|
||||
pull_request = fields.Integer()
|
||||
|
||||
state_from = fields.Selection([
|
||||
('opened', 'Opened'),
|
||||
('closed', 'Closed'),
|
||||
('validated', 'Validated'),
|
||||
('approved', 'Approved'),
|
||||
('ready', 'Ready'),
|
||||
('staged', 'Staged'),
|
||||
('merged', 'Merged'),
|
||||
('error', 'Error'),
|
||||
])
|
||||
state_to = fields.Selection([
|
||||
('opened', 'Opened'),
|
||||
('closed', 'Closed'),
|
||||
('validated', 'Validated'),
|
||||
('approved', 'Approved'),
|
||||
('ready', 'Ready'),
|
||||
('staged', 'Staged'),
|
||||
('merged', 'Merged'),
|
||||
('error', 'Error'),
|
||||
])
|
||||
|
||||
class Commit(models.Model):
|
||||
"""Represents a commit onto which statuses might be posted,
|
||||
independent of everything else as commits can be created by
|
||||
statuses only, by PR pushes, by branch updates, ...
|
||||
"""
|
||||
_name = 'runbot_merge.commit'
|
||||
|
||||
sha = fields.Char(required=True)
|
||||
statuses = fields.Char(help="json-encoded mapping of status contexts to states", default="{}")
|
||||
|
||||
def create(self, values):
|
||||
r = super(Commit, self).create(values)
|
||||
r._notify()
|
||||
return r
|
||||
|
||||
def write(self, values):
|
||||
r = super(Commit, self).write(values)
|
||||
self._notify()
|
||||
return r
|
||||
|
||||
# NB: GH recommends doing heavy work asynchronously, may be a good
|
||||
# idea to defer this to a cron or something
|
||||
def _notify(self):
|
||||
Stagings = self.env['runbot_merge.stagings']
|
||||
PRs = self.env['runbot_merge.pull_requests']
|
||||
# chances are low that we'll have more than one commit
|
||||
for c in self:
|
||||
st = json.loads(c.statuses)
|
||||
pr = PRs.search([('head', '=', c.sha)])
|
||||
if pr:
|
||||
pr._validate(st)
|
||||
# heads is a json-encoded mapping of reponame:head, so chances
|
||||
# are if a sha matches a heads it's matching one of the shas
|
||||
stagings = Stagings.search([('heads', 'ilike', c.sha)])
|
||||
if stagings:
|
||||
stagings._validate()
|
||||
|
||||
_sql_constraints = [
|
||||
('unique_sha', 'unique (sha)', 'no duplicated commit'),
|
||||
]
|
||||
|
||||
def _auto_init(self):
|
||||
res = super(Commit, self)._auto_init()
|
||||
self._cr.execute("""
|
||||
CREATE INDEX IF NOT EXISTS runbot_merge_unique_statuses
|
||||
ON runbot_merge_commit
|
||||
USING hash (sha)
|
||||
""")
|
||||
return res
|
||||
|
||||
class Stagings(models.Model):
|
||||
_name = 'runbot_merge.stagings'
|
||||
|
||||
target = fields.Many2one('runbot_merge.branch', required=True)
|
||||
|
||||
batch_ids = fields.One2many(
|
||||
'runbot_merge.batch', 'staging_id',
|
||||
)
|
||||
state = fields.Selection([
|
||||
('success', 'Success'),
|
||||
('failure', 'Failure'),
|
||||
('pending', 'Pending'),
|
||||
])
|
||||
active = fields.Boolean(default=True)
|
||||
|
||||
staged_at = fields.Datetime(default=fields.Datetime.now)
|
||||
restaged = fields.Integer(default=0)
|
||||
|
||||
# seems simpler than adding yet another indirection through a model
|
||||
heads = fields.Char(required=True, help="JSON-encoded map of heads, one per repo in the project")
|
||||
|
||||
def _validate(self):
|
||||
Commits = self.env['runbot_merge.commit']
|
||||
for s in self:
|
||||
heads = list(json.loads(s.heads).values())
|
||||
commits = Commits.search([
|
||||
('sha', 'in', heads)
|
||||
])
|
||||
if len(commits) < len(heads):
|
||||
s.state = 'pending'
|
||||
continue
|
||||
|
||||
reqs = [r.strip() for r in s.target.project_id.required_statuses.split(',')]
|
||||
st = 'success'
|
||||
for c in commits:
|
||||
statuses = json.loads(c.statuses)
|
||||
for v in map(statuses.get, reqs):
|
||||
if st == 'failure' or v in ('error', 'failure'):
|
||||
st = 'failure'
|
||||
elif v in (None, 'pending'):
|
||||
st = 'pending'
|
||||
else:
|
||||
assert v == 'success'
|
||||
s.state = st
|
||||
|
||||
def cancel(self, reason, *args):
|
||||
if not self:
|
||||
return
|
||||
|
||||
_logger.info(reason, *args)
|
||||
self.batch_ids.write({'active': False})
|
||||
self.active = False
|
||||
|
||||
def fail(self, message, prs=None):
|
||||
_logger.error("Staging %s failed: %s", self, message)
|
||||
prs = prs or self.batch_ids.prs
|
||||
prs.write({'state': 'error'})
|
||||
for pr in prs:
|
||||
pr.repository.github().comment(
|
||||
pr.number, "Staging failed: %s" % message)
|
||||
|
||||
self.batch_ids.write({'active': False})
|
||||
self.active = False
|
||||
|
||||
def try_splitting(self):
|
||||
batches = len(self.batch_ids)
|
||||
if batches > 1:
|
||||
midpoint = batches // 2
|
||||
h, t = self.batch_ids[:midpoint], self.batch_ids[midpoint:]
            # NB: batches remain attached to their original staging
            sh = self.env['runbot_merge.split'].create({
                'target': self.target.id,
                'batch_ids': [(4, batch.id, 0) for batch in h],
            })
            st = self.env['runbot_merge.split'].create({
                'target': self.target.id,
                'batch_ids': [(4, batch.id, 0) for batch in t],
            })
            _logger.info("Split %s to %s (%s) and %s (%s)",
                         self, h, sh, t, st)
            self.batch_ids.write({'active': False})
            self.active = False
            return True

        # single batch => the staging is an unredeemable failure
        if self.state != 'failure':
            # timed out, just mark all PRs (wheee)
            self.fail('timed out (>{} minutes)'.format(self.target.project_id.ci_timeout))
            return False

        # try inferring which PR failed and only mark that one
        for repo, head in json.loads(self.heads).items():
            commit = self.env['runbot_merge.commit'].search([
                ('sha', '=', head)
            ])
            reason = next((
                ctx for ctx, result in json.loads(commit.statuses).items()
                if result in ('error', 'failure')
            ), None)
            if not reason:
                continue

            pr = next((
                pr for pr in self.batch_ids.prs
                if pr.repository.name == repo
            ), None)
            if pr:
                self.fail(reason, pr)
                return False

        # the staging failed but we don't have a specific culprit, fail
        # everything
        self.fail("unknown reason")

        return False

class Split(models.Model):
    _name = 'runbot_merge.split'

    target = fields.Many2one('runbot_merge.branch', required=True)
    batch_ids = fields.One2many('runbot_merge.batch', 'split_id', context={'active_test': False})

class Batch(models.Model):
    """ A batch is a "horizontal" grouping of *codependent* PRs: PRs with
    the same label & target but for different repositories. These are
    assumed to be part of the same "change" smeared over multiple
    repositories e.g. change an API in repo1, this breaks use of that API
    in repo2 which now needs to be updated.
    """
    _name = 'runbot_merge.batch'

    target = fields.Many2one('runbot_merge.branch', required=True)
    staging_id = fields.Many2one('runbot_merge.stagings')
    split_id = fields.Many2one('runbot_merge.split')

    prs = fields.Many2many('runbot_merge.pull_requests')

    active = fields.Boolean(default=True)

    @api.constrains('target', 'prs')
    def _check_prs(self):
        for batch in self:
            repos = self.env['runbot_merge.repository']
            for pr in batch.prs:
                if pr.target != batch.target:
                    raise ValidationError("A batch and its PRs must have the same branch, got %s and %s" % (batch.target, pr.target))
                if pr.repository in repos:
                    raise ValidationError("All prs of a batch must have different target repositories, got a duplicate %s on %s" % (pr.repository, pr))
                repos |= pr.repository

    def stage(self, meta, prs):
        """
        Updates meta[*][head] on success

        :return: () or Batch object (if all prs successfully staged)
        """
        new_heads = {}
        for pr in prs:
            gh = meta[pr.repository]['gh']

            _logger.info(
                "Staging pr %s:%s for target %s; squash=%s",
                pr.repository.name, pr.number, pr.target.name, pr.squash
            )

            target = 'tmp.{}'.format(pr.target.name)
            suffix = '\n\ncloses {pr.repository.name}#{pr.number}'.format(pr=pr)
            try:
                # nb: pr_commits is oldest to newest so pr.head is pr_commits[-1]
                pr_commits = gh.commits(pr.number)
                rebase_and_merge = pr.rebase
                squash = rebase_and_merge and len(pr_commits) == 1
                if squash:
                    pr_commits[0]['commit']['message'] += suffix
                    new_heads[pr] = gh.rebase(pr.number, target, commits=pr_commits)
                elif rebase_and_merge:
                    msg = pr.message + suffix
                    h = gh.rebase(pr.number, target, reset=True, commits=pr_commits)
                    new_heads[pr] = gh.merge(h, target, msg)['sha']
                else:
                    pr_head = pr_commits[-1]  # pr_commits is oldest to newest
                    base_commit = None
                    head_parents = {p['sha'] for p in pr_head['parents']}
                    if len(head_parents) > 1:
                        # look for parent(s?) of pr_head not in PR, means it's
                        # from target (so we merged target in pr)
                        merge = head_parents - {c['sha'] for c in pr_commits}
                        assert len(merge) <= 1, \
                            ">1 parent from base in PR's head is not supported"
                        if len(merge) == 1:
                            [base_commit] = merge

                    if base_commit:
                        # replicate pr_head with base_commit replaced by
                        # the current head
                        original_head = gh.head(target)
                        merge_tree = gh.merge(pr_head['sha'], target, 'temp merge')['tree']['sha']
                        new_parents = [original_head] + list(head_parents - {base_commit})
                        copy = gh('post', 'git/commits', json={
                            'message': pr_head['commit']['message'] + suffix,
                            'tree': merge_tree,
                            'author': pr_head['commit']['author'],
                            'committer': pr_head['commit']['committer'],
                            'parents': new_parents,
                        }).json()
                        gh.set_ref(target, copy['sha'])
                        new_heads[pr] = copy['sha']
                    else:
                        # otherwise do a regular merge
                        msg = pr.message + suffix
                        new_heads[pr] = gh.merge(pr.head, target, msg)['sha']
            except (exceptions.MergeError, AssertionError) as e:
                _logger.exception("Failed to merge %s:%s into staging branch (error: %s)", pr.repository.name, pr.number, e)
                pr.state = 'error'
                gh.comment(pr.number, "Unable to stage PR (merge conflict)")

                # reset other PRs
                for to_revert in new_heads.keys():
                    it = meta[to_revert.repository]
                    it['gh'].set_ref('tmp.{}'.format(to_revert.target.name), it['head'])

                return self.env['runbot_merge.batch']

        # update meta to new heads
        for pr, head in new_heads.items():
            meta[pr.repository]['head'] = head
            if not self.env['runbot_merge.commit'].search([('sha', '=', head)]):
                self.env['runbot_merge.commit'].create({'sha': head})
        return self.create({
            'target': prs[0].target.id,
            'prs': [(4, pr.id, 0) for pr in prs],
        })

class FetchJob(models.Model):
    _name = 'runbot_merge.fetch_job'

    active = fields.Boolean(default=True)
    repository = fields.Many2one('runbot_merge.repository', index=True, required=True)
    number = fields.Integer(index=True, required=True)
15
runbot_merge/models/res_partner.py
Normal file
@@ -0,0 +1,15 @@
from odoo import fields, models, tools

class Partner(models.Model):
    _inherit = 'res.partner'

    github_login = fields.Char()
    reviewer = fields.Boolean(default=False, help="Can review PRs (maybe m2m to repos/branches?)")
    self_reviewer = fields.Boolean(default=False, help="Can review own PRs (independent from reviewer)")
    delegate_reviewer = fields.Many2many('runbot_merge.pull_requests')

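    # github_login is how partners are matched with github users throughout
    # the mergebot, hence the hand-rolled unique index below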
    def _auto_init(self):
        res = super(Partner, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_gh_login', self._table, ['github_login'])
        return res
15
runbot_merge/security/ir.model.access.csv
Normal file
@@ -0,0 +1,15 @@
id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_runbot_merge_project_admin,Admin access to project,model_runbot_merge_project,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_repository_admin,Admin access to repo,model_runbot_merge_repository,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_branch_admin,Admin access to branches,model_runbot_merge_branch,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_pull_requests_admin,Admin access to PR,model_runbot_merge_pull_requests,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_pull_requests_tagging_admin,Admin access to tagging,model_runbot_merge_pull_requests_tagging,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_commit_admin,Admin access to commits,model_runbot_merge_commit,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_stagings_admin,Admin access to stagings,model_runbot_merge_stagings,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_split_admin,Admin access to splits,model_runbot_merge_split,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_batch_admin,Admin access to batches,model_runbot_merge_batch,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_fetch_job_admin,Admin access to fetch jobs,model_runbot_merge_fetch_job,runbot_merge.group_admin,1,1,1,1
access_runbot_merge_project,User access to project,model_runbot_merge_project,base.group_user,1,0,0,0
access_runbot_merge_repository,User access to repo,model_runbot_merge_repository,base.group_user,1,0,0,0
access_runbot_merge_branch,User access to branches,model_runbot_merge_branch,base.group_user,1,0,0,0
access_runbot_merge_pull_requests,User access to PR,model_runbot_merge_pull_requests,base.group_user,1,0,0,0
8
runbot_merge/security/security.xml
Normal file
@@ -0,0 +1,8 @@
<odoo>
    <record model="res.groups" id="group_admin">
        <field name="name">Mergebot Administrator</field>
    </record>
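    <!-- system administrators (base.group_system) implicitly become
         mergebot administrators as well -->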
<record model="res.groups" id="base.group_system">
|
||||
<field name="implied_ids" eval="[(4, ref('runbot_merge.group_admin'))]"/>
|
||||
</record>
|
||||
</odoo>
|
47
runbot_merge/tests/README.rst
Normal file
@@ -0,0 +1,47 @@
Execute this test suite using pytest.

The default mode is to run tests locally using a mock github.com.

See the docstring of remote.py for instructions to run against github "actual"
(including remote-specific options) and the end of this file for a sample.

Shared properties when running the tests, regardless of the github
implementation:

* tests should be run from the root of the runbot repository, providing the
  name of this module, i.e. ``pytest runbot_merge`` or
  ``python -mpytest runbot_merge``
* a database name to use must be provided using ``--db``; the database should
  not exist beforehand
* the addons path must be specified using ``--addons-path``; both "runbot" and
  the standard addons (odoo/addons) must be provided explicitly

See pytest's documentation for other options; I would recommend ``-rXs``,
``-v`` and ``--showlocals``.
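
An example invocation combining the options above (database name and paths
are illustrative only, nothing in the suite mandates them):

.. code:: console

    pytest -rXs -v --showlocals --db test_mergebot \
        --addons-path odoo/addons,runbot runbot_merge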

Because the "remote" tests take a very long time (hours), ``-x`` (aka
``--maxfail=1``) and ``--ff`` (run previously failed tests first) are also
recommended, unless e.g. you run the tests overnight.

``pytest.ini`` sample
---------------------

.. code:: ini

    [github]
    owner = test-org
    token = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

    [role_reviewer]
    name = Dick Bong
    user = loginb
    token = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

    [role_self_reviewer]
    name = Fanny Chmelar
    user = loginc
    token = cccccccccccccccccccccccccccccccccccccccc

    [role_other]
    name = Harry Baals
    user = logind
    token = dddddddddddddddddddddddddddddddddddddddd
5
runbot_merge/tests/conftest.py
Normal file
@@ -0,0 +1,5 @@
pytest_plugins = ["local"]

def pytest_addoption(parser):
    parser.addoption("--db", action="store", help="Odoo DB to run tests with")
    parser.addoption('--addons-path', action='store', help="Odoo's addons path")
748
runbot_merge/tests/fake_github/__init__.py
Normal file
@@ -0,0 +1,748 @@
import collections
import hashlib
import hmac
import io
import json
import logging
import re

import responses
import werkzeug.urls
import werkzeug.test
import werkzeug.wrappers

from . import git

API_PATTERN = re.compile(
    r'https://api.github.com/repos/(?P<repo>\w+/\w+)/(?P<path>.+)'
)
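# APIResponse routes any intercepted api.github.com request to the in-memory
# Repo simulator below; the `responses` library is what patches out
# `requests` while a Github context manager is active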
class APIResponse(responses.BaseResponse):
    def __init__(self, sim):
        super(APIResponse, self).__init__(
            method=None,
            url=API_PATTERN
        )
        self.sim = sim
        self.content_type = 'application/json'
        self.stream = False

    def matches(self, request):
        return self._url_matches(self.url, request.url, self.match_querystring)

    def get_response(self, request):
        m = self.url.match(request.url)

        (status, r) = self.sim.repos[m.group('repo')].api(m.group('path'), request)

        headers = self.get_headers()
        body = io.BytesIO(b'')
        if r is not None:
            body = io.BytesIO(json.dumps(r).encode('utf-8'))

        return responses.HTTPResponse(
            status=status,
            reason=r.get('message') if isinstance(r, dict) else "bollocks",
            body=body,
            headers=headers,
            preload_content=False,
        )

class Github(object):
    """ Github simulator

    When enabled (by context-managing):

    * intercepts all ``requests`` calls & replies to api.github.com
    * sends relevant hooks (registered per-repo as pairs of WSGI app and URL)
    * stores repo content
    """
    def __init__(self):
        # {repo: {name, issues, objects, refs, hooks}}
        self.repos = {}

    def repo(self, name, hooks=()):
        r = self.repos[name] = Repo(name)
        for hook, events in hooks:
            r.hook(hook, events)
        return self.repos[name]

    def __enter__(self):
        # otherwise swallows errors from within the test
        self._requests = responses.RequestsMock(assert_all_requests_are_fired=False).__enter__()
        self._requests.add(APIResponse(self))
        return self

    def __exit__(self, *args):
        return self._requests.__exit__(*args)

class Repo(object):
    def __init__(self, name):
        self.name = name
        self.issues = {}
        #: we're cheating, instead of storing serialised in-memory
        #: objects we're storing the Python stuff directly, Commit
        #: objects for commits, {str: hash} for trees and bytes for
        #: blobs. We're still indirecting via hashes and storing a
        #: h:o map because going through the API probably requires it
        self.objects = {}
        # branches: refs/heads/*
        # PRs: refs/pull/*
        self.refs = {}
        # {event: (wsgi_app, url)}
        self.hooks = collections.defaultdict(list)

    def hook(self, hook, events):
        for event in events:
            self.hooks[event].append(Client(*hook))

    def notify(self, event_type, *payload):
        for client in self.hooks.get(event_type, []):
            getattr(client, event_type)(*payload)

    def set_secret(self, secret):
        for clients in self.hooks.values():
            for client in clients:
                client.secret = secret

    def issue(self, number):
        return self.issues[number]

    def make_issue(self, title, body):
        return Issue(self, title, body)

    def make_pr(self, title, body, target, ctid, user, label=None):
        assert 'heads/%s' % target in self.refs
        return PR(self, title, body, target, ctid, user=user, label='{}:{}'.format(user, label or target))

    def make_ref(self, name, commit, force=False):
        assert isinstance(self.objects[commit], Commit)
        if not force and name in self.refs:
            raise ValueError("ref %s already exists" % name)
        self.refs[name] = commit

    def commit(self, ref):
        sha = self.refs.get(ref) or ref
        commit = self.objects[sha]
        assert isinstance(commit, Commit)
        return commit

    def log(self, ref):
        commits = [self.commit(ref)]
        while commits:
            c = commits.pop(0)
            commits.extend(self.commit(r) for r in c.parents)
            yield c.to_json()

    def post_status(self, ref, status, context='default', description=""):
        assert status in ('error', 'failure', 'pending', 'success')
        c = self.commit(ref)
        c.statuses.append((status, context, description))
        self.notify('status', self.name, context, status, c.id)
    def make_commit(self, ref, message, author, committer=None, tree=None):
        assert tree, "a commit must provide a full tree"

        refs = ref or []
        if not isinstance(refs, list):
            refs = [ref]

        pids = [
            ref if re.match(r'[0-9a-f]{40}', ref) else self.refs[ref]
            for ref in refs
        ]

        if type(tree) is type(u''):
            assert isinstance(self.objects.get(tree), dict)
            tid = tree
        else:
            tid = self._save_tree(tree)

        c = Commit(tid, message, author, committer or author, parents=pids)
        self.objects[c.id] = c
        if refs and refs[0] != pids[0]:
            self.refs[refs[0]] = c.id
        return c.id

    def _save_tree(self, t):
        """ t: Dict String (String | Tree)
        """
        t = {name: self._make_obj(obj) for name, obj in t.items()}
        h, _ = git.make_tree(
            self.objects,
            t
        )
        self.objects[h] = t
        return h

    def _make_obj(self, o):
        if type(o) is type(u''):
            o = o.encode('utf-8')

        if type(o) is bytes:
            h, b = git.make_blob(o)
            self.objects[h] = o
            return h
        return self._save_tree(o)

    def api(self, path, request):
        # a better version would be some sort of longest-match?
        for method, pattern, handler in sorted(self._handlers, key=lambda t: -len(t[1])):
            if method and request.method != method:
                continue
            # FIXME: remove qs from path & ensure path is entirely matched, maybe finally use proper routing?
            m = re.match(pattern, path)
            if m:
                return handler(self, request, **m.groupdict())
        return (404, {'message': "No match for {} {}".format(request.method, path)})

    def read_tree(self, commit):
        return git.read_object(self.objects, commit.tree)

    def is_ancestor(self, sha, of):
        assert not git.is_ancestor(self.objects, sha, of=of)

    def _read_ref(self, _, ref):
        obj = self.refs.get(ref)
        if obj is None:
            return (404, None)
        return (200, {
            "ref": "refs/%s" % ref,
            "object": {
                "type": "commit",
                "sha": obj,
            }
        })

    def _create_ref(self, r):
        body = json.loads(r.body)
        ref = body['ref']
        # ref must start with refs/ and contain at least two slashes
        if not (ref.startswith('refs/') and ref.count('/') >= 2):
            return (400, None)
        ref = ref[5:]
        # if ref already exists conflict?
        if ref in self.refs:
            return (409, None)

        sha = body['sha']
        obj = self.objects.get(sha)
        # if sha is not in the repo or not a commit, 404
        if not isinstance(obj, Commit):
            return (404, None)

        self.make_ref(ref, sha)

        return (201, {
            "ref": "refs/%s" % ref,
            "object": {
                "type": "commit",
                "sha": sha,
            }
        })

    def _write_ref(self, r, ref):
        current = self.refs.get(ref)
        if current is None:
            return (404, None)
        body = json.loads(r.body)
        sha = body['sha']
        if sha not in self.objects:
            return (404, None)

        if not body.get('force'):
            if not git.is_ancestor(self.objects, current, sha):
                return (400, None)

        self.make_ref(ref, sha, force=True)
        return (200, {
            "ref": "refs/%s" % ref,
            "object": {
                "type": "commit",
                "sha": sha,
            }
        })

    def _create_commit(self, r):
        body = json.loads(r.body)
        author = body.get('author') or {'name': 'default', 'email': 'default', 'date': 'Z'}
        try:
            sha = self.make_commit(
                ref=(body.get('parents')),
                message=body['message'],
                author=author,
                committer=body.get('committer') or author,
                tree=body['tree']
            )
        except (KeyError, AssertionError):
            # either couldn't find the parent or couldn't find the tree
            return (404, None)

        return (201, {
            "sha": sha,
            "author": author,
            "committer": body.get('committer') or author,
            "message": body['message'],
            "tree": {"sha": body['tree']},
            "parents": [{"sha": sha}],
        })

    def _read_commit(self, _, sha):
        c = self.objects.get(sha)
        if not isinstance(c, Commit):
            return (404, None)
        return (200, {
            "sha": sha,
            "author": c.author,
            "committer": c.committer,
            "message": c.message,
            "tree": {"sha": c.tree},
            "parents": [{"sha": p} for p in c.parents],
        })

    def _read_statuses(self, _, ref):
        try:
            c = self.commit(ref)
        except KeyError:
            return (404, None)

        return (200, {
            'sha': c.id,
            'total_count': len(c.statuses),
            # TODO: combined?
            'statuses': [
                {'context': context, 'state': state}
                for state, context, _ in reversed(c.statuses)
            ]
        })

    def _read_issue(self, r, number):
        try:
            issue = self.issues[int(number)]
        except KeyError:
            return (404, None)
        attr = {'pull_request': True} if isinstance(issue, PR) else {}
        return (200, {'number': issue.number, **attr})

    def _read_issue_comments(self, r, number):
        try:
            issue = self.issues[int(number)]
        except KeyError:
            return (404, None)
        return (200, [{
            'user': {'login': author},
            'body': body,
        } for author, body in issue.comments
          if not body.startswith('REVIEW')
        ])

    def _create_issue_comment(self, r, number):
        try:
            issue = self.issues[int(number)]
        except KeyError:
            return (404, None)
        try:
            body = json.loads(r.body)['body']
        except KeyError:
            return (400, None)

        issue.post_comment(body, "user")
        return (201, {
            'id': 0,
            'body': body,
            'user': { 'login': "user" },
        })

    def _read_pr(self, r, number):
        try:
            pr = self.issues[int(number)]
        except KeyError:
            return (404, None)
        # FIXME: dedup with Client
        return (200, {
            'number': pr.number,
            'head': {
                'sha': pr.head,
                'label': pr.label,
            },
            'base': {
                'ref': pr.base,
                'repo': {
                    'name': self.name.split('/')[1],
                    'full_name': self.name,
                },
            },
            'title': pr.title,
            'body': pr.body,
            'commits': len(pr.commits),
            'user': {'login': pr.user},
        })

    def _edit_pr(self, r, number):
        try:
            pr = self.issues[int(number)]
        except KeyError:
            return (404, None)

        body = json.loads(r.body)
        if not body.keys() & {'title', 'body', 'state', 'base'}:
            # FIXME: return PR content
            return (200, {})
        assert body.get('state') in ('open', 'closed', None)

        pr.state = body.get('state') or pr.state
        if body.get('title'):
            pr.title = body.get('title')
        if body.get('body'):
            pr.body = body.get('body')
        if body.get('base'):
            pr.base = body.get('base')

        if body.get('state') == 'open':
            self.notify('pull_request', 'reopened', pr)
        elif body.get('state') == 'closed':
            self.notify('pull_request', 'closed', pr)

        return (200, {})

    def _read_pr_reviews(self, _, number):
        pr = self.issues.get(int(number))
        if not isinstance(pr, PR):
            return (404, None)

        return (200, [{
            'user': {'login': author},
            'state': r.group(1),
            'body': r.group(2),
        }
            for author, body in pr.comments
            for r in [re.match(r'REVIEW (\w+)\n\n(.*)', body)]
            if r
        ])

    def _read_pr_commits(self, r, number):
        pr = self.issues.get(int(number))
        if not isinstance(pr, PR):
            return (404, None)

        return (200, [c.to_json() for c in pr.commits])

    def _add_labels(self, r, number):
        try:
            pr = self.issues[int(number)]
        except KeyError:
            return (404, None)

        pr.labels.update(json.loads(r.body))

        return (200, {})

    def _remove_label(self, _, number, label):
        try:
            pr = self.issues[int(number)]
        except KeyError:
            return (404, None)

        try:
            pr.labels.remove(werkzeug.urls.url_unquote(label))
        except KeyError:
            return (404, None)
        else:
            return (200, {})

    def _do_merge(self, r):
        body = json.loads(r.body)  # {base, head, commit_message}
        if not body.get('commit_message'):
            return (400, {'message': "Merges require a commit message"})
        base = 'heads/%s' % body['base']
        target = self.refs.get(base)
        if not target:
            return (404, {'message': "Base does not exist"})
        # head can be either a branch or a sha
        sha = self.refs.get('heads/%s' % body['head']) or body['head']
        if sha not in self.objects:
            return (404, {'message': "Head does not exist"})

        if git.is_ancestor(self.objects, sha, of=target):
            return (204, None)

        # merging according to read-tree:
        # get common ancestor (base) of commits
        try:
            base = git.merge_base(self.objects, target, sha)
        except Exception:
            return (400, {'message': "No common ancestor between %(base)s and %(head)s" % body})
        try:
            tid = git.merge_objects(
                self.objects,
                self.objects[base].tree,
                self.objects[target].tree,
                self.objects[sha].tree,
            )
        except Exception as e:
            logging.exception("Merge Conflict")
            return (409, {'message': 'Merge Conflict %r' % e})

        c = Commit(tid, body['commit_message'], author=None, committer=None, parents=[target, sha])
        self.objects[c.id] = c

        return (201, c.to_json())

    _handlers = [
        ('POST', r'git/refs', _create_ref),
        ('GET', r'git/refs/(?P<ref>.*)', _read_ref),
        ('PATCH', r'git/refs/(?P<ref>.*)', _write_ref),

        # nb: there's a different commits at /commits with repo-level metadata
        ('GET', r'git/commits/(?P<sha>[0-9A-Fa-f]{40})', _read_commit),
        ('POST', r'git/commits', _create_commit),
        ('GET', r'commits/(?P<ref>[^/]+)/status', _read_statuses),

        ('GET', r'issues/(?P<number>\d+)', _read_issue),
        ('GET', r'issues/(?P<number>\d+)/comments', _read_issue_comments),
        ('POST', r'issues/(?P<number>\d+)/comments', _create_issue_comment),

        ('POST', r'merges', _do_merge),

        ('GET', r'pulls/(?P<number>\d+)', _read_pr),
        ('PATCH', r'pulls/(?P<number>\d+)', _edit_pr),
        ('GET', r'pulls/(?P<number>\d+)/reviews', _read_pr_reviews),
        ('GET', r'pulls/(?P<number>\d+)/commits', _read_pr_commits),

        ('POST', r'issues/(?P<number>\d+)/labels', _add_labels),
        ('DELETE', r'issues/(?P<number>\d+)/labels/(?P<label>.+)', _remove_label),
    ]

class Issue(object):
    def __init__(self, repo, title, body):
        self.repo = repo
        self._title = title
        self._body = body
        self.number = max(repo.issues or [0]) + 1
        self.comments = []
        self.labels = set()
        repo.issues[self.number] = self

    def post_comment(self, body, user):
        self.comments.append((user, body))
        self.repo.notify('issue_comment', self, user, body)

    @property
    def title(self):
        return self._title
    @title.setter
    def title(self, value):
        self._title = value

    @property
    def body(self):
        return self._body
    @body.setter
    def body(self, value):
        self._body = value

class PR(Issue):
    def __init__(self, repo, title, body, target, ctid, user, label):
        super(PR, self).__init__(repo, title, body)
        assert ctid in repo.objects
        repo.refs['pull/%d' % self.number] = ctid
        self.head = ctid
        self._base = target
        self.user = user
        self.label = label
        self.state = 'open'

        repo.notify('pull_request', 'opened', self)

    @Issue.title.setter
    def title(self, value):
        old = self.title
        Issue.title.fset(self, value)
        self.repo.notify('pull_request', 'edited', self, {
            'title': {'from': old}
        })
    @Issue.body.setter
    def body(self, value):
        old = self.body
        Issue.body.fset(self, value)
        self.repo.notify('pull_request', 'edited', self, {
            'body': {'from': old}
        })
    @property
    def base(self):
        return self._base
    @base.setter
    def base(self, value):
        old, self._base = self._base, value
        self.repo.notify('pull_request', 'edited', self, {
            'base': {'ref': {'from': old}}
        })

    def push(self, sha):
        self.head = sha
        self.repo.notify('pull_request', 'synchronize', self)

    def open(self):
        assert self.state == 'closed'
        self.state = 'open'
        self.repo.notify('pull_request', 'reopened', self)

    def close(self):
        self.state = 'closed'
        self.repo.notify('pull_request', 'closed', self)

    @property
    def commits(self):
        store = self.repo.objects
        target = self.repo.commit('heads/%s' % self.base).id

        base = {h for h, _ in git.walk_ancestors(store, target, False)}
        own = [
            h for h, _ in git.walk_ancestors(store, self.head, False)
            if h not in base
        ]
        return list(map(self.repo.commit, reversed(own)))

    def post_review(self, state, user, body):
        self.comments.append((user, "REVIEW %s\n\n%s " % (state, body)))
        self.repo.notify('pull_request_review', state, self, user, body)

class Commit(object):
    __slots__ = ['tree', 'message', 'author', 'committer', 'parents', 'statuses']
    def __init__(self, tree, message, author, committer, parents):
        self.tree = tree
        self.message = message
        self.author = author
        self.committer = committer or author
        self.parents = parents
        self.statuses = []

    @property
    def id(self):
        return git.make_commit(self.tree, self.message, self.author, self.committer, parents=self.parents)[0]

    def to_json(self):
        return {
            "sha": self.id,
            "commit": {
                "author": self.author,
                "committer": self.committer,
                "message": self.message,
                "tree": {"sha": self.tree},
            },
            "parents": [{"sha": p} for p in self.parents]
        }

    def __str__(self):
        parents = '\n'.join('parent {}'.format(p) for p in self.parents) + '\n'
        return """commit {}
tree {}
{}author {}
committer {}

{}""".format(
            self.id,
            self.tree,
            parents,
            self.author,
            self.committer,
            self.message
        )
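# webhook deliveries are simulated by POSTing payloads directly to the
# registered WSGI application through werkzeug's test client, including an
# X-Hub-Signature header when a secret is configured (as github would)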
class Client(werkzeug.test.Client):
    def __init__(self, application, path):
        self._webhook_path = path
        self.secret = None
        super(Client, self).__init__(application, werkzeug.wrappers.BaseResponse)

    def _make_env(self, event_type, data):
        headers = [('X-Github-Event', event_type)]
        body = json.dumps(data).encode('utf-8')
        if self.secret:
            sig = hmac.new(self.secret.encode('ascii'), body, hashlib.sha1).hexdigest()
            headers.append(('X-Hub-Signature', 'sha1=' + sig))

        return werkzeug.test.EnvironBuilder(
            path=self._webhook_path,
            method='POST',
            headers=headers,
            content_type='application/json',
            data=body,
        )

    def _repo(self, name):
        return {
            'name': name.split('/')[1],
            'full_name': name,
        }

    def pull_request(self, action, pr, changes=None):
        assert action in ('opened', 'reopened', 'closed', 'synchronize', 'edited')
        return self.open(self._make_env(
            'pull_request', {
                'action': action,
                'pull_request': self._pr(pr),
                'repository': self._repo(pr.repo.name),
                **({'changes': changes} if changes else {})
            }
        ))

    def pull_request_review(self, action, pr, user, body):
        """
        :type action: 'APPROVE' | 'REQUEST_CHANGES' | 'COMMENT'
        :type pr: PR
        :type user: str
        :type body: str
        """
        assert action in ('APPROVE', 'REQUEST_CHANGES', 'COMMENT')
        return self.open(self._make_env(
            'pull_request_review', {
                'review': {
                    'state': 'APPROVED' if action == 'APPROVE' else action,
                    'body': body,
                    'user': {'login': user},
                },
                'pull_request': self._pr(pr),
                'repository': self._repo(pr.repo.name),
            }
        ))

    def status(self, repository, context, state, sha):
        assert state in ('success', 'failure', 'pending')
        return self.open(self._make_env(
            'status', {
                'name': repository,
                'context': context,
                'state': state,
                'sha': sha,
                'repository': self._repo(repository),
            }
        ))

    def issue_comment(self, issue, user, body):
        contents = {
            'action': 'created',
            'issue': { 'number': issue.number },
            'repository': self._repo(issue.repo.name),
            'sender': { 'login': user },
            'comment': { 'body': body },
        }
        if isinstance(issue, PR):
            contents['issue']['pull_request'] = { 'url': 'fake' }
        return self.open(self._make_env('issue_comment', contents))

    def _pr(self, pr):
        """
        :type pr: PR
        """
        return {
            'number': pr.number,
            'head': {
                'sha': pr.head,
                'label': pr.label,
            },
            'base': {
                'ref': pr.base,
                'repo': self._repo(pr.repo.name),
            },
            'title': pr.title,
            'body': pr.body,
            'commits': len(pr.commits),
            'user': {'login': pr.user},
        }
126
runbot_merge/tests/fake_github/git.py
Normal file
@@ -0,0 +1,126 @@
import collections
import hashlib

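# objects are hashed git-style: "<type> <size>\0<contents>", the object id
# being the SHA-1 of that byte string (the same scheme real git uses)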
def make_obj(t, contents):
    assert t in ('blob', 'tree', 'commit')
    obj = b'%s %d\0%s' % (t.encode('utf-8'), len(contents), contents)
    return hashlib.sha1(obj).hexdigest(), obj

def make_blob(contents):
    return make_obj('blob', contents)

def make_tree(store, objs):
    """ objs should be a mapping or iterable of (name, object)
    """
    if isinstance(objs, collections.Mapping):
        objs = objs.items()

    return make_obj('tree', b''.join(
        b'%s %s\0%s' % (
            b'040000' if isinstance(obj, collections.Mapping) else b'100644',
            name.encode('utf-8'),
            h.encode('utf-8'),
        )
        for name, h in sorted(objs)
        for obj in [store[h]]
        # TODO: check that obj is a blob or tree
    ))

def make_commit(tree, message, author, committer=None, parents=()):
    contents = ['tree %s' % tree]
    for parent in parents:
        contents.append('parent %s' % parent)
    contents.append('author %s' % author)
    contents.append('committer %s' % (committer or author))
    contents.append('')
    contents.append(message)

    return make_obj('commit', '\n'.join(contents).encode('utf-8'))

def walk_ancestors(store, commit, exclude_self=True):
    """
    :param store: mapping of hashes to commit objects (w/ a parents attribute)
    :param str commit: starting commit's hash
    :param exclude_self: whether the starting commit should be returned as
                         part of the sequence
    :rtype: Iterator[(str, int)]
    """
    q = [(commit, 0)]
    while q:
        node, distance = q.pop()
        q.extend((p, distance+1) for p in store[node].parents)
        if not (distance == 0 and exclude_self):
            yield (node, distance)

def is_ancestor(store, candidate, of):
    # could have candidate == of after all
    return any(
        current == candidate
        for current, _ in walk_ancestors(store, of, exclude_self=False)
    )


def merge_base(store, c1, c2):
    """ Find LCA between two commits. Brute-force: get all ancestors of A,
    all ancestors of B, intersect, and pick the one with the lowest distance
    """
    a1 = walk_ancestors(store, c1, exclude_self=False)
    # map of sha:distance
    a2 = dict(walk_ancestors(store, c2, exclude_self=False))
    # find lowest ancestor by distance(ancestor, c1) + distance(ancestor, c2)
    _distance, lca = min(
        (d1 + d2, a)
        for a, d1 in a1
        for d2 in [a2.get(a)]
        if d2 is not None
    )
    return lca

def merge_objects(store, b, o1, o2):
    """ Merges trees and blobs.

    Store = Mapping<Hash, (Blob | Tree)>
    Blob = bytes
    Tree = Mapping<Name, Hash>
    """
    # FIXME: handle None input (similarly named entry added in two
    # branches, or delete in one branch & change in other)
    if not (b and o1 and o2):
        raise ValueError("Don't know how to merge additions/removals yet")
    b, o1, o2 = store[b], store[o1], store[o2]
    if any(isinstance(o, bytes) for o in [b, o1, o2]):
        raise TypeError("Don't know how to merge blobs")

    entries = sorted(set(b).union(o1, o2))

    t = {}
    for entry in entries:
        base = b.get(entry)
        e1 = o1.get(entry)
        e2 = o2.get(entry)
        if e1 == e2:
            merged = e1  # either no change or same change on both sides
        elif base == e1:
            merged = e2  # e1 did not change, use e2
        elif base == e2:
            merged = e1  # e2 did not change, use e1
        else:
            merged = merge_objects(store, base, e1, e2)
        # None => entry removed
        if merged is not None:
            t[entry] = merged

    # FIXME: fix partial redundancy with make_tree
    tid, _ = make_tree(store, t)
    store[tid] = t
    return tid

def read_object(store, tid):
    # recursively reads tree of objects
    o = store[tid]
    if isinstance(o, bytes):
        return o
    return {
        k: read_object(store, v)
        for k, v in o.items()
    }
94
runbot_merge/tests/local.py
Normal file
@@ -0,0 +1,94 @@
# -*- coding: utf-8 -*-
import odoo
import pytest
import fake_github

@pytest.fixture
def gh():
    with fake_github.Github() as gh:
        yield gh

@pytest.fixture(scope='session')
def registry(request):
    """ Sets up Odoo & yields a registry to the specified db
    """
    db = request.config.getoption('--db')
    addons = request.config.getoption('--addons-path')
    odoo.tools.config.parse_config(['--addons-path', addons, '-d', db, '--db-filter', db])
    try:
        odoo.service.db._create_empty_database(db)
    except odoo.service.db.DatabaseExists:
        pass

    #odoo.service.server.load_server_wide_modules()
    #odoo.service.server.preload_registries([db])

    with odoo.api.Environment.manage():
        # ensure module is installed
        r0 = odoo.registry(db)
        with r0.cursor() as cr:
            env = odoo.api.Environment(cr, 1, {})
            [mod] = env['ir.module.module'].search([('name', '=', 'runbot_merge')])
            mod.button_immediate_install()

        yield odoo.registry(db)
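# NB: the environment is bound to a test-mode cursor which is rolled back
# after each test, so no state leaks from one test to the next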
@pytest.fixture
def env(registry):
    with registry.cursor() as cr:
        env = odoo.api.Environment(cr, odoo.SUPERUSER_ID, {})
        ctx = env['res.users'].context_get()
        registry.enter_test_mode(cr)
        yield env(context=ctx)
        registry.leave_test_mode()

        cr.rollback()

@pytest.fixture
def owner():
    return 'user'

@pytest.fixture(autouse=True)
def users(env):
    env['res.partner'].create({
        'name': "Reviewer",
        'github_login': 'reviewer',
        'reviewer': True,
    })
    env['res.partner'].create({
        'name': "Self Reviewer",
        'github_login': 'self_reviewer',
        'self_reviewer': True,
    })

    return {
        'reviewer': 'reviewer',
        'self_reviewer': 'self_reviewer',
        'other': 'other',
        'user': 'user',
    }

@pytest.fixture
def project(env):
    return env['runbot_merge.project'].create({
        'name': 'odoo',
        'github_token': 'okokok',
        'github_prefix': 'hansen',
        'branch_ids': [(0, 0, {'name': 'master'})],
        'required_statuses': 'legal/cla,ci/runbot',
    })

@pytest.fixture
def make_repo(gh, project):
    def make_repo(name):
        fullname = 'org/' + name
        project.write({'repo_ids': [(0, 0, {'name': fullname})]})
        return gh.repo(fullname, hooks=[
            ((odoo.http.root, '/runbot_merge/hooks'), [
                'pull_request', 'issue_comment', 'status', 'pull_request_review'
            ])
        ])
    return make_repo
# TODO: project fixture
# TODO: repos (indirect/parameterize?) w/ WS hook
# + repo proxy object
676
runbot_merge/tests/remote.py
Normal file
@@ -0,0 +1,676 @@
"""
Replaces relevant fixtures to allow running the test suite against github
actual (instead of a mocked version).

To enable this plugin, load it using ``-p runbot_merge.tests.remote``

.. WARNING:: this requires running ``python -mpytest`` from the root of the
             runbot repository, running ``pytest`` directly will not pick it
             up (as it does not setup ``sys.path``)

Configuration:

* an ``odoo`` binary in the path, which runs the relevant odoo; to ensure a
  clean slate odoo is re-started and a new database is created before each
  test

* pytest.ini (at the root of the runbot repo) with the following sections and
  keys

  ``github``
  - owner, the name of the account (personal or org) under which test repos
    will be created & deleted
  - token, either personal or oauth, must have the scopes ``public_repo``,
    ``delete_repo`` and ``admin:repo_hook``, if personal the owner must be
    the corresponding user account, not an org

  ``role_reviewer``, ``role_self_reviewer`` and ``role_other``
  - name (optional)
  - user, the login of the user for that role
  - token, a personal access token with the ``public_repo`` scope (otherwise
    the API can't leave comments)

  .. warning:: the accounts must *not* be flagged, or the webhooks on
               commenting or creating reviews will not trigger, and the
               tests will fail

* either ``ngrok`` or ``lt`` (localtunnel) available on the path. ngrok with
  a configured account is recommended: ngrok is more reliable than localtunnel
  but a free account is necessary to get a high enough rate limit for some
  of the multi-repo tests to work

Finally the tests aren't 100% reliable as they rely on quite a bit of network
traffic; it's possible that the tests fail due to network issues rather than
logic errors.
"""
import base64
import collections
import configparser
import itertools
import re
import socket
import subprocess
import sys
import time
import xmlrpc.client

import pytest
import requests

# Should be pytest_configure, but apparently once a plugin is registered
# its fixtures don't get unloaded even if it's unregistered, so prevent
# registering local entirely. This works because explicit plugins (-p)
# are loaded before conftest and conftest-specified plugins (officially:
# https://docs.pytest.org/en/latest/writing_plugins.html#plugin-discovery-order-at-tool-startup).

def pytest_addhooks(pluginmanager):
    pluginmanager.set_blocked('local')

def pytest_addoption(parser):
    parser.addoption("--no-delete", action="store_true", help="Don't delete repo after a failed run")
    parser.addoption(
        '--tunnel', action="store", type="choice", choices=['ngrok', 'localtunnel'],
        help="Which tunneling method to use to expose the local Odoo server "
             "to hook up github's webhook. ngrok is more reliable, but "
             "creating a free account is necessary to avoid rate-limiting "
             "issues (anonymous limiting is rate-limited at 20 incoming "
             "queries per minute, free is 40, multi-repo batching tests will "
             "blow through the former); localtunnel has no rate-limiting but "
             "the servers are way less reliable")

@pytest.fixture(scope="session")
def config(pytestconfig):
    conf = configparser.ConfigParser(interpolation=None)
    conf.read([pytestconfig.inifile])
    return {
        name: dict(s.items())
        for name, s in conf.items()
    }

PORT = 8069

def wait_for_hook(n=1):
    # TODO: find better way to wait for roundtrip of actions which can trigger webhooks
    time.sleep(10 * n)

def wait_for_server(db, timeout=120):
    """ Polls for the server to be responsive & have installed our module.

    Raises socket.timeout on failure
    """
    limit = time.time() + timeout
    while True:
        try:
            uid = xmlrpc.client.ServerProxy(
                'http://localhost:{}/xmlrpc/2/common'.format(PORT))\
                .authenticate(db, 'admin', 'admin', {})
            xmlrpc.client.ServerProxy(
                'http://localhost:{}/xmlrpc/2/object'.format(PORT)) \
                .execute_kw(db, uid, 'admin', 'runbot_merge.batch', 'search',
                            [[]], {'limit': 1})
            break
        except ConnectionRefusedError:
            if time.time() > limit:
                raise socket.timeout()

@pytest.fixture
def env(request):
    """
    creates a db & an environment object as a proxy to xmlrpc calls
    """
    db = request.config.getoption('--db')
    p = subprocess.Popen([
        'odoo', '--http-port', str(PORT),
        '--addons-path', request.config.getoption('--addons-path'),
        '-d', db, '-i', 'runbot_merge',
        '--load', 'base,web,runbot_merge',
        '--max-cron-threads', '0',  # disable cron threads (we're running crons by hand)
    ])

    try:
        wait_for_server(db)

        yield Environment(PORT, db)

        db_service = xmlrpc.client.ServerProxy('http://localhost:{}/xmlrpc/2/db'.format(PORT))
        db_service.drop('admin', db)
    finally:
        p.terminate()
        p.wait(timeout=30)

@pytest.fixture(scope='session')
def tunnel(request):
    """ Creates a tunnel to localhost:8069 using ngrok or localtunnel, yields
    the publicly routable address & terminates the process at the end of the
    session
    """

    tunnel = request.config.getoption('--tunnel')
    if tunnel == 'ngrok':
        p = subprocess.Popen(['ngrok', 'http', '--region', 'eu', str(PORT)])
        time.sleep(5)
        try:
            r = requests.get('http://localhost:4040/api/tunnels')
            r.raise_for_status()
            yield next(
                t['public_url']
                for t in r.json()['tunnels']
                if t['proto'] == 'https'
            )
        finally:
            p.terminate()
            p.wait(30)
    elif tunnel == 'localtunnel':
        p = subprocess.Popen(['lt', '-p', str(PORT)], stdout=subprocess.PIPE)
        try:
            r = p.stdout.readline()
            m = re.match(br'your url is: (https://.*\.localtunnel\.me)', r)
            assert m, "could not get the localtunnel URL"
            yield m.group(1).decode('ascii')
        finally:
            p.terminate()
            p.wait(30)
    else:
        raise ValueError("Unsupported tunnel method %s" % tunnel)

ROLES = ['reviewer', 'self_reviewer', 'other']
@pytest.fixture(autouse=True)
def users(env, github, config):
    # get github login of "current user"
    r = github.get('https://api.github.com/user')
    r.raise_for_status()
    rolemap = {
        'user': r.json()['login']
    }
    for role in ROLES:
        data = config['role_' + role]
        username = data['user']
        rolemap[role] = username
        if role == 'other':
            continue
        env['res.partner'].create({
            'name': data.get('name', username),
            'github_login': username,
            'reviewer': role == 'reviewer',
            'self_reviewer': role == 'self_reviewer',
        })

    return rolemap

@pytest.fixture
def project(env, config):
    return env['runbot_merge.project'].create({
        'name': 'odoo',
        'github_token': config['github']['token'],
        'github_prefix': 'hansen',
        'branch_ids': [(0, 0, {'name': 'master'})],
        'required_statuses': 'legal/cla,ci/runbot',
    })

@pytest.fixture(scope='session')
def github(config):
    s = requests.Session()
    s.headers['Authorization'] = 'token {}'.format(config['github']['token'])
    return s

@pytest.fixture
def owner(config):
    return config['github']['owner']

@pytest.fixture
def make_repo(request, config, project, github, tunnel, users, owner):
    # check whether "owner" is a user or an org, as repo-creation endpoint is
    # different
    q = github.get('https://api.github.com/users/{}'.format(owner))
    q.raise_for_status()
    if q.json().get('type') == 'Organization':
        endpoint = 'https://api.github.com/orgs/{}/repos'.format(owner)
    else:
        # if not creating repos under an org, ensure the token matches the owner
        assert users['user'] == owner, "when testing against a user (rather than an organisation) the API token must be the user's"
        endpoint = 'https://api.github.com/user/repos'

    repos = []
    def repomaker(name):
        fullname = '{}/{}'.format(owner, name)
        # create repo
        r = github.post(endpoint, json={
            'name': name,
            'has_issues': False,
            'has_projects': False,
            'has_wiki': False,
            'auto_init': False,
            # at least one merge method must be enabled :(
            'allow_squash_merge': False,
            # 'allow_merge_commit': False,
            'allow_rebase_merge': False,
        })
        r.raise_for_status()
        repos.append(fullname)
        # unwatch repo
        github.put('https://api.github.com/repos/{}/subscription'.format(fullname), json={
            'subscribed': False,
            'ignored': True,
        })
        # create webhook
        github.post('https://api.github.com/repos/{}/hooks'.format(fullname), json={
            'name': 'web',
            'config': {
                'url': '{}/runbot_merge/hooks'.format(tunnel),
                'content_type': 'json',
                'insecure_ssl': '1',
            },
            'events': ['pull_request', 'issue_comment', 'status', 'pull_request_review']
        })
        project.write({'repo_ids': [(0, 0, {'name': fullname})]})

        tokens = {
            r: config['role_' + r]['token']
            for r in ROLES
        }
        tokens['user'] = config['github']['token']

        return Repo(github, fullname, tokens)

    yield repomaker

    if not request.config.getoption('--no-delete'):
        for repo in reversed(repos):
            github.delete('https://api.github.com/repos/{}'.format(repo)).raise_for_status()
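# thin wrappers over Odoo's external XML-RPC API: Environment dispatches
# execute_kw calls, and Model mimics a small subset of the recordset API
# (search/create/write/read/mapped/attribute access) on top of it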
class Environment:
|
||||
def __init__(self, port, db):
|
||||
self._uid = xmlrpc.client.ServerProxy('http://localhost:{}/xmlrpc/2/common'.format(port)).authenticate(db, 'admin', 'admin', {})
|
||||
self._object = xmlrpc.client.ServerProxy('http://localhost:{}/xmlrpc/2/object'.format(port))
|
||||
self._db = db
|
||||
|
||||
def __call__(self, model, method, *args, **kwargs):
|
||||
return self._object.execute_kw(
|
||||
self._db, self._uid, 'admin',
|
||||
model, method,
|
||||
args, kwargs
|
||||
)
|
||||
|
||||
def __getitem__(self, name):
|
||||
return Model(self, name)
|
||||
|
||||
class Model:
|
||||
__slots__ = ['_env', '_model', '_ids', '_fields']
|
||||
def __init__(self, env, model, ids=(), fields=None):
|
||||
object.__setattr__(self, '_env', env)
|
||||
object.__setattr__(self, '_model', model)
|
||||
object.__setattr__(self, '_ids', tuple(ids or ()))
|
||||
|
||||
object.__setattr__(self, '_fields', fields or self._env(self._model, 'fields_get', attributes=['type', 'relation']))
|
||||
|
||||
@property
|
||||
def ids(self):
|
||||
return self._ids
|
||||
|
||||
def __bool__(self):
|
||||
return bool(self._ids)
|
||||
|
||||
def __len__(self):
|
||||
return len(self._ids)
|
||||
|
||||
def __eq__(self, other):
|
||||
if not isinstance(other, Model):
|
||||
return NotImplemented
|
||||
return self._model == other._model and self._ids == other._ids
|
||||
|
||||
def __repr__(self):
|
||||
return "{}({})".format(self._model, ', '.join(str(id) for id in self._ids))
|
||||
|
||||
def exists(self):
|
||||
ids = self._env(self._model, 'exists', self._ids)
|
||||
return Model(self._env, self._model, ids)
|
||||
|
||||
def search(self, domain):
|
||||
ids = self._env(self._model, 'search', domain)
|
||||
return Model(self._env, self._model, ids)
|
||||
|
||||
def create(self, values):
|
||||
return Model(self._env, self._model, [self._env(self._model, 'create', values)])
|
||||
|
||||
def write(self, values):
|
||||
return self._env(self._model, 'write', self._ids, values)
|
||||
|
||||
def read(self, fields):
|
||||
return self._env(self._model, 'read', self._ids, fields)
|
||||
|
||||
def unlink(self):
|
||||
return self._env(self._model, 'unlink', self._ids)
|
||||
|
||||
def _check_progress(self):
|
||||
assert self._model == 'runbot_merge.project'
|
||||
self._run_cron('runbot_merge.merge_cron')
|
||||
|
||||
def _check_fetch(self):
|
||||
assert self._model == 'runbot_merge.project'
|
||||
self._run_cron('runbot_merge.fetch_prs_cron')
|
||||
|
||||
def _run_cron(self, xid):
|
||||
_, model, cron_id = self._env('ir.model.data', 'xmlid_lookup', xid)
|
||||
assert model == 'ir.cron', "Expected {} to be a cron, got {}".format(xid, model)
|
||||
self._env('ir.cron', 'method_direct_trigger', [cron_id])
|
||||
# sleep for some time as a lot of crap may have happened (?)
|
||||
wait_for_hook()
|
||||
|
||||
def __getattr__(self, fieldname):
|
||||
if not self._ids:
|
||||
return False
|
||||
|
||||
assert len(self._ids) == 1
|
||||
if fieldname == 'id':
|
||||
return self._ids[0]
|
||||
|
||||
val = self.read([fieldname])[0][fieldname]
|
||||
field_description = self._fields[fieldname]
|
||||
if field_description['type'] in ('many2one', 'one2many', 'many2many'):
|
||||
val = val or []
|
||||
if field_description['type'] == 'many2one':
|
||||
val = val[:1] # (id, name) => [id]
|
||||
return Model(self._env, field_description['relation'], val)
|
||||
|
||||
return val
|
||||
|
||||
def __setattr__(self, fieldname, value):
|
||||
assert self._fields[fieldname]['type'] not in ('many2one', 'one2many', 'many2many')
|
||||
self._env(self._model, 'write', self._ids, {fieldname: value})
|
||||
|
||||
def __iter__(self):
|
||||
return (
|
||||
Model(self._env, self._model, [i], fields=self._fields)
|
||||
for i in self._ids
|
||||
)
|
||||
|
||||
def mapped(self, path):
|
||||
field, *rest = path.split('.', 1)
|
||||
descr = self._fields[field]
|
||||
if descr['type'] in ('many2one', 'one2many', 'many2many'):
|
||||
result = Model(self._env, descr['relation'])
|
||||
for record in self:
|
||||
result |= getattr(record, field)
|
||||
|
||||
return result.mapped(rest[0]) if rest else result
|
||||
|
||||
assert not rest
|
||||
return [getattr(r, field) for r in self]
|
||||
|
||||
def __or__(self, other):
|
||||
if not isinstance(other, Model) or self._model != other._model:
|
||||
return NotImplemented
|
||||
|
||||
return Model(self._env, self._model, {*self._ids, *other._ids}, fields=self._fields)
|
||||
|
||||
class Repo:
|
||||
__slots__ = ['name', '_session', '_tokens']
|
||||
def __init__(self, session, name, user_tokens):
|
||||
self.name = name
|
||||
self._session = session
|
||||
self._tokens = user_tokens
|
||||
|
||||
def set_secret(self, secret):
|
||||
r = self._session.get(
|
||||
'https://api.github.com/repos/{}/hooks'.format(self.name))
|
||||
response = r.json()
|
||||
assert 200 <= r.status_code < 300, response
|
||||
[hook] = response
|
||||
|
||||
r = self._session.patch('https://api.github.com/repos/{}/hooks/{}'.format(self.name, hook['id']), json={
|
||||
'config': {**hook['config'], 'secret': secret},
|
||||
})
|
||||
assert 200 <= r.status_code < 300, r.json()
|
||||
|
||||
def get_ref(self, ref):
|
||||
if re.match(r'[0-9a-f]{40}', ref):
|
||||
return ref
|
||||
|
||||
assert ref.startswith('heads/')
|
||||
r = self._session.get('https://api.github.com/repos/{}/git/refs/{}'.format(self.name, ref))
|
||||
response = r.json()
|
||||
|
||||
assert 200 <= r.status_code < 300, response
|
||||
assert isinstance(response, dict), "{} doesn't exist (got {} refs)".format(ref, len(response))
|
||||
assert response['object']['type'] == 'commit'
|
||||
|
||||
return response['object']['sha']

    def make_ref(self, name, commit, force=False):
        assert name.startswith('heads/')
        r = self._session.post('https://api.github.com/repos/{}/git/refs'.format(self.name), json={
            'ref': 'refs/' + name,
            'sha': commit,
        })
        if force and r.status_code == 422:
            r = self._session.patch('https://api.github.com/repos/{}/git/refs/{}'.format(self.name, name), json={'sha': commit, 'force': True})
        assert 200 <= r.status_code < 300, r.json()
        wait_for_hook()

    def update_ref(self, name, commit, force=False):
        r = self._session.patch('https://api.github.com/repos/{}/git/refs/{}'.format(self.name, name), json={'sha': commit, 'force': force})
        assert 200 <= r.status_code < 300, r.json()
        wait_for_hook()

    def make_commit(self, ref, message, author, committer=None, tree=None):
        assert tree, "not supporting changes/updates"
        assert not (author or committer)

        if not ref:  # None / []
            # apparently github refuses to create trees/commits in empty repos
            # using the regular API...
            [(path, contents)] = tree.items()
            r = self._session.put('https://api.github.com/repos/{}/contents/{}'.format(self.name, path), json={
                'path': path,
                'message': message,
                'content': base64.b64encode(contents.encode('utf-8')).decode('ascii'),
                'branch': 'nootherwaytocreateaninitialcommitbutidontwantamasteryet%d' % next(ct)
            })
            assert 200 <= r.status_code < 300, r.json()
            return r.json()['commit']['sha']

        if isinstance(ref, list):
            refs = ref
        else:
            refs = [ref]
        parents = [self.get_ref(r) for r in refs]

        r = self._session.post('https://api.github.com/repos/{}/git/trees'.format(self.name), json={
            'tree': [
                {'path': k, 'mode': '100644', 'type': 'blob', 'content': v}
                for k, v in tree.items()
            ]
        })
        assert 200 <= r.status_code < 300, r.json()
        h = r.json()['sha']

        r = self._session.post('https://api.github.com/repos/{}/git/commits'.format(self.name), json={
            'parents': parents,
            'message': message,
            'tree': h,
        })
        assert 200 <= r.status_code < 300, r.json()

        commit_sha = r.json()['sha']

        # if the first parent is an actual ref (rather than a hash) update it
        if parents[0] != refs[0]:
            self.update_ref(refs[0], commit_sha)
        else:
            wait_for_hook()
        return commit_sha
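
    # Sketch of typical usage (tree contents made up for illustration): a
    # root commit in an empty repo, then a child commit on top of it; passing
    # a list of parent refs would instead produce a merge commit.
    #
    #     c0 = repo.make_commit(None, 'initial', None, tree={'f': '0'})
    #     c1 = repo.make_commit(c0, 'update', None, tree={'f': '1'})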

    def make_pr(self, title, body, target, ctid, user, label=None):
        # github only allows PRs from actual branches, so create an actual branch
        ref = label or "temp_trash_because_head_must_be_a_ref_%d" % next(ct)
        self.make_ref('heads/' + ref, ctid)

        r = self._session.post(
            'https://api.github.com/repos/{}/pulls'.format(self.name),
            json={'title': title, 'body': body, 'head': ref, 'base': target},
            headers={'Authorization': 'token {}'.format(self._tokens[user])}
        )
        assert 200 <= r.status_code < 300, r.json()
        # wait extra for tests creating many PRs and relying on their ordering
        # (test_batching & test_batching_split)
        # would be nice to make the tests more reliable but not quite sure
        # how...
        wait_for_hook(2)
        return PR(self, 'heads/' + ref, r.json()['number'])
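
    # e.g. (hypothetical values) open a PR holding commit `c` against master,
    # authored by the token-mapped user 'user':
    #
    #     pr = repo.make_pr('title', 'body', target='master', ctid=c, user='user')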

    def post_status(self, ref, status, context='default', description=""):
        assert status in ('error', 'failure', 'pending', 'success')
        r = self._session.post('https://api.github.com/repos/{}/statuses/{}'.format(self.name, self.get_ref(ref)), json={
            'state': status,
            'context': context,
            'description': description,
        })
        assert 200 <= r.status_code < 300, r.json()
        wait_for_hook()

    def commit(self, ref):
        # apparently heads/<branch> ~ refs/heads/<branch> but are not
        # necessarily up to date ??? unlike the git ref system where :ref
        # starts at heads/
        if ref.startswith('heads/'):
            ref = 'refs/' + ref

        r = self._session.get('https://api.github.com/repos/{}/commits/{}'.format(self.name, ref))
        response = r.json()
        assert 200 <= r.status_code < 300, response

        c = response['commit']
        return Commit(
            id=response['sha'],
            tree=c['tree']['sha'],
            message=c['message'],
            author=c['author'],
            committer=c['committer'],
            parents=[p['sha'] for p in response['parents']],
        )

    def read_tree(self, commit):
        # read tree object
        r = self._session.get('https://api.github.com/repos/{}/git/trees/{}'.format(self.name, commit.tree))
        assert 200 <= r.status_code < 300, r.json()

        # read tree's blobs
        tree = {}
        for t in r.json()['tree']:
            assert t['type'] == 'blob', "we're *not* doing recursive trees in test cases"
            r = self._session.get('https://api.github.com/repos/{}/git/blobs/{}'.format(self.name, t['sha']))
            assert 200 <= r.status_code < 300, r.json()
            tree[t['path']] = base64.b64decode(r.json()['content'])

        return tree

    def is_ancestor(self, sha, of):
        return any(c['sha'] == sha for c in self.log(of))

    def log(self, ref_or_sha):
        for page in itertools.count(1):
            r = self._session.get(
                'https://api.github.com/repos/{}/commits'.format(self.name),
                params={'sha': ref_or_sha, 'page': page}
            )
            assert 200 <= r.status_code < 300, r.json()
            yield from r.json()
            if not r.links.get('next'):
                return
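
# log() follows GitHub's Link-header pagination one page at a time, so
# is_ancestor() can stop as soon as the sha turns up instead of fetching the
# whole history; hypothetical check:
#
#     assert repo.is_ancestor(root_sha, of='heads/master')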

ct = itertools.count()

Commit = collections.namedtuple('Commit', 'id tree message author committer parents')

class PR:
    __slots__ = ['number', '_branch', 'repo']
    def __init__(self, repo, branch, number):
        """
        :type repo: Repo
        :type branch: str
        :type number: int
        """
        self.number = number
        self._branch = branch
        self.repo = repo

    @property
    def _session(self):
        return self.repo._session

    @property
    def _pr(self):
        r = self._session.get('https://api.github.com/repos/{}/pulls/{}'.format(self.repo.name, self.number))
        assert 200 <= r.status_code < 300, r.json()
        return r.json()

    @property
    def head(self):
        return self._pr['head']['sha']

    @property
    def user(self):
        return self._pr['user']['login']

    @property
    def state(self):
        return self._pr['state']

    @property
    def labels(self):
        r = self._session.get('https://api.github.com/repos/{}/issues/{}/labels'.format(self.repo.name, self.number))
        assert 200 <= r.status_code < 300, r.json()
        return {label['name'] for label in r.json()}

    @property
    def comments(self):
        r = self._session.get('https://api.github.com/repos/{}/issues/{}/comments'.format(self.repo.name, self.number))
        assert 200 <= r.status_code < 300, r.json()
        return [
            (c['user']['login'], c['body'])
            for c in r.json()
        ]

    def _set_prop(self, prop, value):
        r = self._session.patch('https://api.github.com/repos/{}/pulls/{}'.format(self.repo.name, self.number), json={
            prop: value
        })
        assert 200 <= r.status_code < 300, r.json()
        wait_for_hook()

    @property
    def title(self):
        raise NotImplementedError()
    title = title.setter(lambda self, v: self._set_prop('title', v))

    @property
    def base(self):
        raise NotImplementedError()
    base = base.setter(lambda self, v: self._set_prop('base', v))

    def post_comment(self, body, user):
        r = self._session.post(
            'https://api.github.com/repos/{}/issues/{}/comments'.format(self.repo.name, self.number),
            json={'body': body},
            headers={'Authorization': 'token {}'.format(self.repo._tokens[user])}
        )
        assert 200 <= r.status_code < 300, r.json()
        wait_for_hook()

    def open(self):
        self._set_prop('state', 'open')

    def close(self):
        self._set_prop('state', 'closed')

    def push(self, sha):
        self.repo.update_ref(self._branch, sha, force=True)

    def post_review(self, state, user, body):
        r = self._session.post(
            'https://api.github.com/repos/{}/pulls/{}/reviews'.format(self.repo.name, self.number),
            json={'body': body, 'event': state},
            headers={'Authorization': 'token {}'.format(self.repo._tokens[user])}
        )
        assert 200 <= r.status_code < 300, r.json()
        wait_for_hook()
1441
runbot_merge/tests/test_basic.py
Normal file
File diff suppressed because it is too large
Load Diff
348
runbot_merge/tests/test_multirepo.py
Normal file
@@ -0,0 +1,348 @@
""" The mergebot does not work on a dependency basis, rather all
repositories of a project are co-equal and get staged together (on
target and source branches).

When preparing a staging, we simply want to ensure branch-matched PRs
are staged concurrently in all repos
"""
import json

import pytest

@pytest.fixture
def repo_a(make_repo):
    return make_repo('a')

@pytest.fixture
def repo_b(make_repo):
    return make_repo('b')

@pytest.fixture
def repo_c(make_repo):
    return make_repo('c')

def make_pr(repo, prefix, trees, *, target='master', user='user', label=None,
            statuses=(('ci/runbot', 'success'), ('legal/cla', 'success')),
            reviewer='reviewer'):
    """
    :type repo: fake_github.Repo
    :type prefix: str
    :type trees: list[dict]
    :type target: str
    :type user: str
    :type label: str | None
    :type statuses: list[(str, str)]
    :type reviewer: str | None
    :rtype: fake_github.PR
    """
    base = repo.commit('heads/{}'.format(target))
    tree = repo.read_tree(base)
    c = base.id
    for i, t in enumerate(trees):
        tree.update(t)
        c = repo.make_commit(c, 'commit_{}_{:02}'.format(prefix, i), None,
                             tree=dict(tree))
    pr = repo.make_pr('title {}'.format(prefix), 'body {}'.format(prefix), target=target, ctid=c, user=user, label=label)
    for context, result in statuses:
        repo.post_status(c, result, context)
    if reviewer:
        pr.post_comment('hansen r+', reviewer)
    return pr
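
# For instance (hypothetical tree contents), a single-commit, CI-green,
# reviewed PR which can be branch-matched with a same-label PR in another repo:
#
#     pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')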

def to_pr(env, pr):
    return env['runbot_merge.pull_requests'].search([
        ('repository.name', '=', pr.repo.name),
        ('number', '=', pr.number),
    ])
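
# to_pr maps a (fake) GitHub PR back to its runbot_merge.pull_requests
# record, so assertions can read the mergebot-side state, e.g.
# `to_pr(env, pr_a).state`.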

def test_stage_one(env, project, repo_a, repo_b):
    """ First PR is non-matched from A => should not select PR from B
    """
    project.batch_limit = 1

    repo_a.make_ref(
        'heads/master',
        repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    )
    pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    repo_b.make_ref(
        'heads/master',
        repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    )
    pr_b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-other-thing')

    env['runbot_merge.project']._check_progress()

    assert to_pr(env, pr_a).state == 'ready'
    assert to_pr(env, pr_a).staging_id
    assert to_pr(env, pr_b).state == 'ready'
    assert not to_pr(env, pr_b).staging_id

def test_stage_match(env, project, repo_a, repo_b):
    """ First PR is matched from A => should select matched PR from B
    """
    project.batch_limit = 1
    repo_a.make_ref(
        'heads/master',
        repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    )
    pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    repo_b.make_ref(
        'heads/master',
        repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    )
    pr_b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    env['runbot_merge.project']._check_progress()

    pr_a = to_pr(env, pr_a)
    pr_b = to_pr(env, pr_b)
    assert pr_a.state == 'ready'
    assert pr_a.staging_id
    assert pr_b.state == 'ready'
    assert pr_b.staging_id
    # should be part of the same staging
    assert pr_a.staging_id == pr_b.staging_id, \
        "branch-matched PRs should be part of the same staging"

def test_sub_match(env, project, repo_a, repo_b, repo_c):
    """ Branch-matching should work on a subset of repositories
    """
    project.batch_limit = 1
    repo_a.make_ref(
        'heads/master',
        repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    )
    # no pr here

    repo_b.make_ref(
        'heads/master',
        repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    )
    pr_b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    repo_c.make_ref(
        'heads/master',
        repo_c.make_commit(None, 'initial', None, tree={'a': 'c_0'})
    )
    pr_c = make_pr(repo_c, 'C', [{'a': 'c_1'}], label='do-a-thing')

    env['runbot_merge.project']._check_progress()

    pr_b = to_pr(env, pr_b)
    pr_c = to_pr(env, pr_c)
    assert pr_b.state == 'ready'
    assert pr_b.staging_id
    assert pr_c.state == 'ready'
    assert pr_c.staging_id
    # should be part of the same staging
    assert pr_c.staging_id == pr_b.staging_id, \
        "branch-matched PRs should be part of the same staging"
    st = pr_b.staging_id
    assert json.loads(st.heads) == {
        repo_a.name: repo_a.commit('heads/master').id,
        repo_b.name: repo_b.commit('heads/staging.master').id,
        repo_c.name: repo_c.commit('heads/staging.master').id,
    }

def test_merge_fail(env, project, repo_a, repo_b, users):
    """ In a matched-branch scenario, if merging in one of the linked repos
    fails it should revert the corresponding merges
    """
    project.batch_limit = 1

    root_a = repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    repo_a.make_ref('heads/master', root_a)
    root_b = repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    repo_b.make_ref('heads/master', root_b)

    # first set of matched PRs
    pr1a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')
    pr1b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    # add a conflicting commit to B so the staging fails
    repo_b.make_commit('heads/master', 'cn', None, tree={'a': 'cn'})

    # and a second set of PRs which should get staged while the first set
    # fails
    pr2a = make_pr(repo_a, 'A2', [{'b': 'ok'}], label='do-b-thing')
    pr2b = make_pr(repo_b, 'B2', [{'b': 'ok'}], label='do-b-thing')

    env['runbot_merge.project']._check_progress()

    s2 = to_pr(env, pr2a) | to_pr(env, pr2b)
    st = env['runbot_merge.stagings'].search([])
    assert set(st.batch_ids.prs.ids) == set(s2.ids)

    failed = to_pr(env, pr1b)
    assert failed.state == 'error'
    assert pr1b.comments == [
        (users['reviewer'], 'hansen r+'),
        (users['user'], 'Unable to stage PR (merge conflict)'),
    ]
    other = to_pr(env, pr1a)
    assert not other.staging_id
    assert len(list(repo_a.log('heads/staging.master'))) == 2,\
        "root commit + squash-merged PR commit"

def test_ff_fail(env, project, repo_a, repo_b):
    """ In a matched-branch scenario, if fast-forwarding one of the repos
    fails the entire thing should be rolled back
    """
    project.batch_limit = 1
    root_a = repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    repo_a.make_ref('heads/master', root_a)
    make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    root_b = repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    repo_b.make_ref('heads/master', root_b)
    make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    env['runbot_merge.project']._check_progress()

    # add second commit blocking FF
    cn = repo_b.make_commit('heads/master', 'second', None, tree={'a': 'b_0', 'b': 'other'})
    assert repo_b.commit('heads/master').id == cn

    repo_a.post_status('heads/staging.master', 'success', 'ci/runbot')
    repo_a.post_status('heads/staging.master', 'success', 'legal/cla')
    repo_b.post_status('heads/staging.master', 'success', 'ci/runbot')
    repo_b.post_status('heads/staging.master', 'success', 'legal/cla')

    env['runbot_merge.project']._check_progress()
    assert repo_b.commit('heads/master').id == cn,\
        "B should still be at the conflicting commit"
    assert repo_a.commit('heads/master').id == root_a,\
        "FF A should have been rolled back when B failed"

    # should be re-staged
    st = env['runbot_merge.stagings'].search([])
    assert len(st) == 1
    assert len(st.batch_ids.prs) == 2

def test_one_failed(env, project, repo_a, repo_b, owner):
    """ If the companion of a ready branch-matched PR is not ready,
    they should not get staged
    """
    project.batch_limit = 1
    c_a = repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    repo_a.make_ref('heads/master', c_a)
    # pr_a is born ready
    pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    c_b = repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    repo_b.make_ref('heads/master', c_b)
    c_pr = repo_b.make_commit(c_b, 'pr', None, tree={'a': 'b_1'})
    pr_b = repo_b.make_pr(
        'title', 'body', target='master', ctid=c_pr,
        user='user', label='do-a-thing',
    )
    repo_b.post_status(c_pr, 'success', 'ci/runbot')
    repo_b.post_status(c_pr, 'success', 'legal/cla')

    pr_a = to_pr(env, pr_a)
    pr_b = to_pr(env, pr_b)
    assert pr_a.state == 'ready'
    assert pr_b.state == 'validated'
    assert pr_a.label == pr_b.label == '{}:do-a-thing'.format(owner)

    env['runbot_merge.project']._check_progress()

    assert not pr_b.staging_id
    assert not pr_a.staging_id, \
        "pr_a should not have been staged as companion is not ready"

def test_batching(env, project, repo_a, repo_b):
    """ If multiple batches (label groups) are ready they should get batched
    together (within the limits of the project's batch limit)
    """
    project.batch_limit = 3
    repo_a.make_ref('heads/master', repo_a.make_commit(None, 'initial', None, tree={'a': 'a0'}))
    repo_b.make_ref('heads/master', repo_b.make_commit(None, 'initial', None, tree={'b': 'b0'}))

    prs = [(
        a and to_pr(env, make_pr(repo_a, 'A{}'.format(i), [{'a{}'.format(i): 'a{}'.format(i)}], label='batch{}'.format(i))),
        b and to_pr(env, make_pr(repo_b, 'B{}'.format(i), [{'b{}'.format(i): 'b{}'.format(i)}], label='batch{}'.format(i)))
    )
        for i, (a, b) in enumerate([(1, 1), (0, 1), (1, 1), (1, 1), (1, 0)])
    ]

    env['runbot_merge.project']._check_progress()

    st = env['runbot_merge.stagings'].search([])
    assert st
    assert len(st.batch_ids) == 3,\
        "Should have batched the first <batch_limit> batches"
    assert st.mapped('batch_ids.prs') == (
        prs[0][0] | prs[0][1]
        | prs[1][1]
        | prs[2][0] | prs[2][1]
    )

    assert not prs[3][0].staging_id
    assert not prs[3][1].staging_id
    assert not prs[4][0].staging_id

def test_batching_split(env, repo_a, repo_b):
    """ If a staging fails, it should get split properly across repos
    """
    repo_a.make_ref('heads/master', repo_a.make_commit(None, 'initial', None, tree={'a': 'a0'}))
    repo_b.make_ref('heads/master', repo_b.make_commit(None, 'initial', None, tree={'b': 'b0'}))

    prs = [(
        a and to_pr(env, make_pr(repo_a, 'A{}'.format(i), [{'a{}'.format(i): 'a{}'.format(i)}], label='batch{}'.format(i))),
        b and to_pr(env, make_pr(repo_b, 'B{}'.format(i), [{'b{}'.format(i): 'b{}'.format(i)}], label='batch{}'.format(i)))
    )
        for i, (a, b) in enumerate([(1, 1), (0, 1), (1, 1), (1, 1), (1, 0)])
    ]

    env['runbot_merge.project']._check_progress()

    st0 = env['runbot_merge.stagings'].search([])
    assert len(st0.batch_ids) == 5
    assert len(st0.mapped('batch_ids.prs')) == 8

    # mark b.staging as failed -> should create two splits with (0, 1)
    # and (2, 3, 4) and stage the first one
    repo_b.post_status('heads/staging.master', 'success', 'legal/cla')
    repo_b.post_status('heads/staging.master', 'failure', 'ci/runbot')

    env['runbot_merge.project']._check_progress()

    assert not st0.active

    # at this point we have a re-staged split and an unstaged split
    st = env['runbot_merge.stagings'].search([])
    sp = env['runbot_merge.split'].search([])
    assert st
    assert sp

    assert len(st.batch_ids) == 2
    assert st.mapped('batch_ids.prs') == \
        prs[0][0] | prs[0][1] | prs[1][1]

    assert len(sp.batch_ids) == 3
    assert sp.mapped('batch_ids.prs') == \
        prs[2][0] | prs[2][1] | prs[3][0] | prs[3][1] | prs[4][0]

def test_urgent(env, repo_a, repo_b):
    """ Either PR of a co-dependent pair being p=0 leads to the entire pair
    being prioritized
    """
    repo_a.make_ref('heads/master', repo_a.make_commit(None, 'initial', None, tree={'a0': 'a'}))
    repo_b.make_ref('heads/master', repo_b.make_commit(None, 'initial', None, tree={'b0': 'b'}))

    pr_a = make_pr(repo_a, 'A', [{'a1': 'a'}, {'a2': 'a'}], label='batch', reviewer=None, statuses=[])
    pr_b = make_pr(repo_b, 'B', [{'b1': 'b'}, {'b2': 'b'}], label='batch', reviewer=None, statuses=[])
    pr_c = make_pr(repo_a, 'C', [{'c1': 'c', 'c2': 'c'}])

    pr_b.post_comment('hansen p=0', 'reviewer')

    env['runbot_merge.project']._check_progress()
    # should have batched pr_a and pr_b despite neither being reviewed or
    # approved
    p_a, p_b = to_pr(env, pr_a), to_pr(env, pr_b)
    p_c = to_pr(env, pr_c)
    assert p_a.batch_id and p_b.batch_id and p_a.batch_id == p_b.batch_id,\
        "a and b should have been recognised as co-dependent"
    assert not p_c.staging_id
192
runbot_merge/views/mergebot.xml
Normal file
@@ -0,0 +1,192 @@
<odoo>
    <record id="runbot_merge_form_project" model="ir.ui.view">
        <field name="name">Project Form</field>
        <field name="model">runbot_merge.project</field>
        <field name="arch" type="xml">
            <form>
                <sheet>
                    <!--
                    <div class="oe_button_box" name="button_box">
                        <button class="oe_stat_button" name="action_see_attachments" type="object" icon="fa-book" attrs="{'invisible': ['|', ('state', '=', 'confirmed'), ('type', '=', 'routing')]}">
                            <field string="Attachments" name="mrp_document_count" widget="statinfo"/>
                        </button>
                    </div>
                    -->
                    <div class="oe_title">
                        <h1><field name="name" placeholder="Name"/></h1>
                    </div>
                    <group>
                        <group>
                            <field name="github_prefix" string="bot name"/>
                        </group>
                        <group>
                            <field name="required_statuses"/>
                        </group>
                    </group>
                    <group>
                        <group>
                            <field name="github_token"/>
                        </group>
                        <group>
                            <field name="ci_timeout"/>
                            <field name="batch_limit"/>
                        </group>
                    </group>

                    <field name="repo_ids">
                        <tree editable="bottom">
                            <field name="name"/>
                        </tree>
                    </field>
                    <field name="branch_ids">
                        <tree editable="bottom">
                            <field name="name"/>
                        </tree>
                    </field>
                </sheet>
            </form>
        </field>
    </record>

    <record id="runbot_merge_action_projects" model="ir.actions.act_window">
        <field name="name">Projects</field>
        <field name="res_model">runbot_merge.project</field>
        <field name="view_mode">tree,form</field>
    </record>

    <record id="runbot_merge_action_prs" model="ir.actions.act_window">
        <field name="name">Pull Requests</field>
        <field name="res_model">runbot_merge.pull_requests</field>
        <field name="view_mode">tree,form</field>
        <field name="context">{'search_default_open': True}</field>
    </record>
    <record id="runbot_merge_search_prs" model="ir.ui.view">
        <field name="name">PR search</field>
        <field name="model">runbot_merge.pull_requests</field>
        <field name="arch" type="xml">
            <search>
                <filter
                    name="open" string="Open"
                    domain="[('state', 'not in', ['merged', 'closed'])]"
                />
                <field name="author"/>
                <field name="label"/>
                <field name="target"/>
                <field name="repository"/>
                <field name="state"/>

                <group>
                    <filter string="Target" name="target_" context="{'group_by':'target'}"/>
                    <filter string="Repository" name="repo_" context="{'group_by':'repository'}"/>
                    <filter string="State" name="state_" context="{'group_by':'state'}"/>
                    <filter string="Priority" name="priority_" context="{'group_by':'priority'}"/>
                </group>
            </search>
        </field>
    </record>
    <record id="runbot_merge_tree_prs" model="ir.ui.view">
        <field name="name">PR tree</field>
        <field name="model">runbot_merge.pull_requests</field>
        <field name="arch" type="xml">
            <tree>
                <field name="repository"/>
                <field name="number"/>
                <field name="target"/>
                <field name="state"/>
            </tree>
        </field>
    </record>
    <record id="runbot_merge_form_prs" model="ir.ui.view">
        <field name="name">PR form</field>
        <field name="model">runbot_merge.pull_requests</field>
        <field name="arch" type="xml">
            <form>
                <header/>
                <sheet>
                    <div class="oe_title">
                        <h1>
                            <field name="repository"/>#<field name="number"/>
                        </h1>
                    </div>
                    <group>
                        <group>
                            <field name="target"/>
                            <field name="state"/>
                            <field name="author"/>
                            <field name="priority"/>
                        </group>
                        <group>
                            <field name="label"/>
                            <field name="squash"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4">
                            <field name="head"/>
                            <field name="statuses"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4" string="Message">
                            <field name="message" nolabel="1"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4" string="Delegates">
                            <field name="delegates" nolabel="1">
                                <tree>
                                    <field name="name"/>
                                    <field name="github_login"/>
                                </tree>
                            </field>
                        </group>
                    </group>
                </sheet>
            </form>
        </field>
    </record>

    <record id="runbot_merge_action_stagings" model="ir.actions.act_window">
        <field name="name">Stagings</field>
        <field name="res_model">runbot_merge.stagings</field>
        <field name="view_mode">tree,form</field>
        <field name="context">{'default_active': True}</field>
    </record>
    <record id="runbot_merge_search_stagings" model="ir.ui.view">
        <field name="name">Stagings Search</field>
        <field name="model">runbot_merge.stagings</field>
        <field name="arch" type="xml">
            <search>
                <filter string="Active" name="active"
                        domain="[('heads', '!=', False)]"/>
                <field name="state"/>
                <field name="target"/>

                <group>
                    <filter string="Target" name="target_" context="{'group_by': 'target'}"/>
                </group>
            </search>
        </field>
    </record>
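    <!-- the "Active" filter above keys off the heads field: a staging's
         heads are presumably only set while it is live, so heads != False
         stands in for an explicit active flag -->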
    <record id="runbot_merge_tree_stagings" model="ir.ui.view">
        <field name="name">Stagings Tree</field>
        <field name="model">runbot_merge.stagings</field>
        <field name="arch" type="xml">
            <tree>
                <field name="target"/>
                <field name="state"/>
            </tree>
        </field>
    </record>

    <menuitem name="Mergebot" id="runbot_merge_menu"/>
    <menuitem name="Projects" id="runbot_merge_menu_project"
              parent="runbot_merge_menu"
              action="runbot_merge_action_projects"/>
    <menuitem name="Pull Requests" id="runbot_merge_menu_prs"
              parent="runbot_merge_menu"
              action="runbot_merge_action_prs"/>
    <menuitem name="Stagings" id="runbot_merge_menu_stagings"
              parent="runbot_merge_menu"
              action="runbot_merge_action_stagings"/>
</odoo>
27
runbot_merge/views/res_partner.xml
Normal file
@@ -0,0 +1,27 @@
<odoo>
    <record id="runbot_merge_form_partner" model="ir.ui.view">
        <field name="name">Add mergebot/GH info to partners form</field>
        <field name="model">res.partner</field>
        <field name="inherit_id" ref="base.view_partner_form"/>
        <field name="arch" type="xml">
            <xpath expr="//notebook" position="inside">
                <page string="Mergebot">
                    <group>
                        <group>
                            <field name="github_login"/>
                        </group>
                        <group>
                            <field name="reviewer"/>
                            <field name="self_reviewer"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4" string="Delegate On">
                            <field name="delegate_reviewer" nolabel="1"/>
                        </group>
                    </group>
                </page>
            </xpath>
        </field>
    </record>
</odoo>
36
runbot_merge/views/templates.xml
Normal file
@@ -0,0 +1,36 @@
<odoo>
    <template id="dashboard" name="mergebot dashboard">
        <t t-call="website.layout">
            <div id="wrap"><div class="container">
                <section t-foreach="projects" t-as="project" class="row">
                    <h1 class="col-md-12"><t t-esc="project.name"/></h1>
                    <section t-foreach="project.branch_ids" t-as="branch" class="col-md-12">
                        <h2><t t-esc="branch.name"/></h2>
                        <t t-call="runbot_merge.stagings"/>
                    </section>
                </section>
            </div></div>
        </t>
    </template>
    <template id="stagings" name="mergebot branch stagings">
        <ul class="row list-unstyled">
            <li t-foreach="branch.staging_ids.sorted(lambda s: not s.heads)"
                t-as="staging"
                t-attf-class="col-md-3 {{'bg-info' if staging.heads else ''}}">
                <ul class="list-unstyled">
                    <li t-foreach="staging.batch_ids" t-as="batch">
                        <t t-esc="batch.prs[:1].label"/>
                        <t t-foreach="batch.prs" t-as="pr">
                            <a t-attf-href="https://github.com/{{ pr.repository.name }}/pull/{{ pr.number }}"
                               t-att-title="pr.message.split('\n')[0]"
                            ><t t-esc="pr.repository.name"/>#<t t-esc="pr.number"/></a><t t-if="not pr_last">,</t>
                        </t>
                    </li>
                </ul>
                <t t-if="staging.heads">
                    Staged at: <t t-esc="staging.staged_at" t-options="{'widget': 'datetime'}"/>
                </t>
            </li>
        </ul>
    </template>
</odoo>