[ADD] runbot_merge: a merge bot
No actual dependency on runbot, should be usable completely independently.
commit e52d08ecdf (parent cfba7da06d)
conftest.py (new file, 0 lines)
runbot_merge/README.rst (new file, 200 lines)
Merge Bot
=========

Setup
-----

* Set up a project with the relevant repositories and branches the
  bot should manage (e.g. odoo/odoo and 10.0).
* Set up reviewers (github_login + boolean flag on partners).
* Sync PRs.
* Add "Issue comments", "Pull requests" and "Statuses" webhooks to
  managed repositories.
* If applicable, add a "Statuses" webhook to the *source*
  repositories.

  Github does not seem to send statuses cross-repository when commits
  get transmigrated, so if a user creates a branch in odoo-dev/odoo,
  waits for CI to run, then creates a PR targeting odoo/odoo, the PR
  will never get status-checked (unless we modify runbot to re-send
  statuses on the pull_request webhook).
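
As a sketch, the webhooks can be registered through Github's
repository hooks API; the token and deployment URL below are
placeholders, while ``/runbot_merge/hooks`` is the endpoint exposed
by the module's controller::

    import requests

    token = 'XXX'  # token with hook-administration rights (placeholder)
    requests.post(
        'https://api.github.com/repos/odoo/odoo/hooks',
        headers={'Authorization': 'token %s' % token},
        json={
            'name': 'web',
            'active': True,
            'events': ['pull_request', 'issue_comment', 'status'],
            'config': {
                # wherever the mergebot is deployed (placeholder)
                'url': 'https://mergebot.example.com/runbot_merge/hooks',
                'content_type': 'json',
            },
        },
    )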

Working Principles
------------------

Useful information (new PRs, CI, comments, ...) is pushed to the MB
via webhooks. Most of the staging work is performed via a cron job
(a pseudocode sketch follows the list below):

1. for each active staging, check whether it is done

   1. if successful

      * ``push --ff`` to target branches
      * close PRs

   2. if only one batch, mark as failed

      for batches of multiple PRs, the MB attempts to infer which
      specific PR failed

   3. otherwise split the staging in 2 (bisection search of the
      problematic batch)

2. for each branch with no active staging

   * if there are inactive stagings, stage one of them
   * otherwise look for batches targeted to that branch (PRs grouped
     by label with the branch as target)
   * attempt staging

     1. reset temp branches (one per repo) to the corresponding
        targets
     2. merge each batch's PRs into the relevant temp branch

        * on merge failure, mark the PRs as failed

     3. once there are no more batches or the limit is reached, reset
        the staging branches to tmp
     4. mark the staging as active
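
In outline (a pseudocode sketch of the ``_check_progress`` cron
implemented in ``models/pull_requests.py``; the lowercase helper
names are illustrative, only ``try_splitting`` exists as such)::

    for staging in active_stagings:
        if staging.state == 'success':
            fast_forward_targets(staging)   # push --ff, close the PRs
        elif staging.state == 'failure' or timed_out(staging):
            staging.try_splitting()         # fail single batch, or bisect

    for branch in branches_without_active_staging:
        batches = inactive_staging_batches(branch) or ready_batches(branch)
        stage(branch, batches[:batch_limit])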

Commands
--------

A command string is a line starting with the mergebot's name and
followed by various commands. Self-reviewers count as reviewers for
the purpose of their own PRs, but delegate reviewers don't.

retry
  resets a PR in error mode to ready for staging

  can be used by a reviewer or the PR author to re-stage the PR after
  it's been updated, or after the target has been updated & fixed.

r(eview)+
  approves a PR, can be used by a reviewer or delegate reviewer

r(eview)-
  removes approval from a PR, currently only active for PRs in error
  mode: unclear what should happen if a PR got unapproved while in
  staging (cancel the staging?), can be used by a reviewer or the PR
  author

squash+/squash-
  marks the PR as squash or merge, can override squash inference or a
  previous squash command, can only be used by reviewers

delegate+/delegate=<users>
  adds either the PR author or the specified (github) users as
  authorised reviewers for this PR. ``<users>`` is a comma-separated
  list of github usernames (no @), can be used by reviewers

p(riority)=2|1|0
  sets the priority to normal (2), pressing (1) or urgent (0),
  lower-priority PRs are selected first and batched together, can be
  used by reviewers

  currently only used for staging, but p=0 could cancel an active
  staging to force staging the specific PR and ignore CI on the PR
  itself? AKA p=0 would cancel a pending staging and ignore
  (non-error) state? Q: what of co-dependent PRs? staging currently
  looks for co-dependent PRs where all are ready, could be something
  along the lines of::

      (any(priority = 0) and every(state != error)) or every(state = ready)
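
For example, with the default bot name ``hanson``, a reviewer
commenting::

    hanson r+ p=1 delegate=jdoe

on a PR approves it, marks it as pressing, and delegates review
rights to the github user jdoe (a placeholder username).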

TODO
----

* PR edition (retarget, title/message)
* Ability to disable/ignore branches in runbot (tmp branches where
  the staging is being built)
* What happens when cancelling a staging during bisection

TODO?
-----

* Prioritize urgent PRs over existing batches?
* Make autosquash dynamic? Currently a PR is marked as squash if it
  has only 1 commit on creation; this is not changed if more commits
  are added.
* Use actual GH reviews? Currently only PR comments count.
* Rebase? Not sure what use that would have & it would need to be
  done by hand
Structure
---------

A *project* is used to manage multiple *repositories* across many
*branches*.

Each *PR* targets a specific branch in a specific repository.

A *batch* is a number of co-dependent PRs: PRs which are assumed to
depend on one another (the exact relationship is irrelevant) and thus
always need to be batched together. Batches are normally created on
the fly during staging.

A *staging* is a number of batches (up to 8 by default) which will be
tested together, and split if CI fails. Each staging applies to a
single *branch* (the target) across all managed repositories.
Stagings can be active (currently live on the various staging
branches) or inactive (to be staged later, generally as a result of
splitting a failed staging).

Notes
-----

* When looking for stageable batches, priority is taken into account
  and isolating: e.g. if there's a single high-priority PR,
  low-priority PRs are ignored completely and only that one will be
  staged, on its own.
* Reviewers are set up on partners so we can e.g. have
  author-tracking & delegate reviewers without needing to create
  proper users for every contributor.
* The MB collates statuses on commits independently from other
  objects, so a commit getting CI'd in odoo-dev/odoo then made into a
  PR on odoo/odoo should be correctly interpreted, assuming
  odoo-dev/odoo sent its statuses to the MB.
* Github does not support transactional sequences of API calls, so
  it's possible that "intermediate" staging states are visible & have
  to be rolled back, e.g. a staging succeeds in a 2-repo scenario,
  A.{target} is ff-d to A.{staging}, then B.{target}'s ff to
  B.{staging} fails: we have to roll back A.{target}.
* Batches & stagings are non-permanent, they are deleted after
  success or failure.
* Co-dependence is currently inferred through *labels*, a label being
  a pair ``{login}:{branchname}``
  e.g. odoo-dev:11.0-pr-flanker-jke. If this label is present in a PR
  to A and a PR to B, these two PRs will be collected into a single
  batch to ensure they always get batched (and failed) together.

Previous Work
-------------

bors-ng
~~~~~~~

* r+: accept (only for trusted reviewers)
* r-: unaccept
* r=users...: accept on behalf of users
* delegate+: allows author to self-review
* delegate=users...: allow non-reviewer users to review
* try: stage build (to a separate branch) but don't merge on success

Why not bors-ng
###############

* no concurrent staging (can only stage one target at a time)
* can't do co-dependent repositories/multi-repo staging
* cancels/forgets r+'d branches on FF failure (emergency pushes)
  instead of re-staging
* unclear whether prioritisation is supported

homu
~~~~

In addition to bors-ng's:

* SHA option on r+/r=, guards
* p=NUMBER: set priority (unclear if best = low/high)
* rollup/rollup-: should be default
* retry: re-attempt PR (flaky?)
* delegate-: remove delegate+/delegate=
* force: ???
* clean: ???
runbot_merge/__init__.py (new file, 1 line)

from . import models, controllers
runbot_merge/__manifest__.py (new file, 9 lines)

{
    'name': 'merge bot',
    'depends': ['contacts'],
    'data': [
        'data/merge_cron.xml',
        'views/res_partner.xml',
        'views/mergebot.xml',
    ]
}
runbot_merge/controllers.py (new file, 209 lines)

import logging
import json

from odoo.http import Controller, request, route

_logger = logging.getLogger(__name__)

class MergebotController(Controller):
    @route('/runbot_merge/hooks', auth='none', type='json', csrf=False, methods=['POST'])
    def index(self):
        event = request.httprequest.headers['X-Github-Event']

        return EVENTS.get(event, lambda _: f"Unknown event {event}")(request.jsonrequest)

def handle_pr(event):
    if event['action'] in [
        'assigned', 'unassigned', 'review_requested', 'review_request_removed',
        'labeled', 'unlabeled'
    ]:
        _logger.debug(
            'Ignoring pull_request[%s] on %s:%s',
            event['action'],
            event['pull_request']['base']['repo']['full_name'],
            event['pull_request']['number'],
        )
        return 'Ignoring'

    env = request.env(user=1)
    pr = event['pull_request']
    r = pr['base']['repo']['full_name']
    b = pr['base']['ref']

    repo = env['runbot_merge.repository'].search([('name', '=', r)])
    if not repo:
        _logger.warning("Received a PR for %s but not configured to handle that repo", r)
        # sadly Odoo's "json" routes are actually JSON-RPC endpoints,
        # so it is not possible to reply with actual raw HTTP
        # responses, and thus not possible to report actual errors to
        # the webhooks listing thing on github (not that we'd be
        # looking at them, but it'd be useful for tests)
        return f"Not configured to handle {r}"

    # PRs to unmanaged branches are not necessarily abnormal and
    # we don't care
    branch = env['runbot_merge.branch'].search([
        ('name', '=', b),
        ('project_id', '=', repo.project_id.id),
    ])

    def find(target):
        return env['runbot_merge.pull_requests'].search([
            ('repository', '=', repo.id),
            ('number', '=', pr['number']),
            ('target', '=', target.id),
        ])
    # edition difficulty: pr['base']['ref'] is the *new* target, the old
    # one is at event['changes']['base']['from']['ref'] (if the target
    # changed), so edition handling must occur before the rest of the
    # steps
    if event['action'] == 'edited':
        source = event['changes'].get('base', {'from': pr['base']})['from']['ref']
        source_branch = env['runbot_merge.branch'].search([
            ('name', '=', source),
            ('project_id', '=', repo.project_id.id),
        ])
        # retargeting to un-managed => delete
        if not branch:
            pr = find(source_branch)
            pr.unlink()
            return f'Retargeted {pr.id} to un-managed branch {b}, deleted'

        # retargeting from un-managed => create
        if not source_branch:
            return handle_pr(dict(event, action='opened'))

        updates = {}
        if source_branch != branch:
            updates['target'] = branch.id
        if event['changes'].keys() & {'title', 'body'}:
            updates['message'] = f"{pr['title'].strip()}\n\n{pr['body'].strip()}"
        if updates:
            pr_obj = find(source_branch)
            pr_obj.write(updates)
            return f'Updated {pr_obj.id}'
        return f"Nothing to update ({event['changes'].keys()})"

    if not branch:
        _logger.info("Ignoring PR for un-managed branch %s:%s", r, b)
        return f"Not set up to care about {r}:{b}"

    author_name = pr['user']['login']
    author = env['res.partner'].search([('github_login', '=', author_name)], limit=1)
    if not author:
        author = env['res.partner'].create({
            'name': author_name,
            'github_login': author_name,
        })

    _logger.info("%s: %s:%s (%s)", event['action'], repo.name, pr['number'], author.github_login)
    if event['action'] == 'opened':
        # some PRs have leading/trailing newlines in body/title (resp)
        title = pr['title'].strip()
        body = pr['body'].strip()
        pr_obj = env['runbot_merge.pull_requests'].create({
            'number': pr['number'],
            'label': pr['head']['label'],
            'author': author.id,
            'target': branch.id,
            'repository': repo.id,
            'head': pr['head']['sha'],
            'squash': pr['commits'] == 1,
            'message': f'{title}\n\n{body}',
        })
        return f"Tracking PR as {pr_obj.id}"

    pr_obj = find(branch)
    if not pr_obj:
        _logger.warning("webhook %s on unknown PR %s:%s", event['action'], repo.name, pr['number'])
        return f"Unknown PR {repo.name}:{pr['number']}"
    if event['action'] == 'synchronize':
        if pr_obj.head == pr['head']['sha']:
            return 'No update to pr head'

        if pr_obj.state in ('closed', 'merged'):
            pr_obj.repository.github().comment(
                pr_obj.number, f"This pull request is closed, ignoring the update to {pr['head']['sha']}")
            # actually still update the head of closed (but not merged) PRs
            if pr_obj.state == 'merged':
                return f'Ignoring update to {pr_obj.id}'

        if pr_obj.state == 'validated':
            pr_obj.state = 'opened'
        elif pr_obj.state == 'ready':
            pr_obj.state = 'approved'
        if pr_obj.staging_id:
            _logger.info(
                "Updated PR %s:%s, removing staging %s",
                pr_obj.repository.name, pr_obj.number,
                pr_obj.staging_id,
            )
            # immediately cancel the staging?
            staging = pr_obj.staging_id
            staging.batch_ids.unlink()
            staging.unlink()

        # TODO: should we update squash as well? What of explicit squash commands?
        pr_obj.head = pr['head']['sha']
        return f'Updated {pr_obj.id} to {pr_obj.head}'

    # don't mark merged PRs as closed (!!!)
    if event['action'] == 'closed' and pr_obj.state != 'merged':
        pr_obj.state = 'closed'
        return f'Closed {pr_obj.id}'

    if event['action'] == 'reopened' and pr_obj.state == 'closed':
        pr_obj.state = 'opened'
        return f'Reopened {pr_obj.id}'

    _logger.info("Ignoring event %s on PR %s", event['action'], pr['number'])
    return f"Not handling {event['action']} yet"

def handle_status(event):
    _logger.info(
        'status %s:%s on commit %s',
        event['context'], event['state'],
        event['sha'],
    )
    Commits = request.env(user=1)['runbot_merge.commit']
    c = Commits.search([('sha', '=', event['sha'])])
    if c:
        c.statuses = json.dumps({
            **json.loads(c.statuses),
            event['context']: event['state']
        })
    else:
        Commits.create({
            'sha': event['sha'],
            'statuses': json.dumps({event['context']: event['state']})
        })

    return 'ok'
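
# Illustration (not part of the module): a commit whose stored statuses
# are {"ci/runbot": "pending"} receiving a status event with
# context="ci/runbot" and state="success" ends up storing
# {"ci/runbot": "success"}; the dict merge preserves unrelated contexts.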

def handle_comment(event):
    if 'pull_request' not in event['issue']:
        return "issue comment, ignoring"

    env = request.env(user=1)
    partner = env['res.partner'].search([('github_login', '=', event['sender']['login'])])
    pr = env['runbot_merge.pull_requests'].search([
        ('number', '=', event['issue']['number']),
        ('repository.name', '=', event['repository']['full_name']),
    ])
    if not partner:
        _logger.info("ignoring comment from %s: not in system", event['sender']['login'])
        return 'ignored'

    return pr._parse_commands(partner, event['comment']['body'])

def handle_ping(event):
    print(f"Got ping! {event['zen']}")
    return "pong"

EVENTS = {
    # TODO: add review?
    'pull_request': handle_pr,
    'status': handle_status,
    'issue_comment': handle_comment,
    'ping': handle_ping,
}
runbot_merge/data/merge_cron.xml (new file, 12 lines)

<odoo>
    <record model="ir.cron" id="merge_cron">
        <field name="name">Check for progress of PRs &amp; Stagings</field>
        <field name="model_id" ref="model_runbot_merge_project"/>
        <field name="state">code</field>
        <field name="code">model._check_progress()</field>
        <field name="interval_number">1</field>
        <field name="interval_type">minutes</field>
        <field name="numbercall">-1</field>
        <field name="doall" eval="False"/>
    </record>
</odoo>
runbot_merge/exceptions.py (new file, 4 lines)

class MergeError(Exception):
    pass

class FastForwardError(Exception):
    pass
runbot_merge/github.py (new file, 205 lines)

import collections
import functools
import logging
import pprint

import requests

from odoo.exceptions import UserError
from . import exceptions

_logger = logging.getLogger(__name__)
class GH(object):
    def __init__(self, token, repo):
        self._url = 'https://api.github.com'
        self._repo = repo
        session = self._session = requests.Session()
        session.headers['Authorization'] = f'token {token}'

    def __call__(self, method, path, json=None, check=True):
        """
        :type check: bool | dict[int:Exception]
        """
        r = self._session.request(
            method,
            f'{self._url}/repos/{self._repo}/{path}',
            json=json
        )
        if check:
            if isinstance(check, collections.abc.Mapping):
                exc = check.get(r.status_code)
                if exc:
                    raise exc(r.content)
            r.raise_for_status()
        return r

    def head(self, branch):
        d = self('get', f'git/refs/heads/{branch}').json()

        assert d['ref'] == f'refs/heads/{branch}'
        assert d['object']['type'] == 'commit'
        return d['object']['sha']

    def commit(self, sha):
        return self('GET', f'git/commits/{sha}').json()

    def comment(self, pr, message):
        self('POST', f'issues/{pr}/comments', json={'body': message})

    def close(self, pr, message):
        self.comment(pr, message)
        self('PATCH', f'pulls/{pr}', json={'state': 'closed'})

    def fast_forward(self, branch, sha):
        try:
            self('patch', f'git/refs/heads/{branch}', json={'sha': sha})
        except requests.HTTPError:
            raise exceptions.FastForwardError()

    def set_ref(self, branch, sha):
        # force-update ref
        r = self('patch', f'git/refs/heads/{branch}', json={
            'sha': sha,
            'force': True,
        }, check=False)
        if r.status_code == 200:
            return

        if r.status_code == 404:
            # fallback: create ref
            r = self('post', 'git/refs', json={
                'ref': f'refs/heads/{branch}',
                'sha': sha,
            }, check=False)
            if r.status_code == 201:
                return
        r.raise_for_status()

    def merge(self, sha, dest, message, squash=False, author=None):
        if not squash:
            r = self('post', 'merges', json={
                'base': dest,
                'head': sha,
                'commit_message': message,
            }, check={409: exceptions.MergeError})
            r = r.json()
            return dict(r['commit'], sha=r['sha'])

        # squash: the throwaway "temp" merge below is only used to
        # obtain the merged tree; the real commit is then rebuilt as a
        # single commit on top of the previous head, and the ref is
        # force-moved onto it
        current_head = self.head(dest)
        tree = self.merge(sha, dest, "temp")['tree']['sha']
        c = self('post', 'git/commits', json={
            'message': message,
            'tree': tree,
            'parents': [current_head],
            'author': author,
        }, check={409: exceptions.MergeError}).json()
        self.set_ref(dest, c['sha'])
        return c

    # --

    def prs(self):
        cursor = None
        owner, name = self._repo.split('/')
        while True:
            response = self._session.post(f'{self._url}/graphql', json={
                'query': PR_QUERY,
                'variables': {
                    'owner': owner,
                    'name': name,
                    'cursor': cursor,
                }
            }).json()

            result = response['data']['repository']['pullRequests']
            for pr in result['nodes']:
                statuses = into(pr, 'headRef.target.status.contexts') or []

                author = into(pr, 'author.login') or into(pr, 'headRepositoryOwner.login')
                source = into(pr, 'headRepositoryOwner.login') or into(pr, 'author.login')
                label = source and f"{source}:{pr['headRefName']}"
                yield {
                    'number': pr['number'],
                    'title': pr['title'],
                    'body': pr['body'],
                    'head': {
                        'ref': pr['headRefName'],
                        'sha': pr['headRefOid'],
                        # headRef may be null if the pr branch was ?deleted?
                        # (mostly closed PR concerns?)
                        'statuses': {
                            c['context']: c['state']
                            for c in statuses
                        },
                        'label': label,
                    },
                    'state': pr['state'].lower(),
                    'user': {'login': author},
                    'base': {
                        'ref': pr['baseRefName'],
                        'repo': {
                            'full_name': pr['repository']['nameWithOwner'],
                        }
                    },
                    'commits': pr['commits']['totalCount'],
                }

            if result['pageInfo']['hasPreviousPage']:
                cursor = result['pageInfo']['startCursor']
            else:
                break

def into(d, path):
    return functools.reduce(
        lambda v, segment: v and v.get(segment),
        path.split('.'),
        d
    )
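
# `into` is a null-safe nested getter over dotted dict paths;
# illustrative behaviour (easy to check in a REPL):
#   into({'a': {'b': 1}}, 'a.b')  -> 1
#   into({'a': None}, 'a.b')      -> None (falsy values short-circuit)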

PR_QUERY = """
query($owner: String!, $name: String!, $cursor: String) {
  rateLimit { remaining }
  repository(owner: $owner, name: $name) {
    pullRequests(last: 100, before: $cursor) {
      pageInfo { startCursor hasPreviousPage }
      nodes {
        author { # optional
          login
        }
        number
        title
        body
        state
        repository { nameWithOwner }
        baseRefName
        headRefOid
        headRepositoryOwner { # optional
          login
        }
        headRefName
        headRef { # optional
          target {
            ... on Commit {
              status {
                contexts {
                  context
                  state
                }
              }
            }
          }
        }
        commits { totalCount }
        #comments(last: 100) {
        #  nodes {
        #    author {
        #      login
        #    }
        #    body
        #    bodyText
        #  }
        #}
      }
    }
  }
}
"""
runbot_merge/models/__init__.py (new file, 2 lines)

from . import res_partner
from . import pull_requests
runbot_merge/models/pull_requests.py (new file, 718 lines)

import collections
import datetime
import json
import logging
import pprint
import re

from itertools import takewhile

from odoo import api, fields, models, tools
from odoo.exceptions import ValidationError

from .. import github, exceptions

_logger = logging.getLogger(__name__)
class Project(models.Model):
    _name = 'runbot_merge.project'

    name = fields.Char(required=True, index=True)
    repo_ids = fields.One2many(
        'runbot_merge.repository', 'project_id',
        help="Repos included in that project, they'll be staged together. "\
             "*Not* to be used for cross-repo dependencies (that is to be handled by the CI)"
    )
    branch_ids = fields.One2many(
        'runbot_merge.branch', 'project_id',
        help="Branches of all project's repos which are managed by the merge bot. Also "\
             "target branches of PR this project handles."
    )

    required_statuses = fields.Char(
        help="Comma-separated list of status contexts which must be "\
             "`success` for a PR or staging to be valid",
        default='legal/cla,ci/runbot'
    )
    ci_timeout = fields.Integer(
        default=60, required=True,
        help="Delay (in minutes) before a staging is considered timed out and failed"
    )

    github_token = fields.Char("Github Token", required=True)
    github_prefix = fields.Char(
        required=True,
        default="hanson", # mergebot du bot du bot du~
        help="Prefix (~bot name) used when sending commands from PR "
             "comments e.g. [hanson retry] or [hanson r+ p=1 squash+]"
    )

    batch_limit = fields.Integer(
        default=8, help="Maximum number of PRs staged together")

    def _check_progress(self):
        logger = _logger.getChild('cron')
        Batch = self.env['runbot_merge.batch']
        PRs = self.env['runbot_merge.pull_requests']
        for project in self.search([]):
            gh = {repo.name: repo.github() for repo in project.repo_ids}
            # check status of staged PRs
            for staging in project.mapped('branch_ids.active_staging_id'):
                logger.info(
                    "Checking active staging %s (state=%s)",
                    staging, staging.state
                )
                if staging.state == 'success':
                    old_heads = {
                        n: g.head(staging.target.name)
                        for n, g in gh.items()
                    }
                    repo_name = None
                    staging_heads = json.loads(staging.heads)
                    try:
                        updated = []
                        for repo_name, head in staging_heads.items():
                            gh[repo_name].fast_forward(
                                staging.target.name,
                                head
                            )
                            updated.append(repo_name)
                    except exceptions.FastForwardError:
                        logger.warning(
                            "Could not fast-forward successful staging on %s:%s, reverting updated repos %s and re-staging",
                            repo_name, staging.target.name,
                            ', '.join(updated),
                            exc_info=True
                        )
                        for name in reversed(updated):
                            gh[name].set_ref(staging.target.name, old_heads[name])
                    else:
                        prs = staging.mapped('batch_ids.prs')
                        logger.info(
                            "%s FF successful, marking %s as merged",
                            staging, prs
                        )
                        prs.write({'state': 'merged'})
                        for pr in prs:
                            # FIXME: this is the staging head rather than the actual merge commit for the PR
                            gh[pr.repository.name].close(pr.number, f'Merged in {staging_heads[pr.repository.name]}')
                    finally:
                        staging.batch_ids.unlink()
                        staging.unlink()
                elif staging.state == 'failure' or project.is_timed_out(staging):
                    staging.try_splitting()
                # else let flow

            # check for stageable branches/prs
            for branch in project.branch_ids:
                logger.info(
                    "Checking %s (%s) for staging: %s, ignore? %s",
                    branch, branch.name,
                    branch.active_staging_id,
                    bool(branch.active_staging_id)
                )
                if branch.active_staging_id:
                    continue

                # Splits can generate inactive stagings, restage these first
                if branch.staging_ids:
                    staging = branch.staging_ids[0]
                    logger.info("Found inactive staging %s, reactivating", staging)
                    batches = [batch.prs for batch in staging.batch_ids]
                    staging.unlink()
                else:
                    self.env.cr.execute("""
                    SELECT
                        min(pr.priority) as priority,
                        array_agg(pr.id) AS match
                    FROM runbot_merge_pull_requests pr
                    WHERE pr.target = %s
                      AND pr.batch_id IS NULL
                      -- exclude terminal states (so there's no issue when
                      -- deleting branches & reusing labels)
                      AND pr.state != 'merged'
                      AND pr.state != 'closed'
                    GROUP BY pr.label
                    HAVING every(pr.state = 'ready')
                    ORDER BY min(pr.priority), min(pr.id)
                    """, [branch.id])
                    # rows: [(priority, [pr_id, ...]), ...], one row per
                    # label, best (lowest) priority first
                    rows = self.env.cr.fetchall()
                    logger.info(
                        "Looking for PRs to stage for %s... %s",
                        branch.name, rows
                    )
                    if not rows:
                        continue

                    priority = rows[0][0]
                    batches = [PRs.browse(pr_ids) for _, pr_ids in takewhile(lambda r: r[0] == priority, rows)]

                staged = Batch
                meta = {repo: {} for repo in project.repo_ids}
                for repo, it in meta.items():
                    gh = it['gh'] = repo.github()
                    it['head'] = gh.head(branch.name)
                    # create tmp staging branch
                    gh.set_ref(f'tmp.{branch.name}', it['head'])

                batch_limit = project.batch_limit
                for batch in batches:
                    if len(staged) >= batch_limit:
                        break
                    staged |= Batch.stage(meta, batch)

                if staged:
                    # create actual staging object
                    st = self.env['runbot_merge.stagings'].create({
                        'target': branch.id,
                        'batch_ids': [(4, batch.id, 0) for batch in staged],
                        'heads': json.dumps({
                            repo.name: it['head']
                            for repo, it in meta.items()
                        })
                    })
                    # create staging branch from tmp
                    for r, it in meta.items():
                        it['gh'].set_ref(f'staging.{branch.name}', it['head'])
                    logger.info("Created staging %s", st)

    def is_timed_out(self, staging):
        return fields.Datetime.from_string(staging.staged_at) + datetime.timedelta(minutes=self.ci_timeout) < datetime.datetime.now()

    def sync_prs(self):
        _logger.info("Synchronizing PRs for %s", self.name)
        Commits = self.env['runbot_merge.commit']
        PRs = self.env['runbot_merge.pull_requests']
        Partners = self.env['res.partner']
        branches = {
            b.name: b
            for b in self.branch_ids
        }
        authors = {
            p.github_login: p
            for p in Partners.search([])
            if p.github_login
        }
        for repo in self.repo_ids:
            gh = repo.github()
            created = 0
            ignored_targets = collections.Counter()
            prs = {
                pr.number: pr
                for pr in PRs.search([
                    ('repository', '=', repo.id),
                ])
            }
            for i, pr in enumerate(gh.prs()):
                message = f"{pr['title'].strip()}\n\n{pr['body'].strip()}"
                existing = prs.get(pr['number'])
                target = pr['base']['ref']
                if existing:
                    if target not in branches:
                        _logger.info("PR %d retargeted to non-managed branch %s, deleting", pr['number'],
                                     target)
                        ignored_targets.update([target])
                        existing.unlink()
                    else:
                        if message != existing.message:
                            _logger.info("Updating PR %d ({%s} != {%s})", pr['number'], existing.message, message)
                            existing.message = message
                    continue

                # not for a selected target => skip
                if target not in branches:
                    ignored_targets.update([target])
                    continue

                # old PR, source repo may have been deleted, ignore
                if not pr['head']['label']:
                    _logger.info('ignoring PR %d: no label', pr['number'])
                    continue

                login = pr['user']['login']
                # no author on old PRs, account deleted
                author = authors.get(login, Partners)
                if login and not author:
                    author = authors[login] = Partners.create({
                        'name': login,
                        'github_login': login,
                    })
                head = pr['head']['sha']
                PRs.create({
                    'number': pr['number'],
                    'label': pr['head']['label'],
                    'author': author.id,
                    'target': branches[target].id,
                    'repository': repo.id,
                    'head': head,
                    'squash': pr['commits'] == 1,
                    'message': message,
                    'state': 'opened' if pr['state'] == 'open'
                             else 'merged' if pr.get('merged')
                             else 'closed'
                })
                c = Commits.search([('sha', '=', head)]) or Commits.create({'sha': head})
                c.statuses = json.dumps(pr['head']['statuses'])

                created += 1
            _logger.info("%d new prs in %s", created, repo.name)
            _logger.info('%d ignored PRs for un-managed targets: (%s)', sum(ignored_targets.values()), dict(ignored_targets))
        return False

class Repository(models.Model):
    _name = 'runbot_merge.repository'

    name = fields.Char(required=True)
    project_id = fields.Many2one('runbot_merge.project', required=True)

    def github(self):
        return github.GH(self.project_id.github_token, self.name)

    def _auto_init(self):
        res = super(Repository, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_repo', self._table, ['name', 'project_id'])
        return res

class Branch(models.Model):
    _name = 'runbot_merge.branch'

    name = fields.Char(required=True)
    project_id = fields.Many2one('runbot_merge.project', required=True)

    active_staging_id = fields.One2many(
        'runbot_merge.stagings', 'target',
        domain=[("heads", "!=", False)],
        help="Currently running staging for the branch, there should be only one"
    )
    staging_ids = fields.One2many('runbot_merge.stagings', 'target')

    def _auto_init(self):
        res = super(Branch, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_branch_per_repo',
            self._table, ['name', 'project_id'])
        return res

class PullRequests(models.Model):
    _name = 'runbot_merge.pull_requests'
    _order = 'number desc'

    target = fields.Many2one('runbot_merge.branch', required=True)
    repository = fields.Many2one('runbot_merge.repository', required=True)
    # NB: check that target & repo have same project & provide project related?

    state = fields.Selection([
        ('opened', 'Opened'),
        ('closed', 'Closed'),
        ('validated', 'Validated'),
        ('approved', 'Approved'),
        ('ready', 'Ready'),
        # staged?
        ('merged', 'Merged'),
        ('error', 'Error'),
    ], default='opened')

    number = fields.Integer(required=True, index=True)
    author = fields.Many2one('res.partner')
    head = fields.Char(required=True, index=True)
    label = fields.Char(
        required=True, index=True,
        help="Label of the source branch (owner:branchname), used for "
             "cross-repository branch-matching"
    )
    message = fields.Text(required=True)
    squash = fields.Boolean(default=False)

    delegates = fields.Many2many('res.partner', help="Delegate reviewers, not intrinsically reviewers but can review this PR")
    priority = fields.Selection([
        (0, 'Urgent'),
        (1, 'Pressing'),
        (2, 'Normal'),
    ], default=2, index=True)

    statuses = fields.Text(compute='_compute_statuses')

    batch_id = fields.Many2one('runbot_merge.batch')
    staging_id = fields.Many2one(related='batch_id.staging_id', store=True)

    @api.depends('head')
    def _compute_statuses(self):
        Commits = self.env['runbot_merge.commit']
        for s in self:
            c = Commits.search([('sha', '=', s.head)])
            if c and c.statuses:
                s.statuses = pprint.pformat(json.loads(c.statuses))

    def _parse_command(self, commandstring):
        m = re.match(r'(\w+)(?:([+-])|=(.*))?', commandstring)
        if not m:
            return None

        name, flag, param = m.groups()
        if name == 'retry':
            return ('retry', True)
        elif name in ('r', 'review'):
            if flag == '+':
                return ('review', True)
            elif flag == '-':
                return ('review', False)
        elif name == 'squash':
            if flag == '+':
                return ('squash', True)
            elif flag == '-':
                return ('squash', False)
        elif name == 'delegate':
            if flag == '+':
                return ('delegate', True)
            elif param:
                return ('delegate', param.split(','))
        elif name in ('p', 'priority'):
            if param in ('0', '1', '2'):
                return ('priority', int(param))

        return None
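
    # Illustrative decompositions (following the regex above; easy to
    # check in a REPL):
    #   _parse_command('r+')           -> ('review', True)
    #   _parse_command('delegate=a,b') -> ('delegate', ['a', 'b'])
    #   _parse_command('p=0')          -> ('priority', 0)
    #   _parse_command('frobnicate')   -> None (unknown commands are dropped)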

    def _parse_commands(self, author, comment):
        """Parses a command string prefixed by Project::github_prefix.

        A command string can contain any number of space-separated commands:

        retry
          resets a PR in error mode to ready for staging
        r(eview)+/-
          approves or disapproves a PR (disapproving just cancels an approval)
        squash+/squash-
          marks the PR as squash or merge, can override squash inference or a
          previous squash command
        delegate+/delegate=<users>
          adds either PR author or the specified (github) users as
          authorised reviewers for this PR. ``<users>`` is a
          comma-separated list of github usernames (no @)
        p(riority)=2|1|0
          sets the priority to normal (2), pressing (1) or urgent (0).
          Lower-priority PRs are selected first and batched together.
        """
        is_admin = (author.reviewer and self.author != author) or (author.self_reviewer and self.author == author)
        is_reviewer = is_admin or self in author.delegate_reviewer
        # TODO: should delegate reviewers be able to retry PRs?
        is_author = is_admin or self.author == author

        if not (is_author or is_reviewer or is_admin):
            # no point even parsing commands
            _logger.info("ignoring comment of %s (%s): no ACL to %s:%s",
                         author.github_login, author.display_name,
                         self.repository.name, self.number)
            return 'ignored'

        commands = dict(
            ps
            for m in re.findall(f'^{self.repository.project_id.github_prefix}:? (.*)$', comment, re.MULTILINE)
            for c in m.strip().split()
            for ps in [self._parse_command(c)]
            if ps is not None
        )

        applied, ignored = [], []
        for command, param in commands.items():
            ok = False
            if command == 'retry':
                if is_author and self.state == 'error':
                    ok = True
                    self.state = 'ready'
            elif command == 'review':
                if param and is_reviewer:
                    if self.state == 'opened':
                        ok = True
                        self.state = 'approved'
                    elif self.state == 'validated':
                        ok = True
                        self.state = 'ready'
                elif not param and is_author and self.state == 'error':
                    # TODO: r- on something which isn't in error?
                    ok = True
                    self.state = 'validated'
            elif command == 'delegate':
                if is_reviewer:
                    ok = True
                    Partners = delegates = self.env['res.partner']
                    if param is True:
                        delegates |= self.author
                    else:
                        for login in param:
                            delegates |= Partners.search([('github_login', '=', login)]) or Partners.create({
                                'name': login,
                                'github_login': login,
                            })
                    delegates.write({'delegate_reviewer': [(4, self.id, 0)]})

            elif command == 'squash':
                if is_admin:
                    ok = True
                    self.squash = param
            elif command == 'priority':
                if is_admin:
                    ok = True
                    self.priority = param
            _logger.info(
                "%s %s(%s) on %s:%s by %s (%s)",
                "applied" if ok else "ignored",
                command, param,
                self.repository.name, self.number,
                author.github_login, author.display_name,
            )
            if ok:
                applied.append(f'{command}({param})')
            else:
                ignored.append(f'{command}({param})')
        msg = []
        if applied:
            msg.append('applied ' + ' '.join(applied))
        if ignored:
            msg.append('ignored ' + ' '.join(ignored))
        return '\n'.join(msg)

    def _validate(self, statuses):
        # could have two PRs (e.g. one open and one closed) at least
        # temporarily on the same head, or on the same head with different
        # targets
        for pr in self:
            required = pr.repository.project_id.required_statuses.split(',')
            if all(statuses.get(r.strip()) == 'success' for r in required):
                oldstate = pr.state
                if oldstate == 'opened':
                    pr.state = 'validated'
                elif oldstate == 'approved':
                    pr.state = 'ready'

            # _logger.info("CI+ (%s) for PR %s:%s: %s -> %s",
            #              statuses, pr.repository.name, pr.number, oldstate, pr.state)
            # else:
            #     _logger.info("CI- (%s) for PR %s:%s", statuses, pr.repository.name, pr.number)

    def _auto_init(self):
        res = super(PullRequests, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_pr_per_target', self._table, ['number', 'target', 'repository'])
        return res

class Commit(models.Model):
    """Represents a commit onto which statuses might be posted,
    independent of everything else as commits can be created by
    statuses only, by PR pushes, by branch updates, ...
    """
    _name = 'runbot_merge.commit'

    sha = fields.Char(required=True)
    statuses = fields.Char(help="json-encoded mapping of status contexts to states", default="{}")

    def create(self, values):
        r = super(Commit, self).create(values)
        r._notify()
        return r

    def write(self, values):
        r = super(Commit, self).write(values)
        self._notify()
        return r

    # NB: GH recommends doing heavy work asynchronously, may be a good
    #     idea to defer this to a cron or something
    def _notify(self):
        Stagings = self.env['runbot_merge.stagings']
        PRs = self.env['runbot_merge.pull_requests']
        # chances are low that we'll have more than one commit
        for c in self:
            st = json.loads(c.statuses)
            pr = PRs.search([('head', '=', c.sha)])
            if pr:
                pr._validate(st)
            # heads is a json-encoded mapping of reponame:head, so chances
            # are if a sha matches a heads it's matching one of the shas
            stagings = Stagings.search([('heads', 'ilike', c.sha)])
            if stagings:
                stagings._validate()

    def _auto_init(self):
        res = super(Commit, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_statuses', self._table, ['sha'])
        return res

class Stagings(models.Model):
    _name = 'runbot_merge.stagings'

    target = fields.Many2one('runbot_merge.branch', required=True)

    batch_ids = fields.One2many(
        'runbot_merge.batch', 'staging_id',
    )
    state = fields.Selection([
        ('success', 'Success'),
        ('failure', 'Failure'),
        ('pending', 'Pending'),
    ])

    staged_at = fields.Datetime(default=fields.Datetime.now)
    restaged = fields.Integer(default=0)

    # seems simpler than adding yet another indirection through a model and
    # makes checking for actually staged stagings way easier: just see if
    # heads is set
    heads = fields.Char(help="JSON-encoded map of heads, one per repo in the project")

    def _validate(self):
        Commits = self.env['runbot_merge.commit']
        for s in self:
            heads = list(json.loads(s.heads).values())
            commits = Commits.search([
                ('sha', 'in', heads)
            ])
            if len(commits) < len(heads):
                s.state = 'pending'
                continue

            reqs = [r.strip() for r in s.target.project_id.required_statuses.split(',')]
            st = 'success'
            for c in commits:
                statuses = json.loads(c.statuses)
                for v in map(statuses.get, reqs):
                    if st == 'failure' or v in ('error', 'failure'):
                        st = 'failure'
                    elif v in (None, 'pending'):
                        st = 'pending'
                    else:
                        assert v == 'success'
            s.state = st
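
    # Illustration (not part of the module): with required statuses
    # ['legal/cla', 'ci/runbot'], a head carrying only
    # {'legal/cla': 'success'} yields 'pending' (ci/runbot still
    # missing), while any 'error'/'failure' on a required context
    # forces the whole staging to 'failure'.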

    def fail(self, message, prs=None):
        _logger.error("Staging %s failed: %s", self, message)
        prs = prs or self.batch_ids.prs
        prs.write({'state': 'error'})
        for pr in prs:
            pr.repository.github().comment(
                pr.number, "Staging failed: %s" % message)

        self.batch_ids.unlink()
        self.unlink()

    def try_splitting(self):
        batches = len(self.batch_ids)
        if batches > 1:
            midpoint = batches // 2
            h, t = self.batch_ids[:midpoint], self.batch_ids[midpoint:]
            self.env['runbot_merge.stagings'].create({
                'target': self.target.id,
                'batch_ids': [(4, batch.id, 0) for batch in h],
            })
            self.env['runbot_merge.stagings'].create({
                'target': self.target.id,
                'batch_ids': [(4, batch.id, 0) for batch in t],
            })
            # apoptosis
            self.unlink()
            return True

        # single batch => the staging is an unredeemable failure
        if self.state != 'failure':
            # timed out, just mark all PRs (wheee)
            self.fail(f'timed out (>{self.target.project_id.ci_timeout} minutes)')
            return False

        # try inferring which PR failed and only mark that one
        for repo, head in json.loads(self.heads).items():
            commit = self.env['runbot_merge.commit'].search([
                ('sha', '=', head)
            ])
            reason = next((
                ctx for ctx, result in json.loads(commit.statuses).items()
                if result in ('error', 'failure')
            ), None)
            if not reason:
                continue

            pr = next((
                pr for pr in self.batch_ids.prs
                if pr.repository.name == repo
            ), None)
            if pr:
                self.fail(reason, pr)
                return False

        # the staging failed but we don't have a specific culprit, fail
        # everything
        self.fail("unknown reason")

        return False

class Batch(models.Model):
    """ A batch is a "horizontal" grouping of *codependent* PRs: PRs with
    the same label & target but for different repositories. These are
    assumed to be part of the same "change" smeared over multiple
    repositories e.g. change an API in repo1, this breaks use of that API
    in repo2 which now needs to be updated.
    """
    _name = 'runbot_merge.batch'

    target = fields.Many2one('runbot_merge.branch', required=True)
    staging_id = fields.Many2one('runbot_merge.stagings')

    prs = fields.One2many('runbot_merge.pull_requests', 'batch_id')

    @api.constrains('target', 'prs')
    def _check_prs(self):
        for batch in self:
            repos = self.env['runbot_merge.repository']
            for pr in batch.prs:
                if pr.target != batch.target:
                    raise ValidationError("A batch and its PRs must have the same branch, got %s and %s" % (batch.target, pr.target))
                if pr.repository in repos:
                    raise ValidationError("All prs of a batch must have different target repositories, got a duplicate %s on %s" % (pr.repository, pr))
                repos |= pr.repository

    def stage(self, meta, prs):
        """
        Updates meta[*][head] on success

        :return: () or Batch object (if all prs successfully staged)
        """
        new_heads = {}
        for pr in prs:
            gh = meta[pr.repository]['gh']

            _logger.info(
                "Staging pr %s:%s for target %s; squash=%s",
                pr.repository.name, pr.number, pr.target.name, pr.squash
            )
            msg = pr.message
            author = None
            if pr.squash:
                # FIXME: maybe should be message of the *first* commit of the branch?
                # TODO: or depend on # of commits in PR instead of squash flag?
                commit = gh.commit(pr.head)
                msg = commit['message']
                author = commit['author']

            try:
                new_heads[pr] = gh.merge(pr.head, f'tmp.{pr.target.name}', msg, squash=pr.squash, author=author)['sha']
            except exceptions.MergeError:
                _logger.exception("Failed to merge %s:%s into staging branch", pr.repository.name, pr.number)
                pr.state = 'error'
                gh.comment(pr.number, "Unable to stage PR (merge conflict)")

                # reset other PRs
                for to_revert in new_heads.keys():
                    it = meta[to_revert.repository]
                    it['gh'].set_ref(f'tmp.{to_revert.target.name}', it['head'])

                return self.env['runbot_merge.batch']

        # update meta to new heads
        for pr, head in new_heads.items():
            meta[pr.repository]['head'] = head
            if not self.env['runbot_merge.commit'].search([('sha', '=', head)]):
                self.env['runbot_merge.commit'].create({'sha': head})
        return self.create({
            'target': prs[0].target.id,
            'prs': [(4, pr.id, 0) for pr in prs],
        })
runbot_merge/models/res_partner.py (new file, 15 lines)

from odoo import fields, models, tools

class Partner(models.Model):
    _inherit = 'res.partner'

    github_login = fields.Char()
    reviewer = fields.Boolean(default=False, help="Can review PRs (maybe m2m to repos/branches?)")
    self_reviewer = fields.Boolean(default=False, help="Can review own PRs (independent from reviewer)")
    delegate_reviewer = fields.Many2many('runbot_merge.pull_requests')

    def _auto_init(self):
        res = super(Partner, self)._auto_init()
        tools.create_unique_index(
            self._cr, 'runbot_merge_unique_gh_login', self._table, ['github_login'])
        return res
runbot_merge/tests/conftest.py (new file, 57 lines)

import odoo

import pytest

import fake_github

@pytest.fixture
def gh():
    with fake_github.Github() as gh:
        yield gh

def pytest_addoption(parser):
    parser.addoption("--db", action="store", help="Odoo DB to run tests with")
    parser.addoption('--addons-path', action='store', help="Odoo's addons path")
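
# Typical invocation, given the options above (db name and path are
# illustrative):
#   pytest --db runbot_merge_test --addons-path /path/to/addons runbot_merge/tests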

@pytest.fixture(scope='session')
def registry(request):
    """ Sets up Odoo & yields a registry to the specified db
    """
    db = request.config.getoption('--db')
    addons = request.config.getoption('--addons-path')
    odoo.tools.config.parse_config(['--addons-path', addons, '-d', db, '--db-filter', db])
    try:
        odoo.service.db._create_empty_database(db)
    except odoo.service.db.DatabaseExists:
        pass

    #odoo.service.server.load_server_wide_modules()
    #odoo.service.server.preload_registries([db])

    with odoo.api.Environment.manage():
        # ensure module is installed
        r0 = odoo.registry(db)
        with r0.cursor() as cr:
            env = odoo.api.Environment(cr, 1, {})
            [mod] = env['ir.module.module'].search([('name', '=', 'runbot_merge')])
            mod.button_immediate_install()

    yield odoo.registry(db)

@pytest.fixture
def env(request, registry):
    """Generates an environment, can be parameterized on a user's login
    """
    with registry.cursor() as cr:
        env = odoo.api.Environment(cr, odoo.SUPERUSER_ID, {})
        login = getattr(request, 'param', 'admin')
        if login != 'admin':
            user = env['res.users'].search([('login', '=', login)], limit=1)
            env = odoo.api.Environment(cr, user.id, {})
        ctx = env['res.users'].context_get()
        registry.enter_test_mode(cr)
        yield env(context=ctx)
        registry.leave_test_mode()

        cr.rollback()
runbot_merge/tests/fake_github/__init__.py (new file, 568 lines)

import collections
import io
import itertools
import json
import logging
import re

import responses
import werkzeug.test
import werkzeug.wrappers

from . import git

API_PATTERN = re.compile(
    r'https://api.github.com/repos/(?P<repo>\w+/\w+)/(?P<path>.+)'
)
class APIResponse(responses.BaseResponse):
    def __init__(self, sim):
        super(APIResponse, self).__init__(
            method=None,
            url=API_PATTERN
        )
        self.sim = sim
        self.content_type = 'application/json'
        self.stream = False

    def matches(self, request):
        return self._url_matches(self.url, request.url, self.match_querystring)

    def get_response(self, request):
        m = self.url.match(request.url)

        (status, r) = self.sim.repos[m.group('repo')].api(m.group('path'), request)

        headers = self.get_headers()
        body = io.BytesIO(b'')
        if r:
            body = io.BytesIO(json.dumps(r).encode('utf-8'))

        return responses.HTTPResponse(
            status=status,
            reason=r.get('message') if r else "bollocks",
            body=body,
            headers=headers,
            preload_content=False, )

class Github(object):
    """ Github simulator

    When enabled (by context-managing):

    * intercepts all ``requests`` calls & replies to api.github.com
    * sends relevant hooks (registered per-repo as pairs of WSGI app and URL)
    * stores repo content
    """
    def __init__(self):
        # {repo: {name, issues, objects, refs, hooks}}
        self.repos = {}

    def repo(self, name, hooks=()):
        r = self.repos[name] = Repo(name)
        for hook, events in hooks:
            r.hook(hook, events)
        return self.repos[name]
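
    # Sketch of how a test might wire the simulator to the mergebot's
    # webhook endpoint, per the docstring above (values illustrative;
    # `app` would be a WSGI app serving /runbot_merge/hooks):
    #   with Github() as gh:
    #       repo = gh.repo('odoo/odoo', hooks=[
    #           ((app, '/runbot_merge/hooks'),
    #            ['pull_request', 'issue_comment', 'status']),
    #       ])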
def __enter__(self):
|
||||
# otherwise swallows errors from within the test
|
||||
self._requests = responses.RequestsMock(assert_all_requests_are_fired=False).__enter__()
|
||||
self._requests.add(APIResponse(self))
|
||||
return self
|
||||
|
||||
def __exit__(self, *args):
|
||||
return self._requests.__exit__(*args)
|
||||
|
||||
class Repo(object):
|
||||
def __init__(self, name):
|
||||
self.name = name
|
||||
self.issues = {}
|
||||
#: we're cheating, instead of storing serialised in-memory
|
||||
#: objects we're storing the Python stuff directly, Commit
|
||||
#: objects for commits, {str: hash} for trees and bytes for
|
||||
#: blobs. We're still indirecting via hashes and storing a
|
||||
#: h:o map because going through the API probably requires it
|
||||
self.objects = {}
|
||||
# branches: refs/heads/*
|
||||
# PRs: refs/pull/*
|
||||
self.refs = {}
|
||||
# {event: (wsgi_app, url)}
|
||||
self.hooks = collections.defaultdict(list)
|
||||
|
||||
def hook(self, hook, events):
|
||||
for event in events:
|
||||
self.hooks[event].append(Client(*hook))
|
||||
|
||||
def notify(self, event_type, *payload):
|
||||
for client in self.hooks.get(event_type, []):
|
||||
getattr(client, event_type)(*payload)

    def issue(self, number):
        return self.issues[number]

    def make_issue(self, title, body):
        return Issue(self, title, body)

    def make_pr(self, title, body, target, ctid, user, label=None):
        assert 'heads/%s' % target in self.refs
        return PR(self, title, body, target, ctid, user=user, label=label or f'{user}:{target}')

    def make_ref(self, name, commit, force=False):
        assert isinstance(self.objects[commit], Commit)
        if not force and name in self.refs:
            raise ValueError("ref %s already exists" % name)
        self.refs[name] = commit

    def commit(self, ref):
        sha = self.refs.get(ref) or ref
        commit = self.objects[sha]
        assert isinstance(commit, Commit)
        return commit

    def log(self, ref):
        commits = [self.commit(ref)]
        while commits:
            c = commits.pop(0)
            commits.extend(self.commit(r) for r in c.parents)
            yield c

    def post_status(self, ref, status, context='default', description=""):
        assert status in ('error', 'failure', 'pending', 'success')
        c = self.commit(ref)
        c.statuses.append((status, context, description))
        self.notify('status', self.name, context, status, c.id)

    def make_commit(self, ref, message, author, committer=None, tree=None, changes=None):
        assert (tree is None) ^ (changes is None), \
            "a commit must provide either a full tree or changes to the previous tree"

        branch = False
        if ref is None:
            pids = []
        else:
            pid = ref
            if not re.match(r'[0-9a-f]{40}', ref):
                pid = self.refs[ref]
                branch = True
            parent = self.objects[pid]
            pids = [pid]

        if tree is None:
            # TODO?
            tid = self._update_tree(parent.tree, changes)
        elif isinstance(tree, str):
            assert isinstance(self.objects.get(tree), dict)
            tid = tree
        else:
            tid = self._save_tree(tree)

        c = Commit(tid, message, author, committer or author, parents=pids)
        self.objects[c.id] = c
        if branch:
            self.refs[ref] = c.id
        return c.id
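
    # Illustrative calls (the patterns used by the test suite):
    #
    #     root = repo.make_commit(None, 'initial', None, tree={'a': 'a_0'})  # full tree, no parent
    #     repo.make_ref('heads/master', root)
    #     repo.make_commit('heads/master', 'next', None, tree={'a': 'a_1'})  # advances the branch
    #
    # A 40-hex `ref` is treated as a parent sha without moving any branch, and
    # `changes` patches the parent's tree instead of providing a full one.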

    def _save_tree(self, t):
        """ t: Dict String (String | Tree)
        """
        t = {name: self._make_obj(obj) for name, obj in t.items()}
        h, _ = git.make_tree(
            self.objects,
            t
        )
        self.objects[h] = t
        return h

    def _make_obj(self, o):
        if isinstance(o, str):
            o = o.encode('utf-8')

        if isinstance(o, bytes):
            h, _ = git.make_blob(o)
            self.objects[h] = o
            return h
        return self._save_tree(o)

    def api(self, path, request):
        for method, pattern, handler in self._handlers:
            if method and request.method != method:
                continue

            m = re.match(pattern, path)
            if m:
                return handler(self, request, **m.groupdict())
        return (404, {'message': f"No match for {request.method} {path}"})

    def _read_ref(self, r, ref):
        obj = self.refs.get(ref)
        if obj is None:
            return (404, None)
        return (200, {
            "ref": "refs/%s" % ref,
            "object": {
                "type": "commit",
                "sha": obj,
            }
        })

    def _create_ref(self, r):
        body = json.loads(r.body)
        ref = body['ref']
        # ref must start with refs/ and contain at least two slashes
        if not (ref.startswith('refs/') and ref.count('/') >= 2):
            return (400, None)
        ref = ref[5:]
        # a ref which already exists is a conflict
        if ref in self.refs:
            return (409, None)

        sha = body['sha']
        obj = self.objects.get(sha)
        # if sha is not in the repo or not a commit, 404
        if not isinstance(obj, Commit):
            return (404, None)

        self.make_ref(ref, sha)

        return (201, {
            "ref": "refs/%s" % ref,
            "object": {
                "type": "commit",
                "sha": sha,
            }
        })

    def _write_ref(self, r, ref):
        current = self.refs.get(ref)
        if current is None:
            return (404, None)
        body = json.loads(r.body)
        sha = body['sha']
        if sha not in self.objects:
            return (404, None)

        if not body.get('force'):
            if not git.is_ancestor(self.objects, current, sha):
                return (400, None)

        self.make_ref(ref, sha, force=True)
        return (200, {
            "ref": "refs/%s" % ref,
            "object": {
                "type": "commit",
                "sha": sha,
            }
        })

    def _create_commit(self, r):
        body = json.loads(r.body)
        [parent] = body.get('parents') or [None]
        author = body.get('author') or {'name': 'default', 'email': 'default', 'date': 'Z'}
        try:
            sha = self.make_commit(
                ref=parent,
                message=body['message'],
                author=author,
                committer=body.get('committer') or author,
                tree=body['tree']
            )
        except (KeyError, AssertionError):
            # either couldn't find the parent or couldn't find the tree
            return (404, None)

        return (201, {
            "sha": sha,
            "author": author,
            "committer": body.get('committer') or author,
            "message": body['message'],
            "tree": {"sha": body['tree']},
            # the new commit's parent, not the new commit itself
            "parents": [{"sha": parent}] if parent else [],
        })

    def _read_commit(self, r, sha):
        c = self.objects.get(sha)
        if not isinstance(c, Commit):
            return (404, None)
        return (200, {
            "sha": sha,
            "author": c.author,
            "committer": c.committer,
            "message": c.message,
            "tree": {"sha": c.tree},
            "parents": [{"sha": p} for p in c.parents],
        })

    def _create_issue_comment(self, r, number):
        try:
            issue = self.issues[int(number)]
        except KeyError:
            return (404, None)
        try:
            body = json.loads(r.body)['body']
        except KeyError:
            return (400, None)

        issue.post_comment(body, "<insert current user here>")
        return (201, {
            'id': 0,
            'body': body,
            'user': {'login': "<insert current user here>"},
        })

    def _edit_pr(self, r, number):
        try:
            pr = self.issues[int(number)]
        except KeyError:
            return (404, None)

        body = json.loads(r.body)
        if not body.keys() & {'title', 'body', 'state', 'base'}:
            # FIXME: return PR content
            return (200, {})
        assert body.get('state') in ('open', 'closed', None)

        pr.state = body.get('state') or pr.state
        if body.get('title'):
            pr.title = body.get('title')
        if body.get('body'):
            pr.body = body.get('body')
        if body.get('base'):
            pr.base = body.get('base')

        if body.get('state') == 'open':
            self.notify('pull_request', 'reopened', self.name, pr)
        elif body.get('state') == 'closed':
            self.notify('pull_request', 'closed', self.name, pr)

        return (200, {})

    def _do_merge(self, r):
        body = json.loads(r.body)  # {base, head, commit_message}
        if not body.get('commit_message'):
            return (400, {'message': "Merges require a commit message"})
        base = 'heads/%s' % body['base']
        target = self.refs.get(base)
        if not target:
            return (404, {'message': "Base does not exist"})
        # head can be either a branch or a sha
        sha = self.refs.get('heads/%s' % body['head']) or body['head']
        if sha not in self.objects:
            return (404, {'message': "Head does not exist"})

        if git.is_ancestor(self.objects, sha, of=target):
            return (204, None)

        # merging according to read-tree:
        # get the common ancestor (base) of both commits
        try:
            base = git.merge_base(self.objects, target, sha)
        except Exception:
            return (400, {'message': "No common ancestor between %(base)s and %(head)s" % body})
        try:
            tid = git.merge_objects(
                self.objects,
                self.objects[base].tree,
                self.objects[target].tree,
                self.objects[sha].tree,
            )
        except Exception as e:
            logging.exception("Merge Conflict")
            return (409, {'message': 'Merge Conflict %r' % e})

        c = Commit(tid, body['commit_message'], author=None, committer=None, parents=[target, sha])
        self.objects[c.id] = c

        return (201, {
            "sha": c.id,
            "commit": {
                "author": c.author,
                "committer": c.committer,
                "message": body['commit_message'],
                "tree": {"sha": tid},
            },
            "parents": [{"sha": target}, {"sha": sha}]
        })

    _handlers = [
        ('POST', r'git/refs', _create_ref),
        ('GET', r'git/refs/(?P<ref>.*)', _read_ref),
        ('PATCH', r'git/refs/(?P<ref>.*)', _write_ref),

        # nb: there's a different "commits" endpoint at /commits, with
        # repo-level metadata
        ('GET', r'git/commits/(?P<sha>[0-9A-Fa-f]{40})', _read_commit),
        ('POST', r'git/commits', _create_commit),

        ('POST', r'issues/(?P<number>\d+)/comments', _create_issue_comment),

        ('POST', r'merges', _do_merge),

        ('PATCH', r'pulls/(?P<number>\d+)', _edit_pr),
    ]
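
    # Illustrative dispatch: "GET git/refs/heads/master" skips the POST entry
    # on the method check, matches ('GET', r'git/refs/(?P<ref>.*)', _read_ref),
    # and api() invokes _read_ref(self, request, ref='heads/master'),
    # relaying its (status, body) pair.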


class Issue(object):
    def __init__(self, repo, title, body):
        self.repo = repo
        self._title = title
        self._body = body
        self.number = max(repo.issues or [0]) + 1
        self.comments = []
        repo.issues[self.number] = self

    def post_comment(self, body, user):
        self.comments.append((user, body))
        self.repo.notify('issue_comment', self, user, body)

    @property
    def title(self):
        return self._title

    @title.setter
    def title(self, value):
        self._title = value

    @property
    def body(self):
        return self._body

    @body.setter
    def body(self, value):
        self._body = value


class PR(Issue):
    def __init__(self, repo, title, body, target, ctid, user, label):
        super(PR, self).__init__(repo, title, body)
        assert ctid in repo.objects
        repo.refs['pull/%d' % self.number] = ctid
        self.head = ctid
        self._base = target
        self.user = user
        self.label = label
        self.state = 'open'

        repo.notify('pull_request', 'opened', repo.name, self)

    @Issue.title.setter
    def title(self, value):
        old = self.title
        Issue.title.fset(self, value)
        self.repo.notify('pull_request', 'edited', self.repo.name, self, {
            'title': {'from': old}
        })

    @Issue.body.setter
    def body(self, value):
        old = self.body
        Issue.body.fset(self, value)
        self.repo.notify('pull_request', 'edited', self.repo.name, self, {
            'body': {'from': old}
        })

    @property
    def base(self):
        return self._base

    @base.setter
    def base(self, value):
        old, self._base = self._base, value
        self.repo.notify('pull_request', 'edited', self.repo.name, self, {
            'base': {'from': {'ref': old}}
        })

    def push(self, sha):
        self.head = sha
        self.repo.notify('pull_request', 'synchronize', self.repo.name, self)

    def open(self):
        assert self.state == 'closed'
        self.state = 'open'
        self.repo.notify('pull_request', 'reopened', self.repo.name, self)

    def close(self):
        self.state = 'closed'
        self.repo.notify('pull_request', 'closed', self.repo.name, self)

    @property
    def commits(self):
        store = self.repo.objects
        target = self.repo.commit('heads/%s' % self.base).id
        return len({h for h, _ in git.walk_ancestors(store, self.head, False)}
                   - {h for h, _ in git.walk_ancestors(store, target, False)})


class Commit(object):
    __slots__ = ['tree', 'message', 'author', 'committer', 'parents', 'statuses']

    def __init__(self, tree, message, author, committer, parents):
        self.tree = tree
        self.message = message
        self.author = author
        self.committer = committer or author
        self.parents = parents
        self.statuses = []

    @property
    def id(self):
        return git.make_commit(self.tree, self.message, self.author, self.committer, parents=self.parents)[0]
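
    # nb: `id` is recomputed from the commit's contents on every access, which
    # makes Commit content-addressed like a real git object: any change to the
    # tree, message, author, committer or parents yields a different sha.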

    def __str__(self):
        parents = '\n'.join(f'parent {p}' for p in self.parents) + '\n'
        return f"""commit {self.id}
tree {self.tree}
{parents}author {self.author}
committer {self.committer}

{self.message}"""


class Client(werkzeug.test.Client):
    def __init__(self, application, path):
        self._webhook_path = path
        super(Client, self).__init__(application, werkzeug.wrappers.BaseResponse)

    def _make_env(self, event_type, data):
        return werkzeug.test.EnvironBuilder(
            path=self._webhook_path,
            method='POST',
            headers=[('X-Github-Event', event_type)],
            content_type='application/json',
            data=json.dumps(data),
        )

    def pull_request(self, action, repository, pr, changes=None):
        assert action in ('opened', 'reopened', 'closed', 'synchronize', 'edited')
        return self.open(self._make_env(
            'pull_request', {
                'action': action,
                'pull_request': {
                    'number': pr.number,
                    'head': {
                        'sha': pr.head,
                        'label': pr.label,
                    },
                    'base': {
                        'ref': pr.base,
                        'repo': {
                            'name': repository.split('/')[1],
                            'full_name': repository,
                        },
                    },
                    'title': pr.title,
                    'body': pr.body,
                    'commits': pr.commits,
                    'user': {'login': pr.user},
                },
                **({'changes': changes} if changes else {})
            }
        ))

    def status(self, repository, context, state, sha):
        assert state in ('success', 'failure', 'pending')
        return self.open(self._make_env(
            'status', {
                'name': repository,
                'context': context,
                'state': state,
                'sha': sha,
            }
        ))

    def issue_comment(self, issue, user, body):
        contents = {
            'action': 'created',
            'issue': {'number': issue.number},
            'repository': {'name': issue.repo.name.split('/')[1], 'full_name': issue.repo.name},
            'sender': {'login': user},
            'comment': {'body': body},
        }
        if isinstance(issue, PR):
            contents['issue']['pull_request'] = {'url': 'fake'}
        return self.open(self._make_env('issue_comment', contents))
122
runbot_merge/tests/fake_github/git.py
Normal file
@ -0,0 +1,122 @@
import collections.abc
import hashlib


def make_obj(t, contents):
    assert t in ('blob', 'tree', 'commit')
    obj = b'%s %d\0%s' % (t.encode('utf-8'), len(contents), contents)
    return hashlib.sha1(obj).hexdigest(), obj


def make_blob(contents):
    return make_obj('blob', contents)
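
# e.g. make_blob(b'hello') hashes the git-style framing b'blob 5\x00hello':
# the first element of the result is hashlib.sha1(b'blob 5\x00hello').hexdigest(),
# which is also what `git hash-object --stdin` prints for the same input bytes.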


def make_tree(store, objs):
    """ objs should be a mapping or iterable of (name, object)
    """
    if isinstance(objs, collections.abc.Mapping):
        objs = objs.items()

    return make_obj('tree', b''.join(
        b'%s %s\0%s' % (
            b'040000' if isinstance(obj, collections.abc.Mapping) else b'100644',
            name.encode('utf-8'),
            h.encode('utf-8'),
        )
        for name, h in sorted(objs)
        for obj in [store[h]]
        # TODO: check that obj is a blob or tree
    ))


def make_commit(tree, message, author, committer=None, parents=()):
    contents = ['tree %s' % tree]
    for parent in parents:
        contents.append('parent %s' % parent)
    contents.append('author %s' % author)
    contents.append('committer %s' % (committer or author))
    contents.append('')
    contents.append(message)

    return make_obj('commit', '\n'.join(contents).encode('utf-8'))


def walk_ancestors(store, commit, exclude_self=True):
    """
    :param store: mapping of hashes to commit objects (w/ a parents attribute)
    """
    q = [(commit, 0)]
    while q:
        node, distance = q.pop()
        q.extend((p, distance + 1) for p in store[node].parents)
        if not (distance == 0 and exclude_self):
            yield (node, distance)
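
# e.g. for a linear history a <- b <- c (each parent on the left),
# list(walk_ancestors(store, 'c')) is [('b', 1), ('a', 2)]; passing
# exclude_self=False prepends ('c', 0).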


def is_ancestor(store, candidate, of):
    # could have candidate == of after all
    return any(
        current == candidate
        for current, _ in walk_ancestors(store, of, exclude_self=False)
    )


def merge_base(store, c1, c2):
    """ Find the LCA between two commits. Brute-force: get all ancestors of A,
    all ancestors of B, intersect, and pick the one with the lowest total
    distance
    """
    a1 = walk_ancestors(store, c1, exclude_self=False)
    # map of sha:distance
    a2 = dict(walk_ancestors(store, c2, exclude_self=False))
    # find the lowest ancestor by distance(ancestor, c1) + distance(ancestor, c2)
    _distance, lca = min(
        (d1 + d2, a)
        for a, d1 in a1
        for d2 in [a2.get(a)]
        if d2 is not None
    )
    return lca
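
# e.g. if commits a and b both have the single parent m, the ancestor maps are
# {a: 0, m: 1} and {b: 0, m: 1}; m is the only shared sha, so merge_base
# returns it (total distance 2).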


def merge_objects(store, b, o1, o2):
    """ Merges trees and blobs.

    Store = Mapping<Hash, (Blob | Tree)>
    Blob = bytes
    Tree = Mapping<Name, Hash>
    """
    # FIXME: handle None input (similarly named entry added in two
    # branches, or deleted in one branch & changed in the other)
    if not (b and o1 or o2):
        raise ValueError("Don't know how to merge additions/removals yet")
    b, o1, o2 = store[b], store[o1], store[o2]
    if any(isinstance(o, bytes) for o in [b, o1, o2]):
        raise TypeError("Don't know how to merge blobs")

    entries = sorted(set(b).union(o1, o2))

    t = {}
    for entry in entries:
        base = b.get(entry)
        e1 = o1.get(entry)
        e2 = o2.get(entry)
        if e1 == e2:
            merged = e1  # either no change or the same change on both sides
        elif base == e1:
            merged = e2  # e1 did not change, use e2
        elif base == e2:
            merged = e1  # e2 did not change, use e1
        else:
            merged = merge_objects(store, base, e1, e2)
        # None => entry removed
        if merged is not None:
            t[entry] = merged

    # FIXME: fix partial redundancy with make_tree
    tid, _ = make_tree(store, t)
    store[tid] = t
    return tid
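
# Worked example of the three-way rule above: base tree {a: X, b: Y},
# ours {a: X2, b: Y}, theirs {a: X, b: Y2} merge to {a: X2, b: Y2}: each
# side's change survives because the other side kept the base value.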


def read_object(store, tid):
    # recursively reads a tree of objects
    o = store[tid]
    if isinstance(o, bytes):
        return o
    return {
        k: read_object(store, v)
        for k, v in o.items()
    }
1087
runbot_merge/tests/test_basic.py
Normal file
File diff suppressed because it is too large
346
runbot_merge/tests/test_multirepo.py
Normal file
@ -0,0 +1,346 @@
""" The mergebot does not work on a dependency basis, rather all
repositories of a project are co-equal and get matched by branch (on
target and source branches).

When preparing a staging, we simply want to ensure branch-matched PRs
are staged concurrently in all repos.
"""
import json

import odoo

import pytest

from fake_github import git
@pytest.fixture
def project(env):
    env['res.partner'].create({
        'name': "Reviewer",
        'github_login': 'reviewer',
        'reviewer': True,
    })
    env['res.partner'].create({
        'name': "Self Reviewer",
        'github_login': 'self-reviewer',
        'self_reviewer': True,
    })
    return env['runbot_merge.project'].create({
        'name': 'odoo',
        'github_token': 'okokok',
        'github_prefix': 'hansen',
        'branch_ids': [(0, 0, {'name': 'master'})],
        'required_statuses': 'legal/cla,ci/runbot',
    })


@pytest.fixture
def repo_a(gh, project):
    project.write({'repo_ids': [(0, 0, {'name': "odoo/a"})]})
    return gh.repo('odoo/a', hooks=[
        ((odoo.http.root, '/runbot_merge/hooks'), ['pull_request', 'issue_comment', 'status'])
    ])


@pytest.fixture
def repo_b(gh, project):
    project.write({'repo_ids': [(0, 0, {'name': "odoo/b"})]})
    return gh.repo('odoo/b', hooks=[
        ((odoo.http.root, '/runbot_merge/hooks'), ['pull_request', 'issue_comment', 'status'])
    ])


@pytest.fixture
def repo_c(gh, project):
    project.write({'repo_ids': [(0, 0, {'name': "odoo/c"})]})
    return gh.repo('odoo/c', hooks=[
        ((odoo.http.root, '/runbot_merge/hooks'), ['pull_request', 'issue_comment', 'status'])
    ])


def make_pr(repo, prefix, trees, target='master', user='user', label=None):
    base = repo.commit(f'heads/{target}')
    tree = dict(repo.objects[base.tree])
    c = base.id
    for i, t in enumerate(trees):
        tree.update(t)
        c = repo.make_commit(c, f'commit_{prefix}_{i:02}', None,
                             tree=dict(tree))
    pr = repo.make_pr(f'title {prefix}', f'body {prefix}', target=target,
                      ctid=c, user=user, label=label and f'{user}:{label}')
    repo.post_status(c, 'success', 'ci/runbot')
    repo.post_status(c, 'success', 'legal/cla')
    pr.post_comment('hansen r+', 'reviewer')
    return pr


def to_pr(env, pr):
    return env['runbot_merge.pull_requests'].search([
        ('repository.name', '=', pr.repo.name),
        ('number', '=', pr.number),
    ])
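
# nb: the fake PRs live in the Github mock; to_pr maps one to the
# runbot_merge.pull_requests record the webhooks created for it, keyed on
# (repository name, PR number).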


def test_stage_one(env, project, repo_a, repo_b):
    """ First PR is non-matched from A => should not select PR from B
    """
    project.batch_limit = 1

    repo_a.make_ref(
        'heads/master',
        repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    )
    pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    repo_b.make_ref(
        'heads/master',
        repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    )
    pr_b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-other-thing')

    env['runbot_merge.project']._check_progress()

    assert to_pr(env, pr_a).state == 'ready'
    assert to_pr(env, pr_a).staging_id
    assert to_pr(env, pr_b).state == 'ready'
    assert not to_pr(env, pr_b).staging_id
def test_stage_match(env, project, repo_a, repo_b):
    """ First PR is matched from A => should select the matched PR from B
    """
    project.batch_limit = 1
    repo_a.make_ref(
        'heads/master',
        repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    )
    pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    repo_b.make_ref(
        'heads/master',
        repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    )
    pr_b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    env['runbot_merge.project']._check_progress()

    pr_a = to_pr(env, pr_a)
    pr_b = to_pr(env, pr_b)
    assert pr_a.state == 'ready'
    assert pr_a.staging_id
    assert pr_b.state == 'ready'
    assert pr_b.staging_id
    # should be part of the same staging
    assert pr_a.staging_id == pr_b.staging_id, \
        "branch-matched PRs should be part of the same staging"

def test_sub_match(env, project, repo_a, repo_b, repo_c):
    """ Branch-matching should work on a subset of repositories
    """
    project.batch_limit = 1
    repo_a.make_ref(
        'heads/master',
        repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    )
    # no pr here

    repo_b.make_ref(
        'heads/master',
        repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    )
    pr_b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    repo_c.make_ref(
        'heads/master',
        repo_c.make_commit(None, 'initial', None, tree={'a': 'c_0'})
    )
    pr_c = make_pr(repo_c, 'C', [{'a': 'c_1'}], label='do-a-thing')

    env['runbot_merge.project']._check_progress()

    pr_b = to_pr(env, pr_b)
    pr_c = to_pr(env, pr_c)
    assert pr_b.state == 'ready'
    assert pr_b.staging_id
    assert pr_c.state == 'ready'
    assert pr_c.staging_id
    # should be part of the same staging
    assert pr_c.staging_id == pr_b.staging_id, \
        "branch-matched PRs should be part of the same staging"
    st = pr_b.staging_id
    assert json.loads(st.heads) == {
        'odoo/a': repo_a.commit('heads/master').id,
        'odoo/b': repo_b.commit('heads/staging.master').id,
        'odoo/c': repo_c.commit('heads/staging.master').id,
    }

def test_merge_fail(env, project, repo_a, repo_b):
    """ In a matched-branch scenario, if merging in one of the linked repos
    fails it should revert the corresponding merges
    """
    project.batch_limit = 1

    root_a = repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    repo_a.make_ref('heads/master', root_a)
    root_b = repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    repo_b.make_ref('heads/master', root_b)

    # first set of matched PRs
    pr1a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')
    pr1b = make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    # add a conflicting commit to B so the staging fails
    repo_b.make_commit('heads/master', 'cn', None, tree={'a': 'cn'})

    # and a second set of PRs which should get staged while the first set
    # fails
    pr2a = make_pr(repo_a, 'A2', [{'b': 'ok'}], label='do-b-thing')
    pr2b = make_pr(repo_b, 'B2', [{'b': 'ok'}], label='do-b-thing')

    env['runbot_merge.project']._check_progress()

    s2 = to_pr(env, pr2a) | to_pr(env, pr2b)
    st = env['runbot_merge.stagings'].search([])
    assert st
    assert st.batch_ids.prs == s2

    failed = to_pr(env, pr1b)
    assert failed.state == 'error'
    assert pr1b.comments == [
        ('reviewer', 'hansen r+'),
        ('<insert current user here>', 'Unable to stage PR (merge conflict)'),
    ]
    other = to_pr(env, pr1a)
    assert not other.staging_id
    assert len(list(repo_a.log('heads/staging.master'))) == 2, \
        "root commit + squash-merged PR commit"

def test_ff_fail(env, project, repo_a, repo_b):
    """ In a matched-branch scenario, if fast-forwarding one of the repos
    fails, the entire thing should be rolled back
    """
    project.batch_limit = 1
    root_a = repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    repo_a.make_ref('heads/master', root_a)
    make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    root_b = repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    repo_b.make_ref('heads/master', root_b)
    make_pr(repo_b, 'B', [{'a': 'b_1'}], label='do-a-thing')

    env['runbot_merge.project']._check_progress()

    # add a second commit blocking the fast-forward
    cn = repo_b.make_commit('heads/master', 'second', None, tree={'a': 'b_0', 'b': 'other'})

    repo_a.post_status('heads/staging.master', 'success', 'ci/runbot')
    repo_a.post_status('heads/staging.master', 'success', 'legal/cla')
    repo_b.post_status('heads/staging.master', 'success', 'ci/runbot')
    repo_b.post_status('heads/staging.master', 'success', 'legal/cla')

    env['runbot_merge.project']._check_progress()
    assert repo_b.commit('heads/master').id == cn, \
        "B should still be at the conflicting commit"
    assert repo_a.commit('heads/master').id == root_a, \
        "the fast-forward of A should have been rolled back when B failed"

    # should be re-staged
    st = env['runbot_merge.stagings'].search([])
    assert len(st) == 1
    assert len(st.batch_ids.prs) == 2

def test_one_failed(env, project, repo_a, repo_b):
    """ If the companion of a ready branch-matched PR is not ready,
    they should not get staged
    """
    project.batch_limit = 1
    c_a = repo_a.make_commit(None, 'initial', None, tree={'a': 'a_0'})
    repo_a.make_ref('heads/master', c_a)
    # pr_a is born ready
    pr_a = make_pr(repo_a, 'A', [{'a': 'a_1'}], label='do-a-thing')

    c_b = repo_b.make_commit(None, 'initial', None, tree={'a': 'b_0'})
    repo_b.make_ref('heads/master', c_b)
    c_pr = repo_b.make_commit(c_b, 'pr', None, tree={'a': 'b_1'})
    pr_b = repo_b.make_pr(
        'title', 'body', target='master', ctid=c_pr,
        user='user', label='user:do-a-thing',
    )
    repo_b.post_status(c_pr, 'success', 'ci/runbot')
    repo_b.post_status(c_pr, 'success', 'legal/cla')

    pr_a = to_pr(env, pr_a)
    pr_b = to_pr(env, pr_b)
    assert pr_a.state == 'ready'
    assert pr_b.state == 'validated'
    assert pr_a.label == pr_b.label == 'user:do-a-thing'

    env['runbot_merge.project']._check_progress()

    assert not pr_b.staging_id
    assert not pr_a.staging_id, \
        "pr_a should not have been staged as its companion is not ready"

def test_batching(env, project, repo_a, repo_b):
    """ If multiple batches (label groups) are ready they should get batched
    together (within the limits of the project's batch limit)
    """
    project.batch_limit = 3
    repo_a.make_ref('heads/master', repo_a.make_commit(None, 'initial', None, tree={'a': 'a0'}))
    repo_b.make_ref('heads/master', repo_b.make_commit(None, 'initial', None, tree={'b': 'b0'}))

    prs = [(
        a and to_pr(env, make_pr(repo_a, f'A{i}', [{f'a{i}': f'a{i}'}], label=f'batch{i}')),
        b and to_pr(env, make_pr(repo_b, f'B{i}', [{f'b{i}': f'b{i}'}], label=f'batch{i}'))
    )
        for i, (a, b) in enumerate([(1, 1), (0, 1), (1, 1), (1, 1), (1, 0)])
    ]

    env['runbot_merge.project']._check_progress()

    st = env['runbot_merge.stagings'].search([])
    assert st
    assert len(st.batch_ids) == 3, \
        "should have batched the first <batch_limit> batches"
    assert st.mapped('batch_ids.prs') == (
        prs[0][0] | prs[0][1]
        | prs[1][1]
        | prs[2][0] | prs[2][1]
    )

    assert not prs[3][0].staging_id
    assert not prs[3][1].staging_id
    assert not prs[4][0].staging_id

def test_batching_split(env, repo_a, repo_b):
    """ If a staging fails, it should get split properly across repos
    """
    repo_a.make_ref('heads/master', repo_a.make_commit(None, 'initial', None, tree={'a': 'a0'}))
    repo_b.make_ref('heads/master', repo_b.make_commit(None, 'initial', None, tree={'b': 'b0'}))

    prs = [(
        a and to_pr(env, make_pr(repo_a, f'A{i}', [{f'a{i}': f'a{i}'}], label=f'batch{i}')),
        b and to_pr(env, make_pr(repo_b, f'B{i}', [{f'b{i}': f'b{i}'}], label=f'batch{i}'))
    )
        for i, (a, b) in enumerate([(1, 1), (0, 1), (1, 1), (1, 1), (1, 0)])
    ]

    env['runbot_merge.project']._check_progress()

    st0 = env['runbot_merge.stagings'].search([])
    assert len(st0.batch_ids) == 5
    assert len(st0.mapped('batch_ids.prs')) == 8

    # mark b.staging as failed -> should create two new stagings with (0, 1)
    # and (2, 3, 4) and stage the first one
    repo_b.post_status('heads/staging.master', 'success', 'legal/cla')
    repo_b.post_status('heads/staging.master', 'failure', 'ci/runbot')

    env['runbot_merge.project']._check_progress()

    assert not st0.exists()
    sts = env['runbot_merge.stagings'].search([])
    assert len(sts) == 2
    st1, st2 = sts
    # a bit oddly st1 is probably the (2, 3, 4) one: the split staging for
    # (0, 1) has been "exploded" and a new staging was created for it
    assert not st1.heads
    assert len(st1.batch_ids) == 3
    assert st1.mapped('batch_ids.prs') == \
        prs[2][0] | prs[2][1] | prs[3][0] | prs[3][1] | prs[4][0]

    assert st2.heads
    assert len(st2.batch_ids) == 2
    assert st2.mapped('batch_ids.prs') == \
        prs[0][0] | prs[0][1] | prs[1][1]
196
runbot_merge/views/mergebot.xml
Normal file
@ -0,0 +1,196 @@
<odoo>
    <record id="runbot_merge_form_project" model="ir.ui.view">
        <field name="name">Project Form</field>
        <field name="model">runbot_merge.project</field>
        <field name="arch" type="xml">
            <form>
                <header>
                    <button type="object" name="sync_prs" string="Sync PRs"
                            attrs="{'invisible': [['github_token', '=', False]]}"/>
                </header>
                <sheet>
                    <!--
                    <div class="oe_button_box" name="button_box">
                        <button class="oe_stat_button" name="action_see_attachments" type="object" icon="fa-book" attrs="{'invisible': ['|', ('state', '=', 'confirmed'), ('type', '=', 'routing')]}">
                            <field string="Attachments" name="mrp_document_count" widget="statinfo"/>
                        </button>
                    </div>
                    -->
                    <div class="oe_title">
                        <h1><field name="name" placeholder="Name"/></h1>
                    </div>
                    <group>
                        <group>
                            <field name="github_prefix" string="bot name"/>
                        </group>
                        <group>
                            <field name="required_statuses"/>
                        </group>
                    </group>
                    <group>
                        <group>
                            <field name="github_token"/>
                        </group>
                        <group>
                            <field name="ci_timeout"/>
                            <field name="batch_limit"/>
                        </group>
                    </group>

                    <field name="repo_ids">
                        <tree editable="bottom">
                            <field name="name"/>
                        </tree>
                    </field>
                    <field name="branch_ids">
                        <tree editable="bottom">
                            <field name="name"/>
                        </tree>
                    </field>
                </sheet>
            </form>
        </field>
    </record>

    <record id="runbot_merge_action_projects" model="ir.actions.act_window">
        <field name="name">Projects</field>
        <field name="res_model">runbot_merge.project</field>
        <field name="view_mode">tree,form</field>
    </record>

    <record id="runbot_merge_action_prs" model="ir.actions.act_window">
        <field name="name">Pull Requests</field>
        <field name="res_model">runbot_merge.pull_requests</field>
        <field name="view_mode">tree,form</field>
        <field name="context">{'search_default_open': True}</field>
    </record>
    <record id="runbot_merge_search_prs" model="ir.ui.view">
        <field name="name">PR search</field>
        <field name="model">runbot_merge.pull_requests</field>
        <field name="arch" type="xml">
            <search>
                <filter
                    name="open" string="Open"
                    domain="[('state', 'not in', ['merged', 'closed'])]"
                />
                <field name="author"/>
                <field name="label"/>
                <field name="target"/>
                <field name="repository"/>
                <field name="state"/>

                <group>
                    <filter string="Target" name="target_" context="{'group_by':'target'}"/>
                    <filter string="Repository" name="repo_" context="{'group_by':'repository'}"/>
                    <filter string="State" name="state_" context="{'group_by':'state'}"/>
                    <filter string="Priority" name="priority_" context="{'group_by':'priority'}"/>
                </group>
            </search>
        </field>
    </record>
    <record id="runbot_merge_tree_prs" model="ir.ui.view">
        <field name="name">PR tree</field>
        <field name="model">runbot_merge.pull_requests</field>
        <field name="arch" type="xml">
            <tree>
                <field name="repository"/>
                <field name="number"/>
                <field name="target"/>
                <field name="state"/>
            </tree>
        </field>
    </record>
    <record id="runbot_merge_form_prs" model="ir.ui.view">
        <field name="name">PR form</field>
        <field name="model">runbot_merge.pull_requests</field>
        <field name="arch" type="xml">
            <form>
                <header/>
                <sheet>
                    <div class="oe_title">
                        <h1>
                            <field name="repository"/>#<field name="number"/>
                        </h1>
                    </div>
                    <group>
                        <group>
                            <field name="target"/>
                            <field name="state"/>
                            <field name="author"/>
                            <field name="priority"/>
                        </group>
                        <group>
                            <field name="label"/>
                            <field name="squash"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4">
                            <field name="head"/>
                            <field name="statuses"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4" string="Message">
                            <field name="message" nolabel="1"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4" string="Delegates">
                            <field name="delegates" nolabel="1">
                                <tree>
                                    <field name="name"/>
                                    <field name="github_login"/>
                                </tree>
                            </field>
                        </group>
                    </group>
                </sheet>
            </form>
        </field>
    </record>

    <record id="runbot_merge_action_stagings" model="ir.actions.act_window">
        <field name="name">Stagings</field>
        <field name="res_model">runbot_merge.stagings</field>
        <field name="view_mode">tree,form</field>
        <field name="context">{'default_active': True}</field>
    </record>
    <record id="runbot_merge_search_stagings" model="ir.ui.view">
        <field name="name">Stagings Search</field>
        <field name="model">runbot_merge.stagings</field>
        <field name="arch" type="xml">
            <search>
                <filter string="Active" name="active"
                        domain="[('heads', '!=', False)]"/>
                <field name="state"/>
                <field name="target"/>

                <group>
                    <filter string="Target" name="target_" context="{'group_by': 'target'}"/>
                </group>
            </search>
        </field>
    </record>
    <record id="runbot_merge_tree_stagings" model="ir.ui.view">
        <field name="name">Stagings Tree</field>
        <field name="model">runbot_merge.stagings</field>
        <field name="arch" type="xml">
            <tree>
                <field name="target"/>
                <field name="state"/>
            </tree>
        </field>
    </record>

    <menuitem name="Mergebot" id="runbot_merge_menu"/>
    <menuitem name="Projects" id="runbot_merge_menu_project"
              parent="runbot_merge_menu"
              action="runbot_merge_action_projects"/>
    <menuitem name="Pull Requests" id="runbot_merge_menu_prs"
              parent="runbot_merge_menu"
              action="runbot_merge_action_prs"/>
    <menuitem name="Stagings" id="runbot_merge_menu_stagings"
              parent="runbot_merge_menu"
              action="runbot_merge_action_stagings"/>
</odoo>
27
runbot_merge/views/res_partner.xml
Normal file
@ -0,0 +1,27 @@
<odoo>
    <record id="runbot_merge_form_partner" model="ir.ui.view">
        <field name="name">Add mergebot/GH info to partners form</field>
        <field name="model">res.partner</field>
        <field name="inherit_id" ref="base.view_partner_form"/>
        <field name="arch" type="xml">
            <xpath expr="//notebook" position="inside">
                <page string="Mergebot">
                    <group>
                        <group>
                            <field name="github_login"/>
                        </group>
                        <group>
                            <field name="reviewer"/>
                            <field name="self_reviewer"/>
                        </group>
                    </group>
                    <group>
                        <group colspan="4" string="Delegate On">
                            <field name="delegate_reviewer" nolabel="1"/>
                        </group>
                    </group>
                </page>
            </xpath>
        </field>
    </record>
</odoo>