Merge remote-tracking branch 'upstream/master' into python-registry-v2

Jake Moshenko, 2015-09-04 16:32:01 -04:00
commit 210ed7cf02
148 changed files with 1829 additions and 445 deletions


@@ -1,3 +1,22 @@
+### v1.11.2
+
+- Fixed security bug with LDAP login (#376)
+
+### 1.11.1
+
+- Loosened the check for mounted volumes bug (#353)
+- Strengthened HTTPS configuration (#329)
+- Disabled password change for non-DB auth (#347)
+- Added support for custom favicon (#343)
+- Fixed tarfile support for non-unicode pax fields (#328)
+- Fixed permissions on tag history API requiring READ instead of WRITE tokens (#316)
+- Added public access to time machine (#334)
+- Added missing JSON schema for 'refs' and 'branch_name' (#330)
+- Always create a new connection to Swift (#336)
+- Minor UI Fixes (#356, #341, #338, #337)
+- Minor trigger fixes (#357, #349)
+- Refactored and fixed internal code (#331)
+
 ### 1.11.0
 - Changed user pages to display public repositories (#321)


@@ -1,6 +1,6 @@
 # vim:ft=dockerfile
-FROM phusion/baseimage:0.9.16
+FROM phusion/baseimage:0.9.17
 ENV DEBIAN_FRONTEND noninteractive
 ENV HOME /root
@@ -42,6 +42,7 @@ ADD conf/init/copy_config_files.sh /etc/my_init.d/
 ADD conf/init/doupdatelimits.sh /etc/my_init.d/
 ADD conf/init/copy_syslog_config.sh /etc/my_init.d/
 ADD conf/init/runmigration.sh /etc/my_init.d/
+ADD conf/init/syslog-ng.conf /etc/syslog-ng/
 ADD conf/init/service/ /etc/service/


@@ -1,5 +1,7 @@
 # Quay.io - container image registry
 
+`master` branch build status: ![Docker Repository on Quay.io](https://quay.io/repository/quay/quay/status?token=7bffbc13-8bb0-4fb4-8a70-684a0cf485d3 "Docker Repository on Quay.io")
+
 Quay.io is a container image registry with management APIs, a Docker registry API, and a container build system.
 
 The application is implemented as a set of API endpoints written in Python and an Angular.js frontend.
@@ -7,6 +9,42 @@ The application is implemented as a set of API endpoints written in Python and a
 If you are doing local development on your workstation against the code base, follow these instructions.
 
+### Docker
+
+Quay and its parts can run inside of Docker containers.
+This method requires no installation of any Python packages on your host machine.
+The `local-docker.sh` script is provided to prepare and run parts of Quay.
+
+First, start Redis:
+
+```
+docker run -d -p 6379:6379 redis
+```
+
+Then clone the configuration repo:
+
+```
+git clone git@github.com:coreos-inc/quay-config.git ../quay-config
+ln -s ../../quay-config/local conf/stack
+```
+
+To build and run a Docker container, pass one argument to `local-docker.sh`:
+
+- `dev`: run Quay on port 5000
+- `buildman`: run the build manager
+- `notifications`: run the notification worker
+- `test`: run the unit tests
+
+For example:
+
+```
+./local-docker.sh dev
+```
+
+will start Quay in a Docker container, and Quay will then be running on http://127.0.0.1:5000.
+The username is `devtable` and the password is `password`.
+
 ### OS X
 
 ```
@@ -15,8 +53,6 @@ cd quay
 ./contrib/osx/local-setup.sh
 ```
 
-## Running Development Environment
-
 Now run the server; it will use sqlite as the SQL server.
 ```

app.py

@@ -2,6 +2,7 @@ import logging
 import os
 import json
 
+from functools import partial
 from flask import Flask, request, Request, _request_ctx_stack
 from flask.ext.principal import Principal
 from flask.ext.login import LoginManager, UserMixin
@@ -20,17 +21,18 @@ from data.billing import Billing
 from data.buildlogs import BuildLogs
 from data.archivedlogs import LogArchive
 from data.userevent import UserEventsBuilderModule
-from data.queue import WorkQueue
+from data.queue import WorkQueue, MetricQueueReporter
+from util import get_app_url
 from util.saas.analytics import Analytics
 from util.saas.exceptionlog import Sentry
 from util.names import urn_generator
 from util.config.oauth import GoogleOAuthConfig, GithubOAuthConfig, GitLabOAuthConfig
 from util.security.signing import Signer
-from util.saas.queuemetrics import QueueMetrics
+from util.saas.cloudwatch import start_cloudwatch_sender
+from util.saas.metricqueue import MetricQueue
 from util.config.provider import FileConfigProvider, TestConfigProvider
 from util.config.configutil import generate_secret_key
 from util.config.superusermanager import SuperUserManager
-from buildman.jobutil.buildreporter import BuildMetrics
 
 OVERRIDE_CONFIG_DIRECTORY = 'conf/stack/'
 OVERRIDE_CONFIG_YAML_FILENAME = 'conf/stack/config.yaml'
@@ -129,8 +131,8 @@ authentication = UserAuthentication(app, OVERRIDE_CONFIG_DIRECTORY)
 userevents = UserEventsBuilderModule(app)
 superusers = SuperUserManager(app)
 signer = Signer(app, OVERRIDE_CONFIG_DIRECTORY)
-queue_metrics = QueueMetrics(app)
-build_metrics = BuildMetrics(app)
+metric_queue = MetricQueue()
+start_cloudwatch_sender(metric_queue, app)
 
 tf = app.config['DB_TRANSACTION_FACTORY']
@@ -141,8 +143,9 @@ google_login = GoogleOAuthConfig(app.config, 'GOOGLE_LOGIN_CONFIG')
 oauth_apps = [github_login, github_trigger, gitlab_trigger, google_login]
 
 image_diff_queue = WorkQueue(app.config['DIFFS_QUEUE_NAME'], tf)
+image_replication_queue = WorkQueue(app.config['REPLICATION_QUEUE_NAME'], tf)
 dockerfile_build_queue = WorkQueue(app.config['DOCKERFILE_BUILD_QUEUE_NAME'], tf,
-                                   reporter=queue_metrics.report)
+                                   reporter=MetricQueueReporter(metric_queue))
 notification_queue = WorkQueue(app.config['NOTIFICATION_QUEUE_NAME'], tf)
 
 database.configure(app.config)
@@ -173,5 +176,4 @@ class LoginWrappedDBUser(UserMixin):
   def get_id(self):
     return unicode(self._uuid)
 
-def get_app_url():
-  return '%s://%s' % (app.config['PREFERRED_URL_SCHEME'], app.config['SERVER_HOSTNAME'])
+get_app_url = partial(get_app_url, app.config)
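
The `util.saas.metricqueue` module referenced above is not included in this diff. Inferred purely from its call sites here (`MetricQueue()`, `metric_queue.put(name, value, unit=...)`, `start_cloudwatch_sender(metric_queue, app)`), a minimal sketch might look like:

```python
# Hypothetical sketch of util/saas/metricqueue.py, inferred from the call
# sites in this commit; the real module may differ.
from Queue import Queue  # Python 2, matching the rest of this codebase


class MetricQueue(object):
  """ In-process buffer of (metric_name, value, kwargs) tuples; a background
      sender (e.g. the CloudWatch sender started above) drains it. """
  def __init__(self):
    self._queue = Queue()

  def put(self, metric_name, value, **kwargs):
    self._queue.put((metric_name, value, kwargs))

  def get(self):
    return self._queue.get()
```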


@@ -1,5 +1,4 @@
 import logging
-import jwt
 import re
 
 from datetime import datetime, timedelta
@@ -11,10 +10,11 @@ from cryptography.hazmat.backends import default_backend
 from cachetools import lru_cache
 
 from app import app
-from auth_context import set_grant_user_context
-from permissions import repository_read_grant, repository_write_grant
+from .auth_context import set_grant_user_context
+from .permissions import repository_read_grant, repository_write_grant
 from util.names import parse_namespace_repository
 from util.http import abort
+from util.security import strictjwt
 
 logger = logging.getLogger(__name__)
@@ -44,17 +44,14 @@ def identity_from_bearer_token(bearer_token, max_signed_s, public_key):
   # Load the JWT returned.
   try:
-    payload = jwt.decode(encoded, public_key, algorithms=['RS256'], audience='quay',
-                         issuer='token-issuer')
-  except jwt.InvalidTokenError:
+    payload = strictjwt.decode(encoded, public_key, algorithms=['RS256'], audience='quay',
+                               issuer='token-issuer')
+  except strictjwt.InvalidTokenError:
     raise InvalidJWTException('Invalid token')
 
   if not 'sub' in payload:
     raise InvalidJWTException('Missing sub field in JWT')
 
-  if not 'exp' in payload:
-    raise InvalidJWTException('Missing exp field in JWT')
-
   # Verify that the expiration is no more than 300 seconds in the future.
   if datetime.fromtimestamp(payload['exp']) > datetime.utcnow() + timedelta(seconds=max_signed_s):
     raise InvalidJWTException('Token was signed for more than %s seconds' % max_signed_s)
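
The manual `exp` check could be dropped here because `util.security.strictjwt` (not shown in this diff) presumably makes the claim mandatory. A minimal sketch of such a wrapper around PyJWT, under that assumption:

```python
# Hypothetical sketch of util/security/strictjwt.py; not part of this diff.
# It delegates to PyJWT but requires the 'exp' claim, letting callers drop
# their manual 'exp' presence checks.
import jwt
from jwt import InvalidTokenError  # re-exported so callers catch one type


def decode(encoded, key, algorithms=None, audience=None, issuer=None):
  if not algorithms:
    raise InvalidTokenError('Missing allowed algorithms')

  return jwt.decode(encoded, key, algorithms=algorithms, audience=audience,
                    issuer=issuer, options={'require_exp': True})
```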


@@ -116,14 +116,10 @@ class BuildComponent(BaseComponent):
     #   push_token: The token to use to push the built image.
     #   tag_names: The name(s) of the tag(s) for the newly built image.
     #   base_image: The image name and credentials to use to conduct the base image pull.
-    #     repository: The repository to pull (DEPRECATED 0.2)
-    #     tag: The tag to pull (DEPRECATED in 0.2)
     #     username: The username for pulling the base image (if any).
     #     password: The password for pulling the base image (if any).
     build_arguments = {
-      'build_package': self.user_files.get_file_url(build_job.repo_build.resource_key,
-                                                    requires_cors=False)
-                       if build_job.repo_build.resource_key is not None else "",
+      'build_package': build_job.get_build_package_url(self.user_files),
       'sub_directory': build_config.get('build_subdir', ''),
       'repository': repository_name,
       'registry': self.registry_hostname,


@@ -62,6 +62,17 @@ class BuildJob(object):
   def repo_build(self):
     return self._load_repo_build()
 
+  def get_build_package_url(self, user_files):
+    """ Returns the URL of the build package for this build, if any, or the empty string if none. """
+    archive_url = self.build_config.get('archive_url', None)
+    if archive_url:
+      return archive_url
+
+    if not self.repo_build.resource_key:
+      return ''
+
+    return user_files.get_file_url(self.repo_build.resource_key, requires_cors=False)
+
   @property
   def pull_credentials(self):
     """ Returns the pull credentials for this job, or None if none. """


@@ -1,70 +0,0 @@
from buildman.enums import BuildJobResult
from util.saas.cloudwatch import get_queue


class BuildReporter(object):
  """
  Base class for reporting build statuses to a metrics service.
  """
  def report_completion_status(self, status):
    """
    Method to invoke the recording of build's completion status to a metric service.
    """
    raise NotImplementedError


class NullReporter(BuildReporter):
  """
  The /dev/null of BuildReporters.
  """
  def report_completion_status(self, *args):
    pass


class CloudWatchBuildReporter(BuildReporter):
  """
  Implements a BuildReporter for Amazon's CloudWatch.
  """
  def __init__(self, queue, namespace_name, completed_name, failed_name, incompleted_name):
    self._queue = queue
    self._namespace_name = namespace_name
    self._completed_name = completed_name
    self._failed_name = failed_name
    self._incompleted_name = incompleted_name

  def _send_to_queue(self, *args, **kwargs):
    self._queue.put((args, kwargs))

  def report_completion_status(self, status):
    if status == BuildJobResult.COMPLETE:
      status_name = self._completed_name
    elif status == BuildJobResult.ERROR:
      status_name = self._failed_name
    elif status == BuildJobResult.INCOMPLETE:
      status_name = self._incompleted_name
    else:
      return

    self._send_to_queue(self._namespace_name, status_name, 1, unit='Count')


class BuildMetrics(object):
  """
  BuildMetrics initializes a reporter for recording the status of build completions.
  """
  def __init__(self, app=None):
    self._app = app
    self._reporter = NullReporter()
    if app is not None:
      reporter_type = app.config.get('BUILD_METRICS_TYPE', 'Null')
      if reporter_type == 'CloudWatch':
        namespace = app.config['BUILD_METRICS_NAMESPACE']
        completed_name = app.config['BUILD_METRICS_COMPLETED_NAME']
        failed_name = app.config['BUILD_METRICS_FAILED_NAME']
        incompleted_name = app.config['BUILD_METRICS_INCOMPLETED_NAME']
        request_queue = get_queue(app)
        self._reporter = CloudWatchBuildReporter(request_queue, namespace, completed_name,
                                                 failed_name, incompleted_name)

  def __getattr__(self, name):
    return getattr(self._reporter, name, None)


@@ -16,7 +16,7 @@ from buildman.enums import BuildJobResult, BuildServerStatus
 from buildman.jobutil.buildstatus import StatusHandler
 from buildman.jobutil.buildjob import BuildJob, BuildJobLoadException
 from data import database
-from app import app, build_metrics
+from app import app, metric_queue
 
 logger = logging.getLogger(__name__)
@@ -151,7 +151,7 @@ class BuilderServer(object):
     if self._current_status == BuildServerStatus.SHUTDOWN and not self._job_count:
       self._shutdown_event.set()
 
-    build_metrics.report_completion_status(job_status)
+    report_completion_status(job_status)
 
   @trollius.coroutine
   def _work_checker(self):
@@ -225,3 +225,15 @@ class BuilderServer(object):
     # Initialize the work queue checker.
     yield From(self._work_checker())
 
+
+def report_completion_status(status):
+  if status == BuildJobResult.COMPLETE:
+    status_name = 'CompleteBuilds'
+  elif status == BuildJobResult.ERROR:
+    status_name = 'FailedBuilds'
+  elif status == BuildJobResult.INCOMPLETE:
+    status_name = 'IncompletedBuilds'
+  else:
+    return
+
+  metric_queue.put(status_name, 1, unit='Count')


@@ -4,6 +4,9 @@ ssh_authorized_keys:
 - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCC0m+hVmyR3vn/xoxJe9+atRWBxSK+YXgyufNVDMcb7H00Jfnc341QH3kDVYZamUbhVh/nyc2RP7YbnZR5zORFtgOaNSdkMYrPozzBvxjnvSUokkCCWbLqXDHvIKiR12r+UTSijPJE/Yk702Mb2ejAFuae1C3Ec+qKAoOCagDjpQ3THyb5oaKE7VPHdwCWjWIQLRhC+plu77ObhoXIFJLD13gCi01L/rp4mYVCxIc2lX5A8rkK+bZHnIZwWUQ4t8SIjWxIaUo0FE7oZ83nKuNkYj5ngmLHQLY23Nx2WhE9H6NBthUpik9SmqQPtVYbhIG+bISPoH9Xs8CLrFb0VRjz Joey's Mac
 - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCo6FhAP7mFFOAzM91gtaKW7saahtaN4lur42FMMztz6aqUycIltCmvxo+3FmrXgCG30maMNU36Vm1+9QRtVQEd+eRuoIWP28t+8MT01Fh4zPuE2Wca3pOHSNo3X81FfWJLzmwEHiQKs9HPQqUhezR9PcVWVkbMyAzw85c0UycGmHGFNb0UiRd9HFY6XbgbxhZv/mvKLZ99xE3xkOzS1PNsdSNvjUKwZR7pSUPqNS5S/1NXyR4GhFTU24VPH/bTATOv2ATH+PSzsZ7Qyz9UHj38tKC+ALJHEDJ4HXGzobyOUP78cHGZOfCB5FYubq0zmOudAjKIAhwI8XTFvJ2DX1P3 jimmyzelinskie
 - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNvw8qo9m8np7yQ/Smv/oklM8bo8VyNRZriGYBDuolWDL/mZpYCQnZJXphQo7RFdNABYistikjJlBuuwUohLf2uSq0iKoFa2TgwI43wViWzvuzU4nA02/ITD5BZdmWAFNyIoqeB50Ol4qUgDwLAZ+7Kv7uCi6chcgr9gTi99jY3GHyZjrMiXMHGVGi+FExFuzhVC2drKjbz5q6oRfQeLtNfG4psl5GU3MQU6FkX4fgoCx0r9R48/b7l4+TT7pWblJQiRfeldixu6308vyoTUEHasdkU3/X0OTaGz/h5XqTKnGQc6stvvoED3w+L3QFp0H5Z8sZ9stSsitmCBrmbcKZ jakemoshenko
+- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAgEAo/JkbGO6R7g1ZxARi0xWVM7FOfN02snRAcIO6vT9M7xMUkWVLgD+hM/o91lk+UFiYdql0CATobpFWncRL36KaUqsbw9/1BlI40wg296XHXSSnxhxZ4L7ytf6G1tyN319HXlI2kh9vAf/fy++yDvkH8dI3k1oLoW+mZPET6Pff04/6AXXrRlS5mhmGv9irGwiDHtVKpj6lU8DN/UtOrv1tiQ0pgwEJq05fLGoQfgPNaBCnW2z4Ubpn2gyMcMBMpSwo4hCqJePd349e4bLmFcT+gXYg7Mnup1DoTDlowFFN56wpxQbdp96IxWzU+jYPaIAuRo+BJzCyOS8qBv0Z4RZrgop0qp2JYiVwmViO6TZhIDz6loQJXUOIleQmNgTbiZx8Bwv5GY2jMYoVwlBp7yy5bRjxfbFsJ0vU7TVzNAG7oEJy/74HmHmWzRQlSlQjesr8gRbm9zgR8wqc/L107UOWFg7Cgh8ZNjKuADbXqYuda1Y9m2upcfS26UPz5l5PW5uFRMHZSi8pb1XV6/0Z8H8vwsh37Ur6aLi/5jruRmKhdlsNrB1IiDicBsPW3yg7HHSIdPU4oBNPC77yDCT3l4CKr4el81RrZt7FbJPfY+Ig9Q5O+05f6I8+ZOlJGyZ/Qfyl2aVm1HnlJKuBqPxeic8tMng/9B5N7uZL6Y3k5jFU8c= quentin
+- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDI7LtxLItapmUbt3Gs+4Oxa1i22fkx1+aJDkAjiRWPSX3+cxOzuPfHX9uFzr+qj5hy4J7ErrPp8q9alu+il9lE26GQuUxOZiaUrXu4dRCXXdCqTHARWBxGUXjkxdMp2HIzFpBxmVqcRubrgM36LBzKapdDOqQdz7XnNm5Jmf0tH/N0+TgV60P0WVY1CxmTya+JHNFVgazhd+oIGEhTyW/eszMGcFUgZet7DQFytYIQXYSwwGpGdJ+0InKAJ2SzCt/yuUlSrhrVM8vSGeami1XYmgQiyth1zjteMd8uTrc9NREH7bZTNcMFBqVYE3BYQWGRrv8pMMgP9gxgLbxtVsUl barakmich-titania
+- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDUWB4aSjSRHCz5/6H9/EJhJVvRmPThvEzyHinaWPsuM9prBSLci9NF9WneVl30nczkvllA+w34kycdrS3fKpjTbODaEOLHBobWl3bccY0I6kr86q5z67NZffjCm/P/RL+dBaOiBWS8PV8oiDF1P6YdMo8Jk46n9fozmLCXHUuCw5BJ8PGjQqbsEzA3qFMeKZYdJHOizOfeIfKfCWYrrumVRY9v6SAUDoFOl4PZEM7QdGp9EoRYb9MNLgKLnZ4RjbcLoFwiqxY4KEM4zfjZPNOECiLCuJqvHM2QawwuO1klJ16HpJk+FzOTWQoZtT47LoE/XNSOcNtAOiD+OQ449ia1EArhm7+1DnLXvHXKIl1JtuqJz+wFCsbNSdB7P562OHAGRIxYK3DfE+0CZH1BeHYl7xiRBeCtZ+OZMIocqeJtq8taIS7Un5wnGcQWxFtQnr/f65EgbIi7G2dxPcjhr6K+GWYezsiReVVKnIClq2MHhABG9QOncKDIa47L3nyx3pm4ZfMbC2jmnK2pFgGGSfYDy4487JnAUOG1mzZ9vm4gDhatT+vZFSBOwv1e4CErBh/wYXooF5I0nGmE6y6zkKFqP+ZolJ6iXmXQ7Ea2oaGeyaprweBjkhHgghi4KbwKbClope4Zo9X9JJYBLQSW33sEEuy8MlSBpdZAbz9t/FvJaw== mjibson
 
 write_files:
 - path: /root/overrides.list


@@ -0,0 +1,2 @@
#!/bin/sh
exec logger -i -t storagereplication


@@ -0,0 +1,8 @@
#! /bin/bash

echo 'Starting storage replication worker'

cd /
venv/bin/python -m workers.storagereplication 2>&1

echo 'Repository storage replication exited'
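
The `workers.storagereplication` module this script launches is not part of this diff. Going only by the pieces that are (the `imagestoragereplication` queue, `model.storage.add_storage_placement`, and `model.user.get_region_locations`), a rough, speculative sketch of its shape:

```python
# Hypothetical sketch of workers/storagereplication.py; the real worker is
# not shown in this commit, so the class name, payload fields, and
# QueueWorker wiring are all guesses.
import logging

from app import image_replication_queue
from data import model
from workers.queueworker import QueueWorker

logger = logging.getLogger(__name__)


class StorageReplicationWorker(QueueWorker):
  def process_queue_item(self, job_details):
    # Assumed payload: the ImageStorage to replicate and the namespace user
    # whose preferred regions should receive a copy.
    namespace = model.user.get_namespace_user_by_user_id(job_details['namespace_user_id'])
    for location_name in model.user.get_region_locations(namespace):
      # ...stream the blob to location_name, then record the placement via
      # model.storage.add_storage_placement(storage, location_name).
      pass
```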

conf/init/syslog-ng.conf

@@ -0,0 +1,143 @@
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
# Syslog-ng configuration file, compatible with default Debian syslogd
# installation.
# First, set some global options.
options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
owner("root"); group("adm"); perm(0640); stats_freq(0);
bad_hostname("^gconfd$");
};
########################
# Sources
########################
# This is the default behavior of sysklogd package
# Logs may come from unix stream, but not from another machine.
#
source s_src {
unix-stream("/dev/log");
internal();
};
# If you wish to get logs from remote machine you should uncomment
# this and comment the above source line.
#
#source s_net { tcp(ip(127.0.0.1) port(1000)); };
########################
# Destinations
########################
# First some standard logfile
#
destination d_auth { file("/var/log/auth.log"); };
destination d_cron { file("/var/log/cron.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kern { file("/var/log/kern.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_syslog { file("/var/log/syslog"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };
# These files are the logs that come from the mail subsystem.
#
destination d_mailinfo { file("/var/log/mail.info"); };
destination d_mailwarn { file("/var/log/mail.warn"); };
destination d_mailerr { file("/var/log/mail.err"); };
# Logging for INN news system
#
destination d_newscrit { file("/var/log/news/news.crit"); };
destination d_newserr { file("/var/log/news/news.err"); };
destination d_newsnotice { file("/var/log/news/news.notice"); };
# Some `catch-all' logfiles.
#
destination d_debug { file("/var/log/debug"); };
destination d_error { file("/var/log/error"); };
destination d_messages { file("/var/log/messages"); };
# The named pipe /dev/xconsole is for the `xconsole' utility. To use it,
# you must invoke `xconsole' with the `-file' option:
#
# $ xconsole -file /dev/xconsole [...]
#
destination d_xconsole { pipe("/dev/xconsole"); };
# Send the messages to another host
#
#destination d_net { tcp("127.0.0.1" port(1000) log_fifo_size(1000)); };
# Debian only
destination d_ppp { file("/var/log/ppp.log"); };
########################
# Filters
########################
# Here come the filter options. With these rules, we can set which
# messages go where.
filter f_dbg { level(debug); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_err { level(err); };
filter f_crit { level(crit .. emerg); };
filter f_debug { level(debug) and not facility(auth, authpriv, news, mail); };
filter f_error { level(err .. emerg) ; };
filter f_auth { facility(auth, authpriv) and not filter(f_debug); };
filter f_cron { facility(cron) and not filter(f_debug); };
filter f_daemon { facility(daemon) and not filter(f_debug); };
filter f_kern { facility(kern) and not filter(f_debug); };
filter f_lpr { facility(lpr) and not filter(f_debug); };
filter f_local { facility(local0, local1, local3, local4, local5,
local6, local7) and not filter(f_debug); };
filter f_mail { facility(mail) and not filter(f_debug); };
filter f_news { facility(news) and not filter(f_debug); };
filter f_syslog3 { not facility(auth, authpriv, mail) and not filter(f_debug); };
filter f_uucp { facility(uucp) and not filter(f_debug); };
filter f_cnews { level(notice, err, crit) and facility(news); };
filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };
filter f_ppp { facility(local2) and not filter(f_debug); };
filter f_console { level(warn .. emerg); };
########################
# Log paths
########################
log { source(s_src); filter(f_auth); destination(d_auth); };
log { source(s_src); filter(f_cron); destination(d_cron); };
log { source(s_src); filter(f_daemon); destination(d_daemon); };
log { source(s_src); filter(f_kern); destination(d_kern); };
log { source(s_src); filter(f_lpr); destination(d_lpr); };
log { source(s_src); filter(f_syslog3); destination(d_syslog); };
log { source(s_src); filter(f_uucp); destination(d_uucp); };
log { source(s_src); filter(f_mail); destination(d_mail); };
#log { source(s_src); filter(f_mail); filter(f_info); destination(d_mailinfo); };
#log { source(s_src); filter(f_mail); filter(f_warn); destination(d_mailwarn); };
#log { source(s_src); filter(f_mail); filter(f_err); destination(d_mailerr); };
log { source(s_src); filter(f_news); filter(f_crit); destination(d_newscrit); };
log { source(s_src); filter(f_news); filter(f_err); destination(d_newserr); };
log { source(s_src); filter(f_news); filter(f_notice); destination(d_newsnotice); };
#log { source(s_src); filter(f_ppp); destination(d_ppp); };
log { source(s_src); filter(f_debug); destination(d_debug); };
log { source(s_src); filter(f_error); destination(d_error); };
# All messages sent to a remote site
#
#log { source(s_src); destination(d_net); };
###
# Include all config files in /etc/syslog-ng/conf.d/
###
@include "/etc/syslog-ng/conf.d/*.conf"


@@ -20,7 +20,7 @@ CLIENT_WHITELIST = ['SERVER_HOSTNAME', 'PREFERRED_URL_SCHEME', 'MIXPANEL_KEY',
                     'STRIPE_PUBLISHABLE_KEY', 'ENTERPRISE_LOGO_URL', 'SENTRY_PUBLIC_DSN',
                     'AUTHENTICATION_TYPE', 'REGISTRY_TITLE', 'REGISTRY_TITLE_SHORT',
                     'CONTACT_INFO', 'AVATAR_KIND', 'LOCAL_OAUTH_HANDLER', 'DOCUMENTATION_LOCATION',
-                    'DOCUMENTATION_METADATA']
+                    'DOCUMENTATION_METADATA', 'SETUP_COMPLETE']
 
 
 def frontend_visible_config(config_dict):
@@ -129,6 +129,7 @@ class DefaultConfig(object):
   NOTIFICATION_QUEUE_NAME = 'notification'
   DIFFS_QUEUE_NAME = 'imagediff'
   DOCKERFILE_BUILD_QUEUE_NAME = 'dockerfilebuild'
+  REPLICATION_QUEUE_NAME = 'imagestoragereplication'
 
   # Super user config. Note: This MUST BE an empty list for the default config.
   SUPER_USERS = []
@@ -179,6 +180,9 @@ class DefaultConfig(object):
   # basic auth.
   FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH = False
 
+  # Feature Flag: Whether to automatically replicate between storage engines.
+  FEATURE_STORAGE_REPLICATION = False
+
   BUILD_MANAGER = ('enterprise', {})
 
   DISTRIBUTED_STORAGE_CONFIG = {
@@ -187,6 +191,7 @@ class DefaultConfig(object):
   }
 
   DISTRIBUTED_STORAGE_PREFERENCE = ['local_us']
+  DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS = ['local_us']
 
   # Health checker.
   HEALTH_CHECKER = ('LocalHealthCheck', {})
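
Putting the new knobs together, a hypothetical `conf/stack` override that turns replication on for two locations (the key names come from the defaults above; the locations and paths are illustrative only):

```python
# Hypothetical local configuration; not part of this commit.
FEATURE_STORAGE_REPLICATION = True

DISTRIBUTED_STORAGE_CONFIG = {
  'local_us': ['LocalStorage', {'storage_path': '/datastorage/us'}],
  'local_eu': ['LocalStorage', {'storage_path': '/datastorage/eu'}],
}

DISTRIBUTED_STORAGE_PREFERENCE = ['local_us', 'local_eu']
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS = ['local_us']
```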


@@ -1,8 +1,7 @@
 import logging
 
-from gzip import GzipFile
+from util.registry.gzipinputstream import GzipInputStream
 from flask import send_file, abort
-from cStringIO import StringIO
 
 from data.userfiles import DelegateUserfiles, UserfilesHandlers
 
@@ -17,10 +16,8 @@ class LogArchiveHandlers(UserfilesHandlers):
   def get(self, file_id):
     path = self._files.get_file_id_path(file_id)
     try:
-      with self._storage.stream_read_file(self._locations, path) as gzip_stream:
-        with GzipFile(fileobj=gzip_stream) as unzipped:
-          unzipped_buffer = StringIO(unzipped.read())
-          return send_file(unzipped_buffer, mimetype=JSON_MIMETYPE)
+      data_stream = self._storage.stream_read_file(self._locations, path)
+      return send_file(GzipInputStream(data_stream), mimetype=JSON_MIMETYPE)
     except IOError:
       abort(404)
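
The `GzipInputStream` implementation is not in this diff; the point of the change is to decompress incrementally instead of buffering the whole archive through `StringIO`. A minimal sketch of such a file-like wrapper using `zlib` (assumed behavior, not the actual module):

```python
# Hypothetical sketch of util/registry/gzipinputstream.py.
import zlib

GZIP_WBITS = 16 + zlib.MAX_WBITS  # tell zlib to expect a gzip header


class GzipInputStream(object):
  """ File-like object that decompresses an underlying gzip stream on
      demand, so send_file() never holds the whole payload in memory. """
  def __init__(self, fileobj, block_size=8192):
    self._fileobj = fileobj
    self._block_size = block_size
    self._decompressor = zlib.decompressobj(GZIP_WBITS)
    self._buffer = b''

  def read(self, size=-1):
    while size < 0 or len(self._buffer) < size:
      chunk = self._fileobj.read(self._block_size)
      if not chunk:
        self._buffer += self._decompressor.flush()
        break
      self._buffer += self._decompressor.decompress(chunk)

    if size < 0:
      data, self._buffer = self._buffer, b''
    else:
      data, self._buffer = self._buffer[:size], self._buffer[size:]
    return data
```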


@@ -17,6 +17,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Basic',
@@ -28,6 +29,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Yacht',
@@ -39,6 +41,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 180,
     'superseded_by': 'bus-small-30',
+    'plans_page_hidden': False,
   },
   {
     'title': 'Personal',
@@ -50,6 +53,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': 'personal-30',
+    'plans_page_hidden': False,
   },
   {
     'title': 'Skiff',
@@ -61,6 +65,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': 'bus-micro-30',
+    'plans_page_hidden': False,
   },
   {
     'title': 'Yacht',
@@ -72,6 +77,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': 'bus-small-30',
+    'plans_page_hidden': False,
   },
   {
     'title': 'Freighter',
@@ -83,6 +89,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': 'bus-medium-30',
+    'plans_page_hidden': False,
   },
   {
     'title': 'Tanker',
@@ -94,6 +101,7 @@ PLANS = [
     'deprecated': True,
     'free_trial_days': 14,
     'superseded_by': 'bus-large-30',
+    'plans_page_hidden': False,
   },
 
   # Active plans
@@ -107,6 +115,7 @@ PLANS = [
     'deprecated': False,
     'free_trial_days': 30,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Personal',
@@ -118,6 +127,7 @@ PLANS = [
     'deprecated': False,
     'free_trial_days': 30,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Skiff',
@@ -129,6 +139,7 @@ PLANS = [
     'deprecated': False,
     'free_trial_days': 30,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Yacht',
@@ -140,6 +151,7 @@ PLANS = [
     'deprecated': False,
     'free_trial_days': 30,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Freighter',
@@ -151,6 +163,7 @@ PLANS = [
     'deprecated': False,
     'free_trial_days': 30,
     'superseded_by': None,
+    'plans_page_hidden': False,
   },
   {
     'title': 'Tanker',
@@ -162,6 +175,19 @@ PLANS = [
     'deprecated': False,
     'free_trial_days': 30,
     'superseded_by': None,
+    'plans_page_hidden': False,
+  },
+  {
+    'title': 'Carrier',
+    'price': 35000,
+    'privateRepos': 250,
+    'stripeId': 'bus-xlarge-30',
+    'audience': 'For extra large businesses',
+    'bus_features': True,
+    'deprecated': False,
+    'free_trial_days': 30,
+    'superseded_by': None,
+    'plans_page_hidden': True,
   },
 ]


@@ -544,6 +544,15 @@ class ImageStoragePlacement(BaseModel):
   )
 
 
+class UserRegion(BaseModel):
+  user = QuayUserField(index=True, allows_robots=False)
+  location = ForeignKeyField(ImageStorageLocation)
+
+  indexes = (
+    (('user', 'location'), True),
+  )
+
+
 class Image(BaseModel):
   # This class is intentionally denormalized. Even though images are supposed
   # to be globally unique we can't treat them as such for permissions and
@@ -733,6 +742,7 @@ class RepositoryNotification(BaseModel):
   repository = ForeignKeyField(Repository, index=True)
   event = ForeignKeyField(ExternalNotificationEvent)
   method = ForeignKeyField(ExternalNotificationMethod)
+  title = CharField(null=True)
   config_json = TextField()
 
@@ -777,4 +787,4 @@ all_models = [User, Repository, Image, AccessToken, Role, RepositoryPermission,
               ExternalNotificationEvent, ExternalNotificationMethod, RepositoryNotification,
               RepositoryAuthorizedEmail, ImageStorageTransformation, DerivedImageStorage,
               TeamMemberInvite, ImageStorageSignature, ImageStorageSignatureKind,
-              AccessTokenKind, Star, RepositoryActionCount, TagManifest, BlobUpload]
+              AccessTokenKind, Star, RepositoryActionCount, TagManifest, BlobUpload, UserRegion]


@@ -0,0 +1,26 @@
"""Add title field to notification

Revision ID: 499f6f08de3
Revises: 246df01a6d51
Create Date: 2015-08-21 14:18:07.287743

"""

# revision identifiers, used by Alembic.
revision = '499f6f08de3'
down_revision = '246df01a6d51'

from alembic import op
import sqlalchemy as sa


def upgrade(tables):
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('repositorynotification', sa.Column('title', sa.String(length=255), nullable=True))
    ### end Alembic commands ###


def downgrade(tables):
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('repositorynotification', 'title')
    ### end Alembic commands ###


@@ -0,0 +1,35 @@
"""Add UserRegion table

Revision ID: 9512773a4a2
Revises: 499f6f08de3
Create Date: 2015-09-01 14:17:08.628052

"""

# revision identifiers, used by Alembic.
revision = '9512773a4a2'
down_revision = '499f6f08de3'

from alembic import op
import sqlalchemy as sa


def upgrade(tables):
    ### commands auto generated by Alembic - please adjust! ###
    op.create_table('userregion',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('user_id', sa.Integer(), nullable=False),
        sa.Column('location_id', sa.Integer(), nullable=False),
        sa.ForeignKeyConstraint(['location_id'], ['imagestoragelocation.id'],
                                name=op.f('fk_userregion_location_id_imagestoragelocation')),
        sa.ForeignKeyConstraint(['user_id'], ['user.id'], name=op.f('fk_userregion_user_id_user')),
        sa.PrimaryKeyConstraint('id', name=op.f('pk_userregion'))
    )
    op.create_index('userregion_location_id', 'userregion', ['location_id'], unique=False)
    op.create_index('userregion_user_id', 'userregion', ['user_id'], unique=False)
    ### end Alembic commands ###


def downgrade(tables):
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('userregion')
    ### end Alembic commands ###


@@ -113,12 +113,12 @@ def delete_matching_notifications(target, kind_name, **kwargs):
     notification.delete_instance()
 
 
-def create_repo_notification(repo, event_name, method_name, config):
+def create_repo_notification(repo, event_name, method_name, config, title=None):
   event = ExternalNotificationEvent.get(ExternalNotificationEvent.name == event_name)
   method = ExternalNotificationMethod.get(ExternalNotificationMethod.name == method_name)
   return RepositoryNotification.create(repository=repo, event=event, method=method,
-                                       config_json=json.dumps(config))
+                                       config_json=json.dumps(config), title=title)
 
 
 def get_repo_notification(uuid):


@@ -8,8 +8,9 @@ from oauth2lib import utils
 from data.database import (OAuthApplication, OAuthAuthorizationCode, OAuthAccessToken, User,
                            AccessToken, random_string_generator)
-from data.model import user
+from data.model import user, config
 from auth import scopes
+from util import get_app_url
 
 logger = logging.getLogger(__name__)
@@ -45,7 +46,10 @@ class DatabaseAuthorizationProvider(AuthorizationProvider):
     return False
 
   def validate_redirect_uri(self, client_id, redirect_uri):
-    if redirect_uri == url_for('web.oauth_local_handler', _external=True):
+    internal_redirect_url = '%s%s' % (get_app_url(config.app_config),
+                                      url_for('web.oauth_local_handler'))
+
+    if redirect_uri == internal_redirect_url:
       return True
     try:


@@ -17,14 +17,19 @@ def list_robot_permissions(robot_name):
                 .where(User.username == robot_name, User.robot == True))
 
 
-def list_organization_member_permissions(organization):
+def list_organization_member_permissions(organization, limit_to_user=None):
   query = (RepositoryPermission
            .select(RepositoryPermission, Repository, User)
            .join(Repository)
            .switch(RepositoryPermission)
            .join(User)
-           .where(Repository.namespace_user == organization)
-           .where(User.robot == False))
+           .where(Repository.namespace_user == organization))
+
+  if limit_to_user is not None:
+    query = query.where(RepositoryPermission.user == limit_to_user)
+  else:
+    query = query.where(User.robot == False)
+
   return query


@@ -11,6 +11,12 @@ from data.database import (ImageStorage, Image, DerivedImageStorage, ImageStorag
 logger = logging.getLogger(__name__)
 
 
+def add_storage_placement(storage, location_name):
+  """ Adds a storage placement for the given storage at the given location. """
+  location = ImageStorageLocation.get(name=location_name)
+  ImageStoragePlacement.create(location=location, storage=storage)
+
+
 def find_or_create_derived_storage(source, transformation_name, preferred_location):
   existing = find_derived_storage(source, transformation_name)
   if existing is not None:

View file

@@ -8,7 +8,8 @@ from datetime import datetime, timedelta
 from data.database import (User, LoginService, FederatedLogin, RepositoryPermission, TeamMember,
                            Team, Repository, TupleSelector, TeamRole, Namespace, Visibility,
-                           EmailConfirmation, Role, db_for_update, random_string_generator)
+                           EmailConfirmation, Role, db_for_update, random_string_generator,
+                           UserRegion, ImageStorageLocation)
 from data.model import (DataModelException, InvalidPasswordException, InvalidRobotException,
                         InvalidUsernameException, InvalidEmailAddressException,
                         TooManyUsersException, TooManyLoginAttemptsException, db_transaction,
@@ -463,6 +464,13 @@ def get_user_by_id(user_db_id):
   return None
 
 
+def get_namespace_user_by_user_id(namespace_user_db_id):
+  try:
+    return User.get(User.id == namespace_user_db_id, User.robot == False)
+  except User.DoesNotExist:
+    raise InvalidUsernameException('User with id does not exist: %s' % namespace_user_db_id)
+
+
 def get_namespace_by_user_id(namespace_user_db_id):
   try:
     return User.get(User.id == namespace_user_db_id, User.robot == False).username
@@ -664,3 +672,8 @@ def get_pull_credentials(robotname):
     'registry': '%s://%s/v1/' % (config.app_config['PREFERRED_URL_SCHEME'],
                                  config.app_config['SERVER_HOSTNAME']),
   }
+
+
+def get_region_locations(user):
+  """ Returns the locations defined as preferred storage for the given user. """
+  query = UserRegion.select().join(ImageStorageLocation).where(UserRegion.user == user)
+  return set([region.location.name for region in query])


@@ -13,6 +13,17 @@ class NoopWith:
   def __exit__(self, type, value, traceback):
     pass
 
+
+class MetricQueueReporter(object):
+  def __init__(self, metric_queue):
+    self._metric_queue = metric_queue
+
+  def __call__(self, currently_processing, running_count, total_count):
+    need_capacity_count = total_count - running_count
+    self._metric_queue.put('BuildCapacityShortage', need_capacity_count, unit='Count')
+
+    building_percent = 100 if currently_processing else 0
+    self._metric_queue.put('PercentBuilding', building_percent, unit='Percent')
+
+
 class WorkQueue(object):
   def __init__(self, queue_name, transaction_factory,
                canonical_name_match_list=None, reporter=None):


@@ -1,13 +1,15 @@
 import logging
 import json
 import os
-import jwt
 
 from datetime import datetime, timedelta
 from data.users.federated import FederatedUsers, VerifiedCredentials
+from util.security import strictjwt
 
 logger = logging.getLogger(__name__)
 
 
 class ExternalJWTAuthN(FederatedUsers):
   """ Delegates authentication to a REST endpoint that returns JWTs. """
   PUBLIC_KEY_FILENAME = 'jwt-authn.cert'
@@ -45,9 +47,9 @@ class ExternalJWTAuthN(FederatedUsers):
     # Load the JWT returned.
     encoded = result_data.get('token', '')
     try:
-      payload = jwt.decode(encoded, self.public_key, algorithms=['RS256'],
-                           audience='quay.io/jwtauthn', issuer=self.issuer)
-    except jwt.InvalidTokenError:
+      payload = strictjwt.decode(encoded, self.public_key, algorithms=['RS256'],
+                                 audience='quay.io/jwtauthn', issuer=self.issuer)
+    except strictjwt.InvalidTokenError:
       logger.exception('Exception when decoding returned JWT')
       return (None, 'Invalid username or password')


@@ -9,6 +9,16 @@ from data.users.federated import FederatedUsers, VerifiedCredentials
 logger = logging.getLogger(__name__)
 
 
+class LDAPConnectionBuilder(object):
+  def __init__(self, ldap_uri, user_dn, user_pw):
+    self._ldap_uri = ldap_uri
+    self._user_dn = user_dn
+    self._user_pw = user_pw
+
+  def get_connection(self):
+    return LDAPConnection(self._ldap_uri, self._user_dn, self._user_pw)
+
+
 class LDAPConnection(object):
   def __init__(self, ldap_uri, user_dn, user_pw):
     self._ldap_uri = ldap_uri
@@ -20,13 +30,7 @@ class LDAPConnection(object):
     trace_level = 2 if os.environ.get('USERS_DEBUG') == '1' else 0
     self._conn = ldap.initialize(self._ldap_uri, trace_level=trace_level)
     self._conn.set_option(ldap.OPT_REFERRALS, 1)
-
-    try:
-      self._conn.simple_bind_s(self._user_dn, self._user_pw)
-    except ldap.INVALID_CREDENTIALS:
-      logger.exception('LDAP admin dn or password are invalid')
-      return None
-
+    self._conn.simple_bind_s(self._user_dn, self._user_pw)
     return self._conn
 
   def __exit__(self, exc_type, value, tb):
@@ -38,7 +42,7 @@ class LDAPUsers(FederatedUsers):
 
   def __init__(self, ldap_uri, base_dn, admin_dn, admin_passwd, user_rdn, uid_attr, email_attr):
     super(LDAPUsers, self).__init__('ldap')
-    self._ldap_conn = LDAPConnection(ldap_uri, admin_dn, admin_passwd)
+    self._ldap = LDAPConnectionBuilder(ldap_uri, admin_dn, admin_passwd)
     self._ldap_uri = ldap_uri
     self._base_dn = base_dn
     self._user_rdn = user_rdn
@@ -65,10 +69,15 @@ class LDAPUsers(FederatedUsers):
     return referral_dn
 
   def _ldap_user_search(self, username_or_email):
-    with self._ldap_conn as conn:
-      if conn is None:
-        return (None, 'LDAP Admin dn or password is invalid')
+    # Verify the admin connection works first. We do this here to avoid wrapping
+    # the entire block in the INVALID CREDENTIALS check.
+    try:
+      with self._ldap.get_connection():
+        pass
+    except ldap.INVALID_CREDENTIALS:
+      return (None, 'LDAP Admin dn or password is invalid')
 
+    with self._ldap.get_connection() as conn:
       logger.debug('Incoming username or email param: %s', username_or_email.__repr__())
       user_search_dn = ','.join(self._user_rdn + self._base_dn)
       query = u'(|({0}={2})({1}={2}))'.format(self._uid_attr, self._email_attr,

dev.df

@@ -18,4 +18,4 @@ RUN venv/bin/pip install -r requirements.txt
 WORKDIR /src/quay
 ENV PYTHONPATH=/
-ENV PATH=$PATH:/venv/bin
+ENV PATH=/venv/bin:$PATH


@@ -1,7 +1,7 @@
 import logging
 import datetime
 
-from app import app
+from app import app, metric_queue
 from flask import Blueprint, request, make_response, jsonify, session
 from flask.ext.restful import Resource, abort, Api, reqparse
 from flask.ext.restful.utils.cors import crossdomain
@@ -20,6 +20,7 @@ from auth.auth_context import get_authenticated_user, get_validated_oauth_token
 from auth.auth import process_oauth
 from endpoints.csrf import csrf_protect
 from endpoints.decorators import check_anon_protection
+from util.saas.metricqueue import time_decorator
 
 logger = logging.getLogger(__name__)
@@ -28,7 +29,7 @@ api = Api()
 api.init_app(api_bp)
 api.decorators = [csrf_protect,
                   crossdomain(origin='*', headers=['Authorization', 'Content-Type']),
-                  process_oauth]
+                  process_oauth, time_decorator(api_bp.name, metric_queue)]
 
 class ApiException(Exception):
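
`time_decorator` belongs to the same unshown `metricqueue` module; from its use above it wraps every API handler and reports request durations under the blueprint's name. A plausible sketch, with the metric name and keyword arguments invented for illustration:

```python
# Hypothetical sketch of time_decorator from util/saas/metricqueue.py;
# not the actual implementation.
import time
from functools import wraps


def time_decorator(name, metric_queue):
  """ Returns a decorator that reports the wrapped call's wall-clock
      duration to the given MetricQueue. """
  def decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
      start = time.time()
      try:
        return func(*args, **kwargs)
      finally:
        metric_queue.put('ResponseTime', time.time() - start,
                         unit='Seconds', source=name)
    return wrapper
  return decorator
```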


@@ -3,8 +3,10 @@
 import logging
 import json
 import datetime
+import hashlib
 
 from flask import request
+from rfc3987 import parse as uri_parse
 
 from app import app, userfiles as user_files, build_logs, log_archive, dockerfile_build_queue
 from endpoints.api import (RepositoryParamResource, parse_args, query_param, nickname, resource,
@@ -134,8 +136,11 @@ def build_status_view(build_obj):
     }
   }
 
-  if can_write and build_obj.resource_key is not None:
-    resp['archive_url'] = user_files.get_file_url(build_obj.resource_key, requires_cors=True)
+  if can_write:
+    if build_obj.resource_key is not None:
+      resp['archive_url'] = user_files.get_file_url(build_obj.resource_key, requires_cors=True)
+    elif job_config.get('archive_url', None):
+      resp['archive_url'] = job_config['archive_url']
 
   return resp
 
@@ -148,14 +153,15 @@ class RepositoryBuildList(RepositoryParamResource):
     'RepositoryBuildRequest': {
       'type': 'object',
       'description': 'Description of a new repository build.',
-      'required': [
-        'file_id',
-      ],
       'properties': {
         'file_id': {
           'type': 'string',
           'description': 'The file id that was generated when the build spec was uploaded',
         },
+        'archive_url': {
+          'type': 'string',
+          'description': 'The URL of the .tar.gz to build. Must start with "http" or "https".',
+        },
         'subdirectory': {
           'type': 'string',
           'description': 'Subdirectory in which the Dockerfile can be found',
@@ -204,7 +210,26 @@ class RepositoryBuildList(RepositoryParamResource):
     logger.debug('User requested repository initialization.')
     request_json = request.get_json()
 
-    dockerfile_id = request_json['file_id']
+    dockerfile_id = request_json.get('file_id', None)
+    archive_url = request_json.get('archive_url', None)
+
+    if not dockerfile_id and not archive_url:
+      raise InvalidRequest('file_id or archive_url required')
+
+    if archive_url:
+      archive_match = None
+      try:
+        archive_match = uri_parse(archive_url, 'URI')
+      except ValueError:
+        pass
+
+      if not archive_match:
+        raise InvalidRequest('Invalid Archive URL: Must be a valid URI')
+
+      scheme = archive_match.get('scheme', None)
+      if scheme != 'http' and scheme != 'https':
+        raise InvalidRequest('Invalid Archive URL: Must be http or https')
+
     subdir = request_json['subdirectory'] if 'subdirectory' in request_json else ''
     tags = request_json.get('docker_tags', ['latest'])
    pull_robot_name = request_json.get('pull_robot', None)
@@ -228,18 +253,24 @@ class RepositoryBuildList(RepositoryParamResource):
     # Check if the dockerfile resource has already been used. If so, then it
     # can only be reused if the user has access to the repository in which the
     # dockerfile was previously built.
-    associated_repository = model.build.get_repository_for_resource(dockerfile_id)
-    if associated_repository:
-      if not ModifyRepositoryPermission(associated_repository.namespace_user.username,
-                                        associated_repository.name):
-        raise Unauthorized()
+    if dockerfile_id:
+      associated_repository = model.build.get_repository_for_resource(dockerfile_id)
+      if associated_repository:
+        if not ModifyRepositoryPermission(associated_repository.namespace_user.username,
+                                          associated_repository.name):
+          raise Unauthorized()
 
     # Start the build.
     repo = model.repository.get_repository(namespace, repository)
 
+    build_name = (user_files.get_file_checksum(dockerfile_id)
+                  if dockerfile_id
+                  else hashlib.sha224(archive_url).hexdigest()[0:7])
+
     prepared = PreparedBuild()
-    prepared.build_name = user_files.get_file_checksum(dockerfile_id)
+    prepared.build_name = build_name
     prepared.dockerfile_id = dockerfile_id
+    prepared.archive_url = archive_url
     prepared.tags = tags
     prepared.subdirectory = subdir
     prepared.is_manual = True


@@ -278,6 +278,46 @@ class OrganizationMemberList(ApiResource):
 
 class OrganizationMember(ApiResource):
   """ Resource for managing individual organization members. """
 
+  @require_scope(scopes.ORG_ADMIN)
+  @nickname('getOrganizationMember')
+  def get(self, orgname, membername):
+    """ Retrieves the details of a member of the organization.
+    """
+    permission = AdministerOrganizationPermission(orgname)
+    if permission.can():
+      # Lookup the user.
+      member = model.user.get_user(membername)
+      if not member:
+        raise NotFound()
+
+      organization = model.user.get_user_or_org(orgname)
+      if not organization:
+        raise NotFound()
+
+      # Lookup the user's information in the organization.
+      teams = list(model.team.get_user_teams_within_org(membername, organization))
+      if not teams:
+        raise NotFound()
+
+      repo_permissions = model.permission.list_organization_member_permissions(organization, member)
+
+      def local_team_view(team):
+        return {
+          'name': team.name,
+          'avatar': avatar.get_data_for_team(team),
+        }
+
+      return {
+        'name': member.username,
+        'kind': 'robot' if member.robot else 'user',
+        'avatar': avatar.get_data_for_user(member),
+        'teams': [local_team_view(team) for team in teams],
+        'repositories': [permission.repository.name for permission in repo_permissions]
+      }
+
+    raise Unauthorized()
+
   @require_scope(scopes.ORG_ADMIN)
   @nickname('removeOrganizationMember')
   def delete(self, orgname, membername):


@@ -26,7 +26,8 @@ def notification_view(note):
     'uuid': note.uuid,
     'event': note.event.name,
     'method': note.method.name,
-    'config': config
+    'config': config,
+    'title': note.title,
   }
 
@@ -55,7 +56,11 @@ class RepositoryNotificationList(RepositoryParamResource):
         'config': {
           'type': 'object',
           'description': 'JSON config information for the specific method of notification'
-        }
+        },
+        'title': {
+          'type': 'string',
+          'description': 'The human-readable title of the notification',
+        },
       }
     },
   }
@@ -78,7 +83,8 @@ class RepositoryNotificationList(RepositoryParamResource):
       raise request_error(message=ex.message)
 
     new_notification = model.notification.create_repo_notification(repo, parsed['event'],
-                                                                   parsed['method'], parsed['config'])
+                                                                   parsed['method'], parsed['config'],
+                                                                   parsed.get('title', None))
 
     resp = notification_view(new_notification)
     log_action('add_repo_notification', namespace,


@@ -461,6 +461,7 @@ class TriggerBuildList(RepositoryParamResource):
   }
 
+FIELD_VALUE_LIMIT = 30
+
 @resource('/v1/repository/<repopath:repository>/trigger/<trigger_uuid>/fields/<field_name>')
 @internal_only
@@ -479,7 +480,7 @@ class BuildTriggerFieldValues(RepositoryParamResource):
     user_permission = UserAdminPermission(trigger.connected_user.username)
     if user_permission.can():
       handler = BuildTriggerHandler.get_handler(trigger, config)
-      values = handler.list_field_values(field_name)
+      values = handler.list_field_values(field_name, limit=FIELD_VALUE_LIMIT)
 
       if values is None:
         raise NotFound()


@@ -28,7 +28,8 @@ def start_build(repository, prepared_build, pull_robot_name=None):
     'build_subdir': prepared_build.subdirectory,
     'trigger_metadata': prepared_build.metadata or {},
     'is_manual': prepared_build.is_manual,
-    'manual_user': get_authenticated_user().username if get_authenticated_user() else None
+    'manual_user': get_authenticated_user().username if get_authenticated_user() else None,
+    'archive_url': prepared_build.archive_url
   }

   with app.config['DB_TRANSACTION_FACTORY'](db):
@@ -83,6 +84,7 @@ class PreparedBuild(object):
   """
   def __init__(self, trigger=None):
     self._dockerfile_id = None
+    self._archive_url = None
     self._tags = None
     self._build_name = None
     self._subdirectory = None
@@ -124,6 +126,17 @@ class PreparedBuild(object):
   def trigger(self):
     return self._trigger

+  @property
+  def archive_url(self):
+    return self._archive_url
+
+  @archive_url.setter
+  def archive_url(self, value):
+    if self._archive_url:
+      raise Exception('Property archive_url already set')
+    self._archive_url = value
+
   @property
   def dockerfile_id(self):
     return self._dockerfile_id
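
The setter above gives `archive_url` write-once semantics. A self-contained sketch of the same pattern, under the same behavior as the diff:

```python
class WriteOnce(object):
  """Minimal illustration of the write-once property pattern used above."""
  def __init__(self):
    self._archive_url = None

  @property
  def archive_url(self):
    return self._archive_url

  @archive_url.setter
  def archive_url(self, value):
    if self._archive_url:
      raise Exception('Property archive_url already set')
    self._archive_url = value

obj = WriteOnce()
obj.archive_url = 'https://example.com/archive.tar.gz'  # first assignment succeeds
try:
  obj.archive_url = 'https://example.com/other.tar.gz'  # second assignment raises
except Exception as ex:
  print(ex)
```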

View file

@@ -4,8 +4,9 @@ import requests
 import re

 from flask.ext.mail import Message
-from app import mail, app
+from app import mail, app, OVERRIDE_CONFIG_DIRECTORY
 from data import model
+from util.config.validator import SSL_FILENAMES
 from workers.queueworker import JobException

 logger = logging.getLogger(__name__)
@@ -20,6 +21,11 @@ class NotificationMethodPerformException(JobException):
   pass

+SSLClientCert = None
+if app.config['PREFERRED_URL_SCHEME'] == 'https':
+  # TODO(jschorr): move this into the config provider library
+  SSLClientCert = [OVERRIDE_CONFIG_DIRECTORY + f for f in SSL_FILENAMES]

 class NotificationMethod(object):
   def __init__(self):
     pass
@@ -177,7 +183,7 @@ class WebhookMethod(NotificationMethod):
     headers = {'Content-type': 'application/json'}

     try:
-      resp = requests.post(url, data=json.dumps(payload), headers=headers)
+      resp = requests.post(url, data=json.dumps(payload), headers=headers, cert=SSLClientCert)
       if resp.status_code/100 != 2:
         error_message = '%s response for webhook to url: %s' % (resp.status_code, url)
         logger.error(error_message)
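
With this change, webhook POSTs present a client certificate whenever the registry is served over HTTPS. A sketch of the underlying `requests` behavior, assuming placeholder certificate paths (`cert` accepts either a single combined PEM path or a `(cert, key)` pair):

```python
import json
import requests

# Placeholder paths; the real values come from the config directory above.
cert_pair = ('/conf/stack/ssl.cert', '/conf/stack/ssl.key')
resp = requests.post('https://hooks.example.com/notify',
                     data=json.dumps({'event': 'repo_push'}),
                     headers={'Content-type': 'application/json'},
                     cert=cert_pair)  # client cert presented during the TLS handshake
print(resp.status_code)
```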

View file

@@ -197,7 +197,7 @@ class BuildTriggerHandler(object):
     """
     raise NotImplementedError

-  def list_field_values(self, field_name):
+  def list_field_values(self, field_name, limit=None):
     """
     Lists all values for the given custom trigger field. For example, a trigger might have a
     field named "branches", and this method would return all branches.
@@ -434,7 +434,7 @@ class BitbucketBuildTrigger(BuildTriggerHandler):
     return data

-  def list_field_values(self, field_name):
+  def list_field_values(self, field_name, limit=None):
     source = self.config['build_source']
     (namespace, name) = source.split('/')
@@ -457,14 +457,22 @@ class BitbucketBuildTrigger(BuildTriggerHandler):
       if not result:
         return None

-      return data.keys()
+      tags = list(data.keys())
+      if limit:
+        tags = tags[0:limit]
+
+      return tags

     if field_name == 'branch_name':
       (result, data, _) = repository.get_branches()
       if not result:
         return None

-      return data.keys()
+      branches = list(data.keys())
+      if limit:
+        branches = branches[0:limit]
+
+      return branches

     return None
@@ -548,7 +556,7 @@ class BitbucketBuildTrigger(BuildTriggerHandler):
   def handle_trigger_request(self, request):
     payload = request.get_json()
-    if not 'push' in payload:
+    if not payload or not 'push' in payload:
       logger.debug('Skipping BitBucket request due to missing push data in payload')
       raise SkipRequestException()
@@ -1039,7 +1047,7 @@ class GithubBuildTrigger(BuildTriggerHandler):
     return self._prepare_build(ref, commit_sha, True, repo=repo)

-  def list_field_values(self, field_name):
+  def list_field_values(self, field_name, limit=None):
     if field_name == 'refs':
       branches = self.list_field_values('branch_name')
       tags = self.list_field_values('tag_name')
@@ -1053,7 +1061,11 @@ class GithubBuildTrigger(BuildTriggerHandler):
       gh_client = self._get_client()
       source = config['build_source']
       repo = gh_client.get_repo(source)
-      return [tag.name for tag in repo.get_tags()]
+      gh_tags = repo.get_tags()
+      if limit:
+        gh_tags = repo.get_tags()[0:limit]
+
+      return [tag.name for tag in gh_tags]
     except GitHubBadCredentialsException:
       return []
     except GithubException:
@@ -1066,7 +1078,11 @@ class GithubBuildTrigger(BuildTriggerHandler):
       gh_client = self._get_client()
       source = config['build_source']
       repo = gh_client.get_repo(source)
-      branches = [branch.name for branch in repo.get_branches()]
+      gh_branches = repo.get_branches()
+      if limit:
+        gh_branches = repo.get_branches()[0:limit]
+
+      branches = [branch.name for branch in gh_branches]

       if not repo.default_branch in branches:
         branches.insert(0, repo.default_branch)
@@ -1417,7 +1433,7 @@ class GitLabBuildTrigger(BuildTriggerHandler):
     return contents

-  def list_field_values(self, field_name):
+  def list_field_values(self, field_name, limit=None):
     if field_name == 'refs':
       branches = self.list_field_values('branch_name')
       tags = self.list_field_values('tag_name')
@@ -1434,12 +1450,20 @@ class GitLabBuildTrigger(BuildTriggerHandler):
       tags = gl_client.getrepositorytags(repo['id'])
       if tags is False:
         return []
+
+      if limit:
+        tags = tags[0:limit]
+
       return [tag['name'] for tag in tags]

     if field_name == 'branch_name':
       branches = gl_client.getbranches(repo['id'])
       if branches is False:
         return []
+
+      if limit:
+        branches = branches[0:limit]
+
       return [branch['name'] for branch in branches]

     return None
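
Each handler above repeats the same slice-to-limit logic. A hypothetical helper (not part of this diff) could express it once:

```python
def limit_values(values, limit=None):
  """Return at most `limit` items from `values`; all items when limit is falsy."""
  values = list(values)
  return values[0:limit] if limit else values

print(limit_values(['master', 'develop', 'feature-x'], limit=2))  # ['master', 'develop']
```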

View file

@@ -1,10 +1,13 @@
 from flask import Blueprint, make_response

+from app import metric_queue
 from endpoints.decorators import anon_protect, anon_allowed
+from util.saas.metricqueue import time_blueprint

 v1_bp = Blueprint('v1', __name__)
+time_blueprint(v1_bp, metric_queue)

 # Note: This is *not* part of the Docker index spec. This is here for our own health check,
 # since we have nginx handle the _ping below.
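
`time_blueprint` instruments the blueprint so request durations reach the metric queue. A sketch of the assumed behavior (this is not the actual `util.saas.metricqueue` implementation, which is outside this diff):

```python
import time
from flask import g

def time_blueprint_sketch(bp, metric_queue):
  # Assumed behavior: stamp a start time before each request on the
  # blueprint and record the elapsed duration after it completes.
  @bp.before_request
  def _start_timer():
    g._request_start_time = time.time()

  @bp.after_request
  def _record_duration(response):
    duration = time.time() - g._request_start_time
    # A real implementation would push (endpoint, duration, status) onto
    # metric_queue here.
    return response
```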

View file

@@ -1,12 +1,13 @@
 import logging
 import json

+import features
+
 from flask import make_response, request, session, Response, redirect, abort as flask_abort
 from functools import wraps
 from datetime import datetime
 from time import time

-from app import storage as store, image_diff_queue, app
+from app import storage as store, image_diff_queue, image_replication_queue, app
 from auth.auth import process_auth, extract_namespace_repo_from_session
 from auth.auth_context import get_authenticated_user, get_grant_user_context
 from digest import checksums
@@ -36,6 +37,30 @@ def set_uploading_flag(repo_image, is_image_uploading):
   repo_image.storage.save()

+def _finish_image(namespace, repository, repo_image):
+  # Checksum is ok, we remove the marker
+  set_uploading_flag(repo_image, False)
+
+  image_id = repo_image.docker_image_id
+
+  # The layer is ready for download, send a job to the work queue to
+  # process it.
+  logger.debug('Adding layer to diff queue')
+  repo = model.repository.get_repository(namespace, repository)
+  image_diff_queue.put([repo.namespace_user.username, repository, image_id], json.dumps({
+    'namespace_user_id': repo.namespace_user.id,
+    'repository': repository,
+    'image_id': image_id,
+  }))
+
+  # Send a job to the work queue to replicate the image layer.
+  if features.STORAGE_REPLICATION:
+    image_replication_queue.put([repo_image.storage.uuid], json.dumps({
+      'namespace_user_id': repo.namespace_user.id,
+      'storage_id': repo_image.storage.uuid,
+    }))
+
 def require_completion(f):
   """This make sure that the image push correctly finished."""
   @wraps(f)
@@ -210,7 +235,11 @@ def put_image_layer(namespace, repository, image_id):
   # Stream write the data to storage.
   with database.CloseForLongOperation(app.config):
-    store.stream_write(repo_image.storage.locations, layer_path, sr)
+    try:
+      store.stream_write(repo_image.storage.locations, layer_path, sr)
+    except IOError:
+      logger.exception('Exception when writing image data')
+      abort(520, 'Image %(image_id)s could not be written. Please try again.', image_id=image_id)

   # Append the computed checksum.
   csums = []
@@ -243,18 +272,8 @@ def put_image_layer(namespace, repository, image_id):
     abort(400, 'Checksum mismatch; ignoring the layer for image %(image_id)s',
           issue='checksum-mismatch', image_id=image_id)

-  # Checksum is ok, we remove the marker
-  set_uploading_flag(repo_image, False)
-
-  # The layer is ready for download, send a job to the work queue to
-  # process it.
-  logger.debug('Adding layer to diff queue')
-  repo = model.repository.get_repository(namespace, repository)
-  image_diff_queue.put([repo.namespace_user.username, repository, image_id], json.dumps({
-    'namespace_user_id': repo.namespace_user.id,
-    'repository': repository,
-    'image_id': image_id,
-  }))
+  # Mark the image as uploaded.
+  _finish_image(namespace, repository, repo_image)

   return make_response('true', 200)

@@ -316,18 +335,8 @@ def put_image_checksum(namespace, repository, image_id):
     abort(400, 'Checksum mismatch for image: %(image_id)s',
           issue='checksum-mismatch', image_id=image_id)

-  # Checksum is ok, we remove the marker
-  set_uploading_flag(repo_image, False)
-
-  # The layer is ready for download, send a job to the work queue to
-  # process it.
-  logger.debug('Adding layer to diff queue')
-  repo = model.repository.get_repository(namespace, repository)
-  image_diff_queue.put([repo.namespace_user.username, repository, image_id], json.dumps({
-    'namespace_user_id': repo.namespace_user.id,
-    'repository': repository,
-    'image_id': image_id,
-  }))
+  # Mark the image as uploaded.
+  _finish_image(namespace, repository, repo_image)

   return make_response('true', 200)
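
For reference, the job bodies `_finish_image` enqueues are plain JSON strings; a worker on the other side of the queue decodes them roughly like this (ids are placeholder values, and the queue API itself is omitted):

```python
import json

# Shape of the body put onto image_diff_queue above (placeholder values):
diff_job_body = json.dumps({
    'namespace_user_id': 42,
    'repository': 'myrepo',
    'image_id': 'abcdef123456',
})

job = json.loads(diff_job_body)
print('computing diff for image %s in %s' % (job['image_id'], job['repository']))
```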

View file

@@ -7,6 +7,7 @@ from flask import Blueprint, make_response, url_for, request, jsonify
 from functools import wraps
 from urlparse import urlparse

+from app import metric_queue
 from endpoints.decorators import anon_protect, anon_allowed
 from endpoints.v2.errors import V2RegistryException
 from auth.jwt_auth import process_jwt_auth
@@ -15,13 +16,14 @@ from auth.permissions import (ReadRepositoryPermission, ModifyRepositoryPermissi
                               AdministerRepositoryPermission)
 from data import model
 from util.http import abort
+from util.saas.metricqueue import time_blueprint
 from app import app

 logger = logging.getLogger(__name__)

 v2_bp = Blueprint('v2', __name__)
+time_blueprint(v2_bp, metric_queue)

 @v2_bp.app_errorhandler(V2RegistryException)
 def handle_registry_v2_exception(error):

View file

@@ -20,6 +20,7 @@ from data.database import (db, all_models, Role, TeamRole, Visibility, LoginServ
                            ExternalNotificationEvent, ExternalNotificationMethod, NotificationKind)
 from data import model
 from app import app, storage as store
+from storage.basestorage import StoragePaths
 from workers import repositoryactioncounter

@@ -85,6 +86,17 @@ def __create_subtree(repo, structure, creator_username, parent, tag_map):
     new_image.storage.checksum = checksum
     new_image.storage.save()

+    # Write some data for the storage.
+    if os.environ.get('WRITE_STORAGE_FILES'):
+      storage_paths = StoragePaths()
+      paths = [storage_paths.image_json_path,
+               storage_paths.image_ancestry_path,
+               storage_paths.image_layer_path]
+
+      for path_builder in paths:
+        path = path_builder(new_image.storage.uuid)
+        store.put_content('local_us', path, checksum)
+
     creation_time = REFERENCE_DATE + timedelta(weeks=image_num) + timedelta(days=model_num)
     command_list = SAMPLE_CMDS[image_num % len(SAMPLE_CMDS)]
     command = json.dumps(command_list) if command_list else None

View file

@@ -1,11 +1,30 @@
-#!/bin/sh
+#!/bin/bash
+# Run this from the quay directory to start a quay development instance in
+# docker on port 5000.

 set -e

 REPO=quay.io/quay/quay-dev

-docker build -t $REPO -f dev.df .
-docker run -it -p 5000:5000 -v $(pwd)/..:/src $REPO bash /src/quay/local-run.sh
+d ()
+{
+  docker build -t $REPO -f dev.df .
+  docker -- run --rm -it --net=host -v $(pwd)/..:/src $REPO $*
+}
+
+case $1 in
+buildman)
+  d /venv/bin/python -m buildman.builder
+  ;;
+dev)
+  d bash /src/quay/local-run.sh
+  ;;
+notifications)
+  d /venv/bin/python -m workers.notificationworker
+  ;;
+test)
+  d bash /src/quay/local-test.sh
+  ;;
+*)
+  echo "unknown option"
+  exit 1
+  ;;
+esac

View file

@@ -54,3 +54,4 @@ Flask-Testing
 pyjwt
 toposort
 pyjwkest
+rfc3987

View file

@@ -58,7 +58,7 @@ pycparser==2.14
 pycrypto==2.6.1
 pygpgme==0.3
 pyjwkest==1.0.3
-PyJWT==1.3.0
+PyJWT==1.4.0
 PyMySQL==0.6.6
 pyOpenSSL==0.15.1
 PyPDF2==1.24
@@ -74,6 +74,7 @@ redis==2.10.3
 reportlab==2.7
 requests==2.7.0
 requests-oauthlib==0.5.0
+rfc3987==1.3.4
 simplejson==3.7.3
 six==1.9.0
 SQLAlchemy==1.0.6

View file

@@ -1254,3 +1254,14 @@ a:focus {
   color: white;
   z-index: 2;
 }
+
+.co-alert.thin {
+  padding: 6px;
+  padding-left: 38px;
+  margin-bottom: 0px;
+}
+
+.co-alert.thin:before {
+  top: 5px;
+  font-size: 18px;
+}

View file

@@ -0,0 +1,13 @@
+.dockerfile-build-dialog-element .btn-group {
+  margin-bottom: 20px;
+}
+
+.dockerfile-build-dialog-element button i {
+  margin-right: 6px;
+}
+
+.dockerfile-build-dialog-element .trigger-list {
+  margin: 0px;
+  width: 100%;
+}

View file

@@ -3,6 +3,10 @@
   white-space: nowrap;
 }

+.dockerfile-build-form .file-drop {
+  padding: 0px;
+}
+
 .dockerfile-build-form input[type="file"] {
   margin: 0px;
 }
@@ -10,6 +14,11 @@
 .dockerfile-build-form .help-text {
   font-size: 13px;
   color: #aaa;
-  margin-bottom: 20px;
-  padding-left: 22px;
+  margin-top: 10px;
+  margin-bottom: 16px;
+}
+
+.dockerfile-build-form dd {
+  padding-left: 20px;
+  padding-top: 14px;
 }

View file

@@ -1,4 +1,16 @@
 .new-organization .co-main-content-panel {
   padding: 30px;
   position: relative;
 }
+
+.new-organization .field-container {
+  display: inline-block;
+  width: 400px;
+  margin-right: 10px;
+}
+
+.new-organization .field-row .co-alert {
+  display: inline-block;
+  margin-left: 10px;
+  margin-top: 10px;
+}

View file

@@ -20,7 +20,8 @@
       </div>
       <div class="co-alert co-alert-danger" ng-if="loadError == 'request-failed'">
-        Failed to log builds logs. Please reload and try again.
+        Failed to load builds logs. Please reload and try again. If this problem persists,
+        please check for JavaScript or networking issues and contact support.
       </div>

       <span class="no-logs" ng-if="!logEntries.length && currentBuild.phase == 'waiting'">
View file

@@ -226,6 +226,12 @@
                ng-model="config.DISTRIBUTED_STORAGE_CONFIG.local[1][field.name]">
         <label for="dsc-{{ field.name }}">{{ field.placeholder }}</label>
       </div>
+      <div ng-if="field.kind == 'option'">
+        <select ng-model="config.DISTRIBUTED_STORAGE_CONFIG.local[1][field.name]">
+          <option ng-repeat="value in field.values" value="{{ value }}"
+                  ng-selected="config.DISTRIBUTED_STORAGE_CONFIG.local[1][field.name] == value">{{ value }}</option>
+        </select>
+      </div>
       <div class="help-text" ng-if="field.help_url">
         See <a href="{{ field.help_url }}" target="_blank">Documentation</a> for more information
       </div>

View file

@@ -11,14 +11,14 @@
     </div>
     <div class="modal-body">
       <!-- Creating spinner -->
-      <div class="quay-spinner" ng-show="status == 'creating' || status == 'authorizing-email'"></div>
+      <div class="cor-loader" ng-show="status == 'creating' || status == 'authorizing-email'"></div>

       <!-- Authorize e-mail view -->
       <div ng-show="status == 'authorizing-email-sent'">
         An e-mail has been sent to <code>{{ currentConfig.email }}</code>. Please click the link contained
         in the e-mail.
         <br><br>
-        Waiting... <span class="quay-spinner"></span>
+        <span class="cor-loader-inline"></span>
       </div>

       <!-- Authorize e-mail view -->
@@ -30,6 +30,14 @@
       <!-- Create View -->
       <table style="width: 100%" ng-show="status == ''">
+        <tr>
+          <td style="width: 120px">Notification title:</td>
+          <td style="padding-right: 21px;">
+            <input class="form-control" type="text" placeholder="(Optional Title)" ng-model="currentTitle"
+                   style="margin: 10px;">
+          </td>
+        </tr>
         <tr>
           <td style="width: 120px">When this occurs:</td>
           <td>

View file

@@ -1,27 +1,57 @@
 <div class="dockerfile-build-dialog-element">
   <!-- Modal message dialog -->
-  <div class="modal fade" id="dockerfilebuildModal">
+  <div class="modal fade dockerfilebuildModal">
     <div class="modal-dialog">
-      <div class="modal-content">
+      <div class="modal-content" ng-show="triggersResource && triggersResource.loading">
+        <div class="cor-loader"></div>
+      </div>
+      <div class="modal-content" ng-show="!triggersResource || !triggersResource.loading">
         <div class="modal-header">
           <button type="button" class="close" data-dismiss="modal" aria-hidden="true">&times;</button>
           <h4 class="modal-title">
             Start new Dockerfile build
           </h4>
         </div>
-        <div class="modal-body token-dialog-body">
-          <div class="alert alert-danger" ng-show="errorMessage">
+        <div class="modal-body">
+          <div class="btn-group btn-group-sm" ng-show="triggers.length > 0">
+            <button class="btn" ng-class="viewTriggers ? 'btn-default' : 'btn-info active'" ng-click="showTriggers(false)">
+              <i class="fa fa-upload"></i>Upload Dockerfile
+            </button>
+            <button class="btn" ng-class="viewTriggers ? 'btn-info active' : 'btn-default'" ng-click="showTriggers(true)">
+              <i class="fa fa-flash"></i>Start Build Trigger
+            </button>
+          </div>
+
+          <div class="co-alert co-alert-danger" ng-show="errorMessage">
             {{ errorMessage }}
           </div>
-          <div class="dockerfile-build-form" repository="repository" upload-failed="handleBuildFailed(message)"
-               build-started="handleBuildStarted(build)" build-failed="handleBuildFailed(message)" start-now="startCounter"
-               is-ready="hasDockerfile" uploading="uploading" building="building"></div>
+
+          <!-- Upload Dockerfile -->
+          <div ng-show="!viewTriggers">
+            <div class="dockerfile-build-form" repository="repository" upload-failed="handleBuildFailed(message)"
+                 build-started="handleBuildStarted(build)" build-failed="handleBuildFailed(message)" start-now="startCounter"
+                 is-ready="hasDockerfile" uploading="uploading" building="building"></div>
+          </div>
+
+          <!-- Start Build Trigger -->
+          <div ng-show="viewTriggers">
+            <table class="trigger-list">
+              <tr ng-repeat="trigger in triggers">
+                <td><span class="trigger-description" trigger="trigger"></span></td>
+                <td><button class="btn btn-primary" ng-click="runTriggerNow(trigger)">Run Trigger</button></td>
+              </tr>
+            </table>
+          </div>
         </div>
         <div class="modal-footer">
-          <button type="button" class="btn btn-primary" ng-click="startBuild()" ng-disabled="building || uploading || !hasDockerfile">Start Build</button>
+          <button type="button" class="btn btn-primary" ng-click="startBuild()" ng-disabled="building || uploading || !hasDockerfile" ng-show="!viewTriggers">Start Build</button>
           <button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
         </div>
       </div><!-- /.modal-content -->
     </div><!-- /.modal-dialog -->
   </div><!-- /.modal -->
+
+  <div class="manual-trigger-build-dialog" repository="repository" counter="startTriggerCounter"
+       trigger="startTrigger"
+       build-started="handleBuildStarted(build)"></div>
 </div>

View file

@@ -12,35 +12,35 @@
   <dl>
     <dt>Dockerfile or <code>.tar.gz</code> or <code>.zip</code>:</dt>
     <dd>
-      <input id="file-drop" class="file-drop" type="file" file-present="internal.hasDockerfile">
-      <div class="help-text">If an archive, the Dockerfile must be at the root</div>
+      <div class="co-alert co-alert-danger" ng-if="dockerfileState == 'error'">
+        {{ dockerfileError }}
+      </div>
+      <input id="file-drop" class="file-drop" type="file" files-changed="handleFilesChanged(files)">
+      <div class="help-text">Note: If an archive, the Dockerfile must be in the root directory.</div>
+      <div ng-if="dockerfileState == 'loading'">
+        Reading Dockerfile: <span class="cor-loader-inline"></span>
+      </div>
     </dd>
   </dl>

-  <dl>
+  <dl ng-show="privateBaseRepository">
     <dt>Base Image Pull Credentials:</dt>
-    <dd style="margin: 20px;">
-      <!-- Select credentials -->
-      <div class="btn-group btn-group-sm">
-        <button type="button" class="btn btn-default"
-                ng-class="is_public ? 'active btn-info' : ''"
-                ng-click="is_public = true">
-          None
-        </button>
-        <button type="button" class="btn btn-default"
-                ng-class="is_public ? '' : 'active btn-info'"
-                ng-click="is_public = false">
-          <i class="fa ci-robot"></i>
-          Robot account
-        </button>
-      </div>
-
-      <!-- Robot Select -->
-      <div ng-show="!is_public" style="margin-top: 10px">
-        <div class="entity-search" namespace="repository.namespace"
-             placeholder="'Select robot account for pulling...'"
-             current-entity="pull_entity"
-             allowed-entities="['robot']"></div>
-      </div>
+    <dd>
+      <div class="co-alert co-alert-warning"
+           ng-if="currentRobotHasPermission === false">
+        Warning: Robot account <strong>{{ pullEntity.name }}</strong> does not have
+        read permission on repository <strong>{{ privateBaseRepository }}</strong>, so
+        this build will fail with an authorization error.
+      </div>
+      <div class="entity-search" namespace="repository.namespace"
+           placeholder="'Select robot account for pulling'"
+           current-entity="pullEntity"
+           allowed-entities="['robot']"></div>
+      <div class="help-text">
+        The selected Dockerfile contains a <code>FROM</code> that refers to the private
+        <span class="registry-name"></span> repository <strong>{{ privateBaseRepository }}</strong>.
+        A robot account with read access to that repository is required for the build.
+      </div>
     </dd>
   </dl>
View file

@@ -1,5 +1,5 @@
 <!-- Modal message dialog -->
-<div class="modal fade" id="startTriggerDialog">
+<div class="modal fade startTriggerDialog">
   <div class="modal-dialog">
     <div class="modal-content">
       <div class="modal-header">

View file

@@ -84,7 +84,7 @@
         <i class="fa fa-flash"></i>
         Build Triggers
-        <div class="heading-controls hidden-sm hidden-xs">
+        <div class="heading-controls hidden-xs">
           <!-- Add Build Trigger -->
           <div class="dropdown" id="addBuildTrigger">
             <button class="btn btn-primary dropdown-toggle" data-toggle="dropdown">
@@ -198,7 +198,7 @@
      repository="repository"
      trigger="currentStartTrigger"
      counter="showTriggerStartDialogCounter"
-     start-build="startTrigger(trigger, parameters)"></div>
+     build-started="handleBuildStarted(build)"></div>

 <!-- /Dialogs -->

View file

@@ -26,6 +26,7 @@
   <table class="co-table permissions" ng-if="notifications.length">
     <thead>
       <tr>
+        <td>Title</td>
         <td>Event</td>
         <td>Notification</td>
         <td class="options-col"></td>
@@ -34,6 +35,10 @@
     <tbody>
       <tr class="notification-row" ng-repeat="notification in notifications">
+        <td>
+          {{ notification.title || '(Untitled)' }}
+        </td>
         <td>
           <span class="notification-event">
             <i class="fa fa-lg" ng-class="getEventInfo(notification).icon"></i>
@@ -53,6 +58,16 @@
           <span class="cor-option" option-click="testNotification(notification)">
             <i class="fa fa-send"></i> Test Notification
           </span>
+          <span class="cor-option" option-click="showNotifyInfo(notification, 'url')"
+                ng-if="getMethodInfo(notification).id == 'webhook'">
+            <i class="fa fa-link"></i>
+            View Webhook URL
+          </span>
+          <span class="cor-option" option-click="showNotifyInfo(notification, 'email')"
+                ng-if="getMethodInfo(notification).id == 'email'">
+            <i class="fa fa-envelope"></i>
+            View E-mail Address
+          </span>
           <span class="cor-option" option-click="showWebhookInfo(notification)"
                 ng-if="getMethodInfo(notification).id == 'webhook'">
             <i class="fa fa-book"></i>

View file

@@ -70,17 +70,17 @@
       </div>
     </div>

-    <div class="row testimonial" quay-require="['BILLING']">
-      <div class="tour-action" quay-require="['BILLING']">
-        <a href="/plans?trial-plan=personal">
-          <button class="btn btn-success">
-            Start free trial
-          </button>
-        </a>
-      </div>
+    <div class="tour-action" quay-require="['BILLING']">
+      <a href="/plans?trial-plan=personal">
+        <button class="btn btn-success">
+          Start free trial
+        </button>
+      </a>
     </div>
   </div>

   <!-- Organizations -->
   <div class="product-tour" ng-if="kind == 'organizations'">
     <div class="tour-section row tour-header">
Binary files not shown: this commit also updates 44 binary image assets. Each file's contents are omitted from the diff; the sizes generally decreased (for example 6.4 KiB → 5.1 KiB, 99 KiB → 71 KiB, 145 KiB → 104 KiB), consistent with re-compressed screenshots and logos. Some files were not shown because too many files changed in this diff.