initial import for Open Source 🎉
commit 9c0dd3b722
parent 1898c361f3
2048 changed files with 218743 additions and 0 deletions
config_app/Procfile (Normal file, +3)
@@ -0,0 +1,3 @@
app: PYTHONPATH="../" gunicorn -c conf/gunicorn_local.py config_application:application
webpack: npm run watch-config-app
config_app/README.md (Normal file, +106)
@@ -0,0 +1,106 @@
# Quay config tool

The Quay config tool is a project to ease the setup, modification, and deployment of Red Hat Quay (sometimes referred to simply as Quay).

The project was built by [Sam Chow] in the summer of 2018.

[Sam Chow]: https://github.com/blueish

## Project Features
* Isolated setup tool for creating the config
* Ability to download the config as a tarball
* Ability to load your config from a tarball and modify it
* When running on Kubernetes, allows you to deploy your changes to the current cluster and cycle all pods
* When running on Kubernetes, allows you to modify the existing configuration


## Project Layout
- `conf/` - nginx/gunicorn configuration

- `config_endpoints/` - backend Flask endpoints serving the web app and all other API endpoints

- `config_util/` - utilities used by the API endpoints for accessing k8s, etc.

- `config_util/config` - config providers used to manipulate the local config directory before it is tarred, uploaded, etc.

- `docs/` - updated documentation on how to run the app both in a Docker container and on Kubernetes

- `init/` - initial scripts/services started by the Docker image

- `js/` - all frontend JavaScript

- `js/components` - a mix of the old components ported over and new components written for the config tool
  (config-setup-app is the entrypoint of the frontend)

- `js/core-config-setup` - the main component responsible for modification of the config;
  holds most of the components that modify the configuration

- `js/setup` - the modal component that covers the setup of the DB and SuperUser

## Running the config tool
Currently, the config tool is still built alongside the regular Quay container and is started by passing the `config` argument to the image. A password must be
specified; you then enter it in the browser together with the username `quayconfig`.

```bash
docker run {quay-image} config {password}
```

The password can also be specified via the `CONFIG_APP_PASSWORD` environment variable:

```bash
docker run -e CONFIG_APP_PASSWORD={password} {quay-image} config
```


## Local development
If you wish to work on it locally, there's a script in the base directory of quay:
```bash
./local-config-app.sh
```
Webpack is set up for hot reloading, so the JS will be rebuilt as you work on it.
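The hot-reload watcher is the `webpack` entry from the Procfile above; if you ever need to run it on its own (for instance, outside `local-config-app.sh`), the following should be enough, assuming the frontend dependencies are already installed:

```bash
npm run watch-config-app
```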


## Local development on kubernetes
Assuming you're running on minikube, you can build the docker image with the minikube docker daemon:
```bash
eval $(minikube docker-env)
docker build -t config-app .  # run in the quay dir, not quay/config_app
```

You'll now have to create the namespace and config secret (and optionally the quay-enterprise app and nodeport):
- [quay-enterprise-namespace.yml](files/quay-enterprise-namespace.yml)
- [quay-enterprise-config-secret.yml](files/quay-enterprise-config-secret.yml)
- [quay-enterprise-redis.yml](files/quay-enterprise-redis.yml)

(Optional: use these if you're testing the deployment feature on kube)
- [quay-enterprise-app-rc.yml](files/quay-enterprise-app-rc.yml)
- [quay-enterprise-service-nodeport.yml](files/quay-enterprise-service-nodeport.yml)

And the following for the config tool:
- [config-tool-service-nodeport.yml](docs/k8s_templates/config-tool-service-nodeport.yml)
- [config-tool-serviceaccount.yml](docs/k8s_templates/config-tool-serviceaccount.yml)
- [config-tool-servicetoken-role.yml](docs/k8s_templates/config-tool-servicetoken-role.yml)
- [config-tool-servicetoken-role-binding.yml](docs/k8s_templates/config-tool-servicetoken-role-binding.yml)
- [qe-config-tool.yml](docs/k8s_templates/qe-config-tool.yml)

(Note: right now the config-tool template uses the tag `config-tool:latest`, which will be the image you created in the minikube docker daemon.)

Apply all of these to the cluster:
```bash
kubectl apply -f <name-of-file>
```
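If you'd rather not apply the files one at a time, the same result can be had in a few commands (a sketch that assumes you run it from the quay repo root and that the paths listed above are accurate):

```bash
kubectl apply -f files/quay-enterprise-namespace.yml
kubectl apply -f files/quay-enterprise-config-secret.yml
kubectl apply -f files/quay-enterprise-redis.yml
# Applies every config-tool template in the directory:
kubectl apply -f docs/k8s_templates/
```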

You can get minikube to route you to the services:
```bash
minikube service quay-enterprise-config-tool -n quay-enterprise
```

It should open in your default browser.

(Note: the config tool is only served over SSL and self-signs its certificates on startup, so you'll have to use https://\<route>
and click through the warning in your browser to access it.)

When you make changes to the app, you'll have to rebuild the image and cycle the deployment:
```bash
kubectl scale deploy --replicas=0 quay-enterprise-config-tool -n quay-enterprise
kubectl scale deploy --replicas=1 quay-enterprise-config-tool -n quay-enterprise
```
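To confirm that the cycled pods have come back up before retesting, a rollout status check is usually enough (assuming the deployment name used above):

```bash
kubectl rollout status deploy/quay-enterprise-config-tool -n quay-enterprise
```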
config_app/__init__.py (Normal file, +0)
config_app/_init_config.py (Normal file, +38)
@@ -0,0 +1,38 @@
import os
import re
import subprocess

# Note: this currently points to the directory above, since we're in the quay config_app dir
# TODO(config_extract): revert to root directory rather than the one above
ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

CONF_DIR = os.getenv("QUAYCONF", os.path.join(ROOT_DIR, "conf/"))
STATIC_DIR = os.path.join(ROOT_DIR, 'static/')
STATIC_LDN_DIR = os.path.join(STATIC_DIR, 'ldn/')
STATIC_FONTS_DIR = os.path.join(STATIC_DIR, 'fonts/')
TEMPLATE_DIR = os.path.join(ROOT_DIR, 'templates/')
IS_KUBERNETES = 'KUBERNETES_SERVICE_HOST' in os.environ


def _get_version_number_changelog():
  try:
    with open(os.path.join(ROOT_DIR, 'CHANGELOG.md')) as f:
      return re.search(r'(v[0-9]+\.[0-9]+\.[0-9]+)', f.readline()).group(0)
  except IOError:
    return ''


def _get_git_sha():
  if os.path.exists("GIT_HEAD"):
    with open(os.path.join(ROOT_DIR, "GIT_HEAD")) as f:
      return f.read()
  else:
    try:
      return subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()[0:8]
    except (OSError, subprocess.CalledProcessError):
      pass
  return "unknown"


__version__ = _get_version_number_changelog()
__gitrev__ = _get_git_sha()
config_app/c_app.py (Normal file, +47)
@@ -0,0 +1,47 @@
import os
import logging

from flask import Flask

from data import database, model
from util.config.superusermanager import SuperUserManager
from util.ipresolver import NoopIPResolver

from config_app._init_config import ROOT_DIR, IS_KUBERNETES
from config_app.config_util.config import get_config_provider
from util.security.instancekeys import InstanceKeys

app = Flask(__name__)

logger = logging.getLogger(__name__)

OVERRIDE_CONFIG_DIRECTORY = os.path.join(ROOT_DIR, 'config_app/conf/stack')
INIT_SCRIPTS_LOCATION = '/conf/init/'

is_testing = 'TEST' in os.environ
is_kubernetes = IS_KUBERNETES

logger.debug('Configuration is on a kubernetes deployment: %s' % IS_KUBERNETES)

config_provider = get_config_provider(OVERRIDE_CONFIG_DIRECTORY, 'config.yaml', 'config.py',
                                      testing=is_testing)

if is_testing:
  from test.testconfig import TestConfig

  logger.debug('Loading test config.')
  app.config.from_object(TestConfig())
else:
  from config import DefaultConfig

  logger.debug('Loading default config.')
  app.config.from_object(DefaultConfig())
  app.teardown_request(database.close_db_filter)

# Load the override config via the provider.
config_provider.update_app_config(app.config)
superusers = SuperUserManager(app)
ip_resolver = NoopIPResolver()
instance_keys = InstanceKeys(app)

model.config.app_config = app.config
config_app/conf/__init__.py (Normal file, +0)
config_app/conf/dhparams.pem (Normal file, +8)
@@ -0,0 +1,8 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAk7fEh4MFr446aU61ZGxCl8VHvcJhDGcdd+3zaNxdWF7Wvr5QE8zX
QswoM5K2szlK7klcJOXer2IToHHQQn00nuWO3m6quZGV6EPbRmRKfRGa8pzSwH+R
Ph0OUpEQPh7zvegeVwEbrblD7i53ookbHlYGtxsPb28Y06OP5/xpks9C815Zy4gy
tx2yHi4FkFo52yErBF9jD/glsZYVHCo42LFrVGa5/7V0g++fG8yXCrBnqmz2d8FF
uU6/KJcmDCUn1m3mDfcf5HgeXSIsukW/XMZ3l9w1fdluJRwdEE9W2ePgqMiG3eC0
2T1sPfXCdXPQ7/5Gzf1eMtRZ/McipxVbgwIBAg==
-----END DH PARAMETERS-----
config_app/conf/gunicorn_local.py (Normal file, +26)
@@ -0,0 +1,26 @@
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), "../"))

import logging

from Crypto import Random
from config_app.config_util.log import logfile_path


logconfig = logfile_path(debug=True)
bind = '0.0.0.0:5000'
workers = 1
worker_class = 'gevent'
daemon = False
pythonpath = '.'
preload_app = True

def post_fork(server, worker):
  # Reset the Random library to ensure it won't raise the "PID check failed." error after
  # gunicorn forks.
  Random.atfork()

def when_ready(server):
  logger = logging.getLogger(__name__)
  logger.debug('Starting local gunicorn with %s workers and %s worker class', workers, worker_class)
config_app/conf/gunicorn_web.py (Normal file, +26)
@@ -0,0 +1,26 @@
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), "../"))

import logging

from Crypto import Random
from config_app.config_util.log import logfile_path


logconfig = logfile_path(debug=True)

bind = 'unix:/tmp/gunicorn_web.sock'
workers = 1
worker_class = 'gevent'
pythonpath = '.'
preload_app = True

def post_fork(server, worker):
  # Reset the Random library to ensure it won't raise the "PID check failed." error after
  # gunicorn forks.
  Random.atfork()

def when_ready(server):
  logger = logging.getLogger(__name__)
  logger.debug('Starting local gunicorn with %s workers and %s worker class', workers, worker_class)
config_app/conf/htpasswd (Normal file, +1)
@@ -0,0 +1 @@
quayconfig:
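The password field here is left empty in the committed file and is presumably filled in at container startup from the password you supply. Purely as a hypothetical illustration (not necessarily how the init scripts actually do it), an equivalent entry could be generated with the Apache `htpasswd` tool:

```bash
# Hypothetical: write a bcrypt-hashed basic-auth entry for the quayconfig user.
htpasswd -Bb config_app/conf/htpasswd quayconfig "$CONFIG_APP_PASSWORD"
```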
config_app/conf/http-base.conf (Normal file, +49)
@@ -0,0 +1,49 @@
# vim: ft=nginx
|
||||
|
||||
set_real_ip_from 0.0.0.0/0;
|
||||
real_ip_recursive on;
|
||||
log_format lb_logs '$remote_addr ($proxy_protocol_addr) '
|
||||
'- $remote_user [$time_local] '
|
||||
'"$request" $status $body_bytes_sent '
|
||||
'"$http_referer" "$http_user_agent" '
|
||||
'($request_time $request_length $upstream_response_time)';
|
||||
|
||||
types_hash_max_size 2048;
|
||||
include /etc/opt/rh/rh-nginx112/nginx/mime.types;
|
||||
|
||||
default_type application/octet-stream;
|
||||
|
||||
access_log /var/log/nginx/access.log;
|
||||
error_log /var/log/nginx/error.log;
|
||||
client_body_temp_path /tmp/nginx 1 2;
|
||||
proxy_temp_path /tmp/nginx-proxy;
|
||||
fastcgi_temp_path /tmp/nginx-fastcgi;
|
||||
uwsgi_temp_path /tmp/nginx-uwsgi;
|
||||
scgi_temp_path /tmp/nginx-scgi;
|
||||
|
||||
sendfile on;
|
||||
|
||||
gzip on;
|
||||
gzip_http_version 1.0;
|
||||
gzip_proxied any;
|
||||
gzip_min_length 500;
|
||||
gzip_disable "MSIE [1-6]\.";
|
||||
gzip_types text/plain text/xml text/css
|
||||
text/javascript application/x-javascript
|
||||
application/javascript image/svg+xml
|
||||
application/octet-stream;
|
||||
|
||||
map $proxy_protocol_addr $proper_forwarded_for {
|
||||
"" $proxy_add_x_forwarded_for;
|
||||
default $proxy_protocol_addr;
|
||||
}
|
||||
|
||||
map $http_x_forwarded_proto $proper_scheme {
|
||||
default $scheme;
|
||||
https https;
|
||||
}
|
||||
|
||||
upstream web_app_server {
|
||||
server unix:/tmp/gunicorn_web.sock fail_timeout=0;
|
||||
}
|
||||
|
config_app/conf/logging.conf (Normal file, +33)
@@ -0,0 +1,33 @@
[loggers]
|
||||
keys=root,gunicorn.error,gunicorn.access
|
||||
|
||||
[handlers]
|
||||
keys=console
|
||||
|
||||
[formatters]
|
||||
keys=generic,json
|
||||
|
||||
[logger_root]
|
||||
level=INFO
|
||||
handlers=console
|
||||
|
||||
[handler_console]
|
||||
class=StreamHandler
|
||||
formatter=generic
|
||||
args=(sys.stdout, )
|
||||
|
||||
[formatter_generic]
|
||||
format=%(asctime)s [%(process)d] [%(levelname)s] [%(name)s] %(message)s
|
||||
class=logging.Formatter
|
||||
|
||||
[logger_gunicorn.error]
|
||||
level=ERROR
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.error
|
||||
|
||||
[logger_gunicorn.access]
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.access
|
||||
level=DEBUG
|
config_app/conf/logging_debug.conf (Normal file, +38)
@@ -0,0 +1,38 @@
[loggers]
|
||||
keys=root,boto,gunicorn.error,gunicorn.access
|
||||
|
||||
[handlers]
|
||||
keys=console
|
||||
|
||||
[formatters]
|
||||
keys=generic,json
|
||||
|
||||
[logger_root]
|
||||
level=DEBUG
|
||||
handlers=console
|
||||
|
||||
[logger_boto]
|
||||
level=INFO
|
||||
handlers=console
|
||||
qualname=boto
|
||||
|
||||
[logger_gunicorn.access]
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.access
|
||||
level=DEBUG
|
||||
|
||||
[handler_console]
|
||||
class=StreamHandler
|
||||
formatter=generic
|
||||
args=(sys.stdout, )
|
||||
|
||||
[logger_gunicorn.error]
|
||||
level=ERROR
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.error
|
||||
|
||||
[formatter_generic]
|
||||
format=%(asctime)s [%(process)d] [%(levelname)s] [%(name)s] %(message)s
|
||||
class=logging.Formatter
|
config_app/conf/logging_debug_json.conf (Normal file, +38)
@@ -0,0 +1,38 @@
[loggers]
|
||||
keys=root,boto,gunicorn.error,gunicorn.access
|
||||
|
||||
[handlers]
|
||||
keys=console
|
||||
|
||||
[formatters]
|
||||
keys=generic,json
|
||||
|
||||
[logger_root]
|
||||
level=DEBUG
|
||||
handlers=console
|
||||
|
||||
[logger_boto]
|
||||
level=INFO
|
||||
handlers=console
|
||||
qualname=boto
|
||||
|
||||
[logger_gunicorn.access]
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.access
|
||||
level=DEBUG
|
||||
|
||||
[handler_console]
|
||||
class=StreamHandler
|
||||
formatter=json
|
||||
args=(sys.stdout, )
|
||||
|
||||
[logger_gunicorn.error]
|
||||
level=ERROR
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.error
|
||||
|
||||
[formatter_generic]
|
||||
format=%(asctime)s [%(process)d] [%(levelname)s] [%(name)s] %(message)s
|
||||
class=logging.Formatter
|
config_app/conf/logging_json.conf (Normal file, +33)
@@ -0,0 +1,33 @@
[loggers]
|
||||
keys=root,gunicorn.error,gunicorn.access
|
||||
|
||||
[handlers]
|
||||
keys=console
|
||||
|
||||
[formatters]
|
||||
keys=json,generic
|
||||
|
||||
[logger_root]
|
||||
level=INFO
|
||||
handlers=console
|
||||
|
||||
[handler_console]
|
||||
class=StreamHandler
|
||||
formatter=json
|
||||
args=(sys.stdout, )
|
||||
|
||||
[formatter_generic]
|
||||
format=%(asctime)s [%(process)d] [%(levelname)s] [%(name)s] %(message)s
|
||||
class=logging.Formatter
|
||||
|
||||
[logger_gunicorn.error]
|
||||
level=ERROR
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.error
|
||||
|
||||
[logger_gunicorn.access]
|
||||
handlers=console
|
||||
propagate=0
|
||||
qualname=gunicorn.access
|
||||
level=DEBUG
|
config_app/conf/nginx.conf (Normal file, +26)
@@ -0,0 +1,26 @@
# vim: ft=nginx

include root-base.conf;

http {
  include http-base.conf;

  ssl_certificate /quay-registry/config_app/quay-config.cert;
  ssl_certificate_key /quay-registry/config_app/quay-config.key;
  ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
  ssl_protocols TLSv1.1 TLSv1.2;
  ssl_session_cache shared:SSL:60m;
  ssl_session_timeout 2h;
  ssl_session_tickets on;
  ssl_prefer_server_ciphers on;
  ssl_dhparam dhparams.pem;

  server {
    include server-base.conf;

    listen 8443 ssl http2 default;

    access_log /var/log/nginx/access.log lb_logs;
    error_log /var/log/nginx/error.log warn;
  }
}
config_app/conf/root-base.conf (Normal file, +15)
@@ -0,0 +1,15 @@
# vim: ft=nginx

pid /tmp/nginx.pid;
error_log /var/log/nginx/error.log;

worker_processes auto;
worker_priority -10;
worker_rlimit_nofile 10240;

daemon off;

events {
  worker_connections 10240;
  accept_mutex off;
}
config_app/conf/server-base.conf (Normal file, +21)
@@ -0,0 +1,21 @@
# vim: ft=nginx

server_name _;

# Proxy Headers
proxy_set_header X-Forwarded-For $proper_forwarded_for;
proxy_set_header X-Forwarded-Proto $proper_scheme;
proxy_set_header Host $host;
proxy_redirect off;

proxy_set_header Transfer-Encoding $http_transfer_encoding;

# The DB migrations sometimes take a while, so increase timeout so we don't report an error.
proxy_read_timeout 500s;

location / {
  auth_basic "Quay config tool";
  auth_basic_user_file htpasswd;
  proxy_pass http://web_app_server;
}
config_app/conf/supervisord.conf (Normal file, +44)
@@ -0,0 +1,44 @@
; TODO: Dockerfile - pip install supervisor supervisor-stdout
|
||||
|
||||
[supervisord]
|
||||
nodaemon=true
|
||||
|
||||
[unix_http_server]
|
||||
file=%(ENV_QUAYDIR)s/config_app/conf/supervisord.sock
|
||||
user=root
|
||||
|
||||
[supervisorctl]
|
||||
serverurl=unix:///%(ENV_QUAYDIR)s/config_app/conf/supervisord.sock
|
||||
|
||||
[rpcinterface:supervisor]
|
||||
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
|
||||
|
||||
[eventlistener:stdout]
|
||||
environment=
|
||||
PYTHONPATH=%(ENV_QUAYDIR)s
|
||||
command = supervisor_stdout
|
||||
buffer_size = 1024
|
||||
events = PROCESS_LOG
|
||||
result_handler = supervisor_stdout:event_handler
|
||||
|
||||
[program:gunicorn-config]
|
||||
environment=
|
||||
PYTHONPATH=%(ENV_QUAYDIR)s
|
||||
command=gunicorn -c %(ENV_QUAYDIR)s/config_app/conf/gunicorn_web.py config_application:application
|
||||
stdout_logfile=/dev/stdout
|
||||
stdout_logfile_maxbytes=0
|
||||
stderr_logfile=/dev/stdout
|
||||
stderr_logfile_maxbytes=0
|
||||
stdout_events_enabled = true
|
||||
stderr_events_enabled = true
|
||||
|
||||
[program:nginx]
|
||||
environment=
|
||||
PYTHONPATH=%(ENV_QUAYDIR)s
|
||||
command=nginx -c %(ENV_QUAYDIR)s/config_app/conf/nginx.conf
|
||||
stdout_logfile=/dev/stdout
|
||||
stdout_logfile_maxbytes=0
|
||||
stderr_logfile=/dev/stdout
|
||||
stderr_logfile_maxbytes=0
|
||||
stdout_events_enabled = true
|
||||
stderr_events_enabled = true
|
config_app/config_application.py (Normal file, +8)
@@ -0,0 +1,8 @@
import logging.config

from config_app.c_app import app as application
from config_app.config_util.log import logfile_path

# Bind all of the blueprints
import config_web

if __name__ == '__main__':
  logging.config.fileConfig(logfile_path(debug=True), disable_existing_loggers=False)
  application.run(port=5000, debug=True, threaded=True, host='0.0.0.0')
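If you want to run this development entrypoint directly (without gunicorn or supervisord), a minimal invocation, assuming you are inside the `config_app/` directory and reuse the PYTHONPATH the Procfile sets, would be:

```bash
PYTHONPATH="../" python config_application.py  # serves on 0.0.0.0:5000 with debug enabled
```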
config_app/config_endpoints/__init__.py (Normal file, +0)
config_app/config_endpoints/api/__init__.py (Normal file, +158)
@@ -0,0 +1,158 @@
import logging
|
||||
|
||||
from flask import Blueprint, request, abort
|
||||
from flask_restful import Resource, Api
|
||||
from flask_restful.utils.cors import crossdomain
|
||||
from data import model
|
||||
from email.utils import formatdate
|
||||
from calendar import timegm
|
||||
from functools import partial, wraps
|
||||
from jsonschema import validate, ValidationError
|
||||
|
||||
from config_app.c_app import app, IS_KUBERNETES
|
||||
from config_app.config_endpoints.exception import InvalidResponse, InvalidRequest
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
api_bp = Blueprint('api', __name__)
|
||||
|
||||
CROSS_DOMAIN_HEADERS = ['Authorization', 'Content-Type', 'X-Requested-With']
|
||||
|
||||
|
||||
class ApiExceptionHandlingApi(Api):
|
||||
pass
|
||||
|
||||
@crossdomain(origin='*', headers=CROSS_DOMAIN_HEADERS)
|
||||
def handle_error(self, error):
|
||||
return super(ApiExceptionHandlingApi, self).handle_error(error)
|
||||
|
||||
|
||||
api = ApiExceptionHandlingApi()
|
||||
|
||||
api.init_app(api_bp)
|
||||
|
||||
def log_action(kind, user_or_orgname, metadata=None, repo=None, repo_name=None):
|
||||
if not metadata:
|
||||
metadata = {}
|
||||
|
||||
if repo:
|
||||
repo_name = repo.name
|
||||
|
||||
model.log.log_action(kind, user_or_orgname, repo_name, user_or_orgname, request.remote_addr, metadata)
|
||||
|
||||
def format_date(date):
|
||||
""" Output an RFC822 date format. """
|
||||
if date is None:
|
||||
return None
|
||||
return formatdate(timegm(date.utctimetuple()))
|
||||
|
||||
|
||||
|
||||
def resource(*urls, **kwargs):
|
||||
def wrapper(api_resource):
|
||||
if not api_resource:
|
||||
return None
|
||||
|
||||
api_resource.registered = True
|
||||
api.add_resource(api_resource, *urls, **kwargs)
|
||||
return api_resource
|
||||
|
||||
return wrapper
|
||||
|
||||
|
||||
class ApiResource(Resource):
|
||||
registered = False
|
||||
method_decorators = []
|
||||
|
||||
def options(self):
|
||||
return None, 200
|
||||
|
||||
|
||||
def add_method_metadata(name, value):
|
||||
def modifier(func):
|
||||
if func is None:
|
||||
return None
|
||||
|
||||
if '__api_metadata' not in dir(func):
|
||||
func.__api_metadata = {}
|
||||
func.__api_metadata[name] = value
|
||||
return func
|
||||
|
||||
return modifier
|
||||
|
||||
|
||||
def method_metadata(func, name):
|
||||
if func is None:
|
||||
return None
|
||||
|
||||
if '__api_metadata' in dir(func):
|
||||
return func.__api_metadata.get(name, None)
|
||||
return None
|
||||
|
||||
|
||||
def no_cache(f):
|
||||
@wraps(f)
|
||||
def add_no_cache(*args, **kwargs):
|
||||
response = f(*args, **kwargs)
|
||||
if response is not None:
|
||||
response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
|
||||
return response
|
||||
return add_no_cache
|
||||
|
||||
|
||||
def define_json_response(schema_name):
|
||||
def wrapper(func):
|
||||
@add_method_metadata('response_schema', schema_name)
|
||||
@wraps(func)
|
||||
def wrapped(self, *args, **kwargs):
|
||||
schema = self.schemas[schema_name]
|
||||
resp = func(self, *args, **kwargs)
|
||||
|
||||
if app.config['TESTING']:
|
||||
try:
|
||||
validate(resp, schema)
|
||||
except ValidationError as ex:
|
||||
raise InvalidResponse(ex.message)
|
||||
|
||||
return resp
|
||||
return wrapped
|
||||
return wrapper
|
||||
|
||||
|
||||
def validate_json_request(schema_name, optional=False):
|
||||
def wrapper(func):
|
||||
@add_method_metadata('request_schema', schema_name)
|
||||
@wraps(func)
|
||||
def wrapped(self, *args, **kwargs):
|
||||
schema = self.schemas[schema_name]
|
||||
try:
|
||||
json_data = request.get_json()
|
||||
if json_data is None:
|
||||
if not optional:
|
||||
raise InvalidRequest('Missing JSON body')
|
||||
else:
|
||||
validate(json_data, schema)
|
||||
return func(self, *args, **kwargs)
|
||||
except ValidationError as ex:
|
||||
raise InvalidRequest(ex.message)
|
||||
return wrapped
|
||||
return wrapper
|
||||
|
||||
def kubernetes_only(f):
|
||||
""" Aborts the request with a 400 if the app is not running on kubernetes """
|
||||
@wraps(f)
|
||||
def abort_if_not_kube(*args, **kwargs):
|
||||
if not IS_KUBERNETES:
|
||||
abort(400)
|
||||
|
||||
return f(*args, **kwargs)
|
||||
return abort_if_not_kube
|
||||
|
||||
nickname = partial(add_method_metadata, 'nickname')
|
||||
|
||||
|
||||
import config_app.config_endpoints.api.discovery
|
||||
import config_app.config_endpoints.api.kube_endpoints
|
||||
import config_app.config_endpoints.api.suconfig
|
||||
import config_app.config_endpoints.api.superuser
|
||||
import config_app.config_endpoints.api.tar_config_loader
|
||||
import config_app.config_endpoints.api.user
|
config_app/config_endpoints/api/discovery.py (Normal file, +254)
@@ -0,0 +1,254 @@
# TODO to extract the discovery stuff into a util at the top level and then use it both here and old discovery.py
|
||||
import logging
|
||||
import sys
|
||||
from collections import OrderedDict
|
||||
|
||||
from config_app.c_app import app
|
||||
from config_app.config_endpoints.api import method_metadata
|
||||
from config_app.config_endpoints.common import fully_qualified_name, PARAM_REGEX, TYPE_CONVERTER
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def generate_route_data():
|
||||
include_internal = True
|
||||
compact = True
|
||||
|
||||
def swagger_parameter(name, description, kind='path', param_type='string', required=True,
|
||||
enum=None, schema=None):
|
||||
# https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md#parameterObject
|
||||
parameter_info = {
|
||||
'name': name,
|
||||
'in': kind,
|
||||
'required': required
|
||||
}
|
||||
|
||||
if schema:
|
||||
parameter_info['schema'] = {
|
||||
'$ref': '#/definitions/%s' % schema
|
||||
}
|
||||
else:
|
||||
parameter_info['type'] = param_type
|
||||
|
||||
if enum is not None and len(list(enum)) > 0:
|
||||
parameter_info['enum'] = list(enum)
|
||||
|
||||
return parameter_info
|
||||
|
||||
paths = {}
|
||||
models = {}
|
||||
tags = []
|
||||
tags_added = set()
|
||||
operation_ids = set()
|
||||
|
||||
for rule in app.url_map.iter_rules():
|
||||
endpoint_method = app.view_functions[rule.endpoint]
|
||||
|
||||
# Verify that we have a view class for this API method.
|
||||
if not 'view_class' in dir(endpoint_method):
|
||||
continue
|
||||
|
||||
view_class = endpoint_method.view_class
|
||||
|
||||
# Hide the class if it is internal.
|
||||
internal = method_metadata(view_class, 'internal')
|
||||
if not include_internal and internal:
|
||||
continue
|
||||
|
||||
# Build the tag.
|
||||
parts = fully_qualified_name(view_class).split('.')
|
||||
tag_name = parts[-2]
|
||||
if not tag_name in tags_added:
|
||||
tags_added.add(tag_name)
|
||||
tags.append({
|
||||
'name': tag_name,
|
||||
'description': (sys.modules[view_class.__module__].__doc__ or '').strip()
|
||||
})
|
||||
|
||||
# Build the Swagger data for the path.
|
||||
swagger_path = PARAM_REGEX.sub(r'{\2}', rule.rule)
|
||||
full_name = fully_qualified_name(view_class)
|
||||
path_swagger = {
|
||||
'x-name': full_name,
|
||||
'x-path': swagger_path,
|
||||
'x-tag': tag_name
|
||||
}
|
||||
|
||||
related_user_res = method_metadata(view_class, 'related_user_resource')
|
||||
if related_user_res is not None:
|
||||
path_swagger['x-user-related'] = fully_qualified_name(related_user_res)
|
||||
|
||||
paths[swagger_path] = path_swagger
|
||||
|
||||
# Add any global path parameters.
|
||||
param_data_map = view_class.__api_path_params if '__api_path_params' in dir(
|
||||
view_class) else {}
|
||||
if param_data_map:
|
||||
path_parameters_swagger = []
|
||||
for path_parameter in param_data_map:
|
||||
description = param_data_map[path_parameter].get('description')
|
||||
path_parameters_swagger.append(swagger_parameter(path_parameter, description))
|
||||
|
||||
path_swagger['parameters'] = path_parameters_swagger
|
||||
|
||||
# Add the individual HTTP operations.
|
||||
method_names = list(rule.methods.difference(['HEAD', 'OPTIONS']))
|
||||
for method_name in method_names:
|
||||
# https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md#operation-object
|
||||
method = getattr(view_class, method_name.lower(), None)
|
||||
if method is None:
|
||||
logger.debug('Unable to find method for %s in class %s', method_name, view_class)
|
||||
continue
|
||||
|
||||
operationId = method_metadata(method, 'nickname')
|
||||
operation_swagger = {
|
||||
'operationId': operationId,
|
||||
'parameters': [],
|
||||
}
|
||||
|
||||
if operationId is None:
|
||||
continue
|
||||
|
||||
if operationId in operation_ids:
|
||||
raise Exception('Duplicate operation Id: %s' % operationId)
|
||||
|
||||
operation_ids.add(operationId)
|
||||
|
||||
# Mark the method as internal.
|
||||
internal = method_metadata(method, 'internal')
|
||||
if internal is not None:
|
||||
operation_swagger['x-internal'] = True
|
||||
|
||||
if include_internal:
|
||||
requires_fresh_login = method_metadata(method, 'requires_fresh_login')
|
||||
if requires_fresh_login is not None:
|
||||
operation_swagger['x-requires-fresh-login'] = True
|
||||
|
||||
# Add the path parameters.
|
||||
if rule.arguments:
|
||||
for path_parameter in rule.arguments:
|
||||
description = param_data_map.get(path_parameter, {}).get('description')
|
||||
operation_swagger['parameters'].append(
|
||||
swagger_parameter(path_parameter, description))
|
||||
|
||||
# Add the query parameters.
|
||||
if '__api_query_params' in dir(method):
|
||||
for query_parameter_info in method.__api_query_params:
|
||||
name = query_parameter_info['name']
|
||||
description = query_parameter_info['help']
|
||||
param_type = TYPE_CONVERTER[query_parameter_info['type']]
|
||||
required = query_parameter_info['required']
|
||||
|
||||
operation_swagger['parameters'].append(
|
||||
swagger_parameter(name, description, kind='query',
|
||||
param_type=param_type,
|
||||
required=required,
|
||||
enum=query_parameter_info['choices']))
|
||||
|
||||
# Add the OAuth security block.
|
||||
# https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md#securityRequirementObject
|
||||
scope = method_metadata(method, 'oauth2_scope')
|
||||
if scope and not compact:
|
||||
operation_swagger['security'] = [{'oauth2_implicit': [scope.scope]}]
|
||||
|
||||
# Add the responses block.
|
||||
# https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md#responsesObject
|
||||
response_schema_name = method_metadata(method, 'response_schema')
|
||||
if not compact:
|
||||
if response_schema_name:
|
||||
models[response_schema_name] = view_class.schemas[response_schema_name]
|
||||
|
||||
models['ApiError'] = {
|
||||
'type': 'object',
|
||||
'properties': {
|
||||
'status': {
|
||||
'type': 'integer',
|
||||
'description': 'Status code of the response.'
|
||||
},
|
||||
'type': {
|
||||
'type': 'string',
|
||||
'description': 'Reference to the type of the error.'
|
||||
},
|
||||
'detail': {
|
||||
'type': 'string',
|
||||
'description': 'Details about the specific instance of the error.'
|
||||
},
|
||||
'title': {
|
||||
'type': 'string',
|
||||
'description': 'Unique error code to identify the type of error.'
|
||||
},
|
||||
'error_message': {
|
||||
'type': 'string',
|
||||
'description': 'Deprecated; alias for detail'
|
||||
},
|
||||
'error_type': {
|
||||
'type': 'string',
|
||||
'description': 'Deprecated; alias for detail'
|
||||
}
|
||||
},
|
||||
'required': [
|
||||
'status',
|
||||
'type',
|
||||
'title',
|
||||
]
|
||||
}
|
||||
|
||||
responses = {
|
||||
'400': {
|
||||
'description': 'Bad Request',
|
||||
},
|
||||
|
||||
'401': {
|
||||
'description': 'Session required',
|
||||
},
|
||||
|
||||
'403': {
|
||||
'description': 'Unauthorized access',
|
||||
},
|
||||
|
||||
'404': {
|
||||
'description': 'Not found',
|
||||
},
|
||||
}
|
||||
|
||||
for _, body in responses.items():
|
||||
body['schema'] = {'$ref': '#/definitions/ApiError'}
|
||||
|
||||
if method_name == 'DELETE':
|
||||
responses['204'] = {
|
||||
'description': 'Deleted'
|
||||
}
|
||||
elif method_name == 'POST':
|
||||
responses['201'] = {
|
||||
'description': 'Successful creation'
|
||||
}
|
||||
else:
|
||||
responses['200'] = {
|
||||
'description': 'Successful invocation'
|
||||
}
|
||||
|
||||
if response_schema_name:
|
||||
responses['200']['schema'] = {
|
||||
'$ref': '#/definitions/%s' % response_schema_name
|
||||
}
|
||||
|
||||
operation_swagger['responses'] = responses
|
||||
|
||||
# Add the request block.
|
||||
request_schema_name = method_metadata(method, 'request_schema')
|
||||
if request_schema_name and not compact:
|
||||
models[request_schema_name] = view_class.schemas[request_schema_name]
|
||||
|
||||
operation_swagger['parameters'].append(
|
||||
swagger_parameter('body', 'Request body contents.', kind='body',
|
||||
schema=request_schema_name))
|
||||
|
||||
# Add the operation to the parent path.
|
||||
if not internal or (internal and include_internal):
|
||||
path_swagger[method_name.lower()] = operation_swagger
|
||||
|
||||
tags.sort(key=lambda t: t['name'])
|
||||
paths = OrderedDict(sorted(paths.items(), key=lambda p: p[1]['x-tag']))
|
||||
|
||||
if compact:
|
||||
return {'paths': paths}
|
config_app/config_endpoints/api/kube_endpoints.py (Normal file, +143)
@@ -0,0 +1,143 @@
import logging
|
||||
|
||||
from flask import request, make_response
|
||||
|
||||
from config_app.config_util.config import get_config_as_kube_secret
|
||||
from data.database import configure
|
||||
|
||||
from config_app.c_app import app, config_provider
|
||||
from config_app.config_endpoints.api import resource, ApiResource, nickname, kubernetes_only, validate_json_request
|
||||
from config_app.config_util.k8saccessor import KubernetesAccessorSingleton, K8sApiException
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@resource('/v1/kubernetes/deployments/')
|
||||
class SuperUserKubernetesDeployment(ApiResource):
|
||||
""" Resource for the getting the status of Red Hat Quay deployments and cycling them """
|
||||
schemas = {
|
||||
'ValidateDeploymentNames': {
|
||||
'type': 'object',
|
||||
'description': 'Validates deployment names for cycling',
|
||||
'required': [
|
||||
'deploymentNames'
|
||||
],
|
||||
'properties': {
|
||||
'deploymentNames': {
|
||||
'type': 'array',
|
||||
'description': 'The names of the deployments to cycle'
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
@kubernetes_only
|
||||
@nickname('scGetNumDeployments')
|
||||
def get(self):
|
||||
return KubernetesAccessorSingleton.get_instance().get_qe_deployments()
|
||||
|
||||
@kubernetes_only
|
||||
@validate_json_request('ValidateDeploymentNames')
|
||||
@nickname('scCycleQEDeployments')
|
||||
def put(self):
|
||||
deployment_names = request.get_json()['deploymentNames']
|
||||
return KubernetesAccessorSingleton.get_instance().cycle_qe_deployments(deployment_names)
|
||||
|
||||
|
||||
@resource('/v1/kubernetes/deployment/<deployment>/status')
|
||||
class QEDeploymentRolloutStatus(ApiResource):
|
||||
@kubernetes_only
|
||||
@nickname('scGetDeploymentRolloutStatus')
|
||||
def get(self, deployment):
|
||||
deployment_rollout_status = KubernetesAccessorSingleton.get_instance().get_deployment_rollout_status(deployment)
|
||||
return {
|
||||
'status': deployment_rollout_status.status,
|
||||
'message': deployment_rollout_status.message,
|
||||
}
|
||||
|
||||
|
||||
@resource('/v1/kubernetes/deployments/rollback')
|
||||
class QEDeploymentRollback(ApiResource):
|
||||
""" Resource for rolling back deployments """
|
||||
schemas = {
|
||||
'ValidateDeploymentNames': {
|
||||
'type': 'object',
|
||||
'description': 'Validates deployment names for rolling back',
|
||||
'required': [
|
||||
'deploymentNames'
|
||||
],
|
||||
'properties': {
|
||||
'deploymentNames': {
|
||||
'type': 'array',
|
||||
'description': 'The names of the deployments to rollback'
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
@kubernetes_only
|
||||
@nickname('scRollbackDeployments')
|
||||
@validate_json_request('ValidateDeploymentNames')
|
||||
def post(self):
|
||||
"""
|
||||
Returns the config to its original state and rolls back deployments
|
||||
:return:
|
||||
"""
|
||||
deployment_names = request.get_json()['deploymentNames']
|
||||
|
||||
# To roll back a deployment, we must do 2 things:
|
||||
# 1. Roll back the config secret to its old value (discarding changes we made in this session)
|
||||
# 2. Trigger a rollback to the previous revision, so that the pods will be restarted with
|
||||
# the old config
|
||||
old_secret = get_config_as_kube_secret(config_provider.get_old_config_dir())
|
||||
kube_accessor = KubernetesAccessorSingleton.get_instance()
|
||||
kube_accessor.replace_qe_secret(old_secret)
|
||||
|
||||
try:
|
||||
for name in deployment_names:
|
||||
kube_accessor.rollback_deployment(name)
|
||||
except K8sApiException as e:
|
||||
logger.exception('Failed to rollback deployment.')
|
||||
return make_response(e.message, 503)
|
||||
|
||||
return make_response('Ok', 204)
|
||||
|
||||
|
||||
@resource('/v1/kubernetes/config')
|
||||
class SuperUserKubernetesConfiguration(ApiResource):
|
||||
""" Resource for saving the config files to kubernetes secrets. """
|
||||
|
||||
@kubernetes_only
|
||||
@nickname('scDeployConfiguration')
|
||||
def post(self):
|
||||
try:
|
||||
new_secret = get_config_as_kube_secret(config_provider.get_config_dir_path())
|
||||
KubernetesAccessorSingleton.get_instance().replace_qe_secret(new_secret)
|
||||
except K8sApiException as e:
|
||||
logger.exception('Failed to deploy qe config secret to kubernetes.')
|
||||
return make_response(e.message, 503)
|
||||
|
||||
return make_response('Ok', 201)
|
||||
|
||||
|
||||
@resource('/v1/kubernetes/config/populate')
|
||||
class KubernetesConfigurationPopulator(ApiResource):
|
||||
""" Resource for populating the local configuration from the cluster's kubernetes secrets. """
|
||||
|
||||
@kubernetes_only
|
||||
@nickname('scKubePopulateConfig')
|
||||
def post(self):
|
||||
# Get a clean transient directory to write the config into
|
||||
config_provider.new_config_dir()
|
||||
|
||||
kube_accessor = KubernetesAccessorSingleton.get_instance()
|
||||
kube_accessor.save_secret_to_directory(config_provider.get_config_dir_path())
|
||||
config_provider.create_copy_of_config_dir()
|
||||
|
||||
# We update the db configuration to connect to their specified one
|
||||
# (Note, even if this DB isn't valid, it won't affect much in the config app, since we'll report an error,
|
||||
# and all of the options create a new clean dir, so we'll never pollute configs)
|
||||
combined = dict(**app.config)
|
||||
combined.update(config_provider.get_config())
|
||||
configure(combined)
|
||||
|
||||
return 200
|
config_app/config_endpoints/api/suconfig.py (Normal file, +302)
@@ -0,0 +1,302 @@
import logging
|
||||
|
||||
from flask import abort, request
|
||||
|
||||
from config_app.config_endpoints.api.suconfig_models_pre_oci import pre_oci_model as model
|
||||
from config_app.config_endpoints.api import resource, ApiResource, nickname, validate_json_request
|
||||
from config_app.c_app import (app, config_provider, superusers, ip_resolver,
|
||||
instance_keys, INIT_SCRIPTS_LOCATION)
|
||||
|
||||
from data.database import configure
|
||||
from data.runmigration import run_alembic_migration
|
||||
from util.config.configutil import add_enterprise_config_defaults
|
||||
from util.config.validator import validate_service_for_config, ValidatorContext, \
|
||||
is_valid_config_upload_filename
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def database_is_valid():
|
||||
""" Returns whether the database, as configured, is valid. """
|
||||
return model.is_valid()
|
||||
|
||||
|
||||
def database_has_users():
|
||||
""" Returns whether the database has any users defined. """
|
||||
return model.has_users()
|
||||
|
||||
|
||||
@resource('/v1/superuser/config')
|
||||
class SuperUserConfig(ApiResource):
|
||||
""" Resource for fetching and updating the current configuration, if any. """
|
||||
schemas = {
|
||||
'UpdateConfig': {
|
||||
'type': 'object',
|
||||
'description': 'Updates the YAML config file',
|
||||
'required': [
|
||||
'config',
|
||||
],
|
||||
'properties': {
|
||||
'config': {
|
||||
'type': 'object'
|
||||
},
|
||||
'password': {
|
||||
'type': 'string'
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@nickname('scGetConfig')
|
||||
def get(self):
|
||||
""" Returns the currently defined configuration, if any. """
|
||||
config_object = config_provider.get_config()
|
||||
return {
|
||||
'config': config_object
|
||||
}
|
||||
|
||||
@nickname('scUpdateConfig')
|
||||
@validate_json_request('UpdateConfig')
|
||||
def put(self):
|
||||
""" Updates the config override file. """
|
||||
# Note: This method is called to set the database configuration before super users exists,
|
||||
# so we also allow it to be called if there is no valid registry configuration setup.
|
||||
config_object = request.get_json()['config']
|
||||
|
||||
# Add any enterprise defaults missing from the config.
|
||||
add_enterprise_config_defaults(config_object, app.config['SECRET_KEY'])
|
||||
|
||||
# Write the configuration changes to the config override file.
|
||||
config_provider.save_config(config_object)
|
||||
|
||||
# now try to connect to the db provided in their config to validate it works
|
||||
combined = dict(**app.config)
|
||||
combined.update(config_provider.get_config())
|
||||
configure(combined, testing=app.config['TESTING'])
|
||||
|
||||
return {
|
||||
'exists': True,
|
||||
'config': config_object
|
||||
}
|
||||
|
||||
|
||||
@resource('/v1/superuser/registrystatus')
|
||||
class SuperUserRegistryStatus(ApiResource):
|
||||
""" Resource for determining the status of the registry, such as if config exists,
|
||||
if a database is configured, and if it has any defined users.
|
||||
"""
|
||||
|
||||
@nickname('scRegistryStatus')
|
||||
def get(self):
|
||||
""" Returns the status of the registry. """
|
||||
# If there is no config file, we need to setup the database.
|
||||
if not config_provider.config_exists():
|
||||
return {
|
||||
'status': 'config-db'
|
||||
}
|
||||
|
||||
# If the database isn't yet valid, then we need to set it up.
|
||||
if not database_is_valid():
|
||||
return {
|
||||
'status': 'setup-db'
|
||||
}
|
||||
|
||||
config = config_provider.get_config()
|
||||
if config and config.get('SETUP_COMPLETE'):
|
||||
return {
|
||||
'status': 'config'
|
||||
}
|
||||
|
||||
return {
|
||||
'status': 'create-superuser' if not database_has_users() else 'config'
|
||||
}
|
||||
|
||||
|
||||
class _AlembicLogHandler(logging.Handler):
|
||||
def __init__(self):
|
||||
super(_AlembicLogHandler, self).__init__()
|
||||
self.records = []
|
||||
|
||||
def emit(self, record):
|
||||
self.records.append({
|
||||
'level': record.levelname,
|
||||
'message': record.getMessage()
|
||||
})
|
||||
|
||||
|
||||
def _reload_config():
|
||||
combined = dict(**app.config)
|
||||
combined.update(config_provider.get_config())
|
||||
configure(combined)
|
||||
return combined
|
||||
|
||||
|
||||
@resource('/v1/superuser/setupdb')
|
||||
class SuperUserSetupDatabase(ApiResource):
|
||||
""" Resource for invoking alembic to setup the database. """
|
||||
|
||||
@nickname('scSetupDatabase')
|
||||
def get(self):
|
||||
""" Invokes the alembic upgrade process. """
|
||||
# Note: This method is called after the database configured is saved, but before the
|
||||
# database has any tables. Therefore, we only allow it to be run in that unique case.
|
||||
if config_provider.config_exists() and not database_is_valid():
|
||||
combined = _reload_config()
|
||||
|
||||
app.config['DB_URI'] = combined['DB_URI']
|
||||
db_uri = app.config['DB_URI']
|
||||
escaped_db_uri = db_uri.replace('%', '%%')
|
||||
|
||||
log_handler = _AlembicLogHandler()
|
||||
|
||||
try:
|
||||
run_alembic_migration(escaped_db_uri, log_handler, setup_app=False)
|
||||
except Exception as ex:
|
||||
return {
|
||||
'error': str(ex)
|
||||
}
|
||||
|
||||
return {
|
||||
'logs': log_handler.records
|
||||
}
|
||||
|
||||
abort(403)
|
||||
|
||||
|
||||
@resource('/v1/superuser/config/createsuperuser')
|
||||
class SuperUserCreateInitialSuperUser(ApiResource):
|
||||
""" Resource for creating the initial super user. """
|
||||
schemas = {
|
||||
'CreateSuperUser': {
|
||||
'type': 'object',
|
||||
'description': 'Information for creating the initial super user',
|
||||
'required': [
|
||||
'username',
|
||||
'password',
|
||||
'email'
|
||||
],
|
||||
'properties': {
|
||||
'username': {
|
||||
'type': 'string',
|
||||
'description': 'The username for the superuser'
|
||||
},
|
||||
'password': {
|
||||
'type': 'string',
|
||||
'description': 'The password for the superuser'
|
||||
},
|
||||
'email': {
|
||||
'type': 'string',
|
||||
'description': 'The e-mail address for the superuser'
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@nickname('scCreateInitialSuperuser')
|
||||
@validate_json_request('CreateSuperUser')
|
||||
def post(self):
|
||||
""" Creates the initial super user, updates the underlying configuration and
|
||||
sets the current session to have that super user. """
|
||||
|
||||
_reload_config()
|
||||
|
||||
# Special security check: This method is only accessible when:
|
||||
# - There is a valid config YAML file.
|
||||
# - There are currently no users in the database (clean install)
|
||||
#
|
||||
# We do this special security check because at the point this method is called, the database
|
||||
# is clean but does not (yet) have any super users for our permissions code to check against.
|
||||
if config_provider.config_exists() and not database_has_users():
|
||||
data = request.get_json()
|
||||
username = data['username']
|
||||
password = data['password']
|
||||
email = data['email']
|
||||
|
||||
# Create the user in the database.
|
||||
superuser_uuid = model.create_superuser(username, password, email)
|
||||
|
||||
# Add the user to the config.
|
||||
config_object = config_provider.get_config()
|
||||
config_object['SUPER_USERS'] = [username]
|
||||
config_provider.save_config(config_object)
|
||||
|
||||
# Update the in-memory config for the new superuser.
|
||||
superusers.register_superuser(username)
|
||||
|
||||
return {
|
||||
'status': True
|
||||
}
|
||||
|
||||
abort(403)
|
||||
|
||||
|
||||
@resource('/v1/superuser/config/validate/<service>')
|
||||
class SuperUserConfigValidate(ApiResource):
|
||||
""" Resource for validating a block of configuration against an external service. """
|
||||
schemas = {
|
||||
'ValidateConfig': {
|
||||
'type': 'object',
|
||||
'description': 'Validates configuration',
|
||||
'required': [
|
||||
'config'
|
||||
],
|
||||
'properties': {
|
||||
'config': {
|
||||
'type': 'object'
|
||||
},
|
||||
'password': {
|
||||
'type': 'string',
|
||||
'description': 'The users password, used for auth validation'
|
||||
}
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@nickname('scValidateConfig')
|
||||
@validate_json_request('ValidateConfig')
|
||||
def post(self, service):
|
||||
""" Validates the given config for the given service. """
|
||||
# Note: This method is called to validate the database configuration before super users exists,
|
||||
# so we also allow it to be called if there is no valid registry configuration setup. Note that
|
||||
# this is also safe since this method does not access any information not given in the request.
|
||||
config = request.get_json()['config']
|
||||
validator_context = ValidatorContext.from_app(app, config,
|
||||
request.get_json().get('password', ''),
|
||||
instance_keys=instance_keys,
|
||||
ip_resolver=ip_resolver,
|
||||
config_provider=config_provider,
|
||||
init_scripts_location=INIT_SCRIPTS_LOCATION)
|
||||
|
||||
return validate_service_for_config(service, validator_context)
|
||||
|
||||
|
||||
@resource('/v1/superuser/config/file/<filename>')
|
||||
class SuperUserConfigFile(ApiResource):
|
||||
""" Resource for fetching the status of config files and overriding them. """
|
||||
|
||||
@nickname('scConfigFileExists')
|
||||
def get(self, filename):
|
||||
""" Returns whether the configuration file with the given name exists. """
|
||||
if not is_valid_config_upload_filename(filename):
|
||||
abort(404)
|
||||
|
||||
return {
|
||||
'exists': config_provider.volume_file_exists(filename)
|
||||
}
|
||||
|
||||
@nickname('scUpdateConfigFile')
|
||||
def post(self, filename):
|
||||
""" Updates the configuration file with the given name. """
|
||||
if not is_valid_config_upload_filename(filename):
|
||||
abort(404)
|
||||
|
||||
# Note: This method can be called before the configuration exists
|
||||
# to upload the database SSL cert.
|
||||
uploaded_file = request.files['file']
|
||||
if not uploaded_file:
|
||||
abort(400)
|
||||
|
||||
config_provider.save_volume_file(filename, uploaded_file)
|
||||
return {
|
||||
'status': True
|
||||
}
|
config_app/config_endpoints/api/suconfig_models_interface.py (Normal file, +39)
@@ -0,0 +1,39 @@
from abc import ABCMeta, abstractmethod
from six import add_metaclass


@add_metaclass(ABCMeta)
class SuperuserConfigDataInterface(object):
  """
  Interface that represents all data store interactions required by the superuser config API.
  """

  @abstractmethod
  def is_valid(self):
    """
    Returns true if the configured database is valid.
    """

  @abstractmethod
  def has_users(self):
    """
    Returns true if there are any users defined.
    """

  @abstractmethod
  def create_superuser(self, username, password, email):
    """
    Creates a new superuser with the given username, password and email. Returns the user's UUID.
    """

  @abstractmethod
  def has_federated_login(self, username, service_name):
    """
    Returns true if the matching user has a federated login under the matching service.
    """

  @abstractmethod
  def attach_federated_login(self, username, service_name, federated_username):
    """
    Attaches a federated login to the matching user, under the given service.
    """
config_app/config_endpoints/api/suconfig_models_pre_oci.py (Normal file, +37)
@@ -0,0 +1,37 @@
from data import model
from data.database import User
from config_app.config_endpoints.api.suconfig_models_interface import SuperuserConfigDataInterface


class PreOCIModel(SuperuserConfigDataInterface):
  # Note: this method is different than has_users: the user select will throw if the user
  # table does not exist, whereas has_users assumes the table is valid
  def is_valid(self):
    try:
      list(User.select().limit(1))
      return True
    except:
      return False

  def has_users(self):
    return bool(list(User.select().limit(1)))

  def create_superuser(self, username, password, email):
    return model.user.create_user(username, password, email, auto_verify=True).uuid

  def has_federated_login(self, username, service_name):
    user = model.user.get_user(username)
    if user is None:
      return False

    return bool(model.user.lookup_federated_login(user, service_name))

  def attach_federated_login(self, username, service_name, federated_username):
    user = model.user.get_user(username)
    if user is None:
      return False

    model.user.attach_federated_login(user, service_name, federated_username)


pre_oci_model = PreOCIModel()
config_app/config_endpoints/api/superuser.py (Normal file, +248)
@@ -0,0 +1,248 @@
import logging
|
||||
import pathvalidate
|
||||
import os
|
||||
import subprocess
|
||||
from datetime import datetime
|
||||
|
||||
from flask import request, jsonify, make_response
|
||||
|
||||
from endpoints.exception import NotFound
|
||||
from data.database import ServiceKeyApprovalType
|
||||
from data.model import ServiceKeyDoesNotExist
|
||||
from util.config.validator import EXTRA_CA_DIRECTORY
|
||||
|
||||
from config_app.config_endpoints.exception import InvalidRequest
|
||||
from config_app.config_endpoints.api import resource, ApiResource, nickname, log_action, validate_json_request
|
||||
from config_app.config_endpoints.api.superuser_models_pre_oci import pre_oci_model
|
||||
from config_app.config_util.ssl import load_certificate, CertInvalidException
|
||||
from config_app.c_app import app, config_provider, INIT_SCRIPTS_LOCATION
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@resource('/v1/superuser/customcerts/<certpath>')
|
||||
class SuperUserCustomCertificate(ApiResource):
|
||||
""" Resource for managing a custom certificate. """
|
||||
|
||||
@nickname('uploadCustomCertificate')
|
||||
def post(self, certpath):
|
||||
uploaded_file = request.files['file']
|
||||
if not uploaded_file:
|
||||
raise InvalidRequest('Missing certificate file')
|
||||
|
||||
# Save the certificate.
|
||||
certpath = pathvalidate.sanitize_filename(certpath)
|
||||
if not certpath.endswith('.crt'):
|
||||
raise InvalidRequest('Invalid certificate file: must have suffix `.crt`')
|
||||
|
||||
logger.debug('Saving custom certificate %s', certpath)
|
||||
cert_full_path = config_provider.get_volume_path(EXTRA_CA_DIRECTORY, certpath)
|
||||
config_provider.save_volume_file(cert_full_path, uploaded_file)
|
||||
logger.debug('Saved custom certificate %s', certpath)
|
||||
|
||||
# Validate the certificate.
|
||||
try:
|
||||
logger.debug('Loading custom certificate %s', certpath)
|
||||
with config_provider.get_volume_file(cert_full_path) as f:
|
||||
load_certificate(f.read())
|
||||
except CertInvalidException:
|
||||
logger.exception('Got certificate invalid error for cert %s', certpath)
|
||||
return '', 204
|
||||
except IOError:
|
||||
logger.exception('Got IO error for cert %s', certpath)
|
||||
return '', 204
|
||||
|
||||
# Call the update script with config dir location to install the certificate immediately.
|
||||
if not app.config['TESTING']:
|
||||
cert_dir = os.path.join(config_provider.get_config_dir_path(), EXTRA_CA_DIRECTORY)
|
||||
if subprocess.call([os.path.join(INIT_SCRIPTS_LOCATION, 'certs_install.sh')], env={ 'CERTDIR': cert_dir }) != 0:
|
||||
raise Exception('Could not install certificates')
|
||||
|
||||
return '', 204
|
||||
|
||||
@nickname('deleteCustomCertificate')
|
||||
def delete(self, certpath):
|
||||
cert_full_path = config_provider.get_volume_path(EXTRA_CA_DIRECTORY, certpath)
|
||||
config_provider.remove_volume_file(cert_full_path)
|
||||
return '', 204
|
||||
|
||||
|
||||
@resource('/v1/superuser/customcerts')
|
||||
class SuperUserCustomCertificates(ApiResource):
|
||||
""" Resource for managing custom certificates. """
|
||||
|
||||
@nickname('getCustomCertificates')
|
||||
def get(self):
|
||||
has_extra_certs_path = config_provider.volume_file_exists(EXTRA_CA_DIRECTORY)
|
||||
extra_certs_found = config_provider.list_volume_directory(EXTRA_CA_DIRECTORY)
|
||||
if extra_certs_found is None:
|
||||
return {
|
||||
'status': 'file' if has_extra_certs_path else 'none',
|
||||
}
|
||||
|
||||
cert_views = []
|
||||
for extra_cert_path in extra_certs_found:
|
||||
try:
|
||||
cert_full_path = config_provider.get_volume_path(EXTRA_CA_DIRECTORY, extra_cert_path)
|
||||
with config_provider.get_volume_file(cert_full_path) as f:
|
||||
certificate = load_certificate(f.read())
|
||||
cert_views.append({
|
||||
'path': extra_cert_path,
|
||||
'names': list(certificate.names),
|
||||
'expired': certificate.expired,
|
||||
})
|
||||
except CertInvalidException as cie:
|
||||
cert_views.append({
|
||||
'path': extra_cert_path,
|
||||
'error': cie.message,
|
||||
})
|
||||
except IOError as ioe:
|
||||
cert_views.append({
|
||||
'path': extra_cert_path,
|
||||
'error': ioe.message,
|
||||
})
|
||||
|
||||
return {
|
||||
'status': 'directory',
|
||||
'certs': cert_views,
|
||||
}
|
||||
|
||||
|
||||
@resource('/v1/superuser/keys')
|
||||
class SuperUserServiceKeyManagement(ApiResource):
|
||||
""" Resource for managing service keys."""
|
||||
schemas = {
|
||||
'CreateServiceKey': {
|
||||
'id': 'CreateServiceKey',
|
||||
'type': 'object',
|
||||
'description': 'Description of creation of a service key',
|
||||
'required': ['service', 'expiration'],
|
||||
'properties': {
|
||||
'service': {
|
||||
'type': 'string',
|
||||
'description': 'The service authenticating with this key',
|
||||
},
|
||||
'name': {
|
||||
'type': 'string',
|
||||
'description': 'The friendly name of a service key',
|
||||
},
|
||||
'metadata': {
|
||||
'type': 'object',
|
||||
'description': 'The key/value pairs of this key\'s metadata',
|
||||
},
|
||||
'notes': {
|
||||
'type': 'string',
|
||||
'description': 'If specified, the extra notes for the key',
|
||||
},
|
||||
'expiration': {
|
||||
'description': 'The expiration date as a unix timestamp',
|
||||
'anyOf': [{'type': 'number'}, {'type': 'null'}],
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@nickname('listServiceKeys')
|
||||
def get(self):
|
||||
keys = pre_oci_model.list_all_service_keys()
|
||||
|
||||
return jsonify({
|
||||
'keys': [key.to_dict() for key in keys],
|
||||
})
|
||||
|
||||
@nickname('createServiceKey')
|
||||
@validate_json_request('CreateServiceKey')
|
||||
def post(self):
|
||||
body = request.get_json()
|
||||
|
||||
# Ensure we have a valid expiration date if specified.
|
||||
expiration_date = body.get('expiration', None)
|
||||
if expiration_date is not None:
|
||||
try:
|
||||
expiration_date = datetime.utcfromtimestamp(float(expiration_date))
|
||||
except ValueError as ve:
|
||||
raise InvalidRequest('Invalid expiration date: %s' % ve)
|
||||
|
||||
if expiration_date <= datetime.now():
|
||||
raise InvalidRequest('Expiration date cannot be in the past')
|
||||
|
||||
# Create the metadata for the key.
|
||||
metadata = body.get('metadata', {})
|
||||
metadata.update({
|
||||
'created_by': 'Quay Superuser Panel',
|
||||
'ip': request.remote_addr,
|
||||
})
|
||||
|
||||
# Generate a key with a private key that we *never save*.
|
||||
(private_key, key_id) = pre_oci_model.generate_service_key(body['service'], expiration_date,
|
||||
metadata=metadata,
|
||||
name=body.get('name', ''))
|
||||
# Auto-approve the service key.
|
||||
pre_oci_model.approve_service_key(key_id, ServiceKeyApprovalType.SUPERUSER,
|
||||
notes=body.get('notes', ''))
|
||||
|
||||
# Log the creation and auto-approval of the service key.
|
||||
key_log_metadata = {
|
||||
'kid': key_id,
|
||||
'preshared': True,
|
||||
'service': body['service'],
|
||||
'name': body.get('name', ''),
|
||||
'expiration_date': expiration_date,
|
||||
'auto_approved': True,
|
||||
}
|
||||
|
||||
log_action('service_key_create', None, key_log_metadata)
|
||||
log_action('service_key_approve', None, key_log_metadata)
|
||||
|
||||
return jsonify({
|
||||
'kid': key_id,
|
||||
'name': body.get('name', ''),
|
||||
'service': body['service'],
|
||||
'public_key': private_key.publickey().exportKey('PEM'),
|
||||
'private_key': private_key.exportKey('PEM'),
|
||||
})
|
||||
|
||||
@resource('/v1/superuser/approvedkeys/<kid>')
|
||||
class SuperUserServiceKeyApproval(ApiResource):
|
||||
""" Resource for approving service keys. """
|
||||
|
||||
schemas = {
|
||||
'ApproveServiceKey': {
|
||||
'id': 'ApproveServiceKey',
|
||||
'type': 'object',
|
||||
'description': 'Information for approving service keys',
|
||||
'properties': {
|
||||
'notes': {
|
||||
'type': 'string',
|
||||
'description': 'Optional approval notes',
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@nickname('approveServiceKey')
|
||||
@validate_json_request('ApproveServiceKey')
|
||||
def post(self, kid):
|
||||
notes = request.get_json().get('notes', '')
|
||||
try:
|
||||
key = pre_oci_model.approve_service_key(kid, ServiceKeyApprovalType.SUPERUSER, notes=notes)
|
||||
|
||||
# Log the approval of the service key.
|
||||
key_log_metadata = {
|
||||
'kid': kid,
|
||||
'service': key.service,
|
||||
'name': key.name,
|
||||
'expiration_date': key.expiration_date,
|
||||
}
|
||||
|
||||
# Note: this may not actually be the current person modifying the config, but if they're in the config tool,
|
||||
# they have full access to the DB and could pretend to be any user, so pulling any superuser is likely fine
|
||||
super_user = app.config.get('SUPER_USERS', [None])[0]
|
||||
log_action('service_key_approve', super_user, key_log_metadata)
|
||||
except ServiceKeyDoesNotExist:
|
||||
raise NotFound()
|
||||
except ServiceKeyAlreadyApproved:
|
||||
pass
|
||||
|
||||
return make_response('', 201)
|
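A minimal sketch of driving the `createServiceKey` endpoint above from Python. The path comes from the `@resource('/v1/superuser/keys')` declaration with the API blueprint mounted at `/api`; the host, credentials, and the way the config app's basic-auth and session/CSRF handling are satisfied are assumptions here, not part of this commit.

```python
import time
import requests  # any HTTP client works; requests is assumed for brevity

CONFIG_APP = 'http://localhost:8080'    # hypothetical address of the running config tool
AUTH = ('quayconfig', 'some-password')  # hypothetical credentials

# Request body matching the CreateServiceKey schema: 'service' and 'expiration' are required,
# expiration is a unix timestamp (or None for a non-expiring key).
payload = {
    'service': 'quay',
    'name': 'example-key',
    'expiration': time.time() + 30 * 86400,
    'metadata': {'purpose': 'demo'},
    'notes': 'created from the API example',
}

resp = requests.post(CONFIG_APP + '/api/v1/superuser/keys', json=payload, auth=AUTH)
key = resp.json()

# The generated private key is never stored server-side, so this response is the only
# chance to save it; the key itself is auto-approved with the SUPERUSER approval type.
print key['kid'], key['service']
print key['private_key']
```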
173 config_app/config_endpoints/api/superuser_models_interface.py Normal file
@@ -0,0 +1,173 @@
from abc import ABCMeta, abstractmethod
from collections import namedtuple
from six import add_metaclass

from config_app.config_endpoints.api import format_date


def user_view(user):
  return {
    'name': user.username,
    'kind': 'user',
    'is_robot': user.robot,
  }


class RepositoryBuild(namedtuple('RepositoryBuild',
                                 ['uuid', 'logs_archived', 'repository_namespace_user_username',
                                  'repository_name',
                                  'can_write', 'can_read', 'pull_robot', 'resource_key', 'trigger',
                                  'display_name',
                                  'started', 'job_config', 'phase', 'status', 'error',
                                  'archive_url'])):
  """
  RepositoryBuild represents a build associated with a repository
  :type uuid: string
  :type logs_archived: boolean
  :type repository_namespace_user_username: string
  :type repository_name: string
  :type can_write: boolean
  :type can_read: boolean
  :type pull_robot: User
  :type resource_key: string
  :type trigger: Trigger
  :type display_name: string
  :type started: Date
  :type job_config: {Any -> Any}
  :type phase: string
  :type status: string
  :type error: string
  :type archive_url: string
  """

  def to_dict(self):

    resp = {
      'id': self.uuid,
      'phase': self.phase,
      'started': format_date(self.started),
      'display_name': self.display_name,
      'status': self.status or {},
      'subdirectory': self.job_config.get('build_subdir', ''),
      'dockerfile_path': self.job_config.get('build_subdir', ''),
      'context': self.job_config.get('context', ''),
      'tags': self.job_config.get('docker_tags', []),
      'manual_user': self.job_config.get('manual_user', None),
      'is_writer': self.can_write,
      'trigger': self.trigger.to_dict(),
      'trigger_metadata': self.job_config.get('trigger_metadata', None) if self.can_read else None,
      'resource_key': self.resource_key,
      'pull_robot': user_view(self.pull_robot) if self.pull_robot else None,
      'repository': {
        'namespace': self.repository_namespace_user_username,
        'name': self.repository_name
      },
      'error': self.error,
    }

    if self.can_write:
      if self.resource_key is not None:
        resp['archive_url'] = self.archive_url
      elif self.job_config.get('archive_url', None):
        resp['archive_url'] = self.job_config['archive_url']

    return resp


class Approval(namedtuple('Approval', ['approver', 'approval_type', 'approved_date', 'notes'])):
  """
  Approval represents whether a key has been approved or not
  :type approver: User
  :type approval_type: string
  :type approved_date: Date
  :type notes: string
  """

  def to_dict(self):
    return {
      'approver': self.approver.to_dict() if self.approver else None,
      'approval_type': self.approval_type,
      'approved_date': self.approved_date,
      'notes': self.notes,
    }


class ServiceKey(
    namedtuple('ServiceKey', ['name', 'kid', 'service', 'jwk', 'metadata', 'created_date',
                              'expiration_date', 'rotation_duration', 'approval'])):
  """
  ServiceKey is an apostille signing key
  :type name: string
  :type kid: int
  :type service: string
  :type jwk: string
  :type metadata: string
  :type created_date: Date
  :type expiration_date: Date
  :type rotation_duration: Date
  :type approval: Approval
  """

  def to_dict(self):
    return {
      'name': self.name,
      'kid': self.kid,
      'service': self.service,
      'jwk': self.jwk,
      'metadata': self.metadata,
      'created_date': self.created_date,
      'expiration_date': self.expiration_date,
      'rotation_duration': self.rotation_duration,
      'approval': self.approval.to_dict() if self.approval is not None else None,
    }


class User(namedtuple('User', ['username', 'email', 'verified', 'enabled', 'robot'])):
  """
  User represents a single user.
  :type username: string
  :type email: string
  :type verified: boolean
  :type enabled: boolean
  :type robot: User
  """

  def to_dict(self):
    user_data = {
      'kind': 'user',
      'name': self.username,
      'username': self.username,
      'email': self.email,
      'verified': self.verified,
      'enabled': self.enabled,
    }

    return user_data


class Organization(namedtuple('Organization', ['username', 'email'])):
  """
  Organization represents a single org.
  :type username: string
  :type email: string
  """

  def to_dict(self):
    return {
      'name': self.username,
      'email': self.email,
    }


@add_metaclass(ABCMeta)
class SuperuserDataInterface(object):
  """
  Interface that represents all data store interactions required by a superuser api.
  """

  @abstractmethod
  def list_all_service_keys(self):
    """
    Returns a list of service keys
    """
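A small sketch of how these view models serialize, based only on the namedtuple definitions above; the field values are illustrative.

```python
from datetime import datetime

from config_app.config_endpoints.api.superuser_models_interface import User, Approval, ServiceKey

approver = User(username='devtable', email='jschorr@devtable.com', verified=True,
                enabled=True, robot=False)
approval = Approval(approver=approver, approval_type='ServiceKeyApprovalType.SUPERUSER',
                    approved_date=datetime.utcnow(), notes='auto-approved')
key = ServiceKey(name='example-key', kid='abc123', service='quay', jwk={'kty': 'RSA'},
                 metadata={'created_by': 'Quay Superuser Panel'},
                 created_date=datetime.utcnow(), expiration_date=None,
                 rotation_duration=None, approval=approval)

# to_dict() nests the approval (and its approver) the same way the listServiceKeys API does.
print key.to_dict()['approval']['approver']['username']  # -> 'devtable'
```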
60 config_app/config_endpoints/api/superuser_models_pre_oci.py Normal file
@@ -0,0 +1,60 @@
from data import model

from config_app.config_endpoints.api.superuser_models_interface import (SuperuserDataInterface, User, ServiceKey,
                                                                         Approval)


def _create_user(user):
  if user is None:
    return None
  return User(user.username, user.email, user.verified, user.enabled, user.robot)


def _create_key(key):
  approval = None
  if key.approval is not None:
    approval = Approval(_create_user(key.approval.approver), key.approval.approval_type,
                        key.approval.approved_date,
                        key.approval.notes)

  return ServiceKey(key.name, key.kid, key.service, key.jwk, key.metadata, key.created_date,
                    key.expiration_date,
                    key.rotation_duration, approval)


class ServiceKeyDoesNotExist(Exception):
  pass


class ServiceKeyAlreadyApproved(Exception):
  pass


class PreOCIModel(SuperuserDataInterface):
  """
  PreOCIModel implements the data model for the SuperUser using a database schema
  before it was changed to support the OCI specification.
  """

  def list_all_service_keys(self):
    keys = model.service_keys.list_all_keys()
    return [_create_key(key) for key in keys]

  def approve_service_key(self, kid, approval_type, notes=''):
    try:
      key = model.service_keys.approve_service_key(kid, approval_type, notes=notes)
      return _create_key(key)
    except model.ServiceKeyDoesNotExist:
      raise ServiceKeyDoesNotExist
    except model.ServiceKeyAlreadyApproved:
      raise ServiceKeyAlreadyApproved

  def generate_service_key(self, service, expiration_date, kid=None, name='', metadata=None,
                           rotation_duration=None):
    (private_key, key) = model.service_keys.generate_service_key(service, expiration_date,
                                                                 metadata=metadata, name=name)

    return private_key, key.kid


pre_oci_model = PreOCIModel()
62 config_app/config_endpoints/api/tar_config_loader.py Normal file
@@ -0,0 +1,62 @@
import os
import tempfile
import tarfile

from contextlib import closing

from flask import request, make_response, send_file

from data.database import configure

from config_app.c_app import app, config_provider
from config_app.config_endpoints.api import resource, ApiResource, nickname
from config_app.config_util.tar import tarinfo_filter_partial, strip_absolute_path_and_add_trailing_dir


@resource('/v1/configapp/initialization')
class ConfigInitialization(ApiResource):
  """
  Resource for dealing with any initialization logic for the config app
  """

  @nickname('scStartNewConfig')
  def post(self):
    config_provider.new_config_dir()
    return make_response('OK')


@resource('/v1/configapp/tarconfig')
class TarConfigLoader(ApiResource):
  """
  Resource for dealing with configuration as a tarball,
  including loading and generating functions
  """

  @nickname('scGetConfigTarball')
  def get(self):
    config_path = config_provider.get_config_dir_path()
    tar_dir_prefix = strip_absolute_path_and_add_trailing_dir(config_path)
    temp = tempfile.NamedTemporaryFile()

    with closing(tarfile.open(temp.name, mode="w|gz")) as tar:
      for name in os.listdir(config_path):
        tar.add(os.path.join(config_path, name), filter=tarinfo_filter_partial(tar_dir_prefix))
    return send_file(temp.name, mimetype='application/gzip')

  @nickname('scUploadTarballConfig')
  def put(self):
    """ Loads tarball config into the config provider """
    # Generate a new empty dir to load the config into
    config_provider.new_config_dir()
    input_stream = request.stream
    with tarfile.open(mode="r|gz", fileobj=input_stream) as tar_stream:
      tar_stream.extractall(config_provider.get_config_dir_path())

    config_provider.create_copy_of_config_dir()

    # Now try to connect to the db provided in their config to validate it works
    combined = dict(**app.config)
    combined.update(config_provider.get_config())
    configure(combined)

    return make_response('OK')
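A hedged sketch of downloading and re-uploading the config tarball from a script. The endpoint path and the gzip tar stream behavior come from `TarConfigLoader` above; the host, credentials, and whether the config app's session/CSRF protection accepts a plain scripted client are assumptions.

```python
import requests  # assumed available

CONFIG_APP = 'http://localhost:8080'    # hypothetical address of the running config tool
AUTH = ('quayconfig', 'some-password')  # hypothetical credentials

# Download the current config directory as a gzipped tarball (scGetConfigTarball).
resp = requests.get(CONFIG_APP + '/api/v1/configapp/tarconfig', auth=AUTH)
with open('quay-config.tar.gz', 'wb') as f:
  f.write(resp.content)

# Upload a tarball to replace the working config; the PUT handler reads the raw request
# body as a gzipped tar stream (mode "r|gz"), so the file bytes are sent directly.
with open('quay-config.tar.gz', 'rb') as f:
  requests.put(CONFIG_APP + '/api/v1/configapp/tarconfig', data=f, auth=AUTH)
```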
14 config_app/config_endpoints/api/user.py Normal file
@@ -0,0 +1,14 @@
from auth.auth_context import get_authenticated_user
from config_app.config_endpoints.api import resource, ApiResource, nickname
from config_app.config_endpoints.api.superuser_models_interface import user_view


@resource('/v1/user/')
class User(ApiResource):
  """ Operations related to users. """

  @nickname('getLoggedInUser')
  def get(self):
    """ Get user information for the authenticated user. """
    user = get_authenticated_user()
    return user_view(user)
73 config_app/config_endpoints/common.py Normal file
@@ -0,0 +1,73 @@
import logging
import os
import re

from flask import make_response, render_template
from flask_restful import reqparse

from config import frontend_visible_config
from external_libraries import get_external_javascript, get_external_css

from config_app.c_app import app, IS_KUBERNETES
from config_app._init_config import ROOT_DIR
from config_app.config_util.k8sconfig import get_k8s_namespace


def truthy_bool(param):
  return param not in {False, 'false', 'False', '0', 'FALSE', '', 'null'}


DEFAULT_JS_BUNDLE_NAME = 'configapp'
PARAM_REGEX = re.compile(r'<([^:>]+:)*([\w]+)>')
logger = logging.getLogger(__name__)
TYPE_CONVERTER = {
  truthy_bool: 'boolean',
  str: 'string',
  basestring: 'string',
  reqparse.text_type: 'string',
  int: 'integer',
}


def _list_files(path, extension, contains=""):
  """ Returns a list of all the files with the given extension found under the given path. """

  def matches(f):
    return os.path.splitext(f)[1] == '.' + extension and contains in os.path.splitext(f)[0]

  def join_path(dp, f):
    # Remove the static/ prefix. It is added in the template.
    return os.path.join(dp, f)[len(ROOT_DIR) + 1 + len('config_app/static/'):]

  filepath = os.path.join(os.path.join(ROOT_DIR, 'config_app/static/'), path)
  return [join_path(dp, f) for dp, _, files in os.walk(filepath) for f in files if matches(f)]


FONT_AWESOME_4 = 'netdna.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.css'


def render_page_template(name, route_data=None, js_bundle_name=DEFAULT_JS_BUNDLE_NAME, **kwargs):
  """ Renders the page template with the given name as the response and returns its contents. """
  main_scripts = _list_files('build', 'js', js_bundle_name)

  use_cdn = os.getenv('TESTING') == 'true'

  external_styles = get_external_css(local=not use_cdn, exclude=FONT_AWESOME_4)
  external_scripts = get_external_javascript(local=not use_cdn)

  contents = render_template(name,
                             route_data=route_data,
                             main_scripts=main_scripts,
                             external_styles=external_styles,
                             external_scripts=external_scripts,
                             config_set=frontend_visible_config(app.config),
                             kubernetes_namespace=IS_KUBERNETES and get_k8s_namespace(),
                             **kwargs)

  resp = make_response(contents)
  resp.headers['X-FRAME-OPTIONS'] = 'DENY'
  return resp


def fully_qualified_name(method_view_class):
  return '%s.%s' % (method_view_class.__module__, method_view_class.__name__)
66 config_app/config_endpoints/exception.py Normal file
@@ -0,0 +1,66 @@
from enum import Enum

from flask import url_for
from werkzeug.exceptions import HTTPException


class ApiErrorType(Enum):
  invalid_request = 'invalid_request'
  invalid_response = 'invalid_response'


class ApiException(HTTPException):
  """
  Represents an error in the application/problem+json format.

  See: https://tools.ietf.org/html/rfc7807

  - "type" (string) - A URI reference that identifies the
    problem type.

  - "title" (string) - A short, human-readable summary of the problem
    type. It SHOULD NOT change from occurrence to occurrence of the
    problem, except for purposes of localization

  - "status" (number) - The HTTP status code

  - "detail" (string) - A human-readable explanation specific to this
    occurrence of the problem.

  - "instance" (string) - A URI reference that identifies the specific
    occurrence of the problem. It may or may not yield further
    information if dereferenced.
  """

  def __init__(self, error_type, status_code, error_description, payload=None):
    Exception.__init__(self)
    self.error_description = error_description
    self.code = status_code
    self.payload = payload
    self.error_type = error_type
    self.data = self.to_dict()

    super(ApiException, self).__init__(error_description, None)

  def to_dict(self):
    rv = dict(self.payload or ())

    if self.error_description is not None:
      rv['detail'] = self.error_description
      rv['error_message'] = self.error_description  # TODO: deprecate

    rv['error_type'] = self.error_type.value  # TODO: deprecate
    rv['title'] = self.error_type.value
    rv['type'] = url_for('api.error', error_type=self.error_type.value, _external=True)
    rv['status'] = self.code

    return rv


class InvalidRequest(ApiException):
  def __init__(self, error_description, payload=None):
    ApiException.__init__(self, ApiErrorType.invalid_request, 400, error_description, payload)


class InvalidResponse(ApiException):
  def __init__(self, error_description, payload=None):
    ApiException.__init__(self, ApiErrorType.invalid_response, 400, error_description, payload)
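A minimal sketch of the problem+json body these exceptions carry. Both construction and `to_dict()` call `url_for`, so the snippet assumes an application/request context and that the API blueprint providing the `api.error` endpoint has been registered (as the tests in this commit do); the exact `type` URL depends on that endpoint's route.

```python
from config_app.c_app import app
from config_app.config_endpoints.exception import InvalidRequest

# to_dict() is invoked from __init__ and uses url_for, so a request context is required.
with app.test_request_context():
  err = InvalidRequest('Missing certificate file', payload={'field': 'file'})
  body = err.to_dict()

# body is an RFC 7807-style dict along these lines:
# {'field': 'file',                               # extra payload keys are merged in
#  'detail': 'Missing certificate file',
#  'error_message': 'Missing certificate file',   # legacy field, to be deprecated
#  'error_type': 'invalid_request',               # legacy field, to be deprecated
#  'title': 'invalid_request',
#  'type': '<url_for("api.error", error_type="invalid_request", _external=True)>',
#  'status': 400}
```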
23 config_app/config_endpoints/setup_web.py Normal file
@@ -0,0 +1,23 @@
from flask import Blueprint
from cachetools.func import lru_cache

from config_app.config_endpoints.common import render_page_template
from config_app.config_endpoints.api.discovery import generate_route_data
from config_app.config_endpoints.api import no_cache

setup_web = Blueprint('setup_web', __name__, template_folder='templates')


@lru_cache(maxsize=1)
def _get_route_data():
  return generate_route_data()


def render_page_template_with_routedata(name, *args, **kwargs):
  return render_page_template(name, _get_route_data(), *args, **kwargs)


@no_cache
@setup_web.route('/', methods=['GET'], defaults={'path': ''})
def index(path, **kwargs):
  return render_page_template_with_routedata('index.html', js_bundle_name='configapp', **kwargs)
149 config_app/config_test/__init__.py Normal file
@@ -0,0 +1,149 @@
import json as py_json
import unittest
from contextlib import contextmanager
from urllib import urlencode
from urlparse import urlparse, parse_qs, urlunparse

from config_app.c_app import app, config_provider
from config_app.config_endpoints.api import api
from initdb import setup_database_for_testing, finished_database_for_testing


CSRF_TOKEN_KEY = '_csrf_token'
CSRF_TOKEN = '123csrfforme'

READ_ACCESS_USER = 'reader'
ADMIN_ACCESS_USER = 'devtable'
ADMIN_ACCESS_EMAIL = 'jschorr@devtable.com'

# OVERRIDES FROM PORTING FROM OLD APP:
all_queues = []  # the config app doesn't have any queues


class ApiTestCase(unittest.TestCase):
  maxDiff = None

  @staticmethod
  def _add_csrf(without_csrf):
    parts = urlparse(without_csrf)
    query = parse_qs(parts[4])
    query[CSRF_TOKEN_KEY] = CSRF_TOKEN
    return urlunparse(list(parts[0:4]) + [urlencode(query)] + list(parts[5:]))

  def url_for(self, resource_name, params=None, skip_csrf=False):
    params = params or {}
    url = api.url_for(resource_name, **params)
    if not skip_csrf:
      url = ApiTestCase._add_csrf(url)
    return url

  def setUp(self):
    setup_database_for_testing(self)
    self.app = app.test_client()
    self.ctx = app.test_request_context()
    self.ctx.__enter__()
    self.setCsrfToken(CSRF_TOKEN)

  def tearDown(self):
    finished_database_for_testing(self)
    config_provider.clear()
    self.ctx.__exit__(True, None, None)

  def setCsrfToken(self, token):
    with self.app.session_transaction() as sess:
      sess[CSRF_TOKEN_KEY] = token

  @contextmanager
  def toggleFeature(self, name, enabled):
    import features
    previous_value = getattr(features, name)
    setattr(features, name, enabled)
    yield
    setattr(features, name, previous_value)

  def getJsonResponse(self, resource_name, params={}, expected_code=200):
    rv = self.app.get(api.url_for(resource_name, **params))
    self.assertEquals(expected_code, rv.status_code)
    data = rv.data
    parsed = py_json.loads(data)
    return parsed

  def postResponse(self, resource_name, params={}, data={}, file=None, headers=None,
                   expected_code=200):
    data = py_json.dumps(data)

    headers = headers or {}
    headers.update({"Content-Type": "application/json"})

    if file is not None:
      data = {'file': file}
      headers = None

    rv = self.app.post(self.url_for(resource_name, params), data=data, headers=headers)
    self.assertEquals(rv.status_code, expected_code)
    return rv.data

  def getResponse(self, resource_name, params={}, expected_code=200):
    rv = self.app.get(api.url_for(resource_name, **params))
    self.assertEquals(rv.status_code, expected_code)
    return rv.data

  def putResponse(self, resource_name, params={}, data={}, expected_code=200):
    rv = self.app.put(
      self.url_for(resource_name, params), data=py_json.dumps(data),
      headers={"Content-Type": "application/json"})
    self.assertEquals(rv.status_code, expected_code)
    return rv.data

  def deleteResponse(self, resource_name, params={}, expected_code=204):
    rv = self.app.delete(self.url_for(resource_name, params))

    if rv.status_code != expected_code:
      print 'Mismatch data for resource DELETE %s: %s' % (resource_name, rv.data)

    self.assertEquals(rv.status_code, expected_code)
    return rv.data

  def deleteEmptyResponse(self, resource_name, params={}, expected_code=204):
    rv = self.app.delete(self.url_for(resource_name, params))
    self.assertEquals(rv.status_code, expected_code)
    self.assertEquals(rv.data, '')  # ensure response body empty
    return

  def postJsonResponse(self, resource_name, params={}, data={}, expected_code=200):
    rv = self.app.post(
      self.url_for(resource_name, params), data=py_json.dumps(data),
      headers={"Content-Type": "application/json"})

    if rv.status_code != expected_code:
      print 'Mismatch data for resource POST %s: %s' % (resource_name, rv.data)

    self.assertEquals(rv.status_code, expected_code)
    data = rv.data
    parsed = py_json.loads(data)
    return parsed

  def putJsonResponse(self, resource_name, params={}, data={}, expected_code=200, skip_csrf=False):
    rv = self.app.put(
      self.url_for(resource_name, params, skip_csrf), data=py_json.dumps(data),
      headers={"Content-Type": "application/json"})

    if rv.status_code != expected_code:
      print 'Mismatch data for resource PUT %s: %s' % (resource_name, rv.data)

    self.assertEquals(rv.status_code, expected_code)
    data = rv.data
    parsed = py_json.loads(data)
    return parsed

  def assertNotInTeam(self, data, membername):
    for memberData in data['members']:
      if memberData['name'] == membername:
        self.fail(membername + ' found in team: ' + data['name'])

  def assertInTeam(self, data, membername):
    for member_data in data['members']:
      if member_data['name'] == membername:
        return

    self.fail(membername + ' not found in team: ' + data['name'])
208 config_app/config_test/test_api_usage.py Normal file
@@ -0,0 +1,208 @@
from StringIO import StringIO
from mockldap import MockLdap

from data import database, model
from util.security.test.test_ssl_util import generate_test_cert

from config_app.c_app import app
from config_app.config_test import ApiTestCase, all_queues, ADMIN_ACCESS_USER, ADMIN_ACCESS_EMAIL
from config_app.config_endpoints.api import api_bp
from config_app.config_endpoints.api.superuser import SuperUserCustomCertificate, SuperUserCustomCertificates
from config_app.config_endpoints.api.suconfig import SuperUserConfig, SuperUserCreateInitialSuperUser, \
  SuperUserConfigFile, SuperUserRegistryStatus

try:
  app.register_blueprint(api_bp, url_prefix='/api')
except ValueError:
  # This blueprint was already registered
  pass


class TestSuperUserCreateInitialSuperUser(ApiTestCase):
  def test_create_superuser(self):
    data = {
      'username': 'newsuper',
      'password': 'password',
      'email': 'jschorr+fake@devtable.com',
    }

    # Add some fake config.
    fake_config = {
      'AUTHENTICATION_TYPE': 'Database',
      'SECRET_KEY': 'fakekey',
    }

    self.putJsonResponse(SuperUserConfig, data=dict(config=fake_config, hostname='fakehost'))

    # Try to write with config. Should 403 since there are users in the DB.
    self.postResponse(SuperUserCreateInitialSuperUser, data=data, expected_code=403)

    # Delete all users in the DB.
    for user in list(database.User.select()):
      model.user.delete_user(user, all_queues)

    # Create the superuser.
    self.postJsonResponse(SuperUserCreateInitialSuperUser, data=data)

    # Ensure the user exists in the DB.
    self.assertIsNotNone(model.user.get_user('newsuper'))

    # Ensure that the current user is a superuser in the config.
    json = self.getJsonResponse(SuperUserConfig)
    self.assertEquals(['newsuper'], json['config']['SUPER_USERS'])

    # Ensure that the current user is a superuser in memory by trying to call an API
    # that will fail otherwise.
    self.getResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'))


class TestSuperUserConfig(ApiTestCase):
  def test_get_status_update_config(self):
    # With no config the status should be 'config-db'.
    json = self.getJsonResponse(SuperUserRegistryStatus)
    self.assertEquals('config-db', json['status'])

    # Add some fake config.
    fake_config = {
      'AUTHENTICATION_TYPE': 'Database',
      'SECRET_KEY': 'fakekey',
    }

    json = self.putJsonResponse(SuperUserConfig, data=dict(config=fake_config,
                                                           hostname='fakehost'))
    self.assertEquals('fakekey', json['config']['SECRET_KEY'])
    self.assertEquals('fakehost', json['config']['SERVER_HOSTNAME'])
    self.assertEquals('Database', json['config']['AUTHENTICATION_TYPE'])

    # With config the status should be 'setup-db'.
    # TODO: fix this test
    # json = self.getJsonResponse(SuperUserRegistryStatus)
    # self.assertEquals('setup-db', json['status'])

  def test_config_file(self):
    # Try for an invalid file. Should 404.
    self.getResponse(SuperUserConfigFile, params=dict(filename='foobar'), expected_code=404)

    # Try for a valid filename. Should not exist.
    json = self.getJsonResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'))
    self.assertFalse(json['exists'])

    # Add the file.
    self.postResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'),
                      file=(StringIO('my file contents'), 'ssl.cert'))

    # Should now exist.
    json = self.getJsonResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'))
    self.assertTrue(json['exists'])

  def test_update_with_external_auth(self):
    # Run a mock LDAP.
    mockldap = MockLdap({
      'dc=quay,dc=io': {
        'dc': ['quay', 'io']
      },
      'ou=employees,dc=quay,dc=io': {
        'dc': ['quay', 'io'],
        'ou': 'employees'
      },
      'uid=' + ADMIN_ACCESS_USER + ',ou=employees,dc=quay,dc=io': {
        'dc': ['quay', 'io'],
        'ou': 'employees',
        'uid': [ADMIN_ACCESS_USER],
        'userPassword': ['password'],
        'mail': [ADMIN_ACCESS_EMAIL],
      },
    })

    config = {
      'AUTHENTICATION_TYPE': 'LDAP',
      'LDAP_BASE_DN': ['dc=quay', 'dc=io'],
      'LDAP_ADMIN_DN': 'uid=devtable,ou=employees,dc=quay,dc=io',
      'LDAP_ADMIN_PASSWD': 'password',
      'LDAP_USER_RDN': ['ou=employees'],
      'LDAP_UID_ATTR': 'uid',
      'LDAP_EMAIL_ATTR': 'mail',
    }

    mockldap.start()
    try:
      # Write the config with the valid password.
      self.putResponse(SuperUserConfig,
                       data={'config': config,
                             'password': 'password',
                             'hostname': 'foo'}, expected_code=200)

      # Ensure that the user row has been linked.
      # TODO: fix this test
      # self.assertEquals(ADMIN_ACCESS_USER,
      #                   model.user.verify_federated_login('ldap', ADMIN_ACCESS_USER).username)
    finally:
      mockldap.stop()


class TestSuperUserCustomCertificates(ApiTestCase):
  def test_custom_certificates(self):

    # Upload a certificate.
    cert_contents, _ = generate_test_cert(hostname='somecoolhost', san_list=['DNS:bar', 'DNS:baz'])
    self.postResponse(SuperUserCustomCertificate, params=dict(certpath='testcert.crt'),
                      file=(StringIO(cert_contents), 'testcert.crt'), expected_code=204)

    # Make sure it is present.
    json = self.getJsonResponse(SuperUserCustomCertificates)
    self.assertEquals(1, len(json['certs']))

    cert_info = json['certs'][0]
    self.assertEquals('testcert.crt', cert_info['path'])

    self.assertEquals(set(['somecoolhost', 'bar', 'baz']), set(cert_info['names']))
    self.assertFalse(cert_info['expired'])

    # Remove the certificate.
    self.deleteResponse(SuperUserCustomCertificate, params=dict(certpath='testcert.crt'))

    # Make sure it is gone.
    json = self.getJsonResponse(SuperUserCustomCertificates)
    self.assertEquals(0, len(json['certs']))

  def test_expired_custom_certificate(self):
    # Upload a certificate.
    cert_contents, _ = generate_test_cert(hostname='somecoolhost', expires=-10)
    self.postResponse(SuperUserCustomCertificate, params=dict(certpath='testcert.crt'),
                      file=(StringIO(cert_contents), 'testcert.crt'), expected_code=204)

    # Make sure it is present.
    json = self.getJsonResponse(SuperUserCustomCertificates)
    self.assertEquals(1, len(json['certs']))

    cert_info = json['certs'][0]
    self.assertEquals('testcert.crt', cert_info['path'])

    self.assertEquals(set(['somecoolhost']), set(cert_info['names']))
    self.assertTrue(cert_info['expired'])

  def test_invalid_custom_certificate(self):
    # Upload an invalid certificate.
    self.postResponse(SuperUserCustomCertificate, params=dict(certpath='testcert.crt'),
                      file=(StringIO('some contents'), 'testcert.crt'), expected_code=204)

    # Make sure it is present but invalid.
    json = self.getJsonResponse(SuperUserCustomCertificates)
    self.assertEquals(1, len(json['certs']))

    cert_info = json['certs'][0]
    self.assertEquals('testcert.crt', cert_info['path'])
    self.assertEquals('no start line', cert_info['error'])

  def test_path_sanitization(self):
    # Upload a certificate.
    cert_contents, _ = generate_test_cert(hostname='somecoolhost', expires=-10)
    self.postResponse(SuperUserCustomCertificate, params=dict(certpath='testcert/../foobar.crt'),
                      file=(StringIO(cert_contents), 'testcert/../foobar.crt'), expected_code=204)

    # Make sure it is present.
    json = self.getJsonResponse(SuperUserCustomCertificates)
    self.assertEquals(1, len(json['certs']))

    cert_info = json['certs'][0]
    self.assertEquals('foobar.crt', cert_info['path'])
179 config_app/config_test/test_suconfig_api.py Normal file
@@ -0,0 +1,179 @@
import unittest
import mock

from data.database import User
from data import model

from config_app.config_endpoints.api.suconfig import SuperUserConfig, SuperUserConfigValidate, SuperUserConfigFile, \
  SuperUserRegistryStatus, SuperUserCreateInitialSuperUser
from config_app.config_endpoints.api import api_bp
from config_app.config_test import ApiTestCase, READ_ACCESS_USER, ADMIN_ACCESS_USER
from config_app.c_app import app, config_provider

try:
  app.register_blueprint(api_bp, url_prefix='/api')
except ValueError:
  # This blueprint was already registered
  pass

# OVERRIDES FROM PORTING FROM OLD APP:
all_queues = []  # the config app doesn't have any queues


class FreshConfigProvider(object):
  def __enter__(self):
    config_provider.reset_for_test()
    return config_provider

  def __exit__(self, type, value, traceback):
    config_provider.reset_for_test()


class TestSuperUserRegistryStatus(ApiTestCase):
  def test_registry_status_no_config(self):
    with FreshConfigProvider():
      json = self.getJsonResponse(SuperUserRegistryStatus)
      self.assertEquals('config-db', json['status'])

  @mock.patch("config_app.config_endpoints.api.suconfig.database_is_valid", mock.Mock(return_value=False))
  def test_registry_status_no_database(self):
    with FreshConfigProvider():
      config_provider.save_config({'key': 'value'})
      json = self.getJsonResponse(SuperUserRegistryStatus)
      self.assertEquals('setup-db', json['status'])

  @mock.patch("config_app.config_endpoints.api.suconfig.database_is_valid", mock.Mock(return_value=True))
  def test_registry_status_db_has_superuser(self):
    with FreshConfigProvider():
      config_provider.save_config({'key': 'value'})
      json = self.getJsonResponse(SuperUserRegistryStatus)
      self.assertEquals('config', json['status'])

  @mock.patch("config_app.config_endpoints.api.suconfig.database_is_valid", mock.Mock(return_value=True))
  @mock.patch("config_app.config_endpoints.api.suconfig.database_has_users", mock.Mock(return_value=False))
  def test_registry_status_db_no_superuser(self):
    with FreshConfigProvider():
      config_provider.save_config({'key': 'value'})
      json = self.getJsonResponse(SuperUserRegistryStatus)
      self.assertEquals('create-superuser', json['status'])

  @mock.patch("config_app.config_endpoints.api.suconfig.database_is_valid", mock.Mock(return_value=True))
  @mock.patch("config_app.config_endpoints.api.suconfig.database_has_users", mock.Mock(return_value=True))
  def test_registry_status_setup_complete(self):
    with FreshConfigProvider():
      config_provider.save_config({'key': 'value', 'SETUP_COMPLETE': True})
      json = self.getJsonResponse(SuperUserRegistryStatus)
      self.assertEquals('config', json['status'])


class TestSuperUserConfigFile(ApiTestCase):
  def test_get_superuser_invalid_filename(self):
    with FreshConfigProvider():
      self.getResponse(SuperUserConfigFile, params=dict(filename='somefile'), expected_code=404)

  def test_get_superuser(self):
    with FreshConfigProvider():
      result = self.getJsonResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'))
      self.assertFalse(result['exists'])

  def test_post_no_file(self):
    with FreshConfigProvider():
      # No file
      self.postResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'), expected_code=400)

  def test_post_superuser_invalid_filename(self):
    with FreshConfigProvider():
      self.postResponse(SuperUserConfigFile, params=dict(filename='somefile'), expected_code=404)

  def test_post_superuser(self):
    with FreshConfigProvider():
      self.postResponse(SuperUserConfigFile, params=dict(filename='ssl.cert'), expected_code=400)


class TestSuperUserCreateInitialSuperUser(ApiTestCase):
  def test_no_config_file(self):
    with FreshConfigProvider():
      # If there is no config.yaml, then this method should fail for security reasons.
      data = dict(username='cooluser', password='password', email='fake@example.com')
      self.postResponse(SuperUserCreateInitialSuperUser, data=data, expected_code=403)

  def test_config_file_with_db_users(self):
    with FreshConfigProvider():
      # Write some config.
      self.putJsonResponse(SuperUserConfig, data=dict(config={}, hostname='foobar'))

      # If there is a config.yaml, but existing DB users exist, then this method should fail
      # for security reasons.
      data = dict(username='cooluser', password='password', email='fake@example.com')
      self.postResponse(SuperUserCreateInitialSuperUser, data=data, expected_code=403)

  def test_config_file_with_no_db_users(self):
    with FreshConfigProvider():
      # Write some config.
      self.putJsonResponse(SuperUserConfig, data=dict(config={}, hostname='foobar'))

      # Delete all the users in the DB.
      for user in list(User.select()):
        model.user.delete_user(user, all_queues)

      # This method should now succeed.
      data = dict(username='cooluser', password='password', email='fake@example.com')
      result = self.postJsonResponse(SuperUserCreateInitialSuperUser, data=data)
      self.assertTrue(result['status'])

      # Verify the superuser was created.
      User.get(User.username == 'cooluser')

      # Verify the superuser was placed into the config.
      result = self.getJsonResponse(SuperUserConfig)
      self.assertEquals(['cooluser'], result['config']['SUPER_USERS'])


class TestSuperUserConfigValidate(ApiTestCase):
  def test_nonsuperuser_noconfig(self):
    with FreshConfigProvider():
      result = self.postJsonResponse(SuperUserConfigValidate, params=dict(service='someservice'),
                                     data=dict(config={}))

      self.assertFalse(result['status'])

  def test_nonsuperuser_config(self):
    with FreshConfigProvider():
      # The validate config call works if there is no config.yaml OR the user is a superuser.
      # Add a config, and verify it breaks when unauthenticated.
      json = self.putJsonResponse(SuperUserConfig, data=dict(config={}, hostname='foobar'))
      self.assertTrue(json['exists'])

      result = self.postJsonResponse(SuperUserConfigValidate, params=dict(service='someservice'),
                                     data=dict(config={}))

      self.assertFalse(result['status'])


class TestSuperUserConfig(ApiTestCase):
  def test_get_superuser(self):
    with FreshConfigProvider():
      json = self.getJsonResponse(SuperUserConfig)

      # Note: We expect the config to be none because a config.yaml should never be checked into
      # the directory.
      self.assertIsNone(json['config'])

  def test_put(self):
    with FreshConfigProvider() as config:
      json = self.putJsonResponse(SuperUserConfig, data=dict(config={}, hostname='foobar'))
      self.assertTrue(json['exists'])

      # Verify the config file exists.
      self.assertTrue(config.config_exists())

      # This should succeed.
      json = self.putJsonResponse(SuperUserConfig, data=dict(config={}, hostname='barbaz'))
      self.assertTrue(json['exists'])

      json = self.getJsonResponse(SuperUserConfig)
      self.assertIsNotNone(json['config'])


if __name__ == '__main__':
  unittest.main()
0 config_app/config_util/__init__.py Normal file
62 config_app/config_util/config/TransientDirectoryProvider.py Normal file
@@ -0,0 +1,62 @@
import os

from shutil import copytree
from backports.tempfile import TemporaryDirectory

from config_app.config_util.config.fileprovider import FileConfigProvider

OLD_CONFIG_SUBDIR = 'old/'


class TransientDirectoryProvider(FileConfigProvider):
  """ Implementation of the config provider that reads and writes the data
      from/to the file system, only using temporary directories,
      deleting old dirs and creating new ones as requested.
  """

  def __init__(self, config_volume, yaml_filename, py_filename):
    # Create a temp directory that will be cleaned up when we change the config path.
    # This should ensure we have no "pollution" of different configs:
    # no uploaded config should ever affect subsequent config modifications/creations.
    temp_dir = TemporaryDirectory()
    self.temp_dir = temp_dir
    self.old_config_dir = None
    super(TransientDirectoryProvider, self).__init__(temp_dir.name, yaml_filename, py_filename)

  @property
  def provider_id(self):
    return 'transient'

  def new_config_dir(self):
    """
    Update the path with a new temporary directory, deleting the old one in the process
    """
    self.temp_dir.cleanup()
    temp_dir = TemporaryDirectory()

    self.config_volume = temp_dir.name
    self.temp_dir = temp_dir
    self.yaml_path = os.path.join(temp_dir.name, self.yaml_filename)

  def create_copy_of_config_dir(self):
    """
    Create a directory to store loaded/populated configuration (for rollback if necessary)
    """
    if self.old_config_dir is not None:
      self.old_config_dir.cleanup()

    temp_dir = TemporaryDirectory()
    self.old_config_dir = temp_dir

    # Python 2.7's shutil.copy() doesn't allow for copying to existing directories,
    # so when copying/reading to the old saved config, we have to talk to a subdirectory,
    # and use the shutil.copytree() function
    copytree(self.config_volume, os.path.join(temp_dir.name, OLD_CONFIG_SUBDIR))

  def get_config_dir_path(self):
    return self.config_volume

  def get_old_config_dir(self):
    if self.old_config_dir is None:
      raise Exception('Cannot return the old configuration directory: no old configuration has been saved')

    return os.path.join(self.old_config_dir.name, OLD_CONFIG_SUBDIR)
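A minimal sketch of the provider lifecycle the endpoints above rely on (write a config, snapshot it, then roll a fresh directory). The `config.yaml`/`config.py` filenames mirror the usual Quay names and are assumptions here; the initial `config_volume` argument is immediately replaced by a temporary directory, so its value is effectively a placeholder.

```python
from config_app.config_util.config.TransientDirectoryProvider import TransientDirectoryProvider

provider = TransientDirectoryProvider('/conf/stack', 'config.yaml', 'config.py')

provider.save_config({'SERVER_HOSTNAME': 'quay.example.com'})  # writes config.yaml (FileConfigProvider)
provider.create_copy_of_config_dir()                           # snapshot for potential rollback
print provider.get_old_config_dir()                            # points at the old/ copy of the snapshot

provider.new_config_dir()       # throw away the working dir and start from a clean temp dir
print provider.config_exists()  # False: the new directory has no config.yaml yet
```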
39 config_app/config_util/config/__init__.py Normal file
@@ -0,0 +1,39 @@
import base64
import os

from config_app.config_util.config.fileprovider import FileConfigProvider
from config_app.config_util.config.testprovider import TestConfigProvider
from config_app.config_util.config.TransientDirectoryProvider import TransientDirectoryProvider
from util.config.validator import EXTRA_CA_DIRECTORY, EXTRA_CA_DIRECTORY_PREFIX


def get_config_provider(config_volume, yaml_filename, py_filename, testing=False):
  """ Loads and returns the config provider for the current environment. """

  if testing:
    return TestConfigProvider()

  return TransientDirectoryProvider(config_volume, yaml_filename, py_filename)


def get_config_as_kube_secret(config_path):
  data = {}

  # Kubernetes secrets don't have sub-directories, so for the extra_ca_certs dir
  # we have to put the extra certs in with a prefix, and then one of our init scripts
  # (02_get_kube_certs.sh) will expand the prefixed certs into the equivalent directory
  # so that they'll be installed correctly on startup by the certs_install script
  certs_dir = os.path.join(config_path, EXTRA_CA_DIRECTORY)
  if os.path.exists(certs_dir):
    for extra_cert in os.listdir(certs_dir):
      with open(os.path.join(certs_dir, extra_cert)) as f:
        data[EXTRA_CA_DIRECTORY_PREFIX + extra_cert] = base64.b64encode(f.read())

  for name in os.listdir(config_path):
    file_path = os.path.join(config_path, name)
    if not os.path.isdir(file_path):
      with open(file_path) as f:
        data[name] = base64.b64encode(f.read())

  return data
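A small sketch of the secret payload `get_config_as_kube_secret` produces, suitable for the `data` section of a Kubernetes Secret manifest; paths and values here are illustrative.

```python
from config_app.config_util.config import get_config_provider, get_config_as_kube_secret

# With testing=False this returns a TransientDirectoryProvider, so the '/conf/stack'
# argument is only a placeholder for the real temp directory it creates.
provider = get_config_provider('/conf/stack', 'config.yaml', 'config.py')
provider.save_config({'SERVER_HOSTNAME': 'quay.example.com'})

secret_data = get_config_as_kube_secret(provider.get_config_dir_path())

# Every regular file in the config dir becomes a base64-encoded entry, and anything under
# extra_ca_certs/ is flattened into a prefixed key (re-expanded later by 02_get_kube_certs.sh).
print secret_data.keys()  # -> ['config.yaml']
```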
72 config_app/config_util/config/basefileprovider.py Normal file
@@ -0,0 +1,72 @@
import os
import logging

from config_app.config_util.config.baseprovider import (BaseProvider, import_yaml, export_yaml,
                                                         CannotWriteConfigException)

logger = logging.getLogger(__name__)


class BaseFileProvider(BaseProvider):
  """ Base implementation of the config provider that reads the data from the file system. """

  def __init__(self, config_volume, yaml_filename, py_filename):
    self.config_volume = config_volume
    self.yaml_filename = yaml_filename
    self.py_filename = py_filename

    self.yaml_path = os.path.join(config_volume, yaml_filename)
    self.py_path = os.path.join(config_volume, py_filename)

  def update_app_config(self, app_config):
    if os.path.exists(self.py_path):
      logger.debug('Applying config file: %s', self.py_path)
      app_config.from_pyfile(self.py_path)

    if os.path.exists(self.yaml_path):
      logger.debug('Applying config file: %s', self.yaml_path)
      import_yaml(app_config, self.yaml_path)

  def get_config(self):
    if not self.config_exists():
      return None

    config_obj = {}
    import_yaml(config_obj, self.yaml_path)
    return config_obj

  def config_exists(self):
    return self.volume_file_exists(self.yaml_filename)

  def volume_exists(self):
    return os.path.exists(self.config_volume)

  def volume_file_exists(self, filename):
    return os.path.exists(os.path.join(self.config_volume, filename))

  def get_volume_file(self, filename, mode='r'):
    return open(os.path.join(self.config_volume, filename), mode=mode)

  def get_volume_path(self, directory, filename):
    return os.path.join(directory, filename)

  def list_volume_directory(self, path):
    dirpath = os.path.join(self.config_volume, path)
    if not os.path.exists(dirpath):
      return None

    if not os.path.isdir(dirpath):
      return None

    return os.listdir(dirpath)

  def requires_restart(self, app_config):
    file_config = self.get_config()
    if not file_config:
      return False

    for key in file_config:
      if app_config.get(key) != file_config[key]:
        return True

    return False
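A quick sketch of the `requires_restart()` contract using the concrete `FileConfigProvider` further below: it returns True as soon as any key in the on-disk YAML differs from the in-memory config. The path is illustrative and assumed to exist and be writable.

```python
from config_app.config_util.config.fileprovider import FileConfigProvider

provider = FileConfigProvider('/conf/stack', 'config.yaml', 'config.py')
provider.save_config({'SERVER_HOSTNAME': 'quay.example.com'})

in_memory = {'SERVER_HOSTNAME': 'quay.example.com'}
print provider.requires_restart(in_memory)  # False: disk and memory agree

provider.save_config({'SERVER_HOSTNAME': 'other.example.com'})
print provider.requires_restart(in_memory)  # True: the on-disk config has drifted
```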
128 config_app/config_util/config/baseprovider.py Normal file
@@ -0,0 +1,128 @@
import logging
import yaml

from abc import ABCMeta, abstractmethod
from six import add_metaclass

from jsonschema import validate, ValidationError

from util.config.schema import CONFIG_SCHEMA

logger = logging.getLogger(__name__)


class CannotWriteConfigException(Exception):
  """ Exception raised when the config cannot be written. """
  pass


class SetupIncompleteException(Exception):
  """ Exception raised when attempting to verify config that has not yet been setup. """
  pass


def import_yaml(config_obj, config_file):
  with open(config_file) as f:
    c = yaml.safe_load(f)
    if not c:
      logger.debug('Empty YAML config file')
      return

    if isinstance(c, str):
      raise Exception('Invalid YAML config file: ' + str(c))

    for key in c.iterkeys():
      if key.isupper():
        config_obj[key] = c[key]

  if config_obj.get('SETUP_COMPLETE', False):
    try:
      validate(config_obj, CONFIG_SCHEMA)
    except ValidationError:
      # TODO: Change this into a real error
      logger.exception('Could not validate config schema')
  else:
    logger.debug('Skipping config schema validation because setup is not complete')

  return config_obj


def get_yaml(config_obj):
  return yaml.safe_dump(config_obj, encoding='utf-8', allow_unicode=True)


def export_yaml(config_obj, config_file):
  try:
    with open(config_file, 'w') as f:
      f.write(get_yaml(config_obj))
  except IOError as ioe:
    raise CannotWriteConfigException(str(ioe))


@add_metaclass(ABCMeta)
class BaseProvider(object):
  """ A configuration provider helps to load, save, and handle config override in the application.
  """

  @property
  def provider_id(self):
    raise NotImplementedError

  @abstractmethod
  def update_app_config(self, app_config):
    """ Updates the given application config object with the loaded override config. """

  @abstractmethod
  def get_config(self):
    """ Returns the contents of the config override file, or None if none. """

  @abstractmethod
  def save_config(self, config_object):
    """ Updates the contents of the config override file to those given. """

  @abstractmethod
  def config_exists(self):
    """ Returns true if a config override file exists in the config volume. """

  @abstractmethod
  def volume_exists(self):
    """ Returns whether the config override volume exists. """

  @abstractmethod
  def volume_file_exists(self, filename):
    """ Returns whether the file with the given name exists under the config override volume. """

  @abstractmethod
  def get_volume_file(self, filename, mode='r'):
    """ Returns a Python file referring to the given name under the config override volume. """

  @abstractmethod
  def write_volume_file(self, filename, contents):
    """ Writes the given contents to the config override volume, with the given filename. """

  @abstractmethod
  def remove_volume_file(self, filename):
    """ Removes the config override volume file with the given filename. """

  @abstractmethod
  def list_volume_directory(self, path):
    """ Returns a list of strings representing the names of the files found in the config override
        directory under the given path. If the path doesn't exist, returns None.
    """

  @abstractmethod
  def save_volume_file(self, filename, flask_file):
    """ Saves the given flask file to the config override volume, with the given
        filename.
    """

  @abstractmethod
  def requires_restart(self, app_config):
    """ If true, the configuration loaded into memory for the app does not match that on disk,
        indicating that this container requires a restart.
    """

  @abstractmethod
  def get_volume_path(self, directory, filename):
    """ Helper for constructing file paths, which may differ between providers. For example,
        kubernetes can't have subfolders in configmaps """
60
config_app/config_util/config/fileprovider.py
Normal file
60
config_app/config_util/config/fileprovider.py
Normal file
|
@ -0,0 +1,60 @@
|
|||
import os
|
||||
import logging
|
||||
|
||||
from config_app.config_util.config.baseprovider import export_yaml, CannotWriteConfigException
|
||||
from config_app.config_util.config.basefileprovider import BaseFileProvider
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _ensure_parent_dir(filepath):
|
||||
""" Ensures that the parent directory of the given file path exists. """
|
||||
try:
|
||||
parentpath = os.path.abspath(os.path.join(filepath, os.pardir))
|
||||
if not os.path.isdir(parentpath):
|
||||
os.makedirs(parentpath)
|
||||
except IOError as ioe:
|
||||
raise CannotWriteConfigException(str(ioe))
|
||||
|
||||
|
||||
class FileConfigProvider(BaseFileProvider):
|
||||
""" Implementation of the config provider that reads and writes the data
|
||||
from/to the file system. """
|
||||
|
||||
def __init__(self, config_volume, yaml_filename, py_filename):
|
||||
super(FileConfigProvider, self).__init__(config_volume, yaml_filename, py_filename)
|
||||
|
||||
@property
|
||||
def provider_id(self):
|
||||
return 'file'
|
||||
|
||||
def save_config(self, config_obj):
|
||||
export_yaml(config_obj, self.yaml_path)
|
||||
|
||||
def write_volume_file(self, filename, contents):
|
||||
filepath = os.path.join(self.config_volume, filename)
|
||||
_ensure_parent_dir(filepath)
|
||||
|
||||
try:
|
||||
with open(filepath, mode='w') as f:
|
||||
f.write(contents)
|
||||
except IOError as ioe:
|
||||
raise CannotWriteConfigException(str(ioe))
|
||||
|
||||
return filepath
|
||||
|
||||
def remove_volume_file(self, filename):
|
||||
filepath = os.path.join(self.config_volume, filename)
|
||||
os.remove(filepath)
|
||||
|
||||
def save_volume_file(self, filename, flask_file):
|
||||
filepath = os.path.join(self.config_volume, filename)
|
||||
_ensure_parent_dir(filepath)
|
||||
|
||||
# Write the file.
|
||||
try:
|
||||
flask_file.save(filepath)
|
||||
except IOError as ioe:
|
||||
raise CannotWriteConfigException(str(ioe))
|
||||
|
||||
return filepath
|
75
config_app/config_util/config/test/test_helpers.py
Normal file
75
config_app/config_util/config/test/test_helpers.py
Normal file
|
@ -0,0 +1,75 @@
|
|||
import pytest
|
||||
import os
|
||||
import base64
|
||||
|
||||
from backports.tempfile import TemporaryDirectory
|
||||
|
||||
from config_app.config_util.config import get_config_as_kube_secret
|
||||
from util.config.validator import EXTRA_CA_DIRECTORY
|
||||
|
||||
|
||||
def _create_temp_file_structure(file_structure):
|
||||
temp_dir = TemporaryDirectory()
|
||||
|
||||
for filename, data in file_structure.iteritems():
|
||||
if filename == EXTRA_CA_DIRECTORY:
|
||||
extra_ca_dir_path = os.path.join(temp_dir.name, EXTRA_CA_DIRECTORY)
|
||||
os.mkdir(extra_ca_dir_path)
|
||||
|
||||
for name, cert_value in data:
|
||||
with open(os.path.join(extra_ca_dir_path, name), 'w') as f:
|
||||
f.write(cert_value)
|
||||
else:
|
||||
with open(os.path.join(temp_dir.name, filename), 'w') as f:
|
||||
f.write(data)
|
||||
|
||||
return temp_dir
|
||||
|
||||
|
||||
@pytest.mark.parametrize('file_structure, expected_secret', [
|
||||
pytest.param({
|
||||
'config.yaml': 'test:true',
|
||||
},
|
||||
{
|
||||
'config.yaml': 'dGVzdDp0cnVl',
|
||||
}, id='just a config value'),
|
||||
pytest.param({
|
||||
'config.yaml': 'test:true',
|
||||
'otherfile.ext': 'im a file'
|
||||
},
|
||||
{
|
||||
'config.yaml': 'dGVzdDp0cnVl',
|
||||
'otherfile.ext': base64.b64encode('im a file')
|
||||
}, id='config and another file'),
|
||||
pytest.param({
|
||||
'config.yaml': 'test:true',
|
||||
'extra_ca_certs': [
|
||||
('cert.crt', 'im a cert!'),
|
||||
]
|
||||
},
|
||||
{
|
||||
'config.yaml': 'dGVzdDp0cnVl',
|
||||
'extra_ca_certs_cert.crt': base64.b64encode('im a cert!'),
|
||||
}, id='config and an extra cert'),
|
||||
pytest.param({
|
||||
'config.yaml': 'test:true',
|
||||
'otherfile.ext': 'im a file',
|
||||
'extra_ca_certs': [
|
||||
('cert.crt', 'im a cert!'),
|
||||
('another.crt', 'im a different cert!'),
|
||||
]
|
||||
},
|
||||
{
|
||||
'config.yaml': 'dGVzdDp0cnVl',
|
||||
'otherfile.ext': base64.b64encode('im a file'),
|
||||
'extra_ca_certs_cert.crt': base64.b64encode('im a cert!'),
|
||||
'extra_ca_certs_another.crt': base64.b64encode('im a different cert!'),
|
||||
}, id='config, files, and extra certs!'),
|
||||
])
|
||||
def test_get_config_as_kube_secret(file_structure, expected_secret):
|
||||
temp_dir = _create_temp_file_structure(file_structure)
|
||||
|
||||
secret = get_config_as_kube_secret(temp_dir.name)
|
||||
assert secret == expected_secret
|
||||
|
||||
temp_dir.cleanup()
|
|
@ -0,0 +1,68 @@
|
|||
import pytest
|
||||
import os
|
||||
|
||||
from config_app.config_util.config.TransientDirectoryProvider import TransientDirectoryProvider
|
||||
|
||||
|
||||
@pytest.mark.parametrize('files_to_write, operations, expected_new_dir', [
|
||||
pytest.param({
|
||||
'config.yaml': 'a config',
|
||||
}, ([], [], []), {
|
||||
'config.yaml': 'a config',
|
||||
}, id='just a config'),
|
||||
pytest.param({
|
||||
'config.yaml': 'a config',
|
||||
'oldfile': 'hmmm'
|
||||
}, ([], [], ['oldfile']), {
|
||||
'config.yaml': 'a config',
|
||||
}, id='delete a file'),
|
||||
pytest.param({
|
||||
'config.yaml': 'a config',
|
||||
'oldfile': 'hmmm'
|
||||
}, ([('newfile', 'asdf')], [], ['oldfile']), {
|
||||
'config.yaml': 'a config',
|
||||
'newfile': 'asdf'
|
||||
}, id='delete and add a file'),
|
||||
pytest.param({
|
||||
'config.yaml': 'a config',
|
||||
'somefile': 'before'
|
||||
}, ([('newfile', 'asdf')], [('somefile', 'after')], []), {
|
||||
'config.yaml': 'a config',
|
||||
'newfile': 'asdf',
|
||||
'somefile': 'after',
|
||||
}, id='add new files and change files'),
|
||||
])
|
||||
def test_transient_dir_copy_config_dir(files_to_write, operations, expected_new_dir):
|
||||
config_provider = TransientDirectoryProvider('', '', '')
|
||||
|
||||
for name, data in files_to_write.iteritems():
|
||||
config_provider.write_volume_file(name, data)
|
||||
|
||||
config_provider.create_copy_of_config_dir()
|
||||
|
||||
for create in operations[0]:
|
||||
(name, data) = create
|
||||
config_provider.write_volume_file(name, data)
|
||||
|
||||
for update in operations[1]:
|
||||
(name, data) = update
|
||||
config_provider.write_volume_file(name, data)
|
||||
|
||||
for delete in operations[2]:
|
||||
config_provider.remove_volume_file(delete)
|
||||
|
||||
# check that the new directory matches expected state
|
||||
for filename, data in expected_new_dir.iteritems():
|
||||
with open(os.path.join(config_provider.get_config_dir_path(), filename)) as f:
|
||||
new_data = f.read()
|
||||
assert new_data == data
|
||||
|
||||
# Now check that the old dir matches the original state
|
||||
saved = config_provider.get_old_config_dir()
|
||||
|
||||
for filename, data in files_to_write.iteritems():
|
||||
with open(os.path.join(saved, filename)) as f:
|
||||
new_data = f.read()
|
||||
assert new_data == data
|
||||
|
||||
config_provider.temp_dir.cleanup()
|
83
config_app/config_util/config/testprovider.py
Normal file
83
config_app/config_util/config/testprovider.py
Normal file
|
@ -0,0 +1,83 @@
|
|||
import json
|
||||
import io
|
||||
import os
|
||||
|
||||
from config_app.config_util.config.baseprovider import BaseProvider
|
||||
|
||||
REAL_FILES = ['test/data/signing-private.gpg', 'test/data/signing-public.gpg', 'test/data/test.pem']
|
||||
|
||||
|
||||
class TestConfigProvider(BaseProvider):
|
||||
""" Implementation of the config provider for testing. Everything is kept in-memory instead of on
|
||||
the real file system. """
|
||||
|
||||
def __init__(self):
|
||||
self.clear()
|
||||
|
||||
def clear(self):
|
||||
self.files = {}
|
||||
self._config = {}
|
||||
|
||||
@property
|
||||
def provider_id(self):
|
||||
return 'test'
|
||||
|
||||
def update_app_config(self, app_config):
|
||||
self._config = app_config
|
||||
|
||||
def get_config(self):
|
||||
if 'config.yaml' not in self.files:
|
||||
return None
|
||||
|
||||
return json.loads(self.files.get('config.yaml', '{}'))
|
||||
|
||||
def save_config(self, config_obj):
|
||||
self.files['config.yaml'] = json.dumps(config_obj)
|
||||
|
||||
def config_exists(self):
|
||||
return 'config.yaml' in self.files
|
||||
|
||||
def volume_exists(self):
|
||||
return True
|
||||
|
||||
def volume_file_exists(self, filename):
|
||||
if filename in REAL_FILES:
|
||||
return True
|
||||
|
||||
return filename in self.files
|
||||
|
||||
def save_volume_file(self, filename, flask_file):
|
||||
self.files[filename] = flask_file.read()
|
||||
|
||||
def write_volume_file(self, filename, contents):
|
||||
self.files[filename] = contents
|
||||
|
||||
def get_volume_file(self, filename, mode='r'):
|
||||
if filename in REAL_FILES:
|
||||
return open(filename, mode=mode)
|
||||
|
||||
return io.BytesIO(self.files[filename])
|
||||
|
||||
def remove_volume_file(self, filename):
|
||||
self.files.pop(filename, None)
|
||||
|
||||
def list_volume_directory(self, path):
|
||||
paths = []
|
||||
for filename in self.files:
|
||||
if filename.startswith(path):
|
||||
paths.append(filename[len(path) + 1:])
|
||||
|
||||
return paths
|
||||
|
||||
def requires_restart(self, app_config):
|
||||
return False
|
||||
|
||||
def reset_for_test(self):
|
||||
self._config['SUPER_USERS'] = ['devtable']
|
||||
self.files = {}
|
||||
|
||||
def get_volume_path(self, directory, filename):
|
||||
return os.path.join(directory, filename)
|
||||
|
||||
def get_config_dir_path(self):
|
||||
return ''
|
306
config_app/config_util/k8saccessor.py
Normal file
306
config_app/config_util/k8saccessor.py
Normal file
|
@ -0,0 +1,306 @@
|
|||
import logging
|
||||
import json
|
||||
import base64
|
||||
import datetime
|
||||
import os
|
||||
|
||||
from requests import Request, Session
|
||||
from collections import namedtuple
|
||||
from util.config.validator import EXTRA_CA_DIRECTORY, EXTRA_CA_DIRECTORY_PREFIX
|
||||
|
||||
from config_app.config_util.k8sconfig import KubernetesConfig
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
QE_DEPLOYMENT_LABEL = 'quay-enterprise-component'
|
||||
QE_CONTAINER_NAME = 'quay-enterprise-app'
|
||||
|
||||
|
||||
# Tuple containing response of the deployment rollout status method.
|
||||
# status is one of: 'failed' | 'progressing' | 'available'
|
||||
# message is any string describing the state.
|
||||
DeploymentRolloutStatus = namedtuple('DeploymentRolloutStatus', ['status', 'message'])
|
||||
|
||||
class K8sApiException(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def _deployment_rollout_status_message(deployment, deployment_name):
|
||||
"""
|
||||
Gets the friendly human readable message of the current state of the deployment rollout
|
||||
:param deployment: python dict matching: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#deployment-v1-apps
|
||||
:param deployment_name: string
|
||||
:return: DeploymentRolloutStatus
|
||||
"""
|
||||
# Logic for rollout status pulled from the `kubectl rollout status` command:
|
||||
# https://github.com/kubernetes/kubernetes/blob/d9ba19c751709c8608e09a0537eea98973f3a796/pkg/kubectl/rollout_status.go#L62
|
||||
if deployment['metadata']['generation'] <= deployment['status']['observedGeneration']:
|
||||
for cond in deployment['status']['conditions']:
|
||||
if cond['type'] == 'Progressing' and cond['reason'] == 'ProgressDeadlineExceeded':
|
||||
return DeploymentRolloutStatus(
|
||||
status='failed',
|
||||
message="Deployment %s's rollout failed. Please try again later." % deployment_name
|
||||
)
|
||||
|
||||
desired_replicas = deployment['spec']['replicas']
|
||||
current_replicas = deployment['status'].get('replicas', 0)
|
||||
if current_replicas == 0:
|
||||
return DeploymentRolloutStatus(
|
||||
status='available',
|
||||
message='Deployment %s updated (no replicas, so nothing to roll out)' % deployment_name
|
||||
)
|
||||
|
||||
# Some fields are optional in the spec, so if they're omitted, replace with defaults that won't indicate a wrong status
|
||||
available_replicas = deployment['status'].get('availableReplicas', 0)
|
||||
updated_replicas = deployment['status'].get('updatedReplicas', 0)
|
||||
|
||||
if updated_replicas < desired_replicas:
|
||||
return DeploymentRolloutStatus(
|
||||
status='progressing',
|
||||
message='Waiting for rollout to finish: %d out of %d new replicas have been updated...' % (
|
||||
updated_replicas, desired_replicas)
|
||||
)
|
||||
|
||||
if current_replicas > updated_replicas:
|
||||
return DeploymentRolloutStatus(
|
||||
status='progressing',
|
||||
message='Waiting for rollout to finish: %d old replicas are pending termination...' % (
|
||||
current_replicas - updated_replicas)
|
||||
)
|
||||
|
||||
if available_replicas < updated_replicas:
|
||||
return DeploymentRolloutStatus(
|
||||
status='progressing',
|
||||
message='Waiting for rollout to finish: %d of %d updated replicas are available...' % (
|
||||
available_replicas, updated_replicas)
|
||||
)
|
||||
|
||||
return DeploymentRolloutStatus(
|
||||
status='available',
|
||||
message='Deployment %s successfully rolled out.' % deployment_name
|
||||
)
|
||||
|
||||
return DeploymentRolloutStatus(
|
||||
status='progressing',
|
||||
message='Waiting for deployment spec to be updated...'
|
||||
)
|
||||
|
||||
|
||||
class KubernetesAccessorSingleton(object):
|
||||
""" Singleton allowing access to kubernetes operations """
|
||||
_instance = None
|
||||
|
||||
def __init__(self, kube_config=None):
|
||||
self.kube_config = kube_config
|
||||
if kube_config is None:
|
||||
self.kube_config = KubernetesConfig.from_env()
|
||||
|
||||
KubernetesAccessorSingleton._instance = self
|
||||
|
||||
@classmethod
|
||||
def get_instance(cls, kube_config=None):
|
||||
"""
|
||||
Singleton getter implementation, returns the instance if one exists, otherwise creates the
|
||||
instance and ties it to the class.
|
||||
:return: KubernetesAccessorSingleton
|
||||
"""
|
||||
if cls._instance is None:
|
||||
return cls(kube_config)
|
||||
|
||||
return cls._instance
|
||||
|
||||
def save_secret_to_directory(self, dir_path):
|
||||
"""
|
||||
Saves all files in the kubernetes secret to a local directory.
|
||||
Assumes the directory is empty.
|
||||
"""
|
||||
secret = self._lookup_secret()
|
||||
|
||||
secret_data = secret.get('data', {})
|
||||
|
||||
# Make the `extra_ca_certs` dir to ensure we can populate extra certs
|
||||
extra_ca_dir_path = os.path.join(dir_path, EXTRA_CA_DIRECTORY)
|
||||
os.mkdir(extra_ca_dir_path)
|
||||
|
||||
for secret_filename, data in secret_data.iteritems():
|
||||
write_path = os.path.join(dir_path, secret_filename)
|
||||
|
||||
if EXTRA_CA_DIRECTORY_PREFIX in secret_filename:
|
||||
write_path = os.path.join(extra_ca_dir_path, secret_filename.replace(EXTRA_CA_DIRECTORY_PREFIX, ''))
|
||||
|
||||
with open(write_path, 'w') as f:
|
||||
f.write(base64.b64decode(data))
|
||||
|
||||
return 200
|
||||
|
||||
def save_file_as_secret(self, name, file_pointer):
|
||||
value = file_pointer.read()
|
||||
self._update_secret_file(name, value)
|
||||
|
||||
def replace_qe_secret(self, new_secret_data):
|
||||
"""
|
||||
Removes the old config and replaces it with the new_secret_data as one action
|
||||
"""
|
||||
# Check first that the namespace for Red Hat Quay exists. If it does not, report that
|
||||
# as an error, as it seems to be a common issue.
|
||||
namespace_url = 'namespaces/%s' % (self.kube_config.qe_namespace)
|
||||
response = self._execute_k8s_api('GET', namespace_url)
|
||||
if response.status_code // 100 != 2:
|
||||
msg = 'A Kubernetes namespace with name `%s` must be created to save config' % self.kube_config.qe_namespace
|
||||
raise Exception(msg)
|
||||
|
||||
# Check if the secret exists. If not, then we create an empty secret and then update the file
|
||||
# inside.
|
||||
secret_url = 'namespaces/%s/secrets/%s' % (self.kube_config.qe_namespace, self.kube_config.qe_config_secret)
|
||||
secret = self._lookup_secret()
|
||||
if secret is None:
|
||||
self._assert_success(self._execute_k8s_api('POST', secret_url, {
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": self.kube_config.qe_config_secret
|
||||
},
|
||||
"data": {}
|
||||
}))
|
||||
|
||||
# Update the secret to reflect the file change.
|
||||
secret['data'] = new_secret_data
|
||||
|
||||
self._assert_success(self._execute_k8s_api('PUT', secret_url, secret))
|
||||
|
||||
def get_deployment_rollout_status(self, deployment_name):
|
||||
"""
|
||||
Returns the status of a rollout of a given deployment
|
||||
:return: DeploymentRolloutStatus
|
||||
"""
|
||||
deployment_selector_url = 'namespaces/%s/deployments/%s' % (
|
||||
self.kube_config.qe_namespace, deployment_name
|
||||
)
|
||||
|
||||
response = self._execute_k8s_api('GET', deployment_selector_url, api_prefix='apis/apps/v1')
|
||||
if response.status_code != 200:
|
||||
return DeploymentRolloutStatus('failed', 'Could not get deployment. Please check that the deployment exists')
|
||||
|
||||
deployment = json.loads(response.text)
|
||||
|
||||
return _deployment_rollout_status_message(deployment, deployment_name)
|
||||
|
||||
def get_qe_deployments(self):
|
||||
"""
|
||||
Returns all deployments matching the label selector provided in the KubeConfig
|
||||
"""
|
||||
deployment_selector_url = 'namespaces/%s/deployments?labelSelector=%s%%3D%s' % (
|
||||
self.kube_config.qe_namespace, QE_DEPLOYMENT_LABEL, self.kube_config.qe_deployment_selector
|
||||
)
|
||||
|
||||
response = self._execute_k8s_api('GET', deployment_selector_url, api_prefix='apis/extensions/v1beta1')
|
||||
if response.status_code != 200:
|
||||
return None
|
||||
return json.loads(response.text)
|
||||
|
||||
def cycle_qe_deployments(self, deployment_names):
|
||||
"""
|
||||
Triggers a rollout of all desired deployments in the qe namespace
|
||||
"""
|
||||
|
||||
for name in deployment_names:
|
||||
logger.debug('Cycling deployment %s', name)
|
||||
deployment_url = 'namespaces/%s/deployments/%s' % (self.kube_config.qe_namespace, name)
|
||||
|
||||
# There is currently no command to simply rolling restart all the pods: https://github.com/kubernetes/kubernetes/issues/13488
|
||||
# Instead, we modify the template of the deployment with a dummy env variable to trigger a cycle of the pods
|
||||
# (based off this comment: https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-240393845)
|
||||
self._assert_success(self._execute_k8s_api('PATCH', deployment_url, {
|
||||
'spec': {
|
||||
'template': {
|
||||
'spec': {
|
||||
'containers': [{
|
||||
# Note: this name MUST match the deployment template's pod template
|
||||
# (e.g. <template>.spec.template.spec.containers[0] == 'quay-enterprise-app')
|
||||
'name': QE_CONTAINER_NAME,
|
||||
'env': [{
|
||||
'name': 'RESTART_TIME',
|
||||
'value': str(datetime.datetime.now())
|
||||
}],
|
||||
}]
|
||||
}
|
||||
}
|
||||
}
|
||||
}, api_prefix='apis/extensions/v1beta1', content_type='application/strategic-merge-patch+json'))
|
||||
|
||||
def rollback_deployment(self, deployment_name):
|
||||
deployment_rollback_url = 'namespaces/%s/deployments/%s/rollback' % (
|
||||
self.kube_config.qe_namespace, deployment_name
|
||||
)
|
||||
|
||||
self._assert_success(self._execute_k8s_api('POST', deployment_rollback_url, {
|
||||
'name': deployment_name,
|
||||
'rollbackTo': {
|
||||
# revision=0 makes the deployment rollout to the previous revision
|
||||
'revision': 0
|
||||
}
|
||||
}, api_prefix='apis/extensions/v1beta1'), 201)
|
||||
|
||||
def _assert_success(self, response, expected_code=200):
|
||||
if response.status_code != expected_code:
|
||||
logger.error('Kubernetes API call failed with response: %s => %s', response.status_code,
|
||||
response.text)
|
||||
raise K8sApiException('Kubernetes API call failed: %s' % response.text)
|
||||
|
||||
def _update_secret_file(self, relative_file_path, value=None):
|
||||
if '/' in relative_file_path:
|
||||
raise Exception('Expected path from get_volume_path, but found slashes')
|
||||
|
||||
# Check first that the namespace for Red Hat Quay exists. If it does not, report that
|
||||
# as an error, as it seems to be a common issue.
|
||||
namespace_url = 'namespaces/%s' % (self.kube_config.qe_namespace)
|
||||
response = self._execute_k8s_api('GET', namespace_url)
|
||||
if response.status_code // 100 != 2:
|
||||
msg = 'A Kubernetes namespace with name `%s` must be created to save config' % self.kube_config.qe_namespace
|
||||
raise Exception(msg)
|
||||
|
||||
# Check if the secret exists. If not, then we create an empty secret and then update the file
|
||||
# inside.
|
||||
secret_url = 'namespaces/%s/secrets/%s' % (self.kube_config.qe_namespace, self.kube_config.qe_config_secret)
|
||||
secret = self._lookup_secret()
|
||||
if secret is None:
|
||||
self._assert_success(self._execute_k8s_api('POST', secret_url, {
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": self.kube_config.qe_config_secret
|
||||
},
|
||||
"data": {}
|
||||
}))
|
||||
|
||||
# Update the secret to reflect the file change.
|
||||
secret['data'] = secret.get('data', {})
|
||||
|
||||
if value is not None:
|
||||
secret['data'][relative_file_path] = base64.b64encode(value)
|
||||
else:
|
||||
secret['data'].pop(relative_file_path)
|
||||
|
||||
self._assert_success(self._execute_k8s_api('PUT', secret_url, secret))
|
||||
|
||||
def _lookup_secret(self):
|
||||
secret_url = 'namespaces/%s/secrets/%s' % (self.kube_config.qe_namespace, self.kube_config.qe_config_secret)
|
||||
response = self._execute_k8s_api('GET', secret_url)
|
||||
if response.status_code != 200:
|
||||
return None
|
||||
return json.loads(response.text)
|
||||
|
||||
def _execute_k8s_api(self, method, relative_url, data=None, api_prefix='api/v1', content_type='application/json'):
|
||||
headers = {
|
||||
'Authorization': 'Bearer ' + self.kube_config.service_account_token
|
||||
}
|
||||
|
||||
if data:
|
||||
headers['Content-Type'] = content_type
|
||||
|
||||
data = json.dumps(data) if data else None
|
||||
session = Session()
|
||||
url = 'https://%s/%s/%s' % (self.kube_config.api_host, api_prefix, relative_url)
|
||||
|
||||
request = Request(method, url, data=data, headers=headers)
|
||||
return session.send(request.prepare(), verify=False, timeout=2)
|
46
config_app/config_util/k8sconfig.py
Normal file
46
config_app/config_util/k8sconfig.py
Normal file
|
@ -0,0 +1,46 @@
|
|||
import os
|
||||
|
||||
SERVICE_ACCOUNT_TOKEN_PATH = '/var/run/secrets/kubernetes.io/serviceaccount/token'
|
||||
|
||||
DEFAULT_QE_NAMESPACE = 'quay-enterprise'
|
||||
DEFAULT_QE_CONFIG_SECRET = 'quay-enterprise-config-secret'
|
||||
|
||||
# The name of the quay enterprise deployment (not config app) that is used to query & rollout
|
||||
DEFAULT_QE_DEPLOYMENT_SELECTOR = 'app'
|
||||
|
||||
|
||||
def get_k8s_namespace():
|
||||
return os.environ.get('QE_K8S_NAMESPACE', DEFAULT_QE_NAMESPACE)
|
||||
|
||||
|
||||
class KubernetesConfig(object):
|
||||
def __init__(self, api_host='', service_account_token=SERVICE_ACCOUNT_TOKEN_PATH,
|
||||
qe_namespace=DEFAULT_QE_NAMESPACE,
|
||||
qe_config_secret=DEFAULT_QE_CONFIG_SECRET,
|
||||
qe_deployment_selector=DEFAULT_QE_DEPLOYMENT_SELECTOR):
|
||||
self.api_host = api_host
|
||||
self.qe_namespace = qe_namespace
|
||||
self.qe_config_secret = qe_config_secret
|
||||
self.qe_deployment_selector = qe_deployment_selector
|
||||
self.service_account_token = service_account_token
|
||||
|
||||
@classmethod
|
||||
def from_env(cls):
|
||||
# Load the service account token from the local store.
|
||||
if not os.path.exists(SERVICE_ACCOUNT_TOKEN_PATH):
|
||||
raise Exception('Cannot load Kubernetes service account token')
|
||||
|
||||
with open(SERVICE_ACCOUNT_TOKEN_PATH, 'r') as f:
|
||||
service_token = f.read()
|
||||
|
||||
api_host = os.environ.get('KUBERNETES_SERVICE_HOST', '')
|
||||
port = os.environ.get('KUBERNETES_SERVICE_PORT')
|
||||
if port:
|
||||
api_host += ':' + port
|
||||
|
||||
qe_namespace = get_k8s_namespace()
|
||||
qe_config_secret = os.environ.get('QE_K8S_CONFIG_SECRET', DEFAULT_QE_CONFIG_SECRET)
|
||||
qe_deployment_selector = os.environ.get('QE_DEPLOYMENT_SELECTOR', DEFAULT_QE_DEPLOYMENT_SELECTOR)
|
||||
|
||||
return cls(api_host=api_host, service_account_token=service_token, qe_namespace=qe_namespace,
|
||||
qe_config_secret=qe_config_secret, qe_deployment_selector=qe_deployment_selector)
|
47
config_app/config_util/log.py
Normal file
47
config_app/config_util/log.py
Normal file
|
@ -0,0 +1,47 @@
|
|||
import os
|
||||
from config_app._init_config import CONF_DIR
|
||||
|
||||
|
||||
def logfile_path(jsonfmt=False, debug=False):
|
||||
"""
|
||||
Returns a logging conf file path following these rules:
|
||||
- conf/logging_debug_json.conf # jsonfmt=true, debug=true
|
||||
- conf/logging_json.conf # jsonfmt=true, debug=false
|
||||
- conf/logging_debug.conf # jsonfmt=false, debug=true
|
||||
- conf/logging.conf # jsonfmt=false, debug=false
|
||||
Can be parametrized via envvars: JSONLOG=true, DEBUGLOG=true
|
||||
"""
|
||||
_json = ""
|
||||
_debug = ""
|
||||
|
||||
if jsonfmt or os.getenv('JSONLOG', 'false').lower() == 'true':
|
||||
_json = "_json"
|
||||
|
||||
if debug or os.getenv('DEBUGLOG', 'false').lower() == 'true':
|
||||
_debug = "_debug"
|
||||
|
||||
return os.path.join(CONF_DIR, "logging%s%s.conf" % (_debug, _json))
|
||||
|
||||
|
||||
def filter_logs(values, filtered_fields):
|
||||
"""
|
||||
Takes a dict and a list of keys to filter.
|
||||
eg:
|
||||
with filtered_fields:
|
||||
[{'key': ['k1', 'k2'], 'fn': lambda x: 'filtered'}]
|
||||
and values:
|
||||
{'k1': {'k2': 'some-secret'}, 'k3': 'some-value'}
|
||||
the returned dict is:
|
||||
{'k1': {'k2': 'filtered'}, 'k3': 'some-value'}
|
||||
"""
|
||||
for field in filtered_fields:
|
||||
cdict = values
|
||||
|
||||
for key in field['key'][:-1]:
|
||||
if key in cdict:
|
||||
cdict = cdict[key]
|
||||
|
||||
last_key = field['key'][-1]
|
||||
|
||||
if last_key in cdict and cdict[last_key]:
|
||||
cdict[last_key] = field['fn'](cdict[last_key])
|
85
config_app/config_util/ssl.py
Normal file
85
config_app/config_util/ssl.py
Normal file
|
@ -0,0 +1,85 @@
|
|||
from fnmatch import fnmatch
|
||||
|
||||
import OpenSSL
|
||||
|
||||
|
||||
class CertInvalidException(Exception):
|
||||
""" Exception raised when a certificate could not be parsed/loaded. """
|
||||
pass
|
||||
|
||||
|
||||
class KeyInvalidException(Exception):
|
||||
""" Exception raised when a key could not be parsed/loaded or successfully applied to a cert. """
|
||||
pass
|
||||
|
||||
|
||||
def load_certificate(cert_contents):
|
||||
""" Loads the certificate from the given contents and returns it or raises a CertInvalidException
|
||||
on failure.
|
||||
"""
|
||||
try:
|
||||
cert = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cert_contents)
|
||||
return SSLCertificate(cert)
|
||||
except OpenSSL.crypto.Error as ex:
|
||||
raise CertInvalidException(ex.message[0][2])
|
||||
|
||||
|
||||
_SUBJECT_ALT_NAME = 'subjectAltName'
|
||||
|
||||
|
||||
class SSLCertificate(object):
|
||||
""" Helper class for easier working with SSL certificates. """
|
||||
|
||||
def __init__(self, openssl_cert):
|
||||
self.openssl_cert = openssl_cert
|
||||
|
||||
def validate_private_key(self, private_key_path):
|
||||
""" Validates that the private key found at the given file path applies to this certificate.
|
||||
Raises a KeyInvalidException on failure.
|
||||
"""
|
||||
context = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_METHOD)
|
||||
context.use_certificate(self.openssl_cert)
|
||||
|
||||
try:
|
||||
context.use_privatekey_file(private_key_path)
|
||||
context.check_privatekey()
|
||||
except OpenSSL.SSL.Error as ex:
|
||||
raise KeyInvalidException(ex.message[0][2])
|
||||
|
||||
def matches_name(self, check_name):
|
||||
""" Returns true if this SSL certificate matches the given DNS hostname. """
|
||||
for dns_name in self.names:
|
||||
if fnmatch(check_name, dns_name):
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
@property
|
||||
def expired(self):
|
||||
""" Returns whether the SSL certificate has expired. """
|
||||
return self.openssl_cert.has_expired()
|
||||
|
||||
@property
|
||||
def common_name(self):
|
||||
""" Returns the defined common name for the certificate, if any. """
|
||||
return self.openssl_cert.get_subject().commonName
|
||||
|
||||
@property
|
||||
def names(self):
|
||||
""" Returns all the DNS names to which the certificate applies. May be empty. """
|
||||
dns_names = set()
|
||||
common_name = self.common_name
|
||||
if common_name is not None:
|
||||
dns_names.add(common_name)
|
||||
|
||||
# Find the DNS extension, if any.
|
||||
for i in range(0, self.openssl_cert.get_extension_count()):
|
||||
ext = self.openssl_cert.get_extension(i)
|
||||
if ext.get_short_name() == _SUBJECT_ALT_NAME:
|
||||
value = str(ext)
|
||||
for san_name in value.split(','):
|
||||
san_name_trimmed = san_name.strip()
|
||||
if san_name_trimmed.startswith('DNS:'):
|
||||
dns_names.add(san_name_trimmed[4:])
|
||||
|
||||
return dns_names
|
22
config_app/config_util/tar.py
Normal file
22
config_app/config_util/tar.py
Normal file
|
@ -0,0 +1,22 @@
|
|||
from util.config.validator import EXTRA_CA_DIRECTORY
|
||||
|
||||
|
||||
def strip_absolute_path_and_add_trailing_dir(path):
|
||||
"""
|
||||
Removes the leading / from the prefix path and adds a trailing /
|
||||
"""
|
||||
return path[1:] + '/'
|
||||
|
||||
|
||||
def tarinfo_filter_partial(prefix):
|
||||
def tarinfo_filter(tarinfo):
|
||||
# remove leading directory info
|
||||
tarinfo.name = tarinfo.name.replace(prefix, '')
|
||||
|
||||
# ignore any directory that isn't the specified extra ca one:
|
||||
if tarinfo.isdir() and not tarinfo.name == EXTRA_CA_DIRECTORY:
|
||||
return None
|
||||
|
||||
return tarinfo
|
||||
|
||||
return tarinfo_filter
|
116
config_app/config_util/test/test_k8saccessor.py
Normal file
116
config_app/config_util/test/test_k8saccessor.py
Normal file
|
@ -0,0 +1,116 @@
|
|||
import pytest
|
||||
|
||||
from httmock import urlmatch, HTTMock, response
|
||||
|
||||
from config_app.config_util.k8saccessor import KubernetesAccessorSingleton, _deployment_rollout_status_message
|
||||
from config_app.config_util.k8sconfig import KubernetesConfig
|
||||
|
||||
|
||||
@pytest.mark.parametrize('deployment_object, expected_status, expected_message', [
|
||||
({'metadata': {'generation': 1},
|
||||
'status': {'observedGeneration': 0, 'conditions': []},
|
||||
'spec': {'replicas': 0}},
|
||||
'progressing',
|
||||
'Waiting for deployment spec to be updated...'),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': [{'type': 'Progressing', 'reason': 'ProgressDeadlineExceeded'}]},
|
||||
'spec': {'replicas': 0}},
|
||||
'failed',
|
||||
"Deployment my-deployment's rollout failed. Please try again later."),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': []},
|
||||
'spec': {'replicas': 0}},
|
||||
'available',
|
||||
'Deployment my-deployment updated (no replicas, so nothing to roll out)'),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': [], 'replicas': 1},
|
||||
'spec': {'replicas': 2}},
|
||||
'progressing',
|
||||
'Waiting for rollout to finish: 0 out of 2 new replicas have been updated...'),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': [], 'replicas': 1, 'updatedReplicas': 1},
|
||||
'spec': {'replicas': 2}},
|
||||
'progressing',
|
||||
'Waiting for rollout to finish: 1 out of 2 new replicas have been updated...'),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': [], 'replicas': 2, 'updatedReplicas': 1},
|
||||
'spec': {'replicas': 1}},
|
||||
'progressing',
|
||||
'Waiting for rollout to finish: 1 old replicas are pending termination...'),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': [], 'replicas': 1, 'updatedReplicas': 2, 'availableReplicas': 0},
|
||||
'spec': {'replicas': 0}},
|
||||
'progressing',
|
||||
'Waiting for rollout to finish: 0 of 2 updated replicas are available...'),
|
||||
({'metadata': {'generation': 0},
|
||||
'status': {'observedGeneration': 0, 'conditions': [], 'replicas': 1, 'updatedReplicas': 2, 'availableReplicas': 2},
|
||||
'spec': {'replicas': 0}},
|
||||
'available',
|
||||
'Deployment my-deployment successfully rolled out.'),
|
||||
])
|
||||
def test_deployment_rollout_status_message(deployment_object, expected_status, expected_message):
|
||||
deployment_status = _deployment_rollout_status_message(deployment_object, 'my-deployment')
|
||||
assert deployment_status.status == expected_status
|
||||
assert deployment_status.message == expected_message
|
||||
|
||||
|
||||
@pytest.mark.parametrize('kube_config, expected_api, expected_query', [
|
||||
({'api_host': 'www.customhost.com'},
|
||||
'/apis/extensions/v1beta1/namespaces/quay-enterprise/deployments', 'labelSelector=quay-enterprise-component%3Dapp'),
|
||||
|
||||
({'api_host': 'www.customhost.com', 'qe_deployment_selector': 'custom-selector'},
|
||||
'/apis/extensions/v1beta1/namespaces/quay-enterprise/deployments',
|
||||
'labelSelector=quay-enterprise-component%3Dcustom-selector'),
|
||||
|
||||
({'api_host': 'www.customhost.com', 'qe_namespace': 'custom-namespace'},
|
||||
'/apis/extensions/v1beta1/namespaces/custom-namespace/deployments', 'labelSelector=quay-enterprise-component%3Dapp'),
|
||||
|
||||
({'api_host': 'www.customhost.com', 'qe_namespace': 'custom-namespace', 'qe_deployment_selector': 'custom-selector'},
|
||||
'/apis/extensions/v1beta1/namespaces/custom-namespace/deployments',
|
||||
'labelSelector=quay-enterprise-component%3Dcustom-selector'),
|
||||
])
|
||||
def test_get_qe_deployments(kube_config, expected_api, expected_query):
|
||||
config = KubernetesConfig(**kube_config)
|
||||
url_hit = [False]
|
||||
|
||||
@urlmatch(netloc=r'www.customhost.com')
|
||||
def handler(request, _):
|
||||
assert request.path == expected_api
|
||||
assert request.query == expected_query
|
||||
url_hit[0] = True
|
||||
return response(200, '{}')
|
||||
|
||||
with HTTMock(handler):
|
||||
KubernetesAccessorSingleton._instance = None
|
||||
assert KubernetesAccessorSingleton.get_instance(config).get_qe_deployments() is not None
|
||||
|
||||
assert url_hit[0]
|
||||
|
||||
|
||||
@pytest.mark.parametrize('kube_config, deployment_names, expected_api_hits', [
|
||||
({'api_host': 'www.customhost.com'}, [], []),
|
||||
({'api_host': 'www.customhost.com'}, ['myDeployment'],
|
||||
['/apis/extensions/v1beta1/namespaces/quay-enterprise/deployments/myDeployment']),
|
||||
({'api_host': 'www.customhost.com', 'qe_namespace': 'custom-namespace'},
|
||||
['myDeployment', 'otherDeployment'],
|
||||
['/apis/extensions/v1beta1/namespaces/custom-namespace/deployments/myDeployment',
|
||||
'/apis/extensions/v1beta1/namespaces/custom-namespace/deployments/otherDeployment']),
|
||||
])
|
||||
def test_cycle_qe_deployments(kube_config, deployment_names, expected_api_hits):
|
||||
KubernetesAccessorSingleton._instance = None
|
||||
|
||||
config = KubernetesConfig(**kube_config)
|
||||
url_hit = [False] * len(expected_api_hits)
|
||||
i = [0]
|
||||
|
||||
@urlmatch(netloc=r'www.customhost.com', method='PATCH')
|
||||
def handler(request, _):
|
||||
assert request.path == expected_api_hits[i[0]]
|
||||
url_hit[i[0]] = True
|
||||
i[0] += 1
|
||||
return response(200, '{}')
|
||||
|
||||
with HTTMock(handler):
|
||||
KubernetesAccessorSingleton.get_instance(config).cycle_qe_deployments(deployment_names)
|
||||
|
||||
assert all(url_hit)
|
32
config_app/config_util/test/test_tar.py
Normal file
32
config_app/config_util/test/test_tar.py
Normal file
|
@ -0,0 +1,32 @@
|
|||
import pytest
|
||||
|
||||
from config_app.config_util.tar import tarinfo_filter_partial
|
||||
|
||||
from util.config.validator import EXTRA_CA_DIRECTORY
|
||||
|
||||
from test.fixtures import *
|
||||
|
||||
|
||||
class MockTarInfo:
|
||||
def __init__(self, name, isdir):
|
||||
self.name = name
|
||||
self.isdir = lambda: isdir
|
||||
|
||||
def __eq__(self, other):
|
||||
return other is not None and self.name == other.name
|
||||
|
||||
|
||||
@pytest.mark.parametrize('prefix,tarinfo,expected', [
|
||||
# It should handle simple files
|
||||
('Users/sam/', MockTarInfo('Users/sam/config.yaml', False), MockTarInfo('config.yaml', False)),
|
||||
# It should allow the extra CA dir
|
||||
('Users/sam/', MockTarInfo('Users/sam/%s' % EXTRA_CA_DIRECTORY, True), MockTarInfo('%s' % EXTRA_CA_DIRECTORY, True)),
|
||||
# it should allow a file in that extra dir
|
||||
('Users/sam/', MockTarInfo('Users/sam/%s/cert.crt' % EXTRA_CA_DIRECTORY, False),
|
||||
MockTarInfo('%s/cert.crt' % EXTRA_CA_DIRECTORY, False)),
|
||||
# it should not allow a directory that isn't the CA dir
|
||||
('Users/sam/', MockTarInfo('Users/sam/dirignore', True), None),
|
||||
])
|
||||
def test_tarinfo_filter(prefix, tarinfo, expected):
|
||||
partial = tarinfo_filter_partial(prefix)
|
||||
assert partial(tarinfo) == expected
|
6
config_app/config_web.py
Normal file
6
config_app/config_web.py
Normal file
|
@ -0,0 +1,6 @@
|
|||
from config_app.c_app import app as application
|
||||
from config_app.config_endpoints.api import api_bp
|
||||
from config_app.config_endpoints.setup_web import setup_web
|
||||
|
||||
application.register_blueprint(setup_web)
|
||||
application.register_blueprint(api_bp, url_prefix='/api')
|
BIN
config_app/docs/img/initial-choice-screen.png
Normal file
BIN
config_app/docs/img/initial-choice-screen.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 810 KiB |
BIN
config_app/docs/img/load-tarball-config.png
Normal file
BIN
config_app/docs/img/load-tarball-config.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 1,008 KiB |
147
config_app/docs/initial_setup.md
Normal file
147
config_app/docs/initial_setup.md
Normal file
|
@ -0,0 +1,147 @@
|
|||
# On-premises installation
|
||||
|
||||
Red Hat Quay requires three components to be running to begin the setup process:
|
||||
|
||||
- A supported database (MySQL, Postgres)
|
||||
- A Redis instance (for real-time events)
|
||||
- The Red Hat Quay image
|
||||
|
||||
**NOTE**: Please have the host and port of the database and the Redis instance ready.
|
||||
|
||||
## Preparing the database
|
||||
|
||||
A MySQL RDBMS or Postgres installation with an empty database is required, and a login with full access to said database. This login *must* be for a superuser. The schema will be created during the creation of the configuration. The database install can either be pre-existing or run via a [Docker container](mysql-container.md).
|
||||
|
||||
**Note**: Running your database as a Docker container is not recommended for production workloads.
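For a quick evaluation only, a throwaway MySQL container can be started as sketched below; the database name and password are placeholders to replace, not values required by Red Hat Quay:

```
sudo docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=replace-me -e MYSQL_DATABASE=quay mysql:5.7
```

Use the root login for this throwaway database during setup, since the setup process requires a superuser.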
|
||||
|
||||
## Setting up redis
|
||||
|
||||
Redis stores data which must be accessed quickly but doesn’t require durability guarantees. If you have an existing Redis instance, make sure to accept incoming connections on port 6379 (or change the port in the setup process) and then feel free to skip this step.
|
||||
|
||||
To run redis, simply pull and run the Quay.io Redis image:
|
||||
|
||||
```
|
||||
sudo docker pull quay.io/quay/redis
|
||||
sudo docker run -d -p 6379:6379 quay.io/quay/redis
|
||||
```
|
||||
|
||||
**NOTE**: This host will have to accept incoming connections on port 6379 from the hosts on which the registry will run.
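To check that the Redis instance is reachable from a registry host, you can ping it (assuming the `redis-cli` client is installed on that host; replace `yourredishost` with your Redis hostname). A `PONG` reply means the connection works:

```
redis-cli -h yourredishost -p 6379 ping
```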
|
||||
|
||||
## Downloading the Red Hat Quay image
|
||||
|
||||
After signing up you will be able to download a pull secret file named `config.json`.
|
||||
|
||||
The `config.json` file will look like this:
|
||||
|
||||
```
|
||||
{
|
||||
"auths": {
|
||||
"quay.io": {
|
||||
"auth": "abcdefghijklmnopqrstuvwxyz...",
|
||||
"email": ""
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
`config.json` contains your credentials for the `quay.io/coreos/quay` repository. Save this file to your machine in `/home/$USER/.docker/config.json` and `/root/.docker/config.json`. You should now be able to execute `docker pull quay.io/coreos/quay:v2.9.2` to download the container.
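As a sketch, assuming `config.json` was downloaded to the current working directory, the pull secret can be placed like this:

```
mkdir -p $HOME/.docker && cp config.json $HOME/.docker/config.json
sudo mkdir -p /root/.docker && sudo cp config.json /root/.docker/config.json
```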
|
||||
|
||||
## Booting up the configuration tool
|
||||
|
||||
Now you can run the configuration setup tool to create your config bundle.
|
||||
|
||||
Run the following command to boot up the configuration tool:
|
||||
|
||||
```
|
||||
sudo docker run -p 443:443 -d quay.io/coreos/quay:v3.0.0 config
|
||||
```
|
||||
|
||||
## Creating a new configuration
|
||||
|
||||
Visit the configuration tool locally by going to https://yourhost/
|
||||
|
||||
|
||||
**Note**: You will see a warning about an invalid certificate authority when visiting in your browser. This is because we self-sign the certificate at container load time, so you can safely bypass this warning. (On Chrome, for example, click on Advanced, then "Proceed to localhost")
|
||||
|
||||
<img src="img/initial-choice-screen.png" class="img-center" alt="Red Hat Quay Configuration Tool"/>
|
||||
|
||||
Click on "Start New Registry Setup", and follow the instructions to create your configuration, downloading and saving it when complete.
|
||||
|
||||
**Note**: Please keep this tarball safe, as it contains your certificates and other access credentials unencrypted. You will also need it if you ever wish to update your configuration.
|
||||
|
||||
|
||||
## Setting up the directories
|
||||
|
||||
Red Hat Quay requires a configuration directory (and a storage directory if using local storage):
|
||||
|
||||
You will need to extract the tarball you received in the previous step into a directory:
|
||||
```
|
||||
mkdir config && tar xzf quay-config.tar.gz -C config
|
||||
```
|
||||
|
||||
|
||||
If you are storing images locally, then you will need a storage directory (skip this step if you are storing images remotely):
|
||||
```
|
||||
mkdir storage
|
||||
```
|
||||
|
||||
**Note**: storing images locally is not recommended for production workloads!
|
||||
|
||||
|
||||
|
||||
## Running the registry
|
||||
|
||||
If you are running with local storage, you'll have to add it as a volume to the docker command, replacing `/local/path/to/the/config/directory` and `/local/path/to/the/storage/directory` with the absolute paths to the directories created in the previous step:
|
||||
|
||||
|
||||
```
|
||||
sudo docker run --restart=always -p 443:443 -p 80:80 --privileged=true -v /local/path/to/the/config/directory:/conf/stack -v /local/path/to/the/storage/directory:/datastorage -d quay.io/coreos/quay:3.0.0
|
||||
```
|
||||
|
||||
Otherwise, run the following command, replacing `/local/path/to/the/config/directory` with the absolute path to the directory created in the previous step:
|
||||
|
||||
```
|
||||
sudo docker run --restart=always -p 443:443 -p 80:80 --privileged=true -v /local/path/to/the/config/directory:/conf/stack -d quay.io/coreos/quay:3.0.0
|
||||
```
|
||||
|
||||
|
||||
## Verifying the status of QE
|
||||
|
||||
Visit the `/health/endtoend` endpoint on the Red Hat Quay hostname and verify that the `code` is `200` and `is_testing` is `false`.
|
||||
|
||||
If `code` is anything other than `200`, visit http://yourhost/ and you will see instructions detailing the problems Red Hat Quay is having with the configuration.
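For example, with `curl` (the `-k` flag skips certificate verification, which is needed while the self-signed certificate is in use; replace `yourhost` with your registry hostname), then confirm the returned JSON reports `code` 200 and `is_testing` false:

```
curl -sk https://yourhost/health/endtoend
```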
|
||||
|
||||
|
||||
## Logging in
|
||||
|
||||
### If using database authentication:
|
||||
|
||||
Once Red Hat Quay is running, new users can be created by clicking the `Sign Up` button. If e-mail is enabled, the sign-up process will require an e-mail confirmation step, after which repositories, organizations and teams can be set up by the user.
|
||||
|
||||
|
||||
### If using LDAP authentication:
|
||||
|
||||
Users should be able to log in to Red Hat Quay directly with their LDAP username and password.
|
||||
|
||||
|
||||
## Updating your configuration
|
||||
|
||||
If you ever wish to change your configuration, you will need to run the configuration tool again:
|
||||
|
||||
```
|
||||
sudo docker run -p 443:443 -d quay.io/coreos/quay:v3.0.0 config
|
||||
```
|
||||
|
||||
Click on "Modify an existing configuration", and upload the tarball provided when initially creating the configuration.
|
||||
|
||||
You will be taken to the setup page, with your previous configuration values pre-populated. After you have made your changes, save the configuration and download the tarball.
|
||||
|
||||
<img src="img/load-tarball-config.png" class="img-center" alt="Red Hat Quay Load Configuration"/>
|
||||
|
||||
Extract the tarball into the config directory where your Red Hat Quay will run:
|
||||
```
|
||||
mkdir config && tar xzf quay-config.tar.gz -C config
|
||||
```
|
||||
|
||||
Now run Red Hat Quay as stated in the **Running the registry** step, and your new instance will reflect the changes made in the new configuration.
|
||||
|
|
@ -0,0 +1,14 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise-config-tool
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 443
|
||||
targetPort: 443
|
||||
nodePort: 30090
|
||||
selector:
|
||||
quay-enterprise-component: config-tool
|
|
@ -0,0 +1,5 @@
|
|||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: qe-config-tool-serviceaccount
|
||||
namespace: quay-enterprise
|
|
@ -0,0 +1,12 @@
|
|||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: quay-enterprise-config-tool-writer
|
||||
namespace: quay-enterprise
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: quay-enterprise-config-tool-role
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: qe-config-tool-serviceaccount
|
|
@ -0,0 +1,32 @@
|
|||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: Role
|
||||
metadata:
|
||||
name: quay-enterprise-config-tool-role
|
||||
namespace: quay-enterprise
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- secrets
|
||||
verbs:
|
||||
- get
|
||||
- put
|
||||
- patch
|
||||
- update
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- "extensions"
|
||||
- "apps"
|
||||
resources:
|
||||
- deployments
|
||||
- deployments/rollback
|
||||
verbs:
|
||||
- create
|
||||
- get
|
||||
- list
|
||||
- patch
|
30
config_app/docs/k8s_templates/qe-config-tool.yml
Normal file
30
config_app/docs/k8s_templates/qe-config-tool.yml
Normal file
|
@ -0,0 +1,30 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise-config-tool
|
||||
labels:
|
||||
quay-enterprise-component: config-tool
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
quay-enterprise-component: config-tool
|
||||
template:
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
labels:
|
||||
quay-enterprise-component: config-tool
|
||||
spec:
|
||||
serviceAccountName: qe-config-tool-serviceaccount
|
||||
volumes:
|
||||
- name: configvolume
|
||||
secret:
|
||||
secretName: quay-enterprise-config-secret
|
||||
containers:
|
||||
- name: quay-enterprise-config-tool
|
||||
image: config-app:latest # TODO: change to reference to quay image?
|
||||
imagePullPolicy: IfNotPresent # enable when testing with minikube
|
||||
args: ["config"]
|
||||
ports:
|
||||
- containerPort: 80
|
40
config_app/docs/k8s_templates/quay-enterprise-app-rc.yml
Normal file
40
config_app/docs/k8s_templates/quay-enterprise-app-rc.yml
Normal file
|
@ -0,0 +1,40 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise-app
|
||||
labels:
|
||||
quay-enterprise-component: app
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
quay-enterprise-component: app
|
||||
template:
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
labels:
|
||||
quay-enterprise-component: app
|
||||
spec:
|
||||
volumes:
|
||||
- name: configvolume
|
||||
secret:
|
||||
secretName: quay-enterprise-config-secret
|
||||
containers:
|
||||
- name: quay-enterprise-app
|
||||
image: quay.io/coreos/quay:v2.9.3
|
||||
ports:
|
||||
- containerPort: 80
|
||||
readinessProbe:
|
||||
failureThreshold: 3
|
||||
httpGet:
|
||||
path: /health
|
||||
port: 80
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 5
|
||||
volumeMounts:
|
||||
- name: configvolume
|
||||
readOnly: false
|
||||
mountPath: /conf/stack
|
||||
imagePullSecrets:
|
||||
- name: coreos-pull-secret
|
|
@ -0,0 +1,5 @@
|
|||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise-config-secret
|
|
@ -0,0 +1,4 @@
|
|||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: quay-enterprise
|
36
config_app/docs/k8s_templates/quay-enterprise-redis.yml
Normal file
36
config_app/docs/k8s_templates/quay-enterprise-redis.yml
Normal file
|
@ -0,0 +1,36 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise-redis
|
||||
labels:
|
||||
quay-enterprise-component: redis
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
quay-enterprise-component: redis
|
||||
template:
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
labels:
|
||||
quay-enterprise-component: redis
|
||||
spec:
|
||||
containers:
|
||||
- name: redis-master
|
||||
image: quay.io/quay/redis
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise-redis
|
||||
labels:
|
||||
quay-enterprise-component: redis
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
selector:
|
||||
quay-enterprise-component: redis
|
|
@ -0,0 +1,18 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 80
|
||||
targetPort: 80
|
||||
name: http
|
||||
- protocol: TCP
|
||||
port: 443
|
||||
targetPort: 443
|
||||
name: https
|
||||
selector:
|
||||
quay-enterprise-component: app
|
|
@ -0,0 +1,14 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
namespace: quay-enterprise
|
||||
name: quay-enterprise
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 80
|
||||
targetPort: 80
|
||||
nodePort: 30080
|
||||
selector:
|
||||
quay-enterprise-component: app
|
|
@ -0,0 +1,12 @@
|
|||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: quay-enterprise-secret-writer
|
||||
namespace: quay-enterprise
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: quay-enterprise-serviceaccount
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: default
|
|
@ -0,0 +1,21 @@
|
|||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: Role
|
||||
metadata:
|
||||
name: quay-enterprise-serviceaccount
|
||||
namespace: quay-enterprise
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- secrets
|
||||
verbs:
|
||||
- get
|
||||
- put
|
||||
- patch
|
||||
- update
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
143
config_app/docs/kube_setup.md
Normal file
143
config_app/docs/kube_setup.md
Normal file
|
@ -0,0 +1,143 @@
|
|||
# Red Hat Quay Installation on Kubernetes
|
||||
|
||||
This guide walks through the deployment of [Red Hat Quay][quay-enterprise-tour] onto a Kubernetes cluster.
|
||||
After completing the steps in this guide, a deployer will have a functioning instance of Red Hat Quay orchestrated as a Kubernetes service on a cluster, and will be able to access the Red Hat Quay Setup tool with a browser to complete configuration of image repositories, builders, and users.
|
||||
|
||||
[quay-enterprise-tour]: https://quay.io/tour/enterprise
|
||||
|
||||
## Prerequisites
|
||||
|
||||
A PostgreSQL database must be available for Red Hat Quay metadata storage.
|
||||
We currently recommend running this database server outside of the cluster.
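As a quick sanity check before continuing, you can confirm the database is reachable from your workstation (assuming the `psql` client is installed; the host, user, password, and database name below are placeholders):

```sh
psql "postgresql://quayuser:password@db.example.com:5432/quay" -c "SELECT 1;"
```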
|
||||
|
||||
## Download Kubernetes Configuration Files
|
||||
|
||||
Visit the [RedHat Documentation][RedHat-documentation] and download the pre-formatted pull secret under "Account Assets". There are several formats of the secret; be sure to download the "dockercfg" format, which results in a `config.json` file. This pull secret is used to download the Red Hat Quay containers.
|
||||
|
||||
This will be used later in the guide.
|
||||
|
||||
[RedHat-documentation]: https://access.redhat.com/documentation/en-us/
|
||||
|
||||
Next, download each of the following files to your workstation, placing them alongside your pull secret:
|
||||
|
||||
- [quay-enterprise-namespace.yml](k8s_templates/quay-enterprise-namespace.yml)
|
||||
- [quay-enterprise-config-secret.yml](k8s_templates/quay-enterprise-config-secret.yml)
|
||||
- [quay-enterprise-redis.yml](k8s_templates/quay-enterprise-redis.yml)
|
||||
- [quay-enterprise-app-rc.yml](k8s_templates/quay-enterprise-app-rc.yml)
|
||||
- [quay-enterprise-service-nodeport.yml](k8s_templates/quay-enterprise-service-nodeport.yml)
|
||||
- [quay-enterprise-service-loadbalancer.yml](k8s_templates/quay-enterprise-service-loadbalancer.yml)
|
||||
|
||||
## Role Based Access Control
|
||||
|
||||
Red Hat Quay has native Kubernetes integrations. These integrations require the Service Account to have access to the Kubernetes API. When Kubernetes RBAC is enabled, Role Based Access Control policy manifests also have to be deployed.
|
||||
|
||||
The Kubernetes API has minor changes between versions 1.4 and 1.5, so download the appropriate versions of the Role Based Access Control (RBAC) policies.
|
||||
|
||||
### Kubernetes v1.6.x and later RBAC Policies
|
||||
|
||||
- [quay-servicetoken-role.yaml](k8s_templates/quay-servicetoken-role-k8s1-6.yaml)
|
||||
- [quay-servicetoken-role-binding.yaml](k8s_templates/quay-servicetoken-role-binding-k8s1-6.yaml)
|
||||
|
||||
|
||||
## Deploy to Kubernetes
|
||||
|
||||
All Kubernetes objects will be deployed under the "quay-enterprise" namespace.
|
||||
The first step is to create this namespace:
|
||||
|
||||
```sh
|
||||
kubectl create -f quay-enterprise-namespace.yml
|
||||
```
|
||||
|
||||
Next, add your pull secret to Kubernetes (make sure you specify the correct path to `config.json`):
|
||||
|
||||
```sh
|
||||
kubectl create secret generic coreos-pull-secret --from-file=".dockerconfigjson=config.json" --type='kubernetes.io/dockerconfigjson' --namespace=quay-enterprise
|
||||
```
|
||||
|
||||
### Kubernetes v1.6.x and later : Deploy RBAC Policies
|
||||
|
||||
```sh
|
||||
kubectl create -f quay-servicetoken-role-k8s1-6.yaml
|
||||
kubectl create -f quay-servicetoken-role-binding-k8s1-6.yaml
|
||||
```
|
||||
|
||||
### Deploy Red Hat Quay objects
|
||||
|
||||
Finally, the remaining Kubernetes objects can be deployed onto Kubernetes:
|
||||
|
||||
```sh
|
||||
kubectl create -f quay-enterprise-config-secret.yml -f quay-enterprise-redis.yml -f quay-enterprise-app-rc.yml
|
||||
```
|
||||
|
||||
## Expose via Kubernetes Service
|
||||
|
||||
In order to access Red Hat Quay, a user must route to it through a Kubernetes Service.
|
||||
It is up to the deployer to decide which Service type is appropriate for their use case: a [LoadBalancer](http://kubernetes.io/docs/user-guide/services/#type-loadbalancer) or a [NodePort](http://kubernetes.io/docs/user-guide/services/#type-nodeport).
|
||||
|
||||
A LoadBalancer is recommended if the Kubernetes cluster is integrated with a cloud provider; otherwise, a NodePort will suffice.
|
||||
Example manifests for both Service types are provided alongside this guide.
|
||||
|
||||
### LoadBalancer
|
||||
|
||||
Using the sample provided, a LoadBalancer Kubernetes Service can be created like so:
|
||||
|
||||
```sh
|
||||
kubectl create -f quay-enterprise-service-loadbalancer.yml
|
||||
```
|
||||
|
||||
kubectl can be used to find the externally-accessible URL of the quay-enterprise service:
|
||||
|
||||
```sh
|
||||
kubectl describe services quay-enterprise --namespace=quay-enterprise
|
||||
```
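
If you prefer a one-liner, a jsonpath query returns just the load balancer address; depending on your cloud provider the populated field will be either `ip` or `hostname`:

```sh
kubectl get service quay-enterprise --namespace=quay-enterprise -o jsonpath='{.status.loadBalancer.ingress[0]}'
```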

### NodePort

Using the sample provided, a NodePort Kubernetes Service can be created like so:

```sh
kubectl create -f quay-enterprise-service-nodeport.yml
```

By default, the quay-enterprise service will be available on port 30080 on every node in the Kubernetes cluster.
If this port conflicts with an existing Kubernetes Service, modify the sample configuration file and change the value of `nodePort`.
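
For example, to find an address to browse to, list the cluster nodes and combine any node's IP with the node port (30080 unless you changed it):

```sh
kubectl get nodes -o wide
```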

## Continue with Red Hat Quay Setup

All that remains is to configure Red Hat Quay itself through the configuration tool.

Download the following files to your workstation:

- [config-tool-service-nodeport.yml](k8s_templates/config-tool-service-nodeport.yml)
- [config-tool-serviceaccount.yml](k8s_templates/config-tool-serviceaccount.yml)
- [config-tool-servicetoken-role.yml](k8s_templates/config-tool-servicetoken-role.yml)
- [config-tool-servicetoken-role-binding.yml](k8s_templates/config-tool-servicetoken-role-binding.yml)
- [qe-config-tool.yml](k8s_templates/qe-config-tool.yml)

### Configuring RBAC for the configuration tool

Apply the following policies to allow the config tool to make changes to the Red Hat Quay deployment:

```bash
kubectl apply -f config-tool-serviceaccount.yml
kubectl apply -f config-tool-servicetoken-role.yml
kubectl apply -f config-tool-servicetoken-role-binding.yml
```

### Deploy Config Tool

Deploy the configuration tool and route a service to it:

```bash
kubectl apply -f qe-config-tool.yml -f config-tool-service-nodeport.yml
```
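
To confirm that the config tool and its NodePort service came up, you can list the pods and services in the namespace (optional check):

```bash
kubectl get pods,services --namespace=quay-enterprise
```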

By default, the config-tool service will be available on port 30090 on every node in the Kubernetes cluster.
As with the Quay application service, if this port conflicts with an existing Kubernetes Service, modify the sample configuration file and change the value of `nodePort`.
Once at the Red Hat Quay setup UI, follow the setup instructions to finalize your installation.

## Using the Configuration Tool

Click "Start new configuration for this cluster" and follow the instructions to create your configuration, then download and save it (so you can load it later as a backup or to change your settings).
You can also deploy the configuration to all instances by clicking "Deploy". Allow a minute or so for the Quay pods to cycle; the new configuration takes effect once the pods have restarted.

22
config_app/init/certs_create.sh
Executable file
@@ -0,0 +1,22 @@
#! /bin/bash
set -e
QUAYPATH=${QUAYPATH:-"."}
QUAYCONF=${QUAYCONF:-"$QUAYPATH/conf"}
cd ${QUAYDIR:-"/"}

if [ -f "$QUAYCONF/stack/ssl.key" ] && [ -f "$QUAYCONF/stack/ssl.cert" ]; then
    echo 'Using mounted ssl certs for quay-config app'
    cp $QUAYCONF/stack/ssl.key $QUAYDIR/config_app/quay-config.key
    cp $QUAYCONF/stack/ssl.cert $QUAYDIR/config_app/quay-config.cert
else
    echo 'Creating self-signed certs for quay-config app'

    # Create certs to secure connections while uploading config for secrets
    # echo '{"CN":"CA","key":{"algo":"rsa","size":2048}}' | cfssl gencert -initca - | cfssljson -bare quay-config
    mkdir -p /certificates; cd /certificates
    openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 \
        -subj "/C=US/ST=NY/L=NYC/O=Dis/CN=self-signed" \
        -keyout quay-config-key.pem -out quay-config.pem
    cp /certificates/quay-config-key.pem $QUAYDIR/config_app/quay-config.key
    cp /certificates/quay-config.pem $QUAYDIR/config_app/quay-config.cert
fi

11
config_app/init/service/gunicorn_web/run
Executable file
@@ -0,0 +1,11 @@
#! /bin/bash

echo 'Starting gunicorn'

QUAYPATH=${QUAYPATH:-"."}
QUAYCONF=${QUAYCONF:-"$QUAYPATH/conf"}

cd ${QUAYDIR:-"/"}
PYTHONPATH=$QUAYPATH venv/bin/gunicorn -c $QUAYDIR/config_app/conf/gunicorn_web.py config_application:application

echo 'Gunicorn exited'

12
config_app/init/service/nginx/run
Executable file
@@ -0,0 +1,12 @@
#! /bin/bash

echo 'Starting nginx'

QUAYPATH=${QUAYPATH:-"."}
cd ${QUAYDIR:-"/"}
PYTHONPATH=$QUAYPATH
QUAYCONF=${QUAYCONF:-"$QUAYPATH/conf"}

/usr/sbin/nginx -c $QUAYDIR/config_app/conf/nginx.conf

echo 'Nginx exited'

@@ -0,0 +1,61 @@
<div ng-if="$ctrl.state === 'choice' && $ctrl.kubeNamespace === false">
  <div class="co-dialog modal fade initial-setup-modal in" id="setupModal" style="display: block;">
    <div class="modal-backdrop fade in" style="height: 1000px;"></div>
    <div class="modal-dialog fade in">
      <div class="modal-content">
        <!-- Header -->
        <div class="modal-header">
          <h4 class="modal-title"><span>Choose an option</span></h4>
        </div>
        <!-- Body -->
        <div class="config-setup-wrapper">
          <a class="config-setup_option" ng-click="$ctrl.chooseSetup()">
            <i class="fas fa-edit fa-2x"></i>
            <div>Start New Registry Setup</div>
          </a>
          <a class="config-setup_option" ng-click="$ctrl.chooseLoad()">
            <i class="fas fa-upload fa-2x"></i>
            <div>Modify an existing configuration</div>
          </a>
        </div>
      </div><!-- /.modal-content -->
    </div><!-- /.modal-dialog -->
  </div>
</div>

<div ng-if="$ctrl.state === 'choice' && $ctrl.kubeNamespace !== false">
  <div class="co-dialog modal fade initial-setup-modal in" id="kubeSetupModal" style="display: block;">
    <div class="modal-backdrop fade in" style="height: 1000px;"></div>
    <div class="modal-dialog fade in">
      <div class="modal-content">
        <!-- Header -->
        <div class="modal-header">
          <h4 class="modal-title"><span>Choose an option for the <code>{{ $ctrl.kubeNamespace }}</code> namespace</span></h4>
        </div>
        <!-- Body -->
        <div class="config-setup-wrapper">
          <a class="config-setup_option" ng-click="$ctrl.chooseSetup()">
            <i class="fas fa-edit fa-2x"></i>
            <div>Start new configuration for this cluster</div>
          </a>
          <a class="config-setup_option" ng-click="$ctrl.choosePopulate()">
            <i class="fas fa-cloud-download-alt fa-2x"></i>
            <div>Modify configuration for this cluster</div>
          </a>
          <a class="config-setup_option" ng-click="$ctrl.chooseLoad()">
            <i class="fas fa-upload fa-2x"></i>
            <div>Populate this cluster from a previously created configuration</div>
          </a>
        </div>
      </div><!-- /.modal-content -->
    </div><!-- /.modal-dialog -->
  </div>
</div>

<div ng-if="$ctrl.state === 'setup'" class="setup" setup-completed="$ctrl.setupCompleted()"></div>
<load-config ng-if="$ctrl.state === 'load'" config-loaded="$ctrl.configLoaded()"></load-config>
<download-tarball-modal
  ng-if="$ctrl.state === 'download'"
  loaded-config="$ctrl.loadedConfig"
  is-kubernetes="$ctrl.kubeNamespace !== false"
  choose-deploy="$ctrl.chooseDeploy()">
</download-tarball-modal>
<kube-deploy-modal ng-if="$ctrl.state === 'deploy'" loaded-config="$ctrl.loadedConfig"></kube-deploy-modal>

@@ -0,0 +1,70 @@
import { Component, Inject } from 'ng-metadata/core';
const templateUrl = require('./config-setup-app.component.html');

declare var window: any;

/**
 * Initial Screen and Choice in the Config App
 */
@Component({
  selector: 'config-setup-app',
  templateUrl: templateUrl,
})
export class ConfigSetupAppComponent {
  private state: 'choice'
    | 'setup'
    | 'load'
    | 'download'
    | 'deploy';

  private loadedConfig = false;
  private kubeNamespace: string | boolean = false;

  constructor(@Inject('ApiService') private apiService) {
    this.state = 'choice';
    if (window.__kubernetes_namespace) {
      this.kubeNamespace = window.__kubernetes_namespace;
    }
  }

  private chooseSetup(): void {
    this.apiService.scStartNewConfig()
      .then(() => {
        this.state = 'setup';
      })
      .catch(this.apiService.errorDisplay(
        'Could not initialize new setup. Please report this error'
      ));
  }

  private chooseLoad(): void {
    this.state = 'load';
    this.loadedConfig = true;
  }

  private choosePopulate(): void {
    this.apiService.scKubePopulateConfig()
      .then(() => {
        this.state = 'setup';
        this.loadedConfig = true;
      })
      .catch(err => {
        this.apiService.errorDisplay(
          `Could not populate the configuration from your cluster. Please report this error: ${JSON.stringify(err)}`
        )();
      });
  }

  private configLoaded(): void {
    this.state = 'setup';
  }

  private setupCompleted(): void {
    this.state = 'download';
  }

  private chooseDeploy(): void {
    this.state = 'deploy';
  }
}

@@ -0,0 +1,3 @@
<div class="co-floating-bottom-bar">
  <span ng-transclude/>
</div>

@@ -0,0 +1,44 @@
const templateUrl = require('./cor-floating-bottom-bar.html');

angular.module('quay-config')
  .directive('corFloatingBottomBar', function() {
    var directiveDefinitionObject = {
      priority: 3,
      templateUrl,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {},
      controller: function($rootScope, $scope, $element, $timeout, $interval) {
        var handler = function() {
          $element.removeClass('floating');
          $element.css('width', $element[0].parentNode.clientWidth + 'px');

          // Float the bar only when it would otherwise fall below the viewport.
          var windowHeight = $(window).height();
          var rect = $element[0].getBoundingClientRect();
          if (rect.bottom > windowHeight) {
            $element.addClass('floating');
          }
        };

        $(window).on("scroll", handler);
        $(window).on("resize", handler);

        // Poll for changes to the parent's height and reposition the bar when it changes.
        var previousHeight = $element[0].parentNode.clientHeight;
        var stop = $interval(function() {
          var currentHeight = $element[0].parentNode.clientHeight;
          if (previousHeight != currentHeight) {
            previousHeight = currentHeight;
            handler();
          }
        }, 100);

        $scope.$on('$destroy', function() {
          $(window).off("resize", handler);
          $(window).off("scroll", handler);
          $interval.cancel(stop);
        });
      }
    };
    return directiveDefinitionObject;
  });

@@ -0,0 +1,5 @@
<div class="co-m-inline-loader co-an-fade-in-out">
  <div class="co-m-loader-dot__one"></div>
  <div class="co-m-loader-dot__two"></div>
  <div class="co-m-loader-dot__three"></div>
</div>

5
config_app/js/components/cor-loader/cor-loader.html
Normal file
@@ -0,0 +1,5 @@
<div class="co-m-loader co-an-fade-in-out">
  <div class="co-m-loader-dot__one"></div>
  <div class="co-m-loader-dot__two"></div>
  <div class="co-m-loader-dot__three"></div>
</div>

28
config_app/js/components/cor-loader/cor-loader.js
Normal file
@@ -0,0 +1,28 @@
const loaderUrl = require('./cor-loader.html');
const inlineUrl = require('./cor-loader-inline.html');

angular.module('quay-config')
  .directive('corLoader', function() {
    var directiveDefinitionObject = {
      templateUrl: loaderUrl,
      replace: true,
      restrict: 'C',
      scope: {},
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  })
  .directive('corLoaderInline', function() {
    var directiveDefinitionObject = {
      templateUrl: inlineUrl,
      replace: true,
      restrict: 'C',
      scope: {},
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  });

3
config_app/js/components/cor-option/cor-option.html
Normal file
@@ -0,0 +1,3 @@
<li>
  <a ng-click="optionClick()" ng-transclude></a>
</li>

32
config_app/js/components/cor-option/cor-option.js
Normal file
@@ -0,0 +1,32 @@
const corOption = require('./cor-option.html');
const corOptionsMenu = require('./cor-options-menu.html');

angular.module('quay-config')
  .directive('corOptionsMenu', function() {
    var directiveDefinitionObject = {
      priority: 1,
      templateUrl: corOptionsMenu,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {},
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  })
  .directive('corOption', function() {
    var directiveDefinitionObject = {
      priority: 1,
      templateUrl: corOption,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {
        'optionClick': '&optionClick'
      },
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  });

@@ -0,0 +1,6 @@
<span class="co-options-menu">
  <div class="dropdown" style="text-align: left;">
    <i class="fas fa-cog fa-lg dropdown-toggle" data-toggle="dropdown" data-title="Options" bs-tooltip></i>
    <ul class="dropdown-menu pull-right" ng-transclude></ul>
  </div>
</span>

@@ -0,0 +1,4 @@
<div class="cor-progress-bar-element progress">
  <div class="progress-bar" ng-style="{'width': (progress * 100) + '%'}"
       aria-valuenow="{{ (progress * 100) }}" aria-valuemin="0" aria-valuemax="100"></div>
</div>

@@ -0,0 +1,74 @@
const corStepBarUrl = require('./cor-step-bar.html');
const corStepUrl = require('./cor-step.html');
const corProgressBarUrl = require('./cor-progress-bar.html');

angular.module('quay-config')
  .directive('corStepBar', () => {
    const directiveDefinitionObject = {
      priority: 4,
      templateUrl: corStepBarUrl,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {
        'progress': '=?progress'
      },
      controller: function($rootScope, $scope, $element) {
        $scope.$watch('progress', function(progress) {
          if (!progress) { return; }

          // Find the index of the last completed step.
          var index = 0;
          for (var i = 0; i < progress.length; ++i) {
            if (progress[i]) {
              index = i;
            }
          }

          // Mark every step up to and including that index as active.
          $element.find('.transclude').children('.co-step-element').each(function(i, elem) {
            $(elem).removeClass('active');
            if (i <= index) {
              $(elem).addClass('active');
            }
          });
        });
      }
    };
    return directiveDefinitionObject;
  })

  .directive('corStep', function() {
    var directiveDefinitionObject = {
      priority: 4,
      templateUrl: corStepUrl,
      replace: true,
      transclude: false,
      requires: '^corStepBar',
      restrict: 'C',
      scope: {
        'icon': '@icon',
        'title': '@title',
        'text': '@text'
      },
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  })

  .directive('corProgressBar', function() {
    var directiveDefinitionObject = {
      priority: 4,
      templateUrl: corProgressBarUrl,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {
        'progress': '=progress'
      },
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  });

3
config_app/js/components/cor-progress/cor-step-bar.html
Normal file
@@ -0,0 +1,3 @@
<div class="co-step-bar">
  <span class="transclude" ng-transclude/>
</div>

6
config_app/js/components/cor-progress/cor-step.html
Normal file
@@ -0,0 +1,6 @@
<span ng-class="text ? 'co-step-element text' : 'co-step-element icon'">
  <span data-title="{{ title }}" bs-tooltip>
    <span class="text" ng-if="text">{{ text }}</span>
    <i class="fa fa-lg" ng-if="icon" ng-class="'fa-' + icon"></i>
  </span>
</span>

@@ -0,0 +1,3 @@
<div class="col-lg-6 col-md-6 col-sm-5 col-xs-12">
  <h2 class="co-nav-title-content co-fx-text-shadow" ng-transclude></h2>
</div>

2
config_app/js/components/cor-title/cor-title.html
Normal file
@@ -0,0 +1,2 @@
<div class="co-nav-title" ng-transclude></div>

31
config_app/js/components/cor-title/cor-title.js
Normal file
@@ -0,0 +1,31 @@
const titleUrl = require('./cor-title.html');
const titleContentUrl = require('./cor-title-content.html');

angular.module('quay-config')
  .directive('corTitleContent', function() {
    var directiveDefinitionObject = {
      priority: 1,
      templateUrl: titleContentUrl,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {},
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  })
  .directive('corTitle', function() {
    var directiveDefinitionObject = {
      priority: 1,
      templateUrl: titleUrl,
      replace: true,
      transclude: true,
      restrict: 'C',
      scope: {},
      controller: function($rootScope, $scope, $element) {
      }
    };
    return directiveDefinitionObject;
  });

@@ -0,0 +1,3 @@
<span class="datetime-picker-element">
  <input class="form-control" type="text" ng-model="selected_datetime"/>
</span>

60
config_app/js/components/datetime-picker/datetime-picker.js
Normal file
@@ -0,0 +1,60 @@
const templateUrl = require('./datetime-picker.html');
/**
 * An element which displays a datetime picker.
 */
angular.module('quay-config').directive('datetimePicker', function () {
  var directiveDefinitionObject = {
    priority: 0,
    templateUrl,
    replace: false,
    transclude: true,
    restrict: 'C',
    scope: {
      'datetime': '=datetime',
    },
    controller: function($scope, $element) {
      var datetimeSet = false;

      $(function() {
        $element.find('input').datetimepicker({
          'format': 'LLL',
          'sideBySide': true,
          'showClear': true,
          'minDate': new Date(),
          'debug': false
        });

        $element.find('input').on("dp.change", function (e) {
          $scope.$apply(function() {
            $scope.datetime = e.date ? e.date.unix() : null;
          });
        });
      });

      $scope.$watch('selected_datetime', function(value) {
        if (!datetimeSet) { return; }

        if (!value) {
          if ($scope.datetime) {
            $scope.datetime = null;
          }
          return;
        }

        $scope.datetime = (new Date(value)).getTime()/1000;
      });

      $scope.$watch('datetime', function(value) {
        if (!value) {
          $scope.selected_datetime = null;
          datetimeSet = true;
          return;
        }

        $scope.selected_datetime = moment.unix(value).format('LLL');
        datetimeSet = true;
      });
    }
  };
  return directiveDefinitionObject;
});

@@ -0,0 +1,58 @@
<div>
  <div class="co-dialog modal fade initial-setup-modal in" id="setupModal" style="display: block;">
    <div class="modal-backdrop fade in" style="height: 1000px;"></div>
    <div class="modal-dialog fade in">
      <div class="modal-content">
        <!-- Header -->
        <div class="modal-header">
          <span class="cor-step-bar">
            <span class="cor-step active" title="Configure Database" text="1"></span>
            <span class="cor-step active" title="Setup Database" icon="database"></span>
            <span class="cor-step active" title="Create Superuser" text="2"></span>
            <span class="cor-step active" title="Configure Registry" text="3"></span>
            <span class="cor-step active" title="Validate Configuration" text="4"></span>
            <span class="cor-step active" title="Setup Complete" icon="download"></span>
          </span>
          <h4 class="modal-title"><span>Download Configuration</span></h4>
        </div>
        <!-- Body -->
        <div class="modal-body download-tarball-modal">
          <div ng-if="$ctrl.loadedConfig">
            Please download your updated configuration. To deploy these changes to your Red Hat Quay instances, please
            <a target="_blank" href="https://coreos.com/quay-enterprise/docs/latest/initial-setup.html">
              see the docs.
            </a>
            <div class="modal__warning-box">
              <div class="co-alert co-alert-warning">
                <strong>Warning: </strong>
                Your configuration and certificates are kept <i>unencrypted</i>. Please keep this file secure.
              </div>
            </div>
          </div>
          <div ng-if="!$ctrl.loadedConfig">
            Please download your new configuration. For more information, and next steps, please
            <a target="_blank" href="https://coreos.com/quay-enterprise/docs/latest/initial-setup.html">
              see the docs.
            </a>
            <div class="modal__warning-box">
              <div class="co-alert co-alert-warning">
                <strong>Warning: </strong>
                Your configuration and certificates are kept <i>unencrypted</i>. Please keep this file secure.
              </div>
            </div>
          </div>
          <button class="btn btn-default btn-lg download-button"
                  ng-click="$ctrl.downloadTarball()">
            <i class="fa fa-download" style="margin-right: 10px;"></i>Download Configuration
          </button>
        </div>
        <div ng-if="$ctrl.isKubernetes" class="modal-footer">
          <button class="btn btn-primary"
                  ng-click="$ctrl.goToDeploy()">
            <i class="far fa-paper-plane" style="margin-right: 10px;"></i>Go to deployment rollout
          </button>
        </div>
      </div><!-- /.modal-content -->
    </div><!-- /.modal-dialog -->
  </div>
</div>

@@ -0,0 +1,40 @@
import {Input, Component, Inject, Output, EventEmitter} from 'ng-metadata/core';
const templateUrl = require('./download-tarball-modal.component.html');
const styleUrl = require('./download-tarball-modal.css');

declare const FileSaver: any;

/**
 * Modal for downloading the generated configuration tarball and, on Kubernetes,
 * moving on to the deployment rollout.
 */
@Component({
  selector: 'download-tarball-modal',
  templateUrl: templateUrl,
  styleUrls: [ styleUrl ],
})
export class DownloadTarballModalComponent {
  @Input('<') public loadedConfig;
  @Input('<') public isKubernetes;
  @Output() public chooseDeploy = new EventEmitter<any>();

  constructor(@Inject('ApiService') private ApiService) {
  }

  private downloadTarball(): void {
    const errorDisplay: Function = this.ApiService.errorDisplay(
      'Could not save configuration. Please report this error.'
    );

    // We need to set the response type to 'blob', to ensure it's never encoded as a string
    // (string encoded binary data can be difficult to transform with js)
    // and to make it easier to save (FileSaver expects a blob)
    this.ApiService.scGetConfigTarball(null, null, null, null, 'blob').then(function(resp) {
      FileSaver.saveAs(resp, 'quay-config.tar.gz');
    }, errorDisplay);
  }

  private goToDeploy() {
    this.chooseDeploy.emit({});
  }
}

@@ -0,0 +1,18 @@
.co-dialog .modal-body.download-tarball-modal {
  padding: 15px;
  display: flex;
  flex-direction: column;
}

.modal__warning-box {
  margin-top: 15px;
}

.download-tarball-modal .download-button {
  align-self: center;
}

.download-tarball-modal .download-button:hover {
  background-color: #dddddd;
  text-decoration: none;
}

46
config_app/js/components/file-upload-box.html
Normal file
@@ -0,0 +1,46 @@
<div class="file-upload-box-element">
  <div class="file-input-container">
    <div ng-show="state != 'uploading'">
      <form id="file-drop-form-{{ boxId }}">
        <input id="file-drop-{{ boxId }}" name="file-drop-{{ boxId }}" class="file-drop"
               type="file" files-changed="handleFilesChanged(files)"
               accept="{{ getAccepts(extensions) }}">
        <label for="file-drop-{{ boxId }}" ng-class="state">
          <span class="chosen-file">
            <span ng-if="selectedFiles.length">
              {{ selectedFiles[0].name }}
              <span ng-if="selectedFiles.length > 1">
                and {{ selectedFiles.length - 1 }} others...
              </span>
            </span>
          </span><span class="choose-button">
            <span>Select file</span>
          </span>
        </label>
      </form>
    </div>

    <div class="cor-loader-line" ng-if="state == 'checking'"></div>

    <div class="status-message" ng-if="state == 'uploading'">
      <div class="progress progress-striped active">
        <div class="progress-bar" role="progressbar"
             aria-valuenow="{{ uploadProgress }}" aria-valuemin="0" aria-valuemax="100"
             style="{{ 'width: ' + uploadProgress + '%' }}">
        </div>
      </div>

      Uploading file {{ currentlyUploadingFile.name }}...
    </div>

    <div class="select-message" ng-if="state == 'clear'">{{ selectMessage }}</div>
    <div class="status-message error-message" ng-if="state == 'error'">
      <i class="fa fa-times-circle"></i>
      {{ message }}
    </div>
    <div class="status-message okay-message" ng-if="state == 'okay'">
      <i class="fa fa-check-circle"></i>
      {{ message }}
    </div>
  </div>
</div>

169
config_app/js/components/file-upload-box.js
Normal file
@@ -0,0 +1,169 @@
const templateUrl = require('./file-upload-box.html');
/**
 * An element which adds a stylized box for uploading a file.
 */
angular.module('quay-config').directive('fileUploadBox', function () {
  var directiveDefinitionObject = {
    priority: 0,
    templateUrl,
    replace: false,
    transclude: true,
    restrict: 'C',
    scope: {
      'selectMessage': '@selectMessage',

      'filesSelected': '&filesSelected',
      'filesCleared': '&filesCleared',
      'filesValidated': '&filesValidated',

      'extensions': '<extensions',
      'apiEndpoint': '@apiEndpoint',

      'reset': '=?reset'
    },
    controller: function($rootScope, $scope, $element, ApiService) {
      // Maximum allowed upload size (defined before it is used below).
      var MEGABYTE = 1000000;
      var MAX_FILE_SIZE_MB = 100;
      var MAX_FILE_SIZE = MAX_FILE_SIZE_MB * MEGABYTE;

      var number = $rootScope.__fileUploadBoxIdCounter || 0;
      $rootScope.__fileUploadBoxIdCounter = number + 1;

      $scope.boxId = number;
      $scope.state = 'clear';
      $scope.selectedFiles = [];

      var conductUpload = function(file, apiEndpoint, fileId, mimeType, progressCb, doneCb) {
        var request = new XMLHttpRequest();
        request.open('PUT', '/api/v1/' + apiEndpoint, true);
        request.setRequestHeader('Content-Type', mimeType);
        request.onprogress = function(e) {
          $scope.$apply(function() {
            if (e.lengthComputable) { progressCb((e.loaded / e.total) * 100); }
          });
        };

        request.onerror = function() {
          $scope.$apply(function() { doneCb(false, 'Error when uploading'); });
        };

        request.onreadystatechange = function() {
          var state = request.readyState;
          var status = request.status;

          if (state == 4) {
            if (Math.floor(status / 100) == 2) {
              $scope.$apply(function() { doneCb(true, fileId); });
            } else {
              var message = request.statusText;
              if (status == 413) {
                message = 'Selected file too large to upload';
              }

              $scope.$apply(function() { doneCb(false, message); });
            }
          }
        };

        request.send(file);
      };

      var uploadFiles = function(callback) {
        var currentIndex = 0;
        var fileIds = [];

        var progressCb = function(progress) {
          $scope.uploadProgress = progress;
        };

        var doneCb = function(status, messageOrId) {
          if (status) {
            fileIds.push(messageOrId);
            currentIndex++;
            performFileUpload();
          } else {
            callback(false, messageOrId);
          }
        };

        var performFileUpload = function() {
          // If we have finished uploading all of the files, invoke the overall callback
          // with the list of file IDs.
          if (currentIndex >= $scope.selectedFiles.length) {
            callback(true, fileIds);
            return;
          }

          // For the current file, retrieve a file-drop URL from the API for the file.
          var currentFile = $scope.selectedFiles[currentIndex];
          var mimeType = currentFile.type || 'application/octet-stream';
          var data = {
            'mimeType': mimeType
          };

          $scope.currentlyUploadingFile = currentFile;
          $scope.uploadProgress = 0;

          conductUpload(currentFile, $scope.apiEndpoint, $scope.selectedFiles[0].name, mimeType, progressCb, doneCb);
        };

        // Start the uploading.
        $scope.state = 'uploading';
        performFileUpload();
      };

      $scope.handleFilesChanged = function(files) {
        if ($scope.state == 'uploading') { return; }

        $scope.message = null;
        $scope.selectedFiles = files;

        if (files.length == 0) {
          $scope.state = 'clear';
          $scope.filesCleared();
        } else {
          for (var i = 0; i < files.length; ++i) {
            if (files[i].size > MAX_FILE_SIZE) {
              $scope.state = 'error';
              $scope.message = 'File ' + files[i].name + ' is larger than the maximum file ' +
                'size of ' + MAX_FILE_SIZE_MB + ' MB';
              return;
            }
          }

          $scope.state = 'checking';
          $scope.filesSelected({
            'files': files,
            'callback': function(status, message) {
              $scope.state = status ? 'okay' : 'error';
              $scope.message = message;

              if (status) {
                $scope.filesValidated({
                  'files': files,
                  'uploadFiles': uploadFiles
                });
              }
            }
          });
        }
      };

      $scope.getAccepts = function(extensions) {
        if (!extensions || !extensions.length) {
          return '*';
        }

        return extensions.join(',');
      };

      $scope.$watch('reset', function(reset) {
        if (reset) {
          $scope.state = 'clear';
          $element.find('#file-drop-' + $scope.boxId).parent().trigger('reset');
        }
      });
    }
  };
  return directiveDefinitionObject;
});

18
config_app/js/components/files-changed.js
Normal file
@@ -0,0 +1,18 @@
/**
 * Raises the 'filesChanged' event on the scope if a file on the marked <input type="file"> exists.
 */
angular.module('quay-config').directive("filesChanged", [function () {
  return {
    restrict: 'A',
    scope: {
      'filesChanged': "&"
    },
    link: function (scope, element, attributes) {
      element.bind("change", function (changeEvent) {
        scope.$apply(function() {
          scope.filesChanged({'files': changeEvent.target.files});
        });
      });
    }
  }
}]);

Some files were not shown because too many files have changed in this diff.