After discussion, we decided the best solution for the missing content checksum problem was to look up the proper blobs in the repository and, if they are not present, mark the manifest as broken, as this reflects the actual issue a user would face if they pulled the repository tag today via the V2 protocol.
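A minimal sketch of that lookup-and-mark flow; the names here (`lookup_blob`, `blob_digests`, the `broken` flag) are illustrative assumptions, not the actual model API:

```python
# Sketch: resolve every blob a legacy tag manifest references; if any blob is
# missing from the repository, mark the manifest as broken instead of
# inventing a content checksum. All names here are illustrative.
def resolve_manifest_blobs(manifest, repository, lookup_blob):
    resolved = []
    for digest in manifest.blob_digests:
        blob = lookup_blob(repository, digest)
        if blob is None:
            # A V2 pull of this tag would fail today, so record that fact.
            manifest.broken = True
            return None
        resolved.append(blob)
    return resolved
```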
The tag manifest creation operation is now quite a bit heavier (because it also populates a number of new-model tables), and it does not need to run inside the transaction, because tags are currently valid without manifests.
The ALTER TABLE operations previously used were causing the DB to die when run against the production TagManifest table, which has 7 million rows. We now use new mapping tables instead, which is less elegant, but these tables are temporary anyway, so hopefully we only have to deal with their ugliness for a short while.
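As a rough sketch of the mapping-table approach (peewee-style, with placeholder table and field names rather than the exact schema):

```python
from peewee import ForeignKeyField, Model, SqliteDatabase

db = SqliteDatabase(':memory:')  # stand-in database for the sketch


class BaseModel(Model):
    class Meta:
        database = db


class TagManifest(BaseModel):
    pass  # existing old-style table (columns elided)


class Manifest(BaseModel):
    pass  # new-style table (columns elided)


# Rather than ALTER TABLE on the 7-million-row TagManifest table, a small
# mapping table links each old-style row to its new-style counterpart and
# can be dropped once the migration is finished.
class TagManifestToManifest(BaseModel):
    tag_manifest = ForeignKeyField(TagManifest, unique=True)
    manifest = ForeignKeyField(Manifest)
```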
This change also starts passing in the manifest interface, rather than the raw data, to the model for writing.
Note that this change does *not* backfill the existing rows into the new tables; that will occur in a follow-up PR. The new columns in `tagmanifest` and `tagmanifestlabel` will be used to track the backfill, as it will occur in a worker.
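A sketch of how such a tracking column might drive the backfill worker; the helper names below are hypothetical:

```python
# Hypothetical worker loop: repeatedly grab a small batch of rows whose
# tracking column says they have not been backfilled yet, populate the new
# tables for each, and mark the row done so the worker can resume safely.
def run_backfill(load_unbackfilled_batch, write_new_model_rows, mark_backfilled,
                 batch_size=100):
    while True:
        batch = load_unbackfilled_batch(batch_size)
        if not batch:
            break
        for tag_manifest in batch:
            write_new_model_rows(tag_manifest)
            mark_backfilled(tag_manifest)
```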
The user metadata fields are nullable in the database, but were not in the JSON schema. This prevented users from updating some of their information on the site if they hadn't set the metadata fields.
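For illustration, making a field nullable in a JSON schema amounts to allowing the `null` type alongside the real one; the field names below are examples, not the actual schema:

```python
# Example only: metadata fields that are nullable in the database should also
# accept null in the JSON schema, otherwise updates from users who never set
# them fail validation.
USER_METADATA_SCHEMA = {
    'type': 'object',
    'properties': {
        'given_name': {'type': ['string', 'null']},
        'company': {'type': ['string', 'null']},
    },
}

# With jsonschema, validate({'given_name': None, 'company': 'ACME'},
# USER_METADATA_SCHEMA) now passes instead of rejecting the null.
```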
While this means we need an additional query for the initial lookup, it makes the *filtering* query (which is the heavy part) require far fewer joins, and thus makes it more efficient.
Also adds a new unit test to verify that our filter yields the correct set of repositories.
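Roughly, the shape of the change looks like this (peewee-style sketch; the models and fields are placeholders):

```python
from peewee import CharField, ForeignKeyField, Model, SqliteDatabase

db = SqliteDatabase(':memory:')


class Namespace(Model):
    username = CharField()

    class Meta:
        database = db


class Repository(Model):
    namespace = ForeignKeyField(Namespace)
    visibility = CharField()

    class Meta:
        database = db


def visible_repositories(username):
    # Step 1: one extra, cheap lookup to resolve the namespace row up front.
    namespace = Namespace.get(Namespace.username == username)

    # Step 2: the heavy filtering query now compares against the namespace id
    # directly, instead of joining through the namespace table.
    return (Repository
            .select()
            .where(Repository.namespace == namespace,
                   Repository.visibility == 'public'))
```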
This will prevent us from running out of auto-incrementing ID values until we can upgrade to peewee 3 and change the field type to a BigInt.
Fixes https://jira.coreos.com/browse/QUAY-943
Before, we'd load *all* the robots, which can be a huge issue in namespaces with a large number of robots. Now we only load the top 20 robots (ordered by most recent login), and we also limit the information returned by the entity search to save some bandwidth.
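A sketch of the narrowed query (peewee-style; the model and column names are illustrative):

```python
from peewee import CharField, DateTimeField, Model, SqliteDatabase

db = SqliteDatabase(':memory:')


class RobotAccount(Model):
    username = CharField()
    last_accessed = DateTimeField(null=True)  # illustrative "recency" column

    class Meta:
        database = db


def robots_for_entity_search(limit=20):
    # Only the most recently used robots, and only the column the typeahead
    # actually needs, instead of loading every robot in the namespace.
    return (RobotAccount
            .select(RobotAccount.username)
            .order_by(RobotAccount.last_accessed.desc())
            .limit(limit))
```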
Fixes https://jira.coreos.com/browse/QUAY-927
Superusers were getting confused because the users/orgs were being disabled and renamed, but still appeared in the list until they were GCed by the background worker. Now we just hide them.
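Conceptually, the listing now just filters on the enabled flag (a sketch; `enabled` is the assumed flag name):

```python
# Sketch: exclude disabled (soon-to-be-GCed) users and orgs from the
# superuser listing instead of waiting for the background worker.
def listable_users(all_users):
    return [user for user in all_users if user.enabled]
```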
Fixes https://jira.coreos.com/browse/QUAY-936
Removes filtering of log types where it is not necessary, removes filtering by namespace when already filtering by repository (a superfluous check that was preventing MySQL from using the correct index), and fixes some other small issues around the API.
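A sketch of how the log query clauses might be assembled now (peewee-style placeholders; `LogEntry` and its columns are assumptions):

```python
# Sketch: when a repository filter is present, the namespace clause is
# dropped (the repository already implies it, and the redundant clause kept
# MySQL off the correct index); the kind filter is only added when asked for.
def build_log_clauses(LogEntry, repository=None, namespace=None, kinds=None):
    clauses = []
    if repository is not None:
        clauses.append(LogEntry.repository == repository)
    elif namespace is not None:
        clauses.append(LogEntry.namespace == namespace)
    if kinds:
        clauses.append(LogEntry.kind << kinds)  # peewee's IN operator
    return clauses
```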
Fixes https://jira.coreos.com/browse/QUAY-931
This could result in "hanging" robot accounts, although that would only leak the names of said accounts. Now we delete them immediately, and we also proactively delete them before replacing the namespace (just to be sure).
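In outline, the ordering is now (with hypothetical helper names):

```python
# Sketch of the new ordering: robots are removed up front, and again
# proactively right before the namespace itself is replaced.
def remove_namespace(namespace, delete_robots_for, replace_namespace):
    delete_robots_for(namespace)   # no more "hanging" robot accounts
    replace_namespace(namespace)
```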
If configured, we now check the IP address of the user signing up and, if it is a possible threat, we further reduce their maximum number of allowed builds to the configured value.
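A sketch of the configured check; the config keys and the `ip_is_threat` helper are hypothetical:

```python
# Sketch: if the registering IP is flagged by the threat check, cap the new
# account's maximum build count at the configured reduced value.
def maximum_builds_for(ip_address, ip_is_threat, config):
    reduced = config.get('THREAT_REDUCED_MAXIMUM_BUILDS')  # hypothetical key
    if reduced is not None and ip_is_threat(ip_address):
        return reduced
    return config.get('DEFAULT_MAXIMUM_BUILDS')  # hypothetical key
```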