After discussion, we decided the best solution for the missing content checksum problem was to look up the proper blobs in the repository and, if not present, mark the manifest as broken, as this reflects the actual issue the user would face if they pulled the repository tag today via V2.
The tag manifest creation operation is now quite a bit heavier (because it also populates a number of new-model tables), and it does not need to be part of the transaction, because tags are currently valid without manifests.
The ALTER TABLE operations previously used were causing the DB to die when run against the production TagManifest table, which has 7 million rows. We now use new mapping tables instead, which is less nice, but they are temporary anyway, so hopefully we only have to deal with their ugliness for a short duration.
This change also starts passing in the manifest interface, rather than the raw data, to the model for writing.
Note that this change does *not* backfill the existing rows into the new tables; that will occur in a follow-up PR. The new columns in `tagmanifest` and `tagmanifestlabel` will be used to track the backfill, as it will occur in a worker.
Small fix to manually clean up the temp dir when creating a new temp dir, small fix to the Font Awesome icons, and change the JWT/Keystone validators to not use username/password.
The user metadata fields are nullable in the database, but were not in the JSON schema. This prevented users from updating some of their information on the site if they hadn't set the metadata fields.
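A minimal sketch of the kind of schema change described, with assumed property names (not necessarily the actual Quay schema):

```python
# Sketch (schema/property names assumed): mark the optional metadata fields as
# nullable in the JSON schema so it matches the nullable database columns.
USER_UPDATE_SCHEMA = {
    'type': 'object',
    'properties': {
        'company': {'type': ['string', 'null']},
        'location': {'type': ['string', 'null']},
    },
}
```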
While this means we need an additional query for initial lookup, it makes the *filtering* query (which is the heavy part) require far fewer joins, thus making it more efficient.
Also adds a new unit test to verify that the filter returns the correct set of repositories.
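Roughly, the shape of the two-step approach (model and variable names are assumed for illustration):

```python
# Sketch (model/field names assumed): a cheap initial query resolves the
# candidate repository ids, and the heavy filtering query then only constrains
# by id instead of joining through extra tables.
candidate_ids = [repo.id for repo in
                 Repository.select(Repository.id)
                           .where(Repository.namespace_user == namespace_user)]

filtered = (Repository
            .select()
            .where(Repository.id << candidate_ids)              # simple IN clause
            .where(Repository.visibility == public_visibility))  # the heavy filter
```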
This change ensures that the tables in the database during migration have at least one row of "real" data, which should help catch issues in the future where we forget to set column defaults and other such schema oversights that can only be truly tested with non-empty tables
Fixes https://jira.coreos.com/browse/QUAY-913
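A rough sketch of how such a check might look in the migration tests (the helpers here are hypothetical):

```python
# Sketch (helpers assumed): after applying a migration, insert one synthetic
# row into every table so missing column defaults and similar schema mistakes
# fail the test instead of only surfacing on non-empty production tables.
from alembic import op

def populate_one_row_per_table(tables, make_test_value):
    for table in tables:
        row = {column.name: make_test_value(column) for column in table.columns}
        op.bulk_insert(table, [row])
```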
Before this change, if user creation was disabled, team sync would fail to sync over users that had not yet been invited/logged in, because their accounts could not be created. Following this change, team syncing of users not yet in the system will create those user accounts, allowing users to be "auto invited" via team sync.
Fixes https://jira.coreos.com/browse/QUAY-910
This will prevent us from running out of auto-incrementing ID values until such time as we can upgrade to peewee 3 and change the field type to a BigInt
Fixes https://jira.coreos.com/browse/QUAY-943
Before, we'd load *all* the robots, which can be a huge issue in namespaces with a large number of robots. Now we only load the top 20 robots (by recency of login), and we also limit the information returned in the entity search to save some bandwidth.
Fixes https://jira.coreos.com/browse/QUAY-927
Superusers were getting confused because the users/orgs were being disabled and renamed, but still appeared in the list until they were GCed by the background worker. Now we just hide them.
Fixes https://jira.coreos.com/browse/QUAY-936
Removes filtering of log types where not necessary, removes filtering based on namespace when filtering based on repository (a superfluous check that was causing issues in MySQL by preventing the use of the correct index), and fixes some other small issues around the API
Fixes https://jira.coreos.com/browse/QUAY-931
This could result in "hanging" robot accounts, although that would only leak the names of said accounts. Now we delete them immediately AND we proactively delete them before replacing the namespace (just to be sure)
Previously, if we didn't find a key, we'd empty the entire cache, making it essentially a single-key cache. We skip clearing now, although this does mean we won't GC expired entries (not a problem for tests, though)
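A minimal sketch of the corrected behavior (class and method names assumed):

```python
# Sketch (names assumed): on a miss we just compute and store the value;
# previously the whole dict was cleared here, leaving at most one live key.
class InMemoryCache(object):
    def __init__(self):
        self._entries = {}

    def retrieve(self, key, loader):
        if key not in self._entries:
            self._entries[key] = loader()   # no more self._entries.clear()
        return self._entries[key]
```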
If configured, we now check the IP address of the user signing up and, if they are a possible threat, we further reduce their number of allowed maximum builds to the configured value.
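Roughly, under assumed config key and helper names (not necessarily the real ones):

```python
# Sketch (config key and helper names assumed): when the feature is configured
# and the sign-up IP looks like a possible threat, cap the new user's builds
# at the configured lower value.
threat_max_builds = app.config.get('THREAT_MAX_QUEUED_BUILDS')
if threat_max_builds is not None and is_possible_threat(request.remote_addr):
    new_user.maximum_queued_builds_count = threat_max_builds
    new_user.save()
```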
Only the verbs endpoints need to load placements for multiple images, so we can vastly simplify and optimize most queries by making the load two-step, and having the rest of the image loads not worry about placements.
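In sketch form (helper names assumed):

```python
# Sketch (helper names assumed): most callers load images without touching
# placements at all; only the verbs code path issues the second, targeted
# placement query.
images = get_repository_images(repository)
if include_placements:   # True only for the verb (squashed image/ACI) endpoints
    placements = get_placements_for_images(images)
```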
We were accidentally skipping the invite if the user was a member of *any* organization, rather than the specific organization (as intended)
Fixes https://jira.coreos.com/browse/QUAY-880
This ensures that if the builder sends a heartbeat but Redis is down, we don't time out the build while waiting to connect or receive. Since Redis data is ephemeral anyway, this should give us more stability in builds if/when Redis is down
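The general idea, using redis-py's socket timeout options (the values shown are illustrative):

```python
# Give the Redis client explicit timeouts so a down or unreachable Redis makes
# the heartbeat check fail fast instead of hanging the build manager.
import redis

client = redis.StrictRedis(
    host=redis_host,
    socket_connect_timeout=1,   # seconds to wait for the TCP connect
    socket_timeout=2,           # seconds to wait for a read/write
)
```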
Instead of deleting a namespace synchronously as before, we now mark the namespace for deletion, disable it, and rename it. A worker then comes along and deletes the namespace in the background. This results in a *significantly* better user experience, as the namespace deletion operation now "completes" in under a second, where before it could take tens of minutes in the worst case.
Fixes https://jira.coreos.com/browse/QUAY-838
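A rough sketch of the mark-then-collect flow (helper, queue, and field names assumed):

```python
# Sketch (names assumed): the user-facing call only disables, renames and
# enqueues the namespace; a background worker performs the real deletion later.
import json
from uuid import uuid4

def mark_namespace_for_deletion(namespace_user, gc_queue):
    namespace_user.enabled = False
    namespace_user.username = '%s+deleted-%s' % (namespace_user.username, uuid4().hex[:8])
    namespace_user.save()
    gc_queue.put(['namespacegc', namespace_user.username],
                 json.dumps({'user_id': namespace_user.id}))
```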
The byte_count field on the BlobUpload model is marked as not
nullable, but the migration to make the field a big integer removed
that restriction (#2388 :: 76de324) in the database. It's still in
the model though, which means they are out of sync. This adds a
migration to mark the field as not nullable in the database again.
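The migration described is roughly of this form (revision boilerplate omitted):

```python
# Re-adds the NOT NULL constraint on blobupload.byte_count that the big-integer
# migration dropped, bringing the database back in line with the model.
import sqlalchemy as sa
from alembic import op

def upgrade():
    op.alter_column('blobupload', 'byte_count',
                    existing_type=sa.BigInteger(),
                    nullable=False)

def downgrade():
    op.alter_column('blobupload', 'byte_count',
                    existing_type=sa.BigInteger(),
                    nullable=True)
```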
The checksum field was removed from the ImageStorage model in #815,
but was never dropped from the database. This adds a migration to
drop the unused column.
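The migration is roughly of this form (the column type in the downgrade is assumed; revision boilerplate omitted):

```python
# Drops the long-unused imagestorage.checksum column left behind by #815.
import sqlalchemy as sa
from alembic import op

def upgrade():
    op.drop_column('imagestorage', 'checksum')

def downgrade():
    op.add_column('imagestorage',
                  sa.Column('checksum', sa.String(length=255), nullable=True))
```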
We move all the auth handling, serialization and deserialization into a new AuthContext interface, and then standardize a registration model for handling of specific auth context types (user, robot, token, etc).
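In outline (class and method names assumed, not the exact interface):

```python
# Sketch (names assumed): every kind of authenticated principal implements the
# same AuthContext interface, and a registry maps a serialized "kind" back to
# the class that knows how to rebuild it.
class AuthContext(object):
    def to_signed_dict(self):
        """ Serializes this context for storage in the session or a signed token. """
        raise NotImplementedError

CONTEXT_KINDS = {}

def register_context_kind(kind, context_class):
    CONTEXT_KINDS[kind] = context_class

def deserialize_context(payload):
    return CONTEXT_KINDS[payload['kind']].from_signed_dict(payload)
```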
Should prevent a repository from being created under a user's namespace without a corresponding admin permission
Fixes https://jira.coreos.com/browse/QUAY-826
Instead of the 41 queries previously needed for the simple manifest case, we are now down to 14.
The biggest changes:
- Only synthesize the V1 image rows if we haven't already found them in the database
- Thread the repository object through to the other model method calls, and use it instead of loading it again and again (sketched below)
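For example (function names assumed):

```python
# Sketch (names assumed): the repository is loaded once and handed to the other
# model calls, instead of each call re-querying it by name.
repository = lookup_repository(namespace_name, repo_name)   # loaded once
tag = get_tag(repository, tag_name)                          # reused here
manifest = get_tag_manifest(repository, tag)                 # ...and here
```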
Allows for exploration of all visible repositories, in paginated form.
This change also fixes the layout of the header on different viewport sizes to be consistently a single line in height.
Fixes https://jira.coreos.com/browse/QS-63
MySQL does not allow rows in the same table referencing other rows to be deleted in a single statement. We now do a two-pass deletion, and add a test to make sure.
Fixes https://jira.prod.coreos.systems/browse/QS-18
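One way such a two-pass deletion can be structured, with assumed model and field names and assuming the self-reference is only a single level deep (the actual tables and passes may differ):

```python
# Sketch (model/field names assumed): delete the referencing rows first, then
# the remaining rows, so no single statement removes a row together with a row
# that points at it.
Tag.delete().where(Tag.repository == repository,
                   Tag.linked_tag.is_null(False)).execute()
Tag.delete().where(Tag.repository == repository).execute()
```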
This moves all the DB model code behind an interface in prep for v2-2.
Issue: https://coreosdev.atlassian.net/browse/QUAY-750
Issue: https://coreosdev.atlassian.net/browse/QUAY-633
Will let the users know they can recover the tag via time machine
Note: This was tested with the Docker protocol, but the new error code is *technically* out of spec; we should make sure it's okay.
This puts the view logic on the object and adds a parameter for logging.
[TESTING->locally with docker compose]
Issue: https://coreosdev.atlassian.net/browse/QUAY-632
Ran yapf for the branch.
[TESTING->locally using docker compose]
Issue: https://coreosdev.atlassian.net/browse/QUAY-632
This decouples the database models from the API.
[TESTING->locally with docker compose]
Issue: https://coreosdev.atlassian.net/browse/QUAY-632
Makes the lookup query underneath the transaction smaller when there are a lot of images referenced directly by tags. We still must do the direct-reference check within the transaction, but this should reduce the scope of the search space a bit.
This prevents us from creating a massive join when there are a large number of tags in the repository, which can result in locking the entire DB for long periods of time. Instead of the join, we just iteratively look up any images found to be directly referenced by a tag or found as a parent of another image, both of which should be indexed lookups. Once done, we only remove those images and then iterate until the working set stops changing.
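One way this kind of iterative lookup could be structured, sketched with hypothetical helpers:

```python
# Sketch (helper names assumed): start with every image in the repository as a
# removal candidate, then iteratively drop any candidate that is directly
# tagged or is the parent of an image being kept, until the set stops changing.
candidates = set(all_image_ids(repository))
while True:
    kept = set(all_image_ids(repository)) - candidates
    referenced = directly_tagged_image_ids(repository) | parent_image_ids(kept)
    newly_kept = candidates & referenced
    if not newly_kept:
        break
    candidates -= newly_kept

remove_images(repository, candidates)
```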
Before, we would return a 400 without a message because the errors were not being caught.
Issue: https://www.pivotaltracker.com/story/show/145459707
We remove the directly referenced images from the join across ancestors, as they will be covered by the first part of the union clause. For some large repositories, this will result in a significantly reduced set of images that have to be joined NxM.