Amazon S3 does not allow multipart upload chunks larger than 5 GB; we currently don't handle that limit at all, which is why large uploads are failing. This change ensures that if a storage engine specifies a *maximum* chunk size, we write multiple chunks, each no larger than that size.
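For illustration, here is a minimal sketch of the rechunking behavior described above, assuming the engine receives data as an iterator of byte strings. The names `rechunk`, `data_iter`, and `maximum_chunk_size` are hypothetical, not the engine's actual API.

```python
def rechunk(data_iter, maximum_chunk_size):
    """Yield buffers no larger than maximum_chunk_size from data_iter.

    Note: `rechunk` and its parameters are illustrative names; the real
    storage engines may buffer and flush differently.
    """
    buf = b""
    for piece in data_iter:
        buf += piece
        # Flush full-sized chunks as soon as enough data has accumulated,
        # so no single written chunk ever exceeds the engine's maximum.
        while len(buf) >= maximum_chunk_size:
            yield buf[:maximum_chunk_size]
            buf = buf[maximum_chunk_size:]
    # Emit whatever remains as a final, smaller chunk.
    if buf:
        yield buf


if __name__ == "__main__":
    # 12 bytes with a 4-byte maximum should produce three 4-byte chunks.
    chunks = list(rechunk(iter([b"a" * 7, b"b" * 5]), maximum_chunk_size=4))
    assert [len(c) for c in chunks] == [4, 4, 4]
```

With S3's real limit, `maximum_chunk_size` would be 5 GB, and a 12 GB upload would be split into three parts (5 GB, 5 GB, 2 GB) instead of failing.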
Storage package files:

- __init__.py
- basestorage.py
- cloud.py
- distributedstorage.py
- downloadproxy.py
- fakestorage.py
- local.py
- swift.py