crypto: arm64/aes-ccm - don't use an atomic walk needlessly

When the AES-CCM code was first added, the NEON registers were saved
and restored eagerly on each kernel_neon_begin/end, and so the code
minimized the number of such calls by executing the scatterwalk in
atomic context inside the kernel_neon_begin/end section.

This has been changed in the meantime: the NEON section is now entered
and exited inside the walk loop, so the walk itself no longer runs with
preemption disabled. Switch to non-atomic scatterwalks.

Fixes: bd2ad885e3 ("crypto: arm64/aes-ce-ccm - move kernel mode neon ...")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

@@ -253,7 +253,7 @@ static int ccm_encrypt(struct aead_request *req)
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
-	err = skcipher_walk_aead_encrypt(&walk, req, true);
+	err = skcipher_walk_aead_encrypt(&walk, req, false);
 
 	if (may_use_simd()) {
 		while (walk.nbytes) {
@@ -311,7 +311,7 @@ static int ccm_decrypt(struct aead_request *req)
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
-	err = skcipher_walk_aead_decrypt(&walk, req, true);
+	err = skcipher_walk_aead_decrypt(&walk, req, false);
 
 	if (may_use_simd()) {
 		while (walk.nbytes) {
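
For context, the calling pattern this change relies on looks roughly like
the sketch below. This is not the kernel source: ccm_walk_sketch() and
do_ccm_chunk() are hypothetical stand-ins (the real driver uses
ccm_encrypt() and ce_aes_ccm_encrypt()), and the may_use_simd() fallback
and MAC/tail handling are omitted. The point is that skcipher_walk_done()
is only called after leaving the kernel_neon_begin/end section, so passing
atomic == false is safe.

#include <linux/types.h>
#include <crypto/aes.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/skcipher.h>
#include <asm/neon.h>

/* Hypothetical stand-in for the driver's NEON helper
 * (ce_aes_ccm_encrypt() in the real code). */
static void do_ccm_chunk(u8 *dst, const u8 *src, u32 len);

static int ccm_walk_sketch(struct aead_request *req)
{
	struct skcipher_walk walk;
	int err;

	/* atomic == false: the walk may sleep between chunks, which is
	 * fine because we never hold the NEON unit across iterations */
	err = skcipher_walk_aead_encrypt(&walk, req, false);

	while (walk.nbytes) {
		u32 tail = walk.nbytes % AES_BLOCK_SIZE;

		if (walk.nbytes == walk.total)
			tail = 0;	/* final chunk: process everything */

		kernel_neon_begin();	/* preemption disabled from here... */
		do_ccm_chunk(walk.dst.virt.addr, walk.src.virt.addr,
			     walk.nbytes - tail);
		kernel_neon_end();	/* ...to here; sleeping is legal again */

		err = skcipher_walk_done(&walk, tail);
	}

	return err;
}

The third argument to skcipher_walk_aead_encrypt() controls whether the
walk is allowed to sleep (and, internally, whether its buffer allocations
use GFP_KERNEL or GFP_ATOMIC); with the NEON section confined to each
chunk, there is no longer any reason to force the atomic variant.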