Commit Graph

65 Commits

Eric Biggers 7ec0a09d4e crypto: skcipher - fix weak key check for lskciphers
When an algorithm of the new "lskcipher" type is exposed through the
"skcipher" API, calls to crypto_skcipher_setkey() don't pass on the
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS flag to the lskcipher.  This causes
self-test failures for ecb(des), as weak keys are not rejected anymore.
Fix this.
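
A minimal sketch of the propagation the fix needs (not the literal
patch; the lskcipher flag helpers are assumed to mirror the skcipher
ones):

	static int example_setkey(struct crypto_skcipher *skcipher,
				  struct crypto_lskcipher *tfm,
				  const u8 *key, unsigned int keylen)
	{
		/* copy the weak-key policy down before keying */
		crypto_lskcipher_clear_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
		crypto_lskcipher_set_flags(tfm,
					   crypto_skcipher_get_flags(skcipher) &
					   CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
		return crypto_lskcipher_setkey(tfm, key, keylen);
	}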

Fixes: 31865c4c4d ("crypto: skcipher - Add lskcipher")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-20 13:39:26 +08:00
Herbert Xu 31865c4c4d crypto: skcipher - Add lskcipher
Add a new API type lskcipher designed for taking straight kernel
pointers instead of SG lists.  Its relationship to skcipher will
be analogous to that between shash and ahash.
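
A minimal usage sketch under that design, assuming the usual
alloc/setkey/free helpers and plain linear buffers:

	int example_encrypt(const u8 *key, const u8 *src, u8 *dst,
			    unsigned int len, u8 *iv)
	{
		struct crypto_lskcipher *tfm;
		int err;

		tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_lskcipher_setkey(tfm, key, 16);
		if (!err)
			/* straight kernel pointers, no scatterlists */
			err = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);

		crypto_free_lskcipher(tfm);
		return err;
	}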

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-09-20 13:15:29 +08:00
Ondrej Mosnacek b8969a1b69 crypto: api - Fix CRYPTO_USER checks for report function
Checking the config via ifdef incorrectly compiles out the report
functions when CRYPTO_USER is set to =m. Fix it by using IS_ENABLED()
instead.
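
The pattern in question, with a hypothetical crypto_example_report()
(CONFIG_CRYPTO_USER is only #defined for =y, while IS_ENABLED() also
covers =m):

	/* before: compiled out when CRYPTO_USER=m */
	#ifdef CONFIG_CRYPTO_USER
		.report = crypto_example_report,
	#endif

	/* after: kept for both =y and =m */
	#if IS_ENABLED(CONFIG_CRYPTO_USER)
		.report = crypto_example_report,
	#endif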

Fixes: c0f9e01dd2 ("crypto: api - Check CRYPTO_USER instead of NET for report")
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-05-02 18:22:24 +08:00
Herbert Xu c0f9e01dd2 crypto: api - Check CRYPTO_USER instead of NET for report
The report function is currently conditionalised on CONFIG_NET.
As it's only used by CONFIG_CRYPTO_USER, conditionalising on that
instead of CONFIG_NET makes more sense.

This gets rid of a rarely used code-path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 17:06:42 +08:00
Herbert Xu 1085680bbb crypto: skcipher - Count error stats differently
Move all stat code specific to skcipher into the skcipher code.

While we're at it, change the stats so that bytes and counts
are always incremented even in case of error.  This allows the
reference counting to be removed as we can now increment the
counters prior to the operation.

After the operation we simply increase the error count if necessary.
This is safe as errors can only occur synchronously (or rather,
the existing code already ignored asynchronous errors which are
only visible to the callback function).
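
A rough sketch of the counting order described above, for the encrypt
path (the istat field names are illustrative):

	atomic64_inc(&istat->encrypt_cnt);
	atomic64_add(req->cryptlen, &istat->encrypt_tlen);

	ret = alg->encrypt(req);	/* counters already bumped */

	if (ret && ret != -EINPROGRESS && ret != -EBUSY)
		atomic64_inc(&istat->err_cnt);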

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 17:06:42 +08:00
Ard Biesheuvel d07bd950b9 crypto: skcipher - Use scatterwalk (un)map interface for dst and src buffers
The skcipher walk API implementation avoids scatterwalk_map() for
mapping the source and destination buffers, and invokes kmap_atomic()
directly if the buffer in question is not in low memory (which can only
happen on 32-bit architectures). This avoids some overhead on 64-bit
architectures, and most notably, permits the skcipher code to run with
preemption enabled.

Now that scatterwalk_map() has been updated to use kmap_local(), none of
this is needed, so we can simply use scatterwalk_map/unmap instead.
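
With that change the mapping helper reduces to a sketch like this,
close to the post-conversion scatterwalk.h:

	static inline void *scatterwalk_map(struct scatter_walk *walk)
	{
		return kmap_local_page(scatterwalk_page(walk)) +
		       offset_in_page(walk->offset);
	}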

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-13 12:11:18 +08:00
Herbert Xu e6cb02bd0a crypto: skcipher - Allow sync algorithms with large request contexts
Some sync algorithms may require a large amount of temporary
space during their operations.  There is no reason why they should
be limited just because some legacy users want to place all
temporary data on the stack.

Such algorithms can now set a flag to indicate that they need
extra request context, which will cause them to be invisible
to users that go through the sync_skcipher interface.
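
A sketch of how an algorithm opts in; the flag name here is an
assumption based on the description above:

	static struct skcipher_alg example_alg = {
		.base.cra_name	= "example(cipher)",
		/* large request context: hidden from sync_skcipher users */
		.base.cra_flags	= CRYPTO_ALG_SKCIPHER_REQSIZE_LARGE,
	};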

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-18 16:59:34 +08:00
Changbin Du abfc7fad63 crypto: skcipher - in_irq() cleanup
Replace the obsolete and ambiguous macro in_irq() with the new
macro in_hardirq().
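
The mechanical change, for reference:

	if (in_irq())		/* old, ambiguous name */
		return;

	if (in_hardirq())	/* new, explicit: hard interrupt context */
		return;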

Signed-off-by: Changbin Du <changbin.du@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-08-21 15:44:58 +08:00
Ard Biesheuvel 64ca771cd6 crypto: x86 - remove glue helper module
All dependencies on the x86 glue helper module have been replaced by
local instantiations of the new ECB/CBC preprocessor helper macros, so
the glue helper module can be retired.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14 17:10:29 +11:00
Ard Biesheuvel 0eb76ba29d crypto: remove cipher routines from public crypto API
The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.
Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
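
The namespace mechanics look like this (symbol chosen for
illustration):

	/* in crypto/cipher.c */
	EXPORT_SYMBOL_NS_GPL(crypto_cipher_setkey, CRYPTO_INTERNAL);

	/* any module using such symbols must explicitly opt in */
	MODULE_IMPORT_NS(CRYPTO_INTERNAL);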

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Waiman Long 453431a549 mm, treewide: rename kzfree() to kfree_sensitive()
As said by Linus:

  A symmetric naming is only helpful if it implies symmetries in use.
  Otherwise it's actively misleading.

  In "kzalloc()", the z is meaningful and an important part of what the
  caller wants.

  In "kzfree()", the z is actively detrimental, because maybe in the
  future we really _might_ want to use that "memfill(0xdeadbeef)" or
  something. The "zero" part of the interface isn't even _relevant_.

The main reason that kzfree() exists is to clear sensitive information
that should not be leaked to other future users of the same memory
objects.

Rename kzfree() to kfree_sensitive() to follow the example of the recently
added kvfree_sensitive() and make the intention of the API more explicit.
In addition, memzero_explicit() is used to clear the memory to make sure
that it won't get optimized away by the compiler.
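
Typical usage after the rename, as a sketch:

	u8 *key = kmemdup(user_key, keylen, GFP_KERNEL);

	/* ... use the key ... */

	kfree_sensitive(key);	/* memzero_explicit() the buffer, then free */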

The renaming is done by using the command sequence:

  git grep -w --name-only kzfree |\
  xargs sed -i 's/kzfree/kfree_sensitive/'

followed by some editing of the kfree_sensitive() kerneldoc and adding
a kzfree backward compatibility macro in slab.h.

[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-08-07 11:33:22 -07:00
Eric Biggers 2eb27c1193 crypto: algapi - add NEED_FALLBACK to INHERITED_FLAGS
CRYPTO_ALG_NEED_FALLBACK is handled inconsistently.  When it's requested
to be clear, some templates propagate that request to child algorithms,
while others don't.

It's apparently desired for NEED_FALLBACK to be propagated, to avoid
deadlocks where a module tries to load itself while it's being
initialized, and to avoid unnecessarily complex fallback chains where we
have e.g. cbc-aes-$driver falling back to cbc(aes-$driver) where
aes-$driver itself falls back to aes-generic, instead of cbc-aes-$driver
simply falling back to cbc(aes-generic).  There have been a number of
fixes to this effect:

commit 89027579bc ("crypto: xts - Propagate NEED_FALLBACK bit")
commit d2c2a85cfe ("crypto: ctr - Propagate NEED_FALLBACK bit")
commit e6c2e65c70 ("crypto: cbc - Propagate NEED_FALLBACK bit")

But it seems that other templates can have the same problems too.

To avoid this whack-a-mole, just add NEED_FALLBACK to INHERITED_FLAGS so
that it's always inherited.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16 21:49:08 +10:00
Eric Biggers 7bcb2c99f8 crypto: algapi - use common mechanism for inheriting flags
The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
template is instantiated, the template will have CRYPTO_ALG_ASYNC set if
any of the algorithms it uses has CRYPTO_ALG_ASYNC set.

We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
"inherited" in the same way.  This is difficult because the handling of
CRYPTO_ALG_ASYNC is hardcoded everywhere.  Address this by:

  - Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
    have these inheritance semantics.

  - Add crypto_algt_inherited_mask(), for use by template ->create()
    methods.  It returns any of these flags that the user asked to be
    unset and thus must be passed in the 'mask' to crypto_grab_*().

  - Also modify crypto_check_attr_type() to handle computing the 'mask'
    so that most templates can just use this.

  - Make crypto_grab_*() propagate these flags to the template instance
    being created so that templates don't have to do this themselves.

Make crypto/simd.c propagate these flags too, since it "wraps" another
algorithm, similar to a template.
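
A sketch of a template ->create() under the new mechanism (error
unwinding trimmed; a real template embeds the spawn in its instance
context):

	static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
	{
		struct skcipher_instance *inst;
		struct crypto_skcipher_spawn *spawn;
		u32 mask;
		int err;

		/* computes 'mask' from the inherited flags the user wants clear */
		err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
		if (err)
			return err;

		inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
		if (!inst)
			return -ENOMEM;
		spawn = skcipher_instance_ctx(inst);

		/* grab propagates the inherited flags onto the instance */
		err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
					   crypto_attr_alg_name(tb[1]), 0, mask);
		if (err)
			kfree(inst);
		return err;
	}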

Based on a patch by Mikulas Patocka <mpatocka@redhat.com>
(https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16 21:49:08 +10:00
Eric Biggers d4fdc2dfaa crypto: algapi - enforce that all instances have a ->free() method
All instances need to have a ->free() method, but people could forget to
set it and then not notice if the instance is never unregistered.  To
help detect this bug earlier, don't allow an instance without a ->free()
method to be registered, and complain loudly if someone tries to do it.
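
Roughly the check being described, at instance-registration time:

	int skcipher_register_instance(struct crypto_template *tmpl,
				       struct skcipher_instance *inst)
	{
		if (WARN_ON(!inst->free))	/* complain loudly... */
			return -EINVAL;		/* ...and refuse to register */

		/* normal registration continues here */
	}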

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:58 +08:00
Eric Biggers d5ed3b65f7 crypto: cipher - make crypto_spawn_cipher() take a crypto_cipher_spawn
Now that all users of single-block cipher spawns have been converted to
use 'struct crypto_cipher_spawn' rather than the less specifically typed
'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:57 +08:00
Eric Biggers aacd5b4cfb crypto: skcipher - use crypto_grab_cipher() and simplify error paths
Make skcipher_alloc_instance_simple() use the new function
crypto_grab_cipher() to initialize its cipher spawn.

This is needed to make all spawns be initialized in a consistent way.

Also simplify the error handling by taking advantage of crypto_drop_*()
now accepting (as a no-op) spawns that haven't been initialized yet, and
by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:56 +08:00
Eric Biggers de95c95741 crypto: algapi - pass instance to crypto_grab_spawn()
Currently, crypto_spawn::inst is first used temporarily to pass the
instance to crypto_grab_spawn().  Then crypto_init_spawn() overwrites it
with crypto_spawn::next, which shares the same union.  Finally,
crypto_spawn::inst is set again when the instance is registered.

Make this less convoluted by just passing the instance as an argument to
crypto_grab_spawn() instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:54 +08:00
Eric Biggers b9f76dddb1 crypto: skcipher - pass instance to crypto_grab_skcipher()
Initializing a crypto_skcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().

But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db4341017 ("crypto: adiantum - initialize crypto_spawn::inst")

So just make crypto_grab_skcipher() take the instance as an argument.

To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:54 +08:00
Eric Biggers af5034e8e4 crypto: remove propagation of CRYPTO_TFM_RES_* flags
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors.  But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway.  So they've now been removed.

Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.

So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.
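
A sketch of the suggested alternative, where example_key_is_weak() is a
hypothetical helper:

	static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
				  unsigned int keylen)
	{
		if (keylen != 16)
			return -EINVAL;		/* invalid for the algorithm */

		if ((crypto_skcipher_get_flags(tfm) &
		     CRYPTO_TFM_REQ_FORBID_WEAK_KEYS) &&
		    example_key_is_weak(key))
			return -EKEYREJECTED;	/* rejected by policy */

		return 0;
	}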

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:53 +08:00
Eric Biggers 674f368a95 crypto: remove CRYPTO_TFM_RES_BAD_KEY_LEN
The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
make the ->setkey() functions provide more information about errors.

However, no one actually checks for this flag, which makes it pointless.

Also, many algorithms fail to set this flag when given a bad length key.
Reviewing just the generic implementations, this is the case for
aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
rfc7539, rfc7539esp, salsa20, seqiv, and xcbc.  But there are probably
many more in arch/*/crypto/ and drivers/crypto/.

Some algorithms can even set this flag when the key is the correct
length.  For example, authenc and authencesn set it when the key payload
is malformed in any way (not just a bad length), the atmel-sha and ccree
drivers can set it if a memory allocation fails, and the chelsio driver
sets it for bad auth tag lengths, not just bad key lengths.

So even if someone actually wanted to start checking this flag (which
seems unlikely, since it's been unused for a long time), there would be
a lot of work needed to get it working correctly.  But it would probably
be much better to go back to the drawing board and just define different
return values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.

So just remove this flag.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:53 +08:00
Eric Biggers 70ffa8fd72 crypto: skcipher - remove skcipher_walk_aead()
skcipher_walk_aead() is unused and is identical to
skcipher_walk_aead_encrypt(), so remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:51 +08:00
Herbert Xu b3c16bfc6a crypto: skcipher - Add skcipher_ialg_simple helper
This patch introduces the skcipher_ialg_simple helper which fetches
the crypto_alg structure from a simple skcipher instance's spawn.

This allows us to remove the third argument from the function
skcipher_alloc_instance_simple.

With this change the reference count on the algorithm is maintained
by the Crypto API, and the caller no longer needs to drop the alg
refcount.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-27 18:18:04 +08:00
Eric Biggers 89873b4411 crypto: skcipher - remove crypto_skcipher_extsize()
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher_extsize() now simply calls crypto_alg_extsize().  So
remove it and just use crypto_alg_extsize().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 16:36:56 +08:00
Eric Biggers 7e1c109918 crypto: skcipher - remove crypto_skcipher::decrypt
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::decrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->decrypt.

Remove it and update crypto_skcipher_decrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 16:36:56 +08:00
Eric Biggers 848755e315 crypto: skcipher - remove crypto_skcipher::encrypt
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::encrypt is now redundant since it always equals
crypto_skcipher_alg(tfm)->encrypt.

Remove it and update crypto_skcipher_encrypt() accordingly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 16:36:56 +08:00
Eric Biggers 15252d9427 crypto: skcipher - remove crypto_skcipher::setkey
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::setkey now always points to skcipher_setkey().

Simplify by removing this function pointer and instead just making
skcipher_setkey() be crypto_skcipher_setkey() directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 16:36:56 +08:00
Eric Biggers 9ac0d13693 crypto: skcipher - remove crypto_skcipher::keysize
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::keysize is now redundant since it always equals
crypto_skcipher_alg(tfm)->max_keysize.

Remove it and update crypto_skcipher_default_keysize() accordingly.

Also rename crypto_skcipher_default_keysize() to
crypto_skcipher_max_keysize() to clarify that it specifically returns
the maximum key size, not some unspecified "default".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 16:36:56 +08:00
Eric Biggers 140734d371 crypto: skcipher - remove crypto_skcipher::ivsize
Due to the removal of the blkcipher and ablkcipher algorithm types,
crypto_skcipher::ivsize is now redundant since it always equals
crypto_skcipher_alg(tfm)->ivsize.

Remove it and update crypto_skcipher_ivsize() accordingly.
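
The accessor then reduces to a direct read from the algorithm struct:

	static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
	{
		return crypto_skcipher_alg(tfm)->ivsize;
	}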

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-12-11 16:36:56 +08:00
Ard Biesheuvel d63007eb95 crypto: ablkcipher - remove deprecated and unused ablkcipher support
Now that all users of the deprecated ablkcipher interface have been
moved to the skcipher interface, ablkcipher is no longer used and
can be removed.

Reviewed-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-11-17 09:02:49 +08:00
Eric Biggers c65058b758 crypto: skcipher - remove the "blkcipher" algorithm type
Now that all "blkcipher" algorithms have been converted to "skcipher",
remove the blkcipher algorithm type.

The skcipher (symmetric key cipher) algorithm type was introduced a few
years ago to replace both blkcipher and ablkcipher (synchronous and
asynchronous block cipher).  The advantages of skcipher include:

  - A much less confusing name, since none of these algorithm types have
    ever actually been for raw block ciphers, but rather for all
    length-preserving encryption modes including block cipher modes of
    operation, stream ciphers, and other length-preserving modes.

  - It unified blkcipher and ablkcipher into a single algorithm type
    which supports both synchronous and asynchronous implementations.
    Note, blkcipher already operated only on scatterlists, so the fact
    that skcipher does too isn't a regression in functionality.

  - Better type safety by using struct skcipher_alg, struct
    crypto_skcipher, etc. instead of crypto_alg, crypto_tfm, etc.

  - It sometimes simplifies the implementations of algorithms.

Also, the blkcipher API was no longer being tested.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-11-01 13:38:32 +08:00
Eric Biggers 53253064ad crypto: skcipher - rename crypto_skcipher_type2 to crypto_skcipher_type
Now that the crypto_skcipher_type() function has been removed, there's
no reason to call the crypto_type struct for skciphers
"crypto_skcipher_type2".  Rename it to simply "crypto_skcipher_type".

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-11-01 13:38:32 +08:00
Eric Biggers d3ca75a8b3 crypto: skcipher - unify the crypto_has_skcipher*() functions
crypto_has_skcipher() and crypto_has_skcipher2() do the same thing: they
check for the availability of an algorithm of type skcipher, blkcipher,
or ablkcipher, which also meets any non-type constraints the caller
specified.  And they have exactly the same prototype.

Therefore, eliminate the redundancy by removing crypto_has_skcipher()
and renaming crypto_has_skcipher2() to crypto_has_skcipher().
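
Usage is unchanged; a quick sketch:

	if (crypto_has_skcipher("cbc(aes)", 0, 0))
		pr_info("cbc(aes) is available\n");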

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-11-01 13:38:32 +08:00
Herbert Xu 0ba3c026e6 crypto: skcipher - Unmap pages after an external error
skcipher_walk_done may be called with an error by internal or
external callers.  For internal callers we shouldn't unmap
pages, but for external callers we must unmap any pages that are
in use.

This patch distinguishes between the two cases by checking whether
walk->nbytes is zero or not.  For internal callers, we now set
walk->nbytes to zero prior to the call.  For external callers,
walk->nbytes has always been non-zero (as zero is used to indicate
the termination of a walk).
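
The internal-caller convention, as described above, in sketch form:

	/* internal error path: nothing left mapped, so signal that first */
	walk->nbytes = 0;
	return skcipher_walk_done(walk, -EINVAL);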

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: 5cde0af2a9 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09 17:35:27 +10:00
Linus Torvalds 4d2fa8b44b Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 5.3:

  API:
   - Test shash interface directly in testmgr
   - cra_driver_name is now mandatory

  Algorithms:
   - Replace arc4 crypto_cipher with library helper
   - Implement 5 way interleave for ECB, CBC and CTR on arm64
   - Add xxhash
   - Add continuous self-test on noise source to drbg
   - Update jitter RNG

  Drivers:
   - Add support for SHA204A random number generator
   - Add support for 7211 in iproc-rng200
   - Fix fuzz test failures in inside-secure
   - Fix fuzz test failures in talitos
   - Fix fuzz test failures in qat"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
  crypto: stm32/hash - remove interruptible condition for dma
  crypto: stm32/hash - Fix hmac issue more than 256 bytes
  crypto: stm32/crc32 - rename driver file
  crypto: amcc - remove memset after dma_alloc_coherent
  crypto: ccp - Switch to SPDX license identifiers
  crypto: ccp - Validate the error value used to index error messages
  crypto: doc - Fix formatting of new crypto engine content
  crypto: doc - Add parameter documentation
  crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
  crypto: arm64/aes-ce - add 5 way interleave routines
  crypto: talitos - drop icv_ool
  crypto: talitos - fix hash on SEC1.
  crypto: talitos - move struct talitos_edesc into talitos.h
  lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
  crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
  crypto: asymmetric_keys - select CRYPTO_HASH where needed
  crypto: serpent - mark __serpent_setkey_sbox noinline
  crypto: testmgr - dynamically allocate crypto_shash
  crypto: testmgr - dynamically allocate testvec_config
  crypto: talitos - eliminate unneeded 'done' functions at build time
  ...
2019-07-08 20:57:08 -07:00
Eric Biggers 81bcbb1ee7 crypto: skcipher - un-inline encrypt and decrypt functions
crypto_skcipher_encrypt() and crypto_skcipher_decrypt() have grown to be
more than a single indirect function call.  They now also check whether
a key has been set, and with CONFIG_CRYPTO_STATS=y they also update the
crypto statistics.  That can add up to a lot of bloat at every call
site.  Moreover, these always involve a function call anyway, which
greatly limits the benefits of inlining.

So change them to be non-inline.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-06-13 14:31:40 +08:00
Thomas Gleixner 2874c5fd28 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license as published by
  the free software foundation either version 2 of the license or at
  your option any later version

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).
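
In each file the multi-line boilerplate collapses to a single tag, e.g.:

	// SPDX-License-Identifier: GPL-2.0-or-later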

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 11:26:32 -07:00
Eric Biggers dcaca01a42 crypto: skcipher - don't WARN on unprocessed data after slow walk step
skcipher_walk_done() assumes it's a bug if, after the "slow" path is
executed where the next chunk of data is processed via a bounce buffer,
the algorithm says it didn't process all bytes.  Thus it WARNs on this.

However, this can happen legitimately when the message needs to be
evenly divisible into "blocks" but isn't, and the algorithm has a
'walksize' greater than the block size.  For example, ecb-aes-neonbs
sets 'walksize' to 128 bytes and only supports messages evenly divisible
into 16-byte blocks.  If, say, 17 message bytes remain but they straddle
scatterlist elements, the skcipher_walk code will take the "slow" path
and pass the algorithm all 17 bytes in the bounce buffer.  But the
algorithm will only be able to process 16 bytes, triggering the WARN.

Fix this by just removing the WARN_ON().  Returning -EINVAL, as the code
already does, is the right behavior.

This bug was detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.

Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
Eric Biggers b1f6b4bf41 crypto: skcipher - set CRYPTO_TFM_NEED_KEY if ->setkey() fails
Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context.  In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.

For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of
memory, then priv::table will be left NULL.  After that, encryption with
that tfm will cause a NULL pointer dereference.

It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms.  Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.
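
Simplified shape of the fix (key-length and alignment handling
omitted):

	static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
				   unsigned int keylen)
	{
		struct skcipher_alg *cipher = crypto_skcipher_alg(tfm);
		int err;

		err = cipher->setkey(tfm, key, keylen);
		if (unlikely(err)) {
			skcipher_set_needkey(tfm);	/* unusable until re-keyed */
			return err;
		}

		crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
		return 0;
	}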

[Cc stable mainly because when introducing the NEED_KEY flag I changed
 AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
 previously didn't have this problem.  So these "incompletely keyed"
 states became theoretically accessible via AF_ALG -- though, the
 opportunities for causing real mischief seem pretty limited.]

Fixes: f8d33fac84 ("crypto: skcipher - prevent using skciphers without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-18 18:40:24 +08:00
Eric Biggers 0872da16dd crypto: skcipher - add helper for simple block cipher modes
The majority of skcipher templates (including both the existing ones and
the ones remaining to be converted from the "blkcipher" API) just wrap a
single block cipher algorithm.  This includes cbc, cfb, ctr, ecb, kw,
ofb, and pcbc.  Add a helper function skcipher_alloc_instance_simple()
that handles allocating an skcipher instance for this common case.
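
A sketch of a trivial mode built on the helper, using the two-argument
form from the later skcipher_ialg_simple change above:

	static int crypto_example_create(struct crypto_template *tmpl,
					 struct rtattr **tb)
	{
		struct skcipher_instance *inst;
		int err;

		inst = skcipher_alloc_instance_simple(tmpl, tb);
		if (IS_ERR(inst))
			return PTR_ERR(inst);

		/* a real mode would set ->encrypt/->decrypt here */

		err = skcipher_register_instance(tmpl, inst);
		if (err)
			inst->free(inst);
		return err;
	}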

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-11 14:16:57 +08:00
Eric Biggers c79b411eaa crypto: skcipher - remove remnants of internal IV generators
Remove dead code related to internal IV generators, which are no longer
used since they've been replaced with the "seqiv" and "echainiv"
templates.  The removed code includes:

- The "givcipher" (GIVCIPHER) algorithm type.  No algorithms are
  registered with this type anymore, so it's unneeded.

- The "const char *geniv" member of aead_alg, ablkcipher_alg, and
  blkcipher_alg.  A few algorithms still set this, but it isn't used
  anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
  Just hardcode "<default>" or "<none>" in those cases.

- The 'skcipher_givcrypt_request' structure, which is never used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-23 11:52:45 +08:00
Eric Biggers bb648291fc crypto: skcipher - add might_sleep() to skcipher_walk_virt()
skcipher_walk_virt() can still sleep even with atomic=true, since that
only affects the later calls to skcipher_walk_done().  But,
skcipher_walk_virt() only has to allocate memory for some input data
layouts, so incorrectly calling it with preemption disabled can go
undetected.  Use might_sleep() so that it's detected reliably.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-23 11:52:44 +08:00
Eric Biggers 37db69e0b4 crypto: user - clean up report structure copying
There have been a pretty ridiculous number of issues with initializing
the report structures that are copied to userspace by NETLINK_CRYPTO.
Commit 4473710df1 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
expansion") replaced some strncpy()s with strlcpy()s, thereby
introducing information leaks.  Later two other people tried to replace
other strncpy()s with strlcpy() too, which would have introduced even
more information leaks:

    - https://lore.kernel.org/patchwork/patch/954991/
    - https://patchwork.kernel.org/patch/10434351/

Commit cac5818c25 ("crypto: user - Implement a generic crypto
statistics") also uses the buggy strlcpy() approach and therefore leaks
uninitialized memory to userspace.  A fix was proposed, but it was
originally incomplete.

Seeing as how apparently no one can get this right with the current
approach, change all the reporting functions to:

- Start by memsetting the report structure to 0.  This guarantees it's
  always initialized, regardless of what happens later.
- Initialize all strings using strscpy().  This is safe after the
  memset, ensures null termination of long strings, avoids unnecessary
  work, and avoids the -Wstringop-truncation warnings from gcc.
- Use sizeof(var) instead of sizeof(type).  This is more robust against
  copy+paste errors.

For simplicity, also reuse the -EMSGSIZE return value from nla_put().
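
The resulting pattern for a report function, sketched with the cipher
report's struct and field names:

	static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
	{
		struct crypto_report_cipher rcipher;

		memset(&rcipher, 0, sizeof(rcipher));	/* nothing uninitialized */

		strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
		rcipher.blocksize = alg->cra_blocksize;
		rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
		rcipher.max_keysize = alg->cra_cipher.cia_max_keysize;

		return nla_put(skb, CRYPTOCFGA_REPORT_CIPHER,
			       sizeof(rcipher), &rcipher);
	}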

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 17:41:39 +08:00
Kees Cook b350bee5ea crypto: skcipher - Introduce crypto_sync_skcipher
In preparation for removal of VLAs due to skcipher requests on the stack
via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure
for the "sync skcipher" tfm, which is for handling the on-stack cases of
skcipher, which are always non-ASYNC and have a known limited request
size.

The crypto API additions:

	struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
	crypto_alloc_sync_skcipher()
	crypto_free_sync_skcipher()
	crypto_sync_skcipher_setkey()
	crypto_sync_skcipher_get_flags()
	crypto_sync_skcipher_set_flags()
	crypto_sync_skcipher_clear_flags()
	crypto_sync_skcipher_blocksize()
	crypto_sync_skcipher_ivsize()
	crypto_sync_skcipher_reqtfm()
	skcipher_request_set_sync_tfm()
	SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
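
A minimal usage sketch with the new helpers (key/src/dst/iv assumed set
up by the caller):

	struct crypto_sync_skcipher *tfm;
	int err;

	tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_sync_skcipher_setkey(tfm, key, keylen);
	if (!err) {
		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);	/* no VLA */

		skcipher_request_set_sync_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, src, dst, nbytes, iv);
		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
	}

	crypto_free_sync_skcipher(tfm);
	return err;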

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:06 +08:00
Eric Biggers 8088d3dd4d crypto: skcipher - fix crash flushing dcache in error path
scatterwalk_done() is only meant to be called after a nonzero number of
bytes have been processed, since scatterwalk_pagedone() will flush the
dcache of the *previous* page.  But in the error case of
skcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.
This caused a crash ("BUG: unable to handle kernel paging request")
during '!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
page-aligned as in that case walk->offset == 0.

Fix it by reorganizing skcipher_walk_done() to skip the
scatterwalk_advance() and scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

	#include <linux/if_alg.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main()
	{
		struct sockaddr_alg addr = {
			.salg_type = "skcipher",
			.salg_name = "cbc(aes-generic)",
		};
		char buffer[4096] __attribute__((aligned(4096))) = { 0 };
		int fd;

		fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(fd, (void *)&addr, sizeof(addr));
		setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
		fd = accept(fd, NULL, NULL);
		write(fd, buffer, 15);
		read(fd, buffer, 15);
	}

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:04 +08:00
Eric Biggers 2a57c0be22 crypto: skcipher - remove unnecessary setting of walk->nbytes
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
make sense because actually walk->nbytes needs to be set to the length
of the first step in the walk, which may be less than walk->total.  This
is done by skcipher_walk_next() which is called immediately afterwards.
Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(),
which is a better default value in case it's forgotten to be set later.

Therefore, remove the unnecessary assignment to walk->nbytes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:04 +08:00
Eric Biggers 0567fc9e90 crypto: skcipher - fix aligning block size in skcipher_copy_iv()
The ALIGN() macro needs to be passed the alignment, not the alignmask
(which is the alignment minus 1).

Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03 18:06:03 +08:00
Denis Efremov e4e4730698 crypto: skcipher - remove the exporting of skcipher_walk_next
The function skcipher_walk_next is declared static yet marked
EXPORT_SYMBOL_GPL. It's a bit confusing for an internal function to be
exported: the area of visibility of such a function is its own .c file
plus all other modules, so other .c files of the same module can't use
it even though all other modules can. Since this is an internal
function and not a crucial part of the API, the patch just removes the
EXPORT_SYMBOL_GPL marking from skcipher_walk_next.

Found by Linux Driver Verification project (linuxtesting.org).

Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-07-01 21:00:47 +08:00
Eric Biggers f8d33fac84 crypto: skcipher - prevent using skciphers without setting key
Similar to what was done for the hash API, update the skcipher API to
track whether each transform has been keyed, and reject
encryption/decryption if a key is needed but one hasn't been set.

This isn't as important as the equivalent fix for the hash API because
symmetric ciphers almost always require a key (the "null cipher" is the
only exception), so are unlikely to be used without one.  Still,
tracking the key will prevent accidental unkeyed use.  algif_skcipher
also had to track the key anyway, so the new flag replaces that and
simplifies the algif_skcipher implementation.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-12 23:03:39 +11:00
Eric Biggers 2b4f27c36b crypto: skcipher - set walk.iv for zero-length inputs
All the ChaCha20 algorithms as well as the ARM bit-sliced AES-XTS
algorithms call skcipher_walk_virt(), then access the IV (walk.iv)
before checking whether any bytes need to be processed (walk.nbytes).

But if the input is empty, then skcipher_walk_virt() doesn't set the IV,
and the algorithms crash trying to use the uninitialized IV pointer.

Fix it by setting the IV earlier in skcipher_walk_virt().  Also fix it
for the AEAD walk functions.

This isn't a perfect solution because we can't actually align the IV to
->cra_alignmask unless there are bytes to process, for one because the
temporary buffer for the aligned IV is freed by skcipher_walk_done(),
which is only called when there are bytes to process.  Thus, algorithms
that require aligned IVs will still need to avoid accessing the IV when
walk.nbytes == 0.  Still, many algorithms/architectures are fine with
IVs having any alignment, and even for those that aren't, a misaligned
pointer bug is much less severe than an uninitialized pointer bug.

This change also matches the behavior of the older blkcipher_walk API.

Fixes: 0cabf2af6f ("crypto: skcipher - Fix crash on zero-length input")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:29:53 +11:00
Linus Torvalds 43570f0383 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:

 - avoid potential bogus alignment for some AEAD operations

 - fix crash in algif_aead

 - avoid sleeping in softirq context with async af_alg

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: skcipher - Fix skcipher_walk_aead_common
  crypto: af_alg - remove locking in async callback
  crypto: algif_aead - skip SGL entries with NULL page
2017-11-28 16:22:10 -08:00