Commit Graph

20 Commits

Author SHA1 Message Date
Eric Biggers 7bcb2c99f8 crypto: algapi - use common mechanism for inheriting flags
The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
template is instantiated, the template will have CRYPTO_ALG_ASYNC set if
any of the algorithms it uses has CRYPTO_ALG_ASYNC set.

We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
"inherited" in the same way.  This is difficult because the handling of
CRYPTO_ALG_ASYNC is hardcoded everywhere.  Address this by:

  - Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
    have these inheritance semantics.

  - Add crypto_algt_inherited_mask(), for use by template ->create()
    methods.  It returns any of these flags that the user asked to be
    unset and thus must be passed in the 'mask' to crypto_grab_*().

  - Also modify crypto_check_attr_type() to handle computing the 'mask'
    so that most templates can just use this.

  - Make crypto_grab_*() propagate these flags to the template instance
    being created so that templates don't have to do this themselves.

Make crypto/simd.c propagate these flags too, since it "wraps" another
algorithm, similar to a template.

Based on a patch by Mikulas Patocka <mpatocka@redhat.com>
(https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16 21:49:08 +10:00
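For illustration (not part of the commit above), a template ->create() method under the new scheme might look roughly like this, assuming the post-change signatures of crypto_check_attr_type() and crypto_grab_skcipher():

  static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
  {
          struct skcipher_instance *inst;
          struct crypto_skcipher_spawn *spawn;
          u32 mask;
          int err;

          /* computes 'mask', including the inherited flags the user asked
           * to be cleared (CRYPTO_ALG_ASYNC, CRYPTO_ALG_ALLOCATES_MEMORY) */
          err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
          if (err)
                  return err;

          inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
          if (!inst)
                  return -ENOMEM;
          spawn = skcipher_instance_ctx(inst);

          /* grabbing the inner algorithm now also propagates the inherited
           * flags to the instance, so the template sets nothing by hand */
          err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                                     crypto_attr_alg_name(tb[1]), 0, mask);
          if (err) {
                  kfree(inst);
                  return err;
          }

          /* ... fill in inst->alg and register the instance ... */
          return 0;
  }
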
Eric Biggers 3ff2bab82f crypto: cts - simplify error handling in crypto_cts_create()
Simplify the error handling in crypto_cts_create() by taking advantage
of crypto_grab_skcipher() now handling an ERR_PTR() name and by taking
advantage of crypto_drop_skcipher() now accepting (as a no-op) a spawn
that hasn't been grabbed yet.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-03-06 12:28:23 +11:00
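A hedged sketch of the simpler shape this allows in crypto_cts_create(); the free-helper name is assumed here, not taken from the patch:

  err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                             crypto_attr_alg_name(tb[1]), 0, mask);
  if (err)
          goto err_free_inst;     /* an ERR_PTR() "name" just becomes err here */

  /* ... remaining setup; every failure jumps to err_free_inst ... */

  err_free_inst:
          /* crypto_drop_skcipher() is a no-op on a spawn that was never
           * grabbed, so the free helper can always drop it unconditionally */
          crypto_cts_free(inst);
          return err;
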
Eric Biggers b9f76dddb1 crypto: skcipher - pass instance to crypto_grab_skcipher()
Initializing a crypto_skcipher_spawn currently requires:

1. Set spawn->base.inst to point to the instance.
2. Call crypto_grab_skcipher().

But there's no reason for these steps to be separate, and in fact this
unneeded complication has caused at least one bug, the one fixed by
commit 6db4341017 ("crypto: adiantum - initialize crypto_spawn::inst").

So just make crypto_grab_skcipher() take the instance as an argument.

To keep the function calls from getting too unwieldy due to this extra
argument, also introduce a 'mask' variable into the affected places
which weren't already using one.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:54 +08:00
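For illustration, the two-step initialization this removes versus the single call (old and new signatures as I understand them):

  /* before: two separate steps, easy to forget the first one */
  crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
  err = crypto_grab_skcipher(spawn, cipher_name, 0, mask);

  /* after: the instance is an explicit argument to the grab */
  err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                             cipher_name, 0, mask);
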
Eric Biggers af5034e8e4 crypto: remove propagation of CRYPTO_TFM_RES_* flags
The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
->setkey() functions provide more information about errors.  But these
flags weren't actually being used or tested, and in many cases they
weren't being set correctly anyway.  So they've now been removed.

Also, if someone ever actually needs to start better distinguishing
->setkey() errors (which is somewhat unlikely, as this has been unneeded
for a long time), we'd be much better off just defining different return
values, like -EINVAL if the key is invalid for the algorithm vs.
-EKEYREJECTED if the key was rejected by a policy like "no weak keys".
That would be much simpler, less error-prone, and easier to test.

So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
propagates these flags around.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:53 +08:00
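An illustrative ->setkey() after the change (not taken from the patch): it simply returns an error code instead of additionally setting a CRYPTO_TFM_RES_* flag on the tfm.

  static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
                            unsigned int keylen)
  {
          if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
              keylen != AES_KEYSIZE_256)
                  return -EINVAL; /* was also: CRYPTO_TFM_RES_BAD_KEY_LEN */

          /* ... expand and store the key in the tfm context ... */
          return 0;
  }
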
Eric Biggers c4741b2305 crypto: run initcalls for generic implementations earlier
Use subsys_initcall for registration of all templates and generic
algorithm implementations, rather than module_init.  Then change
cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

This is needed so that when both a generic and optimized implementation
of an algorithm are built into the kernel (not loadable modules), the
generic implementation is registered before the optimized one.
Otherwise, the self-tests for the optimized implementation are unable to
allocate the generic implementation for the new comparison fuzz tests.

Note that on arm, a side effect of this change is that self-tests for
generic implementations may run before the unaligned access handler has
been installed.  So, unaligned accesses will crash the kernel.  This is
arguably a good thing as it makes it easier to detect that type of bug.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18 22:15:03 +08:00
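Sketch of the mechanical change for a generic implementation (the init function name here is illustrative):

  /* before: registered whenever the module_init() initcalls happen to run */
  module_init(example_generic_mod_init);

  /* after: registered at subsys_initcall time, so the generic algorithm
   * already exists when an optimized driver's self-test wants to allocate
   * it for comparison fuzzing */
  subsys_initcall(example_generic_mod_init);
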
Eric Biggers c31a871985 crypto: cts - don't support empty messages
My patches to make testmgr fuzz algorithms against their generic
implementation detected that the arm64 implementations of
"cts(cbc(aes))" handle empty messages differently from the cts template.
Namely, the arm64 implementations forbid (with -EINVAL) all messages
shorter than the block size, including the empty message; but the cts
template permits empty messages as a special case.

No user should be CTS-encrypting/decrypting empty messages, but we need
to keep the behavior consistent.  Unfortunately, as noted in the source
of OpenSSL's CTS implementation [1], there's no common specification for
CTS.  This makes it somewhat debatable what the behavior should be.

However, all CTS specifications seem to agree that messages shorter than
the block size are not allowed, and OpenSSL follows this in both CTS
conventions it implements.  It would also simplify the user-visible
semantics to have empty messages no longer be a special case.

Therefore, make the cts template return -EINVAL on *all* messages
shorter than the block size, including the empty message.

[1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 14:42:55 +08:00
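A hedged sketch of the resulting behaviour at the top of the cts encrypt/decrypt paths:

  static int cts_encrypt_sketch(struct skcipher_request *req)
  {
          struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
          unsigned int bsize = crypto_skcipher_blocksize(tfm);

          if (req->cryptlen < bsize)      /* now also rejects cryptlen == 0 */
                  return -EINVAL;

          /* ... normal CTS processing ... */
          return 0;
  }
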
Salvatore Mesoraca 6650c4de68 crypto: remove several VLAs
We avoid various VLAs[1] by using constant expressions for block size
and alignment mask.

[1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-21 00:58:34 +08:00
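Illustrative before/after, assuming the compile-time bound introduced by this series is MAX_CIPHER_BLOCKSIZE:

  u8 tmp_vla[crypto_skcipher_blocksize(tfm)];   /* before: a VLA */
  u8 tmp[MAX_CIPHER_BLOCKSIZE];                 /* after: constant upper bound */
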
Gilad Ben-Yossef 4e5b0ad582 crypto: remove redundant backlog checks on EBUSY
Now that the -EBUSY return code only indicates backlog queueing,
we can safely remove the now-redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-03 22:11:17 +08:00
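A hedged sketch of a typical caller after this series, using the crypto_wait_req() helper from the same rework:

  DECLARE_CRYPTO_WAIT(wait);

  skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                crypto_req_done, &wait);
  /* -EBUSY now always means "queued on the backlog", so no separate
   * CRYPTO_TFM_REQ_MAY_BACKLOG re-check of the return value is needed */
  err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
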
Ard Biesheuvel db91af0fbe crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic
Instead of unconditionally forcing 4 byte alignment for all generic
chaining modes that rely on crypto_xor() or crypto_inc() (which may
result in unnecessary copying of data when the underlying hardware
can perform unaligned accesses efficiently), make those functions
deal with unaligned input explicitly, but only if the Kconfig symbol
HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

For crypto_inc(), this simply involves making the 4-byte stride
conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
it typically operates on 16 byte buffers.

For crypto_xor(), an algorithm is implemented that simply runs through
the input using the largest strides possible if unaligned accesses are
allowed. If they are not, an optimal sequence of memory accesses is
emitted that takes the relative alignment of the input buffers into
account, e.g., if the relative misalignment of dst and src is 4 bytes,
the entire xor operation will be completed using 4 byte loads and stores
(modulo unaligned bits at the start and end). Note that all expressions
involving misalign are simply eliminated by the compiler when
HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-11 17:52:28 +08:00
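A simplified sketch of the idea (not the kernel's exact code): use word-sized strides when unaligned accesses are cheap, and fall back to byte-wise XOR otherwise and for the tail.

  static void xor_sketch(u8 *dst, const u8 *src, unsigned int len)
  {
          if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                  while (len >= sizeof(unsigned long)) {
                          *(unsigned long *)dst ^= *(const unsigned long *)src;
                          dst += sizeof(unsigned long);
                          src += sizeof(unsigned long);
                          len -= sizeof(unsigned long);
                  }
          }
          while (len--)   /* tail, or everything when unaligned access is slow */
                  *dst++ ^= *src++;
  }
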
Gideon Israel Dsouza d8c34b949d crypto: Replaced gcc specific attributes with macros from compiler.h
Continuing from this commit: 52f5684c8e
("kernel: use macros from compiler.h instead of __attribute__((...))")

I submitted 4 patches in total. They are part of a task I've taken up to
increase compiler portability in the kernel. I've already cleaned up the
subsystems under /kernel, /mm, /block and /security; this patch targets
/crypto.

<linux/compiler.h> provides macros for various gcc-specific constructs,
e.g. __weak for __attribute__((weak)). I've replaced all instances of
gcc-specific attributes with the corresponding macros for the crypto
subsystem.

I had to make one additional change to compiler-gcc.h for the case where
one wants to use __attribute__((aligned)) without specifying an alignment
factor. From the gcc docs, this results in the largest alignment for
that data type on the target machine, so I've named the macro
__aligned_largest. Please advise if another name is more appropriate.

Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-01-13 00:24:39 +08:00
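Illustrative before/after for the unparameterized aligned case the commit describes:

  struct example_before {
          u8 buf[3];
  } __attribute__((aligned));     /* raw gcc attribute, no alignment factor */

  struct example_after {
          u8 buf[3];
  } __aligned_largest;            /* new macro added to compiler-gcc.h */
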
Eric Biggers 60425a8bad crypto: skcipher - Get rid of crypto_spawn_skcipher2()
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent.  So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 08:37:17 +08:00
Eric Biggers a35528eca0 crypto: skcipher - Get rid of crypto_grab_skcipher2()
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent.  So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 08:37:16 +08:00
Herbert Xu 0605c41cc5 crypto: cts - Convert to skcipher
This patch converts cts over to the skcipher interface.  It also
optimises the implementation to use one CBC operation for all but
the last block, which is then processed separately.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:44 +08:00
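A hedged sketch of the optimisation; crypto_cts_final_sketch() is a hypothetical stand-in for the separate handling of the last block.

  /* everything up to, but not including, the final (full or partial) block */
  unsigned int bulk = rounddown(req->cryptlen - 1, bsize);

  skcipher_request_set_crypt(subreq, req->src, req->dst, bulk, req->iv);
  err = crypto_skcipher_encrypt(subreq);        /* one CBC call for the bulk */
  if (!err)
          err = crypto_cts_final_sketch(req);   /* ciphertext stealing on the tail */
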
Herbert Xu 988dc01744 crypto: cts - Weed out non-CBC algorithms
The cts algorithm as currently implemented assumes that the underlying
algorithm is a CBC-mode algorithm.  So this patch adds a check for that
to eliminate bogus combinations of cts with non-CBC modes.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-01-20 14:44:15 +11:00
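Hedged sketch of the added check, with field and label names as in the blkcipher-era code assumed here:

  /* only allow cts(X) when the inner algorithm X is a CBC instance */
  inst = ERR_PTR(-EINVAL);
  if (strncmp(alg->cra_name, "cbc(", 4))
          goto out_put_alg;
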
Herbert Xu 0c5c8e646c crypto: cts - Remove bogus use of seqiv
The seqiv generator is completely inappropriate for cts as it's
designed for IPsec algorithms.  Since cts users do not actually
use the IV generator we can just fall back to the default.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Maciej Żenczykowski <zenczykowski@gmail.com>
2015-01-20 14:44:15 +11:00
Kees Cook 4943ba16bb crypto: include crypto- module prefix in template
This adds the module loading prefix "crypto-" to the template lookup
as well.

For example, attempting to load 'vfat(blowfish)' via AF_ALG now includes
the "crypto-" prefix at every level, correctly rejecting "vfat":

	net-pf-38
	algif-hash
	crypto-vfat(blowfish)
	crypto-vfat(blowfish)-all
	crypto-vfat

Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-11-26 20:06:30 +08:00
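Roughly, the convention this completes (illustrative lines, not the exact diff): crypto modules export a "crypto-"-prefixed alias, and lookups request only the prefixed names, so an unrelated module such as "vfat" can never be pulled in through the crypto API.

  MODULE_ALIAS_CRYPTO("cts");               /* exported by the crypto module */

  request_module("crypto-%s", name);        /* used on the lookup side */
  request_module("crypto-%s-all", name);
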
Daniel Borkmann 7185ad2672 crypto: memzero_explicit - make sure to clear out sensitive data
Recently, in commit 13aa93c70e71 ("random: add and use memzero_explicit()
for clearing data"), we have found that GCC may optimize some memset()
cases away when it detects that a stack variable is no longer being used
and is going out of scope. This can happen, for example, when we are
clearing out sensitive information such as keying material or
intermediate results from crypto computations.

With the help of Coccinelle, we can find and fix such occurrences
in the crypto subsystem as well. Julia Lawall provided the following
Coccinelle program:

  @@
  type T;
  identifier x;
  @@

  T x;
  ... when exists
      when any
  -memset
  +memzero_explicit
     (&x,
  -0,
     ...)
  ... when != x
      when strict

  @@
  type T;
  identifier x;
  @@

  T x[...];
  ... when exists
      when any
  -memset
  +memzero_explicit
     (x,
  -0,
     ...)
  ... when != x
      when strict

Therefore, make use of the drop-in replacement memzero_explicit() for
exactly such cases instead of using memset().

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2014-10-17 11:44:07 -04:00
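For example (illustrative buffer name), the replacement at a call site looks like:

  u8 round_keys[32];

  /* ... derive and use round_keys ... */

  /* memset(round_keys, 0, sizeof(round_keys)) could be elided here as a
   * dead store; memzero_explicit() is guaranteed to survive optimization */
  memzero_explicit(round_keys, sizeof(round_keys));
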
Julia Lawall 3e8afe35c3 crypto: use ERR_CAST
Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.

The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)

// <smpl>
@@
expression err,x;
@@
-       err = PTR_ERR(x);
        if (IS_ERR(x))
-                return ERR_PTR(err);
+                return ERR_CAST(x);
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-02-04 21:16:53 +08:00
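In C the transformation looks like this (illustrative variable name):

  /* before */
  if (IS_ERR(alg))
          return ERR_PTR(PTR_ERR(alg));

  /* after */
  if (IS_ERR(alg))
          return ERR_CAST(alg);
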
Alexey Dobriyan c4913c7b71 [CRYPTO] cts: Init SG tables
Steps to reproduce:

	modprobe tcrypt		# with CONFIG_DEBUG_SG=y

testing cts(cbc(aes)) encryption
test 1 (128 bit key):
------------[ cut here ]------------
kernel BUG at include/linux/scatterlist.h:65!
invalid opcode: 0000 [1] PREEMPT SMP DEBUG_PAGEALLOC
CPU 0 
Modules linked in: tea xts twofish twofish_common tcrypt(+) [maaaany]
Pid: 16151, comm: modprobe Not tainted 2.6.26-rc4-fat #7
RIP: 0010:[<ffffffffa0bf032e>]  [<ffffffffa0bf032e>] :cts:cts_cbc_encrypt+0x151/0x355
RSP: 0018:ffff81016f497a88  EFLAGS: 00010286
RAX: ffffe20009535d58 RBX: ffff81016f497af0 RCX: 0000000087654321
RDX: ffff8100010d4f28 RSI: ffff81016f497ee8 RDI: ffff81016f497ac0
RBP: ffff81016f497c38 R08: 0000000000000000 R09: 0000000000000011
R10: ffffffff00000008 R11: ffff8100010d4f28 R12: ffff81016f497ac0
R13: ffff81016f497b30 R14: 0000000000000010 R15: 0000000000000010
FS:  00007fac6fa276f0(0000) GS:ffffffff8060e000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f12ca7cc000 CR3: 000000016f441000 CR4: 00000000000026e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
Process modprobe (pid: 16151, threadinfo ffff81016f496000, task ffff8101755b4ae0)
Stack:  0000000000000001 ffff81016f496000 ffffffff80719f78 0000000000000001
 0000000000000001 ffffffff8020c87c ffff81016f99c918 20646c756f772049
 65687420656b696c 0000000000000020 0000000000000000 0000000033341102
Call Trace:
 [<ffffffff8020c87c>] ? restore_args+0x0/0x30
 [<ffffffffa04aa311>] ? :aes_generic:crypto_aes_expand_key+0x311/0x369
 [<ffffffff802ab453>] ? check_object+0x15a/0x213
 [<ffffffff802aad22>] ? init_object+0x6e/0x76
 [<ffffffff802ac3ae>] ? __slab_free+0xfc/0x371
 [<ffffffffa0bf05ed>] :cts:crypto_cts_encrypt+0xbb/0xca
 [<ffffffffa07108de>] ? :crypto_blkcipher:setkey+0xc7/0xec
 [<ffffffffa07110b8>] :crypto_blkcipher:async_encrypt+0x38/0x3a
 [<ffffffffa2ce9341>] :tcrypt:test_cipher+0x261/0x7c6
 [<ffffffffa2cfd9df>] :tcrypt:tcrypt_mod_init+0x9df/0x1b30
 [<ffffffff80261e35>] sys_init_module+0x9e/0x1b2
 [<ffffffff8020c15a>] system_call_after_swapgs+0x8a/0x8f
Code: 45 c0 e8 aa 24 63 df 48 c1 e8 0c 48 b9 00 00 00 00 00 e2 ff ff 48 8b 55 88 48 6b c0 68 48 01 c8 b9 21 43 65 87 48 39 4d 80 74 04 <0f> 0b eb fe f6 c2 01 74 04 0f 0b eb fe 83 e2 03 4c 89 ef 44 89 
RIP  [<ffffffffa0bf032e>] :cts:cts_cbc_encrypt+0x151/0x355
 RSP <ffff81016f497a88>
---[ end trace e8bahiarjand37fd ]---

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-06-02 15:46:51 +10:00
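Hedged sketch of the class of fix: on-stack scatterlists must be initialised so that the magic value checked under CONFIG_DEBUG_SG is present.

  struct scatterlist sg[1];

  sg_init_table(sg, 1);           /* sets the end marker and debug magic */
  sg_set_buf(sg, tmp, bsize);     /* 'tmp' and 'bsize' are illustrative */
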
Kevin Coffman 76cb952179 [CRYPTO] cts: Add CTS mode required for Kerberos AES support
Implement CTS wrapper for CBC mode required for support of AES
encryption support for Kerberos (rfc3962).

Signed-off-by: Kevin Coffman <kwc@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:23 +08:00
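Illustrative usage with the blkcipher API of that era: the rfc3962 cipher Kerberos needs can now be allocated by name.

  struct crypto_blkcipher *tfm;

  tfm = crypto_alloc_blkcipher("cts(cbc(aes))", 0, 0);
  if (IS_ERR(tfm))
          return PTR_ERR(tfm);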