This update includes the following changes:

API:
 
 - Add virtual-address based lskcipher interface.
 - Optimise ahash/shash performance in light of costly indirect calls.
 - Remove ahash alignmask attribute.
 
 Algorithms:
 
 - Improve AES/XTS performance of 6-way unrolling for ppc.
 - Remove some uses of obsolete algorithms (md4, md5, sha1).
 - Add FIPS 202 SHA-3 support in pkcs1pad.
 - Add fast path for single-page messages in adiantum.
 - Remove zlib-deflate.
 
 Drivers:
 
 - Add support for S4 in meson RNG driver.
 - Add STM32MP13x support in stm32.
 - Add hwrng interface support in qcom-rng.
 - Add support for deflate algorithm in hisilicon/zip.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEn51F/lCuNhUwmDeSxycdCkmxi6cFAmVB3vgACgkQxycdCkmx
 i6dsOBAAykbnX8BpnpnOXYywE9ZWrl98rAk51MK0N9olZNfg78zRPIv7fFxFdC20
 SDJrDSNPmn0Qvaa5e0EfoAdklsm0k2GkXL/BwPKMKWUsyIoJVYI3WrBMnjBy9xMp
 yfME+h0bKoXJCZKnYkIUSGUejmUPSyRlEylrXoFlH/VWYwAaii/x9zwreQoF+0LR
 KI24A1q8AYs6Dw9HSfndaAub9GOzrqKYs6fSaMG+77Y4UC5aoi5J9Bp2G3uVyHay
 x/0bZtIxKXS9wn+LeG/3GspX23x/I5VwBOdAoMigrYmAIaIg5qgyMszudltTAs4R
 zF1Kh7WsnM5+vpnBSeigzo+/GGOU3QTz8y3tBTg+3ZR7GWGOwQLiizhOYqCyOfAH
 pIm6c++sZw/OOHiL69Nt4HeLKzGNYYWk3s4X/B/6cqoouPfOsfBaQobZNx9zfy7q
 ZNEvSVBjrFX/L6wDSotny1LTWLUNjHbmLaMV5uQZ/SQKEtv19fp2Dl7SsLkHH+3v
 ldOAwfoJR6QcSwz3Ez02TUAvQhtP172Hnxi7u44eiZu2aUboLhCFr7aEU6kVdBCx
 1rIRVHD1oqlOEDRwPRXzhF3I8R4QDORJIxZ6UUhg7yueuI+XCGDsBNC+LqBrBmSR
 IbdjqmSDUBhJyM5yMnt1VFYhqKQ/ZzwZ3JQviwW76Es9pwEIolM=
 =IZmR
 -----END PGP SIGNATURE-----

Merge tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Add virtual-address based lskcipher interface
   - Optimise ahash/shash performance in light of costly indirect calls
   - Remove ahash alignmask attribute

  Algorithms:
   - Improve AES/XTS performance of 6-way unrolling for ppc
   - Remove some uses of obsolete algorithms (md4, md5, sha1)
   - Add FIPS 202 SHA-3 support in pkcs1pad
   - Add fast path for single-page messages in adiantum
   - Remove zlib-deflate

  Drivers:
   - Add support for S4 in meson RNG driver
   - Add STM32MP13x support in stm32
   - Add hwrng interface support in qcom-rng
   - Add support for deflate algorithm in hisilicon/zip"

* tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (283 commits)
  crypto: adiantum - flush destination page before unmapping
  crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place
  Documentation/module-signing.txt: bring up to date
  module: enable automatic module signing with FIPS 202 SHA-3
  crypto: asymmetric_keys - allow FIPS 202 SHA-3 signatures
  crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support
  crypto: FIPS 202 SHA-3 register in hash info for IMA
  x509: Add OIDs for FIPS 202 SHA-3 hash and signatures
  crypto: ahash - optimize performance when wrapping shash
  crypto: ahash - check for shash type instead of not ahash type
  crypto: hash - move "ahash wrapping shash" functions to ahash.c
  crypto: talitos - stop using crypto_ahash::init
  crypto: chelsio - stop using crypto_ahash::init
  crypto: ahash - improve file comment
  crypto: ahash - remove struct ahash_request_priv
  crypto: ahash - remove crypto_ahash_alignmask
  crypto: gcm - stop using alignmask of ahash
  crypto: chacha20poly1305 - stop using alignmask of ahash
  crypto: ccm - stop using alignmask of ahash
  net: ipv6: stop checking crypto_ahash_alignmask
  ...
Linus Torvalds 2023-11-02 16:15:30 -10:00
commit bc3012f4e3
275 changed files with 10690 additions and 3351 deletions


@@ -1,4 +1,4 @@
-What: /sys/kernel/debug/qat_<device>_<BDF>/qat/fw_counters
+What: /sys/kernel/debug/qat_<device>_<BDF>/fw_counters
 Date: November 2023
 KernelVersion: 6.6
 Contact: qat-linux@intel.com
@@ -59,3 +59,25 @@ Description: (RO) Read returns the device health status.
 The driver does not monitor for Heartbeat. It is left for a user
 to poll the status periodically.
What: /sys/kernel/debug/qat_<device>_<BDF>/pm_status
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description: (RO) Read returns power management information specific to the
QAT device.
This attribute is only available for qat_4xxx devices.
What: /sys/kernel/debug/qat_<device>_<BDF>/cnv_errors
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description: (RO) Read returns, for each Acceleration Engine (AE), the number
of errors and the type of the last error detected by the device
when performing verified compression.
Reported counters::
<N>: Number of Compress and Verify (CnV) errors and type
of the last CnV error detected by Acceleration
Engine N.


@@ -29,6 +29,8 @@ Description: (RW) Reports the current configuration of the QAT device.
 services
 * asym;sym: identical to sym;asym
 * dc: the device is configured for running compression services
+* dcc: identical to dc but enables the dc chaining feature,
+  hash then compression. If this is not required chose dc
 * sym: the device is configured for running symmetric crypto
 services
 * asym: the device is configured for running asymmetric crypto
@@ -93,3 +95,49 @@ Description: (RW) This configuration option provides a way to force the device i
 0
 This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/rp2srv
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) This attribute provides a way for a user to query a
specific ring pair for the type of service that it is currently
configured for.
When written to, the value is cached and used to perform the
read operation. Allowed values are in the range 0 to N-1, where
N is the max number of ring pairs supported by a device. This
can be queried using the attribute qat/num_rps.
A read returns the service associated to the ring pair queried.
The values are:
* dc: the ring pair is configured for running compression services
* sym: the ring pair is configured for running symmetric crypto
services
* asym: the ring pair is configured for running asymmetric crypto
services
Example usage::
# echo 1 > /sys/bus/pci/devices/<BDF>/qat/rp2srv
# cat /sys/bus/pci/devices/<BDF>/qat/rp2srv
sym
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/num_rps
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RO) Returns the number of ring pairs that a single device has.
Example usage::
# cat /sys/bus/pci/devices/<BDF>/qat/num_rps
64
This attribute is only available for qat_4xxx devices.
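
Taken together, the new num_rps and rp2srv attributes let a user walk every
ring pair on a device and print the service it is bound to. A minimal sketch,
not part of the patch itself; the <BDF> path and the shell loop are
illustrative::

    N=$(cat /sys/bus/pci/devices/<BDF>/qat/num_rps)
    for rp in $(seq 0 $((N - 1))); do
            echo "$rp" > /sys/bus/pci/devices/<BDF>/qat/rp2srv
            echo "ring pair $rp: $(cat /sys/bus/pci/devices/<BDF>/qat/rp2srv)"
    done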


@@ -0,0 +1,41 @@
What: /sys/bus/pci/devices/<BDF>/qat_ras/errors_correctable
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description: (RO) Reports the number of correctable errors detected by the device.
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_ras/errors_nonfatal
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description: (RO) Reports the number of non fatal errors detected by the device.
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_ras/errors_fatal
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description: (RO) Reports the number of fatal errors detected by the device.
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_ras/reset_error_counters
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description: (WO) Write to resets all error counters of a device.
The following example reports how to reset the counters::
# echo 1 > /sys/bus/pci/devices/<BDF>/qat_ras/reset_error_counters
# cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_correctable
0
# cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_nonfatal
0
# cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_fatal
0
This attribute is only available for qat_4xxx devices.


@@ -0,0 +1,226 @@
What: /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(WO) This attribute is used to perform an operation on an SLA.
The supported operations are: add, update, rm, rm_all, and get.
Input values must be filled through the associated attribute in
this group before a write to this file.
If the operation completes successfully, the associated
attributes will be updated.
The associated attributes are: cir, pir, srv, rp, and id.
Supported operations:
* add: Creates a new SLA with the provided inputs from user.
* Inputs: cir, pir, srv, and rp
* Output: id
* get: Returns the configuration of the specified SLA in id attribute
* Inputs: id
* Outputs: cir, pir, srv, and rp
* update: Updates the SLA with new values set in the following attributes
* Inputs: id, cir, and pir
* rm: Removes the specified SLA in the id attribute.
* Inputs: id
* rm_all: Removes all the configured SLAs.
* Inputs: None
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/rp
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) When read, reports the current assigned ring pairs for the
queried SLA.
When wrote to, configures the ring pairs associated to a new SLA.
The value is a 64-bit bit mask and is written/displayed in hex.
Each bit of this mask represents a single ring pair i.e.,
bit 1 == ring pair id 0; bit 3 == ring pair id 2.
Selected ring pairs must to be assigned to a single service,
i.e. the one provided with the srv attribute. The service
assigned to a certain ring pair can be checked by querying
the attribute qat/rp2srv.
The maximum number of ring pairs is 4 per SLA.
Applicability in sla_op:
* WRITE: add operation
* READ: get operation
Example usage::
## Read
# echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
# cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
0x5
## Write
# echo 0x5 > /sys/bus/pci/devices/<BDF>/qat_rl/rp
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/id
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) If written to, the value is used to retrieve a particular
SLA and operate on it.
This is valid only for the following operations: update, rm,
and get.
A read of this attribute is only guaranteed to have correct data
after creation of an SLA.
Applicability in sla_op:
* WRITE: rm and update operations
* READ: add and get operations
Example usage::
## Read
## Set attributes e.g. cir, pir, srv, etc
# echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/id
4
## Write
# echo 7 > /sys/bus/pci/devices/<BDF>/qat_rl/id
# echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
0x5 ## ring pair ID 0 and ring pair ID 2
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/cir
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) Committed information rate (CIR). Rate guaranteed to be
achieved by a particular SLA. The value is expressed in
permille scale, i.e. 1000 refers to the maximum device
throughput for a selected service.
After sending a "get" to sla_op, this will be populated with the
CIR for that queried SLA.
Write to this file before sending an "add/update" sla_op, to set
the SLA to the specified value.
Applicability in sla_op:
* WRITE: add and update operations
* READ: get operation
Example usage::
## Write
# echo 500 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
# echo "add" /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
## Read
# echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
# echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/cir
500
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/pir
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) Peak information rate (PIR). The maximum rate that can be
achieved by that particular SLA. An SLA can reach a value
between CIR and PIR when the device is not fully utilized by
requests from other users (assigned to different SLAs).
After sending a "get" to sla_op, this will be populated with the
PIR for that queried SLA.
Write to this file before sending an "add/update" sla_op, to set
the SLA to the specified value.
Applicability in sla_op:
* WRITE: add and update operations
* READ: get operation
Example usage::
## Write
# echo 750 > /sys/bus/pci/devices/<BDF>/qat_rl/pir
# echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
## Read
# echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
# echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/pir
750
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/srv
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) Service (SRV). Represents the service (sym, asym, dc)
associated to an SLA.
Can be written to or queried to set/show the SRV type for an SLA.
The SRV attribute is used to specify the SRV type before adding
an SLA. After an SLA is configured, reports the service
associated to that SLA.
Applicability in sla_op:
* WRITE: add and update operations
* READ: get operation
Example usage::
## Write
# echo "dc" > /sys/bus/pci/devices/<BDF>/qat_rl/srv
# echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/id
4
## Read
# echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
# echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/srv
dc
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
Date: January 2024
KernelVersion: 6.7
Contact: qat-linux@intel.com
Description:
(RW) This file will return the remaining capability for a
particular service/sla. This is the remaining value that a new
SLA can be set to or a current SLA can be increased with.
Example usage::
# echo "asym" > /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
# cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
250
# echo 250 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
# echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
# cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
0
This attribute is only available for qat_4xxx devices.
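
The qat_rl attributes above are meant to be used together: inputs are staged
in rp, srv, cir and pir, the operation is then triggered through sla_op, and
the resulting SLA handle is read back from id. A minimal sketch of the "add"
flow, not part of the patch itself; the <BDF> path and the numeric values are
illustrative::

    # echo 0x5 > /sys/bus/pci/devices/<BDF>/qat_rl/rp     ## ring pairs 0 and 2
    # echo "sym" > /sys/bus/pci/devices/<BDF>/qat_rl/srv
    # echo 500 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
    # echo 750 > /sys/bus/pci/devices/<BDF>/qat_rl/pir
    # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
    # cat /sys/bus/pci/devices/<BDF>/qat_rl/id             ## id of the new SLA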


@@ -28,10 +28,10 @@ trusted userspace bits.
 This facility uses X.509 ITU-T standard certificates to encode the public keys
 involved. The signatures are not themselves encoded in any industrial standard
-type. The facility currently only supports the RSA public key encryption
-standard (though it is pluggable and permits others to be used). The possible
-hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
-SHA-512 (the algorithm is selected by data in the signature).
+type. The built-in facility currently only supports the RSA & NIST P-384 ECDSA
+public key signing standard (though it is pluggable and permits others to be
+used). The possible hash algorithms that can be used are SHA-2 and SHA-3 of
+sizes 256, 384, and 512 (the algorithm is selected by data in the signature).
 ==========================
@@ -81,11 +81,12 @@ This has a number of options available:
 sign the modules with:
 =============================== ==========================================
-``CONFIG_MODULE_SIG_SHA1`` :menuselection:`Sign modules with SHA-1`
-``CONFIG_MODULE_SIG_SHA224`` :menuselection:`Sign modules with SHA-224`
 ``CONFIG_MODULE_SIG_SHA256`` :menuselection:`Sign modules with SHA-256`
 ``CONFIG_MODULE_SIG_SHA384`` :menuselection:`Sign modules with SHA-384`
 ``CONFIG_MODULE_SIG_SHA512`` :menuselection:`Sign modules with SHA-512`
+``CONFIG_MODULE_SIG_SHA3_256`` :menuselection:`Sign modules with SHA3-256`
+``CONFIG_MODULE_SIG_SHA3_384`` :menuselection:`Sign modules with SHA3-384`
+``CONFIG_MODULE_SIG_SHA3_512`` :menuselection:`Sign modules with SHA3-512`
 =============================== ==========================================
 The algorithm selected here will also be built into the kernel (rather
@@ -145,6 +146,10 @@ into vmlinux) using parameters in the::
 file (which is also generated if it does not already exist).
+One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``) and ECDSA
+(``MODULE_SIG_KEY_TYPE_ECDSA``) to generate either RSA 4k or NIST
+P-384 keypair.
 It is strongly recommended that you provide your own x509.genkey file.
 Most notably, in the x509.genkey file, the req_distinguished_name section
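
A minimal sketch, not taken from the diff above, of selecting one of the new
SHA-3 hashes for module signing and inspecting the result, assuming a 6.7+
tree with kmod installed; the module path is only an example::

    scripts/config --enable MODULE_SIG --enable MODULE_SIG_ALL \
                   --enable MODULE_SIG_SHA3_384
    make olddefconfig modules
    modinfo -F sig_hashalgo drivers/char/hw_random/rng-core.ko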


@@ -235,6 +235,4 @@ Specifics Of Asynchronous HASH Transformation
 Some of the drivers will want to use the Generic ScatterWalk in case the
 implementation needs to be fed separate chunks of the scatterlist which
-contains the input data. The buffer containing the resulting hash will
-always be properly aligned to .cra_alignmask so there is no need to
-worry about this.
+contains the input data.


@@ -4,7 +4,7 @@
 $id: http://devicetree.org/schemas/crypto/fsl-imx-sahara.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
-title: Freescale SAHARA Cryptographic Accelerator included in some i.MX chips
+title: Freescale SAHARA Cryptographic Accelerator
 maintainers:
 - Steffen Trumtrar <s.trumtrar@pengutronix.de>
@@ -19,19 +19,56 @@ properties:
 maxItems: 1
 interrupts:
-maxItems: 1
+items:
+- description: SAHARA Interrupt for Host 0
+- description: SAHARA Interrupt for Host 1
+minItems: 1
+clocks:
+items:
+- description: Sahara IPG clock
+- description: Sahara AHB clock
+clock-names:
+items:
+- const: ipg
+- const: ahb
 required:
 - compatible
 - reg
 - interrupts
+- clocks
+- clock-names
+allOf:
+- if:
+properties:
+compatible:
+contains:
+enum:
+- fsl,imx53-sahara
+then:
+properties:
+interrupts:
+minItems: 2
+maxItems: 2
+else:
+properties:
+interrupts:
+maxItems: 1
 additionalProperties: false
 examples:
 - |
+#include <dt-bindings/clock/imx27-clock.h>
 crypto@10025000 {
 compatible = "fsl,imx27-sahara";
-reg = < 0x10025000 0x800>;
+reg = <0x10025000 0x800>;
 interrupts = <75>;
+clocks = <&clks IMX27_CLK_SAHARA_IPG_GATE>,
+<&clks IMX27_CLK_SAHARA_AHB_GATE>;
+clock-names = "ipg", "ahb";
 };


@@ -13,6 +13,7 @@ properties:
 compatible:
 items:
 - enum:
+- qcom,sa8775p-inline-crypto-engine
 - qcom,sm8450-inline-crypto-engine
 - qcom,sm8550-inline-crypto-engine
 - const: qcom,inline-crypto-engine


@@ -11,9 +11,17 @@ maintainers:
 properties:
 compatible:
-enum:
-- qcom,prng # 8916 etc.
-- qcom,prng-ee # 8996 and later using EE
+oneOf:
+- enum:
+- qcom,prng # 8916 etc.
+- qcom,prng-ee # 8996 and later using EE
+- items:
+- enum:
+- qcom,sa8775p-trng
+- qcom,sc7280-trng
+- qcom,sm8450-trng
+- qcom,sm8550-trng
+- const: qcom,trng
 reg:
 maxItems: 1
@@ -28,8 +36,18 @@ properties:
 required:
 - compatible
 - reg
-- clocks
-- clock-names
+allOf:
+- if:
+not:
+properties:
+compatible:
+contains:
+const: qcom,trng
+then:
+required:
+- clocks
+- clock-names
 additionalProperties: false


@@ -14,6 +14,7 @@ properties:
 compatible:
 enum:
 - amlogic,meson-rng
+- amlogic,meson-s4-rng
 reg:
 maxItems: 1


@@ -15,7 +15,9 @@ maintainers:
 properties:
 compatible:
-const: st,stm32-rng
+enum:
+- st,stm32-rng
+- st,stm32mp13-rng
 reg:
 maxItems: 1
@@ -30,11 +32,27 @@ properties:
 type: boolean
 description: If set enable the clock detection management
+st,rng-lock-conf:
+type: boolean
+description: If set, the RNG configuration in RNG_CR, RNG_HTCR and
+RNG_NSCR will be locked.
 required:
 - compatible
 - reg
 - clocks
+allOf:
+- if:
+properties:
+compatible:
+contains:
+enum:
+- st,stm32-rng
+then:
+properties:
+st,rng-lock-conf: false
 additionalProperties: false
 examples:


@@ -908,7 +908,7 @@ F: drivers/crypto/ccp/
 F: include/linux/ccp.h
 AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER - SEV SUPPORT
-M: Brijesh Singh <brijesh.singh@amd.com>
+M: Ashish Kalra <ashish.kalra@amd.com>
 M: Tom Lendacky <thomas.lendacky@amd.com>
 L: linux-crypto@vger.kernel.org
 S: Supported


@@ -34,6 +34,14 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 return 0;
 }
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+const u8 *src, unsigned int srclen, u8 *out)
+{
+return crypto_nhpoly1305_init(desc) ?:
+nhpoly1305_neon_update(desc, src, srclen) ?:
+crypto_nhpoly1305_final(desc, out);
+}
 static struct shash_alg nhpoly1305_alg = {
 .base.cra_name = "nhpoly1305",
 .base.cra_driver_name = "nhpoly1305-neon",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
 .init = crypto_nhpoly1305_init,
 .update = nhpoly1305_neon_update,
 .final = crypto_nhpoly1305_final,
+.digest = nhpoly1305_neon_digest,
 .setkey = crypto_nhpoly1305_setkey,
 .descsize = sizeof(struct nhpoly1305_state),
 };


@@ -34,6 +34,14 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 return 0;
 }
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+const u8 *src, unsigned int srclen, u8 *out)
+{
+return crypto_nhpoly1305_init(desc) ?:
+nhpoly1305_neon_update(desc, src, srclen) ?:
+crypto_nhpoly1305_final(desc, out);
+}
 static struct shash_alg nhpoly1305_alg = {
 .base.cra_name = "nhpoly1305",
 .base.cra_driver_name = "nhpoly1305-neon",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
 .init = crypto_nhpoly1305_init,
 .update = nhpoly1305_neon_update,
 .final = crypto_nhpoly1305_final,
+.digest = nhpoly1305_neon_digest,
 .setkey = crypto_nhpoly1305_setkey,
 .descsize = sizeof(struct nhpoly1305_state),
 };


@@ -62,10 +62,10 @@
 .endm
 /*
-* int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+* int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
 * int blocks)
 */
-SYM_FUNC_START(sha1_ce_transform)
+SYM_FUNC_START(__sha1_ce_transform)
 /* load round constants */
 loadrc k0.4s, 0x5a827999, w6
 loadrc k1.4s, 0x6ed9eba1, w6
@@ -147,4 +147,4 @@ CPU_LE( rev32 v11.16b, v11.16b )
 str dgb, [x0, #16]
 mov w0, w2
 ret
-SYM_FUNC_END(sha1_ce_transform)
+SYM_FUNC_END(__sha1_ce_transform)


@ -29,18 +29,19 @@ struct sha1_ce_state {
extern const u32 sha1_ce_offsetof_count; extern const u32 sha1_ce_offsetof_count;
extern const u32 sha1_ce_offsetof_finalize; extern const u32 sha1_ce_offsetof_finalize;
asmlinkage int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src, asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
int blocks); int blocks);
static void __sha1_ce_transform(struct sha1_state *sst, u8 const *src, static void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
int blocks) int blocks)
{ {
while (blocks) { while (blocks) {
int rem; int rem;
kernel_neon_begin(); kernel_neon_begin();
rem = sha1_ce_transform(container_of(sst, struct sha1_ce_state, rem = __sha1_ce_transform(container_of(sst,
sst), src, blocks); struct sha1_ce_state,
sst), src, blocks);
kernel_neon_end(); kernel_neon_end();
src += (blocks - rem) * SHA1_BLOCK_SIZE; src += (blocks - rem) * SHA1_BLOCK_SIZE;
blocks = rem; blocks = rem;
@ -59,7 +60,7 @@ static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
return crypto_sha1_update(desc, data, len); return crypto_sha1_update(desc, data, len);
sctx->finalize = 0; sctx->finalize = 0;
sha1_base_do_update(desc, data, len, __sha1_ce_transform); sha1_base_do_update(desc, data, len, sha1_ce_transform);
return 0; return 0;
} }
@ -79,9 +80,9 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
*/ */
sctx->finalize = finalize; sctx->finalize = finalize;
sha1_base_do_update(desc, data, len, __sha1_ce_transform); sha1_base_do_update(desc, data, len, sha1_ce_transform);
if (!finalize) if (!finalize)
sha1_base_do_finalize(desc, __sha1_ce_transform); sha1_base_do_finalize(desc, sha1_ce_transform);
return sha1_base_finish(desc, out); return sha1_base_finish(desc, out);
} }
@ -93,7 +94,7 @@ static int sha1_ce_final(struct shash_desc *desc, u8 *out)
return crypto_sha1_finup(desc, NULL, 0, out); return crypto_sha1_finup(desc, NULL, 0, out);
sctx->finalize = 0; sctx->finalize = 0;
sha1_base_do_finalize(desc, __sha1_ce_transform); sha1_base_do_finalize(desc, sha1_ce_transform);
return sha1_base_finish(desc, out); return sha1_base_finish(desc, out);
} }


@@ -71,11 +71,11 @@
 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
 /*
-* void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+* int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
 * int blocks)
 */
 .text
-SYM_FUNC_START(sha2_ce_transform)
+SYM_FUNC_START(__sha256_ce_transform)
 /* load round constants */
 adr_l x8, .Lsha2_rcon
 ld1 { v0.4s- v3.4s}, [x8], #64
@@ -154,4 +154,4 @@ CPU_LE( rev32 v19.16b, v19.16b )
 3: st1 {dgav.4s, dgbv.4s}, [x0]
 mov w0, w2
 ret
-SYM_FUNC_END(sha2_ce_transform)
+SYM_FUNC_END(__sha256_ce_transform)


@ -30,18 +30,19 @@ struct sha256_ce_state {
extern const u32 sha256_ce_offsetof_count; extern const u32 sha256_ce_offsetof_count;
extern const u32 sha256_ce_offsetof_finalize; extern const u32 sha256_ce_offsetof_finalize;
asmlinkage int sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src, asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
int blocks); int blocks);
static void __sha2_ce_transform(struct sha256_state *sst, u8 const *src, static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
int blocks) int blocks)
{ {
while (blocks) { while (blocks) {
int rem; int rem;
kernel_neon_begin(); kernel_neon_begin();
rem = sha2_ce_transform(container_of(sst, struct sha256_ce_state, rem = __sha256_ce_transform(container_of(sst,
sst), src, blocks); struct sha256_ce_state,
sst), src, blocks);
kernel_neon_end(); kernel_neon_end();
src += (blocks - rem) * SHA256_BLOCK_SIZE; src += (blocks - rem) * SHA256_BLOCK_SIZE;
blocks = rem; blocks = rem;
@ -55,8 +56,8 @@ const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks); asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src, static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
int blocks) int blocks)
{ {
sha256_block_data_order(sst->state, src, blocks); sha256_block_data_order(sst->state, src, blocks);
} }
@ -68,10 +69,10 @@ static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
if (!crypto_simd_usable()) if (!crypto_simd_usable())
return sha256_base_do_update(desc, data, len, return sha256_base_do_update(desc, data, len,
__sha256_block_data_order); sha256_arm64_transform);
sctx->finalize = 0; sctx->finalize = 0;
sha256_base_do_update(desc, data, len, __sha2_ce_transform); sha256_base_do_update(desc, data, len, sha256_ce_transform);
return 0; return 0;
} }
@ -85,8 +86,8 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
if (!crypto_simd_usable()) { if (!crypto_simd_usable()) {
if (len) if (len)
sha256_base_do_update(desc, data, len, sha256_base_do_update(desc, data, len,
__sha256_block_data_order); sha256_arm64_transform);
sha256_base_do_finalize(desc, __sha256_block_data_order); sha256_base_do_finalize(desc, sha256_arm64_transform);
return sha256_base_finish(desc, out); return sha256_base_finish(desc, out);
} }
@ -96,9 +97,9 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
*/ */
sctx->finalize = finalize; sctx->finalize = finalize;
sha256_base_do_update(desc, data, len, __sha2_ce_transform); sha256_base_do_update(desc, data, len, sha256_ce_transform);
if (!finalize) if (!finalize)
sha256_base_do_finalize(desc, __sha2_ce_transform); sha256_base_do_finalize(desc, sha256_ce_transform);
return sha256_base_finish(desc, out); return sha256_base_finish(desc, out);
} }
@ -107,15 +108,22 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
struct sha256_ce_state *sctx = shash_desc_ctx(desc); struct sha256_ce_state *sctx = shash_desc_ctx(desc);
if (!crypto_simd_usable()) { if (!crypto_simd_usable()) {
sha256_base_do_finalize(desc, __sha256_block_data_order); sha256_base_do_finalize(desc, sha256_arm64_transform);
return sha256_base_finish(desc, out); return sha256_base_finish(desc, out);
} }
sctx->finalize = 0; sctx->finalize = 0;
sha256_base_do_finalize(desc, __sha2_ce_transform); sha256_base_do_finalize(desc, sha256_ce_transform);
return sha256_base_finish(desc, out); return sha256_base_finish(desc, out);
} }
static int sha256_ce_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha256_base_init(desc);
return sha256_ce_finup(desc, data, len, out);
}
static int sha256_ce_export(struct shash_desc *desc, void *out) static int sha256_ce_export(struct shash_desc *desc, void *out)
{ {
struct sha256_ce_state *sctx = shash_desc_ctx(desc); struct sha256_ce_state *sctx = shash_desc_ctx(desc);
@ -155,6 +163,7 @@ static struct shash_alg algs[] = { {
.update = sha256_ce_update, .update = sha256_ce_update,
.final = sha256_ce_final, .final = sha256_ce_final,
.finup = sha256_ce_finup, .finup = sha256_ce_finup,
.digest = sha256_ce_digest,
.export = sha256_ce_export, .export = sha256_ce_export,
.import = sha256_ce_import, .import = sha256_ce_import,
.descsize = sizeof(struct sha256_ce_state), .descsize = sizeof(struct sha256_ce_state),


@ -27,8 +27,8 @@ asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
unsigned int num_blks); unsigned int num_blks);
EXPORT_SYMBOL(sha256_block_data_order); EXPORT_SYMBOL(sha256_block_data_order);
static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src, static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
int blocks) int blocks)
{ {
sha256_block_data_order(sst->state, src, blocks); sha256_block_data_order(sst->state, src, blocks);
} }
@ -36,8 +36,8 @@ static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
asmlinkage void sha256_block_neon(u32 *digest, const void *data, asmlinkage void sha256_block_neon(u32 *digest, const void *data,
unsigned int num_blks); unsigned int num_blks);
static void __sha256_block_neon(struct sha256_state *sst, u8 const *src, static void sha256_neon_transform(struct sha256_state *sst, u8 const *src,
int blocks) int blocks)
{ {
sha256_block_neon(sst->state, src, blocks); sha256_block_neon(sst->state, src, blocks);
} }
@ -45,17 +45,15 @@ static void __sha256_block_neon(struct sha256_state *sst, u8 const *src,
static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *data, static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *data,
unsigned int len) unsigned int len)
{ {
return sha256_base_do_update(desc, data, len, return sha256_base_do_update(desc, data, len, sha256_arm64_transform);
__sha256_block_data_order);
} }
static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data, static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out) unsigned int len, u8 *out)
{ {
if (len) if (len)
sha256_base_do_update(desc, data, len, sha256_base_do_update(desc, data, len, sha256_arm64_transform);
__sha256_block_data_order); sha256_base_do_finalize(desc, sha256_arm64_transform);
sha256_base_do_finalize(desc, __sha256_block_data_order);
return sha256_base_finish(desc, out); return sha256_base_finish(desc, out);
} }
@ -98,7 +96,7 @@ static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
if (!crypto_simd_usable()) if (!crypto_simd_usable())
return sha256_base_do_update(desc, data, len, return sha256_base_do_update(desc, data, len,
__sha256_block_data_order); sha256_arm64_transform);
while (len > 0) { while (len > 0) {
unsigned int chunk = len; unsigned int chunk = len;
@ -114,7 +112,7 @@ static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
sctx->count % SHA256_BLOCK_SIZE; sctx->count % SHA256_BLOCK_SIZE;
kernel_neon_begin(); kernel_neon_begin();
sha256_base_do_update(desc, data, chunk, __sha256_block_neon); sha256_base_do_update(desc, data, chunk, sha256_neon_transform);
kernel_neon_end(); kernel_neon_end();
data += chunk; data += chunk;
len -= chunk; len -= chunk;
@ -128,13 +126,13 @@ static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
if (!crypto_simd_usable()) { if (!crypto_simd_usable()) {
if (len) if (len)
sha256_base_do_update(desc, data, len, sha256_base_do_update(desc, data, len,
__sha256_block_data_order); sha256_arm64_transform);
sha256_base_do_finalize(desc, __sha256_block_data_order); sha256_base_do_finalize(desc, sha256_arm64_transform);
} else { } else {
if (len) if (len)
sha256_update_neon(desc, data, len); sha256_update_neon(desc, data, len);
kernel_neon_begin(); kernel_neon_begin();
sha256_base_do_finalize(desc, __sha256_block_neon); sha256_base_do_finalize(desc, sha256_neon_transform);
kernel_neon_end(); kernel_neon_end();
} }
return sha256_base_finish(desc, out); return sha256_base_finish(desc, out);


@@ -102,11 +102,11 @@
 .endm
 /*
-* void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+* int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
 * int blocks)
 */
 .text
-SYM_FUNC_START(sha512_ce_transform)
+SYM_FUNC_START(__sha512_ce_transform)
 /* load state */
 ld1 {v8.2d-v11.2d}, [x0]
@@ -203,4 +203,4 @@ CPU_LE( rev64 v19.16b, v19.16b )
 3: st1 {v8.2d-v11.2d}, [x0]
 mov w0, w2
 ret
-SYM_FUNC_END(sha512_ce_transform)
+SYM_FUNC_END(__sha512_ce_transform)


@ -26,27 +26,27 @@ MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("sha384"); MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512"); MODULE_ALIAS_CRYPTO("sha512");
asmlinkage int sha512_ce_transform(struct sha512_state *sst, u8 const *src, asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
int blocks); int blocks);
asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks); asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks);
static void __sha512_ce_transform(struct sha512_state *sst, u8 const *src, static void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
int blocks) int blocks)
{ {
while (blocks) { while (blocks) {
int rem; int rem;
kernel_neon_begin(); kernel_neon_begin();
rem = sha512_ce_transform(sst, src, blocks); rem = __sha512_ce_transform(sst, src, blocks);
kernel_neon_end(); kernel_neon_end();
src += (blocks - rem) * SHA512_BLOCK_SIZE; src += (blocks - rem) * SHA512_BLOCK_SIZE;
blocks = rem; blocks = rem;
} }
} }
static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src, static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
int blocks) int blocks)
{ {
sha512_block_data_order(sst->state, src, blocks); sha512_block_data_order(sst->state, src, blocks);
} }
@ -54,8 +54,8 @@ static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
static int sha512_ce_update(struct shash_desc *desc, const u8 *data, static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
unsigned int len) unsigned int len)
{ {
sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
: __sha512_block_data_order; : sha512_arm64_transform;
sha512_base_do_update(desc, data, len, fn); sha512_base_do_update(desc, data, len, fn);
return 0; return 0;
@ -64,8 +64,8 @@ static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
static int sha512_ce_finup(struct shash_desc *desc, const u8 *data, static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out) unsigned int len, u8 *out)
{ {
sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
: __sha512_block_data_order; : sha512_arm64_transform;
sha512_base_do_update(desc, data, len, fn); sha512_base_do_update(desc, data, len, fn);
sha512_base_do_finalize(desc, fn); sha512_base_do_finalize(desc, fn);
@ -74,8 +74,8 @@ static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
static int sha512_ce_final(struct shash_desc *desc, u8 *out) static int sha512_ce_final(struct shash_desc *desc, u8 *out)
{ {
sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
: __sha512_block_data_order; : sha512_arm64_transform;
sha512_base_do_finalize(desc, fn); sha512_base_do_finalize(desc, fn);
return sha512_base_finish(desc, out); return sha512_base_finish(desc, out);


@@ -23,8 +23,8 @@ asmlinkage void sha512_block_data_order(u64 *digest, const void *data,
 unsigned int num_blks);
 EXPORT_SYMBOL(sha512_block_data_order);
-static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
+static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
 int blocks)
 {
 sha512_block_data_order(sst->state, src, blocks);
 }
@@ -32,17 +32,15 @@ static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
 static int sha512_update(struct shash_desc *desc, const u8 *data,
 unsigned int len)
 {
-return sha512_base_do_update(desc, data, len,
-__sha512_block_data_order);
+return sha512_base_do_update(desc, data, len, sha512_arm64_transform);
 }
 static int sha512_finup(struct shash_desc *desc, const u8 *data,
 unsigned int len, u8 *out)
 {
 if (len)
-sha512_base_do_update(desc, data, len,
-__sha512_block_data_order);
-sha512_base_do_finalize(desc, __sha512_block_data_order);
+sha512_base_do_update(desc, data, len, sha512_arm64_transform);
+sha512_base_do_finalize(desc, sha512_arm64_transform);
 return sha512_base_finish(desc, out);
 }


@@ -239,7 +239,6 @@ static struct shash_alg crc32_alg = {
 .cra_priority = 300,
 .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 .cra_blocksize = CHKSUM_BLOCK_SIZE,
-.cra_alignmask = 0,
 .cra_ctxsize = sizeof(struct chksum_ctx),
 .cra_module = THIS_MODULE,
 .cra_init = chksum_cra_init,
@@ -261,7 +260,6 @@ static struct shash_alg crc32c_alg = {
 .cra_priority = 300,
 .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 .cra_blocksize = CHKSUM_BLOCK_SIZE,
-.cra_alignmask = 0,
 .cra_ctxsize = sizeof(struct chksum_ctx),
 .cra_module = THIS_MODULE,
 .cra_init = chksumc_cra_init,


@@ -290,7 +290,6 @@ static struct shash_alg crc32_alg = {
 .cra_priority = 300,
 .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 .cra_blocksize = CHKSUM_BLOCK_SIZE,
-.cra_alignmask = 0,
 .cra_ctxsize = sizeof(struct chksum_ctx),
 .cra_module = THIS_MODULE,
 .cra_init = chksum_cra_init,
@@ -312,7 +311,6 @@ static struct shash_alg crc32c_alg = {
 .cra_priority = 300,
 .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 .cra_blocksize = CHKSUM_BLOCK_SIZE,
-.cra_alignmask = 0,
 .cra_ctxsize = sizeof(struct chksum_ctx),
 .cra_module = THIS_MODULE,
 .cra_init = chksum_cra_init,


@ -20,6 +20,7 @@
#include <asm/pstate.h> #include <asm/pstate.h>
#include <asm/elf.h> #include <asm/elf.h>
#include <asm/unaligned.h>
#include "opcodes.h" #include "opcodes.h"
@ -35,7 +36,7 @@ static int crc32c_sparc64_setkey(struct crypto_shash *hash, const u8 *key,
if (keylen != sizeof(u32)) if (keylen != sizeof(u32))
return -EINVAL; return -EINVAL;
*mctx = le32_to_cpup((__le32 *)key); *mctx = get_unaligned_le32(key);
return 0; return 0;
} }
@ -51,18 +52,26 @@ static int crc32c_sparc64_init(struct shash_desc *desc)
extern void crc32c_sparc64(u32 *crcp, const u64 *data, unsigned int len); extern void crc32c_sparc64(u32 *crcp, const u64 *data, unsigned int len);
static void crc32c_compute(u32 *crcp, const u64 *data, unsigned int len) static u32 crc32c_compute(u32 crc, const u8 *data, unsigned int len)
{ {
unsigned int asm_len; unsigned int n = -(uintptr_t)data & 7;
asm_len = len & ~7U; if (n) {
if (asm_len) { /* Data isn't 8-byte aligned. Align it. */
crc32c_sparc64(crcp, data, asm_len); n = min(n, len);
data += asm_len / 8; crc = __crc32c_le(crc, data, n);
len -= asm_len; data += n;
len -= n;
}
n = len & ~7U;
if (n) {
crc32c_sparc64(&crc, (const u64 *)data, n);
data += n;
len -= n;
} }
if (len) if (len)
*crcp = __crc32c_le(*crcp, (const unsigned char *) data, len); crc = __crc32c_le(crc, data, len);
return crc;
} }
static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data, static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
@ -70,19 +79,14 @@ static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
{ {
u32 *crcp = shash_desc_ctx(desc); u32 *crcp = shash_desc_ctx(desc);
crc32c_compute(crcp, (const u64 *) data, len); *crcp = crc32c_compute(*crcp, data, len);
return 0; return 0;
} }
static int __crc32c_sparc64_finup(u32 *crcp, const u8 *data, unsigned int len, static int __crc32c_sparc64_finup(const u32 *crcp, const u8 *data,
u8 *out) unsigned int len, u8 *out)
{ {
u32 tmp = *crcp; put_unaligned_le32(~crc32c_compute(*crcp, data, len), out);
crc32c_compute(&tmp, (const u64 *) data, len);
*(__le32 *) out = ~cpu_to_le32(tmp);
return 0; return 0;
} }
@ -96,7 +100,7 @@ static int crc32c_sparc64_final(struct shash_desc *desc, u8 *out)
{ {
u32 *crcp = shash_desc_ctx(desc); u32 *crcp = shash_desc_ctx(desc);
*(__le32 *) out = ~cpu_to_le32p(crcp); put_unaligned_le32(~*crcp, out);
return 0; return 0;
} }
@ -135,7 +139,6 @@ static struct shash_alg alg = {
.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
.cra_blocksize = CHKSUM_BLOCK_SIZE, .cra_blocksize = CHKSUM_BLOCK_SIZE,
.cra_ctxsize = sizeof(u32), .cra_ctxsize = sizeof(u32),
.cra_alignmask = 7,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
.cra_init = crc32c_sparc64_cra_init, .cra_init = crc32c_sparc64_cra_init,
} }


@@ -672,7 +672,7 @@ ALL_F: .octa 0xffffffffffffffffffffffffffffffff
 add %r13, %r10
 # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
 sub $16, %r10
-# Determine if if partial block is not being filled and
+# Determine if partial block is not being filled and
 # shift mask accordingly
 jge .L_no_extra_mask_1_\@
 sub %r10, %r12
@@ -708,7 +708,7 @@ ALL_F: .octa 0xffffffffffffffffffffffffffffffff
 add %r13, %r10
 # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
 sub $16, %r10
-# Determine if if partial block is not being filled and
+# Determine if partial block is not being filled and
 # shift mask accordingly
 jge .L_no_extra_mask_2_\@
 sub %r10, %r12


@@ -753,7 +753,7 @@ VARIABLE_OFFSET = 16*8
 add %r13, %r10
 # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
 sub $16, %r10
-# Determine if if partial block is not being filled and
+# Determine if partial block is not being filled and
 # shift mask accordingly
 jge .L_no_extra_mask_1_\@
 sub %r10, %r12
@@ -789,7 +789,7 @@ VARIABLE_OFFSET = 16*8
 add %r13, %r10
 # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
 sub $16, %r10
-# Determine if if partial block is not being filled and
+# Determine if partial block is not being filled and
 # shift mask accordingly
 jge .L_no_extra_mask_2_\@
 sub %r10, %r12


@ -61,8 +61,8 @@ struct generic_gcmaes_ctx {
}; };
struct aesni_xts_ctx { struct aesni_xts_ctx {
u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR; struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR; struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
}; };
#define GCM_BLOCK_LEN 16 #define GCM_BLOCK_LEN 16
@ -80,6 +80,13 @@ struct gcm_context_data {
u8 hash_keys[GCM_BLOCK_LEN * 16]; u8 hash_keys[GCM_BLOCK_LEN * 16];
}; };
static inline void *aes_align_addr(void *addr)
{
if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
return addr;
return PTR_ALIGN(addr, AESNI_ALIGN);
}
asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
unsigned int key_len); unsigned int key_len);
asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in); asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
@ -201,32 +208,24 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
static inline struct static inline struct
aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm) aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
{ {
unsigned long align = AESNI_ALIGN; return aes_align_addr(crypto_aead_ctx(tfm));
if (align <= crypto_tfm_ctx_alignment())
align = 1;
return PTR_ALIGN(crypto_aead_ctx(tfm), align);
} }
static inline struct static inline struct
generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm) generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
{ {
unsigned long align = AESNI_ALIGN; return aes_align_addr(crypto_aead_ctx(tfm));
if (align <= crypto_tfm_ctx_alignment())
align = 1;
return PTR_ALIGN(crypto_aead_ctx(tfm), align);
} }
#endif #endif
static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx) static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
{ {
unsigned long addr = (unsigned long)raw_ctx; return aes_align_addr(raw_ctx);
unsigned long align = AESNI_ALIGN; }
if (align <= crypto_tfm_ctx_alignment()) static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
align = 1; {
return (struct crypto_aes_ctx *)ALIGN(addr, align); return aes_align_addr(crypto_skcipher_ctx(tfm));
} }
static int aes_set_key_common(struct crypto_aes_ctx *ctx, static int aes_set_key_common(struct crypto_aes_ctx *ctx,
@ -881,7 +880,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key, static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen) unsigned int keylen)
{ {
struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm); struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int err; int err;
err = xts_verify_key(tfm, key, keylen); err = xts_verify_key(tfm, key, keylen);
@ -891,19 +890,18 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2; keylen /= 2;
/* first half of xts-key is for crypt */ /* first half of xts-key is for crypt */
err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen); err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
if (err) if (err)
return err; return err;
/* second half of xts-key is for tweak */ /* second half of xts-key is for tweak */
return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen, return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
keylen);
} }
static int xts_crypt(struct skcipher_request *req, bool encrypt) static int xts_crypt(struct skcipher_request *req, bool encrypt)
{ {
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm); struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int tail = req->cryptlen % AES_BLOCK_SIZE; int tail = req->cryptlen % AES_BLOCK_SIZE;
struct skcipher_request subreq; struct skcipher_request subreq;
struct skcipher_walk walk; struct skcipher_walk walk;
@ -939,7 +937,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin(); kernel_fpu_begin();
/* calculate first value of T */ /* calculate first value of T */
aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv); aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
while (walk.nbytes > 0) { while (walk.nbytes > 0) {
int nbytes = walk.nbytes; int nbytes = walk.nbytes;
@ -948,11 +946,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1); nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt) if (encrypt)
aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx), aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr, walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv); nbytes, walk.iv);
else else
aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx), aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr, walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv); nbytes, walk.iv);
kernel_fpu_end(); kernel_fpu_end();
@ -980,11 +978,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin(); kernel_fpu_begin();
if (encrypt) if (encrypt)
aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx), aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr, walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv); walk.nbytes, walk.iv);
else else
aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx), aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr, walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv); walk.nbytes, walk.iv);
kernel_fpu_end(); kernel_fpu_end();


@@ -34,6 +34,14 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
 return 0;
 }
+static int nhpoly1305_avx2_digest(struct shash_desc *desc,
+const u8 *src, unsigned int srclen, u8 *out)
+{
+return crypto_nhpoly1305_init(desc) ?:
+nhpoly1305_avx2_update(desc, src, srclen) ?:
+crypto_nhpoly1305_final(desc, out);
+}
 static struct shash_alg nhpoly1305_alg = {
 .base.cra_name = "nhpoly1305",
 .base.cra_driver_name = "nhpoly1305-avx2",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
 .init = crypto_nhpoly1305_init,
 .update = nhpoly1305_avx2_update,
 .final = crypto_nhpoly1305_final,
+.digest = nhpoly1305_avx2_digest,
 .setkey = crypto_nhpoly1305_setkey,
 .descsize = sizeof(struct nhpoly1305_state),
 };


@@ -34,6 +34,14 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
 return 0;
 }
+static int nhpoly1305_sse2_digest(struct shash_desc *desc,
+const u8 *src, unsigned int srclen, u8 *out)
+{
+return crypto_nhpoly1305_init(desc) ?:
+nhpoly1305_sse2_update(desc, src, srclen) ?:
+crypto_nhpoly1305_final(desc, out);
+}
 static struct shash_alg nhpoly1305_alg = {
 .base.cra_name = "nhpoly1305",
 .base.cra_driver_name = "nhpoly1305-sse2",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
 .init = crypto_nhpoly1305_init,
 .update = nhpoly1305_sse2_update,
 .final = crypto_nhpoly1305_final,
+.digest = nhpoly1305_sse2_digest,
 .setkey = crypto_nhpoly1305_setkey,
 .descsize = sizeof(struct nhpoly1305_state),
 };


@@ -24,8 +24,17 @@
 #include <linux/types.h>
 #include <crypto/sha1.h>
 #include <crypto/sha1_base.h>
+#include <asm/cpu_device_id.h>
 #include <asm/simd.h>
+static const struct x86_cpu_id module_cpu_ids[] = {
+X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+{}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
 static int sha1_update(struct shash_desc *desc, const u8 *data,
 unsigned int len, sha1_block_fn *sha1_xform)
 {
@@ -301,6 +310,9 @@ static inline void unregister_sha1_ni(void) { }
 static int __init sha1_ssse3_mod_init(void)
 {
+if (!x86_match_cpu(module_cpu_ids))
+return -ENODEV;
 if (register_sha1_ssse3())
 goto fail;


@@ -38,11 +38,20 @@
#include <crypto/sha2.h>
#include <crypto/sha256_base.h>
#include <linux/string.h>
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
const u8 *data, int blocks);
static const struct x86_cpu_id module_cpu_ids[] = {
X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
static int _sha256_update(struct shash_desc *desc, const u8 *data,
unsigned int len, sha256_block_fn *sha256_xform)
{
@@ -98,12 +107,20 @@ static int sha256_ssse3_final(struct shash_desc *desc, u8 *out)
return sha256_ssse3_finup(desc, NULL, 0, out);
}
static int sha256_ssse3_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha256_base_init(desc) ?:
sha256_ssse3_finup(desc, data, len, out);
}
static struct shash_alg sha256_ssse3_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_ssse3_update,
.final = sha256_ssse3_final,
.finup = sha256_ssse3_finup,
.digest = sha256_ssse3_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
@@ -163,12 +180,20 @@ static int sha256_avx_final(struct shash_desc *desc, u8 *out)
return sha256_avx_finup(desc, NULL, 0, out);
}
static int sha256_avx_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha256_base_init(desc) ?:
sha256_avx_finup(desc, data, len, out);
}
static struct shash_alg sha256_avx_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_avx_update,
.final = sha256_avx_final,
.finup = sha256_avx_finup,
.digest = sha256_avx_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
@@ -239,12 +264,20 @@ static int sha256_avx2_final(struct shash_desc *desc, u8 *out)
return sha256_avx2_finup(desc, NULL, 0, out);
}
static int sha256_avx2_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha256_base_init(desc) ?:
sha256_avx2_finup(desc, data, len, out);
}
static struct shash_alg sha256_avx2_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_avx2_update,
.final = sha256_avx2_final,
.finup = sha256_avx2_finup,
.digest = sha256_avx2_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
@@ -314,12 +347,20 @@ static int sha256_ni_final(struct shash_desc *desc, u8 *out)
return sha256_ni_finup(desc, NULL, 0, out);
}
static int sha256_ni_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha256_base_init(desc) ?:
sha256_ni_finup(desc, data, len, out);
}
static struct shash_alg sha256_ni_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_ni_update,
.final = sha256_ni_final,
.finup = sha256_ni_finup,
.digest = sha256_ni_digest,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
@@ -366,6 +407,9 @@ static inline void unregister_sha256_ni(void) { }
static int __init sha256_ssse3_mod_init(void)
{
if (!x86_match_cpu(module_cpu_ids))
return -ENODEV;
if (register_sha256_ssse3())
goto fail;

View File

@@ -30,9 +30,11 @@ config MODULE_SIG_KEY_TYPE_RSA
config MODULE_SIG_KEY_TYPE_ECDSA
bool "ECDSA"
select CRYPTO_ECDSA
depends on !(MODULE_SIG_SHA256 || MODULE_SIG_SHA3_256)
help
-Use an elliptic curve key (NIST P384) for module signing. Consider
-using a strong hash like sha256 or sha384 for hashing modules.
Use an elliptic curve key (NIST P384) for module signing. Use
a strong hash of same or higher bit length, i.e. sha384 or
sha512 for hashing modules.
Note: Remove all ECDSA signing keys, e.g. certs/signing_key.pem,
when falling back to building Linux 5.14 and older kernels.

View File

@@ -85,6 +85,7 @@ config CRYPTO_SKCIPHER
tristate
select CRYPTO_SKCIPHER2
select CRYPTO_ALGAPI
select CRYPTO_ECB
config CRYPTO_SKCIPHER2
tristate
@@ -689,7 +690,7 @@ config CRYPTO_CTS
config CRYPTO_ECB
tristate "ECB (Electronic Codebook)"
-select CRYPTO_SKCIPHER
select CRYPTO_SKCIPHER2
select CRYPTO_MANAGER
help
ECB (Electronic Codebook) mode (NIST SP800-38A)
@@ -1296,6 +1297,66 @@ config CRYPTO_JITTERENTROPY
See https://www.chronox.de/jent.html
choice
prompt "CPU Jitter RNG Memory Size"
default CRYPTO_JITTERENTROPY_MEMSIZE_2
depends on CRYPTO_JITTERENTROPY
help
The Jitter RNG measures the execution time of memory accesses.
Multiple consecutive memory accesses are performed. If the memory
size fits into a cache (e.g. L1), only the memory access timing
to that cache is measured. The closer the cache is to the CPU
the less variations are measured and thus the less entropy is
obtained. Thus, if the memory size fits into the L1 cache, the
obtained entropy is less than if the memory size fits within
L1 + L2, which in turn is less if the memory fits into
L1 + L2 + L3. Thus, by selecting a different memory size,
the entropy rate produced by the Jitter RNG can be modified.
config CRYPTO_JITTERENTROPY_MEMSIZE_2
bool "2048 Bytes (default)"
config CRYPTO_JITTERENTROPY_MEMSIZE_128
bool "128 kBytes"
config CRYPTO_JITTERENTROPY_MEMSIZE_1024
bool "1024 kBytes"
config CRYPTO_JITTERENTROPY_MEMSIZE_8192
bool "8192 kBytes"
endchoice
config CRYPTO_JITTERENTROPY_MEMORY_BLOCKS
int
default 64 if CRYPTO_JITTERENTROPY_MEMSIZE_2
default 512 if CRYPTO_JITTERENTROPY_MEMSIZE_128
default 1024 if CRYPTO_JITTERENTROPY_MEMSIZE_1024
default 4096 if CRYPTO_JITTERENTROPY_MEMSIZE_8192
config CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE
int
default 32 if CRYPTO_JITTERENTROPY_MEMSIZE_2
default 256 if CRYPTO_JITTERENTROPY_MEMSIZE_128
default 1024 if CRYPTO_JITTERENTROPY_MEMSIZE_1024
default 2048 if CRYPTO_JITTERENTROPY_MEMSIZE_8192
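The two derived symbols above always multiply back to the size named in the choice, which is a quick way to sanity-check a generated .config. A small illustrative breakdown (plain arithmetic, not kernel code):

/* CRYPTO_JITTERENTROPY_MEMORY_BLOCKS * CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE:
 *     64 *   32 =    2048 bytes  (MEMSIZE_2, the default)
 *    512 *  256 =  131072 bytes  (MEMSIZE_128, 128 kBytes)
 *   1024 * 1024 = 1048576 bytes  (MEMSIZE_1024, 1024 kBytes)
 *   4096 * 2048 = 8388608 bytes  (MEMSIZE_8192, 8192 kBytes)
 */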
config CRYPTO_JITTERENTROPY_OSR
int "CPU Jitter RNG Oversampling Rate"
range 1 15
default 1
depends on CRYPTO_JITTERENTROPY
help
The Jitter RNG allows the specification of an oversampling rate (OSR).
The Jitter RNG operation requires a fixed amount of timing
measurements to produce one output block of random numbers. The
OSR value is multiplied with the amount of timing measurements to
generate one output block. Thus, the timing measurement is oversampled
by the OSR factor. The oversampling allows the Jitter RNG to operate
on hardware whose timers deliver limited amount of entropy (e.g.
the timer is coarse) by setting the OSR to a higher value. The
trade-off, however, is that the Jitter RNG now requires more time
to generate random numbers.
config CRYPTO_JITTERENTROPY_TESTINTERFACE
bool "CPU Jitter RNG Test Interface"
depends on CRYPTO_JITTERENTROPY

View File

@@ -16,7 +16,11 @@ obj-$(CONFIG_CRYPTO_ALGAPI2) += crypto_algapi.o
obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
-obj-$(CONFIG_CRYPTO_SKCIPHER2) += skcipher.o
crypto_skcipher-y += lskcipher.o
crypto_skcipher-y += skcipher.o
obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o

View File

@@ -245,10 +245,9 @@ static void adiantum_hash_header(struct skcipher_request *req)
/* Hash the left-hand part (the "bulk") of the message using NHPoly1305 */
static int adiantum_hash_message(struct skcipher_request *req,
-struct scatterlist *sgl, le128 *digest)
struct scatterlist *sgl, unsigned int nents,
le128 *digest)
{
-struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
struct shash_desc *hash_desc = &rctx->u.hash_desc;
@@ -256,14 +255,11 @@ static int adiantum_hash_message(struct skcipher_request *req,
unsigned int i, n;
int err;
-hash_desc->tfm = tctx->hash;
err = crypto_shash_init(hash_desc);
if (err)
return err;
-sg_miter_start(&miter, sgl, sg_nents(sgl),
-SG_MITER_FROM_SG | SG_MITER_ATOMIC);
sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG | SG_MITER_ATOMIC);
for (i = 0; i < bulk_len; i += n) {
sg_miter_next(&miter);
n = min_t(unsigned int, miter.length, bulk_len - i);
@@ -285,6 +281,8 @@ static int adiantum_finish(struct skcipher_request *req)
const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
struct scatterlist *dst = req->dst;
const unsigned int dst_nents = sg_nents(dst);
le128 digest;
int err;
@@ -298,13 +296,32 @@ static int adiantum_finish(struct skcipher_request *req)
* enc: C_R = C_M - H_{K_H}(T, C_L)
* dec: P_R = P_M - H_{K_H}(T, P_L)
*/
-err = adiantum_hash_message(req, req->dst, &digest);
-if (err)
-return err;
-le128_add(&digest, &digest, &rctx->header_hash);
-le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
-scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->dst,
-bulk_len, BLOCKCIPHER_BLOCK_SIZE, 1);
rctx->u.hash_desc.tfm = tctx->hash;
le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
/* Fast path for single-page destination */
struct page *page = sg_page(dst);
void *virt = kmap_local_page(page) + dst->offset;
err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
(u8 *)&digest);
if (err) {
kunmap_local(virt);
return err;
}
le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
flush_dcache_page(page);
kunmap_local(virt);
} else {
/* Slow path that works for any destination scatterlist */
err = adiantum_hash_message(req, dst, dst_nents, &digest);
if (err)
return err;
le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
scatterwalk_map_and_copy(&rctx->rbuf.bignum, dst,
bulk_len, sizeof(le128), 1);
}
return 0;
}
@@ -324,6 +341,8 @@ static int adiantum_crypt(struct skcipher_request *req, bool enc)
const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
struct scatterlist *src = req->src;
const unsigned int src_nents = sg_nents(src);
unsigned int stream_len;
le128 digest;
int err;
@@ -339,12 +358,24 @@ static int adiantum_crypt(struct skcipher_request *req, bool enc)
* dec: C_M = C_R + H_{K_H}(T, C_L)
*/
adiantum_hash_header(req);
-err = adiantum_hash_message(req, req->src, &digest);
rctx->u.hash_desc.tfm = tctx->hash;
if (src_nents == 1 && src->offset + req->cryptlen <= PAGE_SIZE) {
/* Fast path for single-page source */
void *virt = kmap_local_page(sg_page(src)) + src->offset;
err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
(u8 *)&digest);
memcpy(&rctx->rbuf.bignum, virt + bulk_len, sizeof(le128));
kunmap_local(virt);
} else {
/* Slow path that works for any source scatterlist */
err = adiantum_hash_message(req, src, src_nents, &digest);
scatterwalk_map_and_copy(&rctx->rbuf.bignum, src,
bulk_len, sizeof(le128), 0);
}
if (err)
return err;
-le128_add(&digest, &digest, &rctx->header_hash);
-scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->src,
-bulk_len, BLOCKCIPHER_BLOCK_SIZE, 0);
le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
/* If encrypting, encrypt P_M with the block cipher to get C_M */
@@ -468,7 +499,7 @@ static void adiantum_free_instance(struct skcipher_instance *inst)
* Check for a supported set of inner algorithms.
* See the comment at the beginning of this file.
*/
-static bool adiantum_supported_algorithms(struct skcipher_alg *streamcipher_alg,
static bool adiantum_supported_algorithms(struct skcipher_alg_common *streamcipher_alg,
struct crypto_alg *blockcipher_alg,
struct shash_alg *hash_alg)
{
@@ -494,7 +525,7 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
const char *nhpoly1305_name;
struct skcipher_instance *inst;
struct adiantum_instance_ctx *ictx;
-struct skcipher_alg *streamcipher_alg;
struct skcipher_alg_common *streamcipher_alg;
struct crypto_alg *blockcipher_alg;
struct shash_alg *hash_alg;
int err;
@@ -514,7 +545,7 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto err_free_inst;
-streamcipher_alg = crypto_spawn_skcipher_alg(&ictx->streamcipher_spawn);
streamcipher_alg = crypto_spawn_skcipher_alg_common(&ictx->streamcipher_spawn);
/* Block cipher, e.g. "aes" */
err = crypto_grab_cipher(&ictx->blockcipher_spawn,
@@ -561,8 +592,7 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx);
-inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask |
-hash_alg->base.cra_alignmask;
inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask;
/*
* The block cipher is only invoked once per message, so for long
* messages (e.g. sectors for disk encryption) its performance doesn't
@@ -578,8 +608,8 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.decrypt = adiantum_decrypt;
inst->alg.init = adiantum_init_tfm;
inst->alg.exit = adiantum_exit_tfm;
-inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(streamcipher_alg);
-inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(streamcipher_alg);
inst->alg.min_keysize = streamcipher_alg->min_keysize;
inst->alg.max_keysize = streamcipher_alg->max_keysize;
inst->alg.ivsize = TWEAK_SIZE;
inst->free = adiantum_free_instance;
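The single-page fast path added to adiantum_finish() and adiantum_crypt() skips the sg_miter walk whenever the whole message sits in one page and hashes it with a single crypto_shash_digest() call over the mapped virtual address. A hedged, self-contained sketch of that pattern for any shash user whose data may fit in one scatterlist entry (the helper name and the fallback policy are illustrative, not taken from the patch):

#include <crypto/hash.h>
#include <linux/highmem.h>
#include <linux/scatterlist.h>

/* Sketch only: hash the first "len" bytes of a scatterlist, using a direct
 * virtual-address digest when everything lives in a single page. */
static int hash_sg_fast(struct crypto_shash *tfm, struct scatterlist *sg,
			unsigned int len, u8 *out)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	void *virt;
	int err;

	if (sg_nents(sg) != 1 || sg->offset + len > PAGE_SIZE)
		return -EOPNOTSUPP;	/* caller falls back to an sg walk */

	desc->tfm = tfm;
	virt = kmap_local_page(sg_page(sg)) + sg->offset;
	err = crypto_shash_digest(desc, virt, len, out);
	kunmap_local(virt);
	return err;
}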

View File

@@ -269,6 +269,12 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_alloc_aead);
int crypto_has_aead(const char *alg_name, u32 type, u32 mask)
{
return crypto_type_has_alg(alg_name, &crypto_aead_type, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_has_aead);
static int aead_prepare_alg(struct aead_alg *alg)
{
struct crypto_istat_aead *istat = aead_get_stat(alg);
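crypto_has_aead() gives callers a cheap way to check whether an AEAD algorithm can be instantiated, in the same spirit as the existing crypto_has_alg()/crypto_has_ahash() helpers, without allocating a transform. A minimal usage sketch (the probing policy and names are illustrative):

#include <crypto/aead.h>

/* Sketch only: pick an AEAD transform, preferring gcm(aes) when available. */
static struct crypto_aead *example_alloc_aead(void)
{
	const char *name = crypto_has_aead("gcm(aes)", 0, 0) ?
			   "gcm(aes)" : "ccm(aes)";

	return crypto_alloc_aead(name, 0, 0);
}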

View File

@@ -2,8 +2,12 @@
/*
* Asynchronous Cryptographic Hash operations.
*
-* This is the asynchronous version of hash.c with notification of
-* completion via a callback.
* This is the implementation of the ahash (asynchronous hash) API. It differs
* from shash (synchronous hash) in that ahash supports asynchronous operations,
* and it hashes data from scatterlists instead of virtually addressed buffers.
*
* The ahash API provides access to both ahash and shash algorithms. The shash
* API only provides access to shash algorithms.
*
* Copyright (c) 2008 Loc Ho <lho@amcc.com>
*/
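As the rewritten comment says, ahash consumes scatterlists and may complete asynchronously, while shash works synchronously on virtual addresses. A hedged sketch of a synchronous caller driving the ahash interface over a one-element scatterlist (error paths trimmed to the essentials; the buffer must not live on the stack if a DMA-capable driver may back the algorithm):

#include <crypto/hash.h>
#include <linux/scatterlist.h>

/* Sketch only: digest "len" bytes at "data" with sha256 via the ahash API. */
static int example_ahash_sha256(const void *data, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_ahash(tfm);
		return -ENOMEM;
	}

	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);

	/* crypto_wait_req() turns a possibly asynchronous completion into a
	 * synchronous result for this simple caller. */
	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
	crypto_free_ahash(tfm);
	return err;
}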
@@ -21,33 +25,142 @@
#include "hash.h"
-static const struct crypto_type crypto_ahash_type;
-struct ahash_request_priv {
-crypto_completion_t complete;
-void *data;
-u8 *result;
-u32 flags;
-void *ubuf[] CRYPTO_MINALIGN_ATTR;
-};
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e
static inline struct crypto_istat_hash *ahash_get_stat(struct ahash_alg *alg)
{
return hash_get_stat(&alg->halg);
}
static inline int crypto_ahash_errstat(struct ahash_alg *alg, int err)
{
if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
return err;
if (err && err != -EINPROGRESS && err != -EBUSY)
atomic64_inc(&ahash_get_stat(alg)->err_cnt);
return err;
}
/*
* For an ahash tfm that is using an shash algorithm (instead of an ahash
* algorithm), this returns the underlying shash tfm.
*/
static inline struct crypto_shash *ahash_to_shash(struct crypto_ahash *tfm)
{
return *(struct crypto_shash **)crypto_ahash_ctx(tfm);
}
static inline struct shash_desc *prepare_shash_desc(struct ahash_request *req,
struct crypto_ahash *tfm)
{
struct shash_desc *desc = ahash_request_ctx(req);
desc->tfm = ahash_to_shash(tfm);
return desc;
}
int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
{
struct crypto_hash_walk walk;
int nbytes;
for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
nbytes = crypto_hash_walk_done(&walk, nbytes))
nbytes = crypto_shash_update(desc, walk.data, nbytes);
return nbytes;
}
EXPORT_SYMBOL_GPL(shash_ahash_update);
int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
{
struct crypto_hash_walk walk;
int nbytes;
nbytes = crypto_hash_walk_first(req, &walk);
if (!nbytes)
return crypto_shash_final(desc, req->result);
do {
nbytes = crypto_hash_walk_last(&walk) ?
crypto_shash_finup(desc, walk.data, nbytes,
req->result) :
crypto_shash_update(desc, walk.data, nbytes);
nbytes = crypto_hash_walk_done(&walk, nbytes);
} while (nbytes > 0);
return nbytes;
}
EXPORT_SYMBOL_GPL(shash_ahash_finup);
int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
{
unsigned int nbytes = req->nbytes;
struct scatterlist *sg;
unsigned int offset;
int err;
if (nbytes &&
(sg = req->src, offset = sg->offset,
nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
void *data;
data = kmap_local_page(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes,
req->result);
kunmap_local(data);
} else
err = crypto_shash_init(desc) ?:
shash_ahash_finup(req, desc);
return err;
}
EXPORT_SYMBOL_GPL(shash_ahash_digest);
static void crypto_exit_ahash_using_shash(struct crypto_tfm *tfm)
{
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
crypto_free_shash(*ctx);
}
static int crypto_init_ahash_using_shash(struct crypto_tfm *tfm)
{
struct crypto_alg *calg = tfm->__crt_alg;
struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
struct crypto_shash *shash;
if (!crypto_mod_get(calg))
return -EAGAIN;
shash = crypto_create_tfm(calg, &crypto_shash_type);
if (IS_ERR(shash)) {
crypto_mod_put(calg);
return PTR_ERR(shash);
}
crt->using_shash = true;
*ctx = shash;
tfm->exit = crypto_exit_ahash_using_shash;
crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
CRYPTO_TFM_NEED_KEY);
crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
return 0;
}
static int hash_walk_next(struct crypto_hash_walk *walk)
{
-unsigned int alignmask = walk->alignmask;
unsigned int offset = walk->offset;
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);
walk->data = kmap_local_page(walk->pg);
walk->data += offset;
-if (offset & alignmask) {
-unsigned int unaligned = alignmask + 1 - (offset & alignmask);
-if (nbytes > unaligned)
-nbytes = unaligned;
-}
walk->entrylen -= nbytes;
return nbytes;
}
@@ -71,23 +184,8 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
{
-unsigned int alignmask = walk->alignmask;
walk->data -= walk->offset;
-if (walk->entrylen && (walk->offset & alignmask) && !err) {
-unsigned int nbytes;
-walk->offset = ALIGN(walk->offset, alignmask + 1);
-nbytes = min(walk->entrylen,
-(unsigned int)(PAGE_SIZE - walk->offset));
-if (nbytes) {
-walk->entrylen -= nbytes;
-walk->data += walk->offset;
-return nbytes;
-}
-}
kunmap_local(walk->data);
crypto_yield(walk->flags);
@@ -119,7 +217,6 @@ int crypto_hash_walk_first(struct ahash_request *req,
return 0;
}
-walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
walk->sg = req->src;
walk->flags = req->base.flags;
@@ -127,67 +224,64 @@
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
-static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
-unsigned int keylen)
-{
-unsigned long alignmask = crypto_ahash_alignmask(tfm);
-int ret;
-u8 *buffer, *alignbuffer;
-unsigned long absize;
-absize = keylen + alignmask;
-buffer = kmalloc(absize, GFP_KERNEL);
-if (!buffer)
-return -ENOMEM;
-alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
-memcpy(alignbuffer, key, keylen);
-ret = tfm->setkey(tfm, alignbuffer, keylen);
-kfree_sensitive(buffer);
-return ret;
-}
static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
return -ENOSYS;
}
-static void ahash_set_needkey(struct crypto_ahash *tfm)
static void ahash_set_needkey(struct crypto_ahash *tfm, struct ahash_alg *alg)
{
-const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-if (tfm->setkey != ahash_nosetkey &&
-!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
if (alg->setkey != ahash_nosetkey &&
!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
}
int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
-unsigned long alignmask = crypto_ahash_alignmask(tfm);
-int err;
-if ((unsigned long)key & alignmask)
-err = ahash_setkey_unaligned(tfm, key, keylen);
-else
-err = tfm->setkey(tfm, key, keylen);
-if (unlikely(err)) {
-ahash_set_needkey(tfm);
-return err;
-}
if (likely(tfm->using_shash)) {
struct crypto_shash *shash = ahash_to_shash(tfm);
int err;
err = crypto_shash_setkey(shash, key, keylen);
if (unlikely(err)) {
crypto_ahash_set_flags(tfm,
crypto_shash_get_flags(shash) &
CRYPTO_TFM_NEED_KEY);
return err;
}
} else {
struct ahash_alg *alg = crypto_ahash_alg(tfm);
int err;
err = alg->setkey(tfm, key, keylen);
if (unlikely(err)) {
ahash_set_needkey(tfm, alg);
return err;
}
}
crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
int crypto_ahash_init(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return crypto_shash_init(prepare_shash_desc(req, tfm));
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
return crypto_ahash_alg(tfm)->init(req);
}
EXPORT_SYMBOL_GPL(crypto_ahash_init);
static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
bool has_state)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-unsigned long alignmask = crypto_ahash_alignmask(tfm);
unsigned int ds = crypto_ahash_digestsize(tfm);
struct ahash_request *subreq;
unsigned int subreq_size;
@@ -201,7 +295,6 @@ static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
subreq_size += reqsize;
subreq_size += ds;
-subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
flags = ahash_request_flags(req);
gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
@@ -213,7 +306,6 @@ static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
ahash_request_set_callback(subreq, flags, cplt, req);
result = (u8 *)(subreq + 1) + reqsize;
-result = PTR_ALIGN(result, alignmask + 1);
ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
@@ -249,100 +341,78 @@ static void ahash_restore_req(struct ahash_request *req, int err)
kfree_sensitive(subreq);
}
-static void ahash_op_unaligned_done(void *data, int err)
-{
-struct ahash_request *areq = data;
-if (err == -EINPROGRESS)
-goto out;
-/* First copy req->result into req->priv.result */
-ahash_restore_req(areq, err);
-out:
-/* Complete the ORIGINAL request. */
-ahash_request_complete(areq, err);
-}
-static int ahash_op_unaligned(struct ahash_request *req,
-int (*op)(struct ahash_request *),
-bool has_state)
-{
-int err;
-err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
-if (err)
-return err;
-err = op(req->priv);
-if (err == -EINPROGRESS || err == -EBUSY)
-return err;
-ahash_restore_req(req, err);
-return err;
-}
-static int crypto_ahash_op(struct ahash_request *req,
-int (*op)(struct ahash_request *),
-bool has_state)
-{
-struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-unsigned long alignmask = crypto_ahash_alignmask(tfm);
-int err;
-if ((unsigned long)req->result & alignmask)
-err = ahash_op_unaligned(req, op, has_state);
-else
-err = op(req);
-return crypto_hash_errstat(crypto_hash_alg_common(tfm), err);
-}
int crypto_ahash_update(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct ahash_alg *alg;
if (likely(tfm->using_shash))
return shash_ahash_update(req, ahash_request_ctx(req));
alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
atomic64_add(req->nbytes, &ahash_get_stat(alg)->hash_tlen);
return crypto_ahash_errstat(alg, alg->update(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_update);
int crypto_ahash_final(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
struct ahash_alg *alg;
if (likely(tfm->using_shash))
return crypto_shash_final(ahash_request_ctx(req), req->result);
alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-atomic64_inc(&hash_get_stat(alg)->hash_cnt);
-return crypto_ahash_op(req, tfm->final, true);
atomic64_inc(&ahash_get_stat(alg)->hash_cnt);
return crypto_ahash_errstat(alg, alg->final(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_final);
int crypto_ahash_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
struct ahash_alg *alg;
if (likely(tfm->using_shash))
return shash_ahash_finup(req, ahash_request_ctx(req));
alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-struct crypto_istat_hash *istat = hash_get_stat(alg);
struct crypto_istat_hash *istat = ahash_get_stat(alg);
atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}
-return crypto_ahash_op(req, tfm->finup, true);
return crypto_ahash_errstat(alg, alg->finup(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_finup);
int crypto_ahash_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
struct ahash_alg *alg;
int err;
if (likely(tfm->using_shash))
return shash_ahash_digest(req, prepare_shash_desc(req, tfm));
alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-struct crypto_istat_hash *istat = hash_get_stat(alg);
struct crypto_istat_hash *istat = ahash_get_stat(alg);
atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-return crypto_hash_errstat(alg, -ENOKEY);
-return crypto_ahash_op(req, tfm->digest, false);
err = -ENOKEY;
else
err = alg->digest(req);
return crypto_ahash_errstat(alg, err);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);
@@ -367,7 +437,7 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
subreq->base.complete = ahash_def_finup_done2;
-err = crypto_ahash_reqtfm(req)->final(subreq);
err = crypto_ahash_alg(crypto_ahash_reqtfm(req))->final(subreq);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
@@ -404,13 +474,35 @@ static int ahash_def_finup(struct ahash_request *req)
if (err)
return err;
-err = tfm->update(req->priv);
err = crypto_ahash_alg(tfm)->update(req->priv);
if (err == -EINPROGRESS || err == -EBUSY)
return err;
return ahash_def_finup_finish1(req, err);
}
int crypto_ahash_export(struct ahash_request *req, void *out)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return crypto_shash_export(ahash_request_ctx(req), out);
return crypto_ahash_alg(tfm)->export(req, out);
}
EXPORT_SYMBOL_GPL(crypto_ahash_export);
int crypto_ahash_import(struct ahash_request *req, const void *in)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (likely(tfm->using_shash))
return crypto_shash_import(prepare_shash_desc(req, tfm), in);
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
return crypto_ahash_alg(tfm)->import(req, in);
}
EXPORT_SYMBOL_GPL(crypto_ahash_import);
static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
@@ -424,25 +516,12 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct ahash_alg *alg = crypto_ahash_alg(hash);
-hash->setkey = ahash_nosetkey;
crypto_ahash_set_statesize(hash, alg->halg.statesize);
-if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
-return crypto_init_shash_ops_async(tfm);
if (tfm->__crt_alg->cra_type == &crypto_shash_type)
return crypto_init_ahash_using_shash(tfm);
-hash->init = alg->init;
-hash->update = alg->update;
-hash->final = alg->final;
-hash->finup = alg->finup ?: ahash_def_finup;
-hash->digest = alg->digest;
-hash->export = alg->export;
-hash->import = alg->import;
-if (alg->setkey) {
-hash->setkey = alg->setkey;
-ahash_set_needkey(hash);
-}
ahash_set_needkey(hash, alg);
if (alg->exit_tfm)
tfm->exit = crypto_ahash_exit_tfm;
@@ -452,7 +531,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
static unsigned int crypto_ahash_extsize(struct crypto_alg *alg)
{
-if (alg->cra_type != &crypto_ahash_type)
if (alg->cra_type == &crypto_shash_type)
return sizeof(struct crypto_shash *);
return crypto_alg_extsize(alg);
@@ -560,19 +639,21 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
if (IS_ERR(nhash))
return nhash;
-nhash->init = hash->init;
-nhash->update = hash->update;
-nhash->final = hash->final;
-nhash->finup = hash->finup;
-nhash->digest = hash->digest;
-nhash->export = hash->export;
-nhash->import = hash->import;
-nhash->setkey = hash->setkey;
nhash->reqsize = hash->reqsize;
nhash->statesize = hash->statesize;
-if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
-return crypto_clone_shash_ops_async(nhash, hash);
if (likely(hash->using_shash)) {
struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
struct crypto_shash *shash;
shash = crypto_clone_shash(ahash_to_shash(hash));
if (IS_ERR(shash)) {
err = PTR_ERR(shash);
goto out_free_nhash;
}
*nctx = shash;
return nhash;
}
err = -ENOSYS;
alg = crypto_ahash_alg(hash);
@@ -606,6 +687,11 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
base->cra_type = &crypto_ahash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
if (!alg->finup)
alg->finup = ahash_def_finup;
if (!alg->setkey)
alg->setkey = ahash_nosetkey;
return 0;
}
@@ -677,10 +763,10 @@ bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
{
struct crypto_alg *alg = &halg->base;
-if (alg->cra_type != &crypto_ahash_type)
if (alg->cra_type == &crypto_shash_type)
return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));
-return __crypto_ahash_alg(alg)->setkey != NULL;
return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
}
EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);

View File

@@ -389,7 +389,7 @@ EXPORT_SYMBOL_GPL(crypto_shoot_alg);
struct crypto_tfm *__crypto_alloc_tfmgfp(struct crypto_alg *alg, u32 type,
u32 mask, gfp_t gfp)
{
-struct crypto_tfm *tfm = NULL;
struct crypto_tfm *tfm;
unsigned int tfm_size;
int err = -ENOMEM;

View File

@@ -7,7 +7,6 @@
* Jon Oberheide <jon@oberheide.org>
*/
-#include <crypto/algapi.h>
#include <crypto/arc4.h>
#include <crypto/internal/skcipher.h>
#include <linux/init.h>
@@ -15,33 +14,24 @@
#include <linux/module.h>
#include <linux/sched.h>
-static int crypto_arc4_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
-struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
return arc4_setkey(ctx, in_key, key_len);
}
-static int crypto_arc4_crypt(struct skcipher_request *req)
-{
-struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
-struct skcipher_walk walk;
-int err;
-err = skcipher_walk_virt(&walk, req, false);
-while (walk.nbytes > 0) {
-arc4_crypt(ctx, walk.dst.virt.addr, walk.src.virt.addr,
-walk.nbytes);
-err = skcipher_walk_done(&walk, 0);
-}
-return err;
-}
static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned nbytes, u8 *iv, bool final)
{
struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
arc4_crypt(ctx, dst, src, nbytes);
return 0;
}
-static int crypto_arc4_init(struct crypto_skcipher *tfm)
static int crypto_arc4_init(struct crypto_lskcipher *tfm)
{
pr_warn_ratelimited("\"%s\" (%ld) uses obsolete ecb(arc4) skcipher\n",
current->comm, (unsigned long)current->pid);
@@ -49,33 +39,29 @@ static int crypto_arc4_init(struct crypto_lskcipher *tfm)
return 0;
}
-static struct skcipher_alg arc4_alg = {
-/*
-* For legacy reasons, this is named "ecb(arc4)", not "arc4".
-* Nevertheless it's actually a stream cipher, not a block cipher.
-*/
-.base.cra_name = "ecb(arc4)",
-.base.cra_driver_name = "ecb(arc4)-generic",
-.base.cra_priority = 100,
-.base.cra_blocksize = ARC4_BLOCK_SIZE,
-.base.cra_ctxsize = sizeof(struct arc4_ctx),
-.base.cra_module = THIS_MODULE,
-.min_keysize = ARC4_MIN_KEY_SIZE,
-.max_keysize = ARC4_MAX_KEY_SIZE,
-.setkey = crypto_arc4_setkey,
-.encrypt = crypto_arc4_crypt,
-.decrypt = crypto_arc4_crypt,
-.init = crypto_arc4_init,
static struct lskcipher_alg arc4_alg = {
.co.base.cra_name = "arc4",
.co.base.cra_driver_name = "arc4-generic",
.co.base.cra_priority = 100,
.co.base.cra_blocksize = ARC4_BLOCK_SIZE,
.co.base.cra_ctxsize = sizeof(struct arc4_ctx),
.co.base.cra_module = THIS_MODULE,
.co.min_keysize = ARC4_MIN_KEY_SIZE,
.co.max_keysize = ARC4_MAX_KEY_SIZE,
.setkey = crypto_arc4_setkey,
.encrypt = crypto_arc4_crypt,
.decrypt = crypto_arc4_crypt,
.init = crypto_arc4_init,
};
static int __init arc4_init(void)
{
-return crypto_register_skcipher(&arc4_alg);
return crypto_register_lskcipher(&arc4_alg);
}
static void __exit arc4_exit(void)
{
-crypto_unregister_skcipher(&arc4_alg);
crypto_unregister_lskcipher(&arc4_alg);
}
subsys_initcall(arc4_init);
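The conversion above moves arc4 from the scatterlist-walking skcipher interface to the new virtual-address lskcipher interface: the encrypt/decrypt callbacks now receive plain src/dst pointers, the per-request walk disappears, and the algorithm is registered under its real name "arc4" instead of the legacy "ecb(arc4)". A heavily hedged usage sketch; the user-facing helper names and signatures below follow the callback shape shown in the diff and the lskcipher API added in this series, but treat them as assumptions rather than verified prototypes:

#include <crypto/skcipher.h>

/* Sketch only: one-shot arc4 over a virtually contiguous buffer. The
 * crypto_*lskcipher* helpers used here are assumed from this series, not
 * checked against a specific tree; arc4 takes no IV, hence the NULL. */
static int example_arc4(const u8 *key, unsigned int keylen,
			const u8 *src, u8 *dst, unsigned int len)
{
	struct crypto_lskcipher *tfm;
	int err;

	tfm = crypto_alloc_lskcipher("arc4", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, keylen);
	if (!err)
		err = crypto_lskcipher_encrypt(tfm, src, dst, len, NULL);

	crypto_free_lskcipher(tfm);
	return err;
}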

View File

@@ -76,7 +76,7 @@ config SIGNED_PE_FILE_VERIFICATION
signed PE binary.
config FIPS_SIGNATURE_SELFTEST
-bool "Run FIPS selftests on the X.509+PKCS7 signature verification"
tristate "Run FIPS selftests on the X.509+PKCS7 signature verification"
help
This option causes some selftests to be run on the signature
verification code, using some built in data. This is required
@@ -84,5 +84,6 @@ config FIPS_SIGNATURE_SELFTEST
depends on KEYS
depends on ASYMMETRIC_KEY_TYPE
depends on PKCS7_MESSAGE_PARSER=X509_CERTIFICATE_PARSER
depends on X509_CERTIFICATE_PARSER
endif # ASYMMETRIC_KEY_TYPE

View File

@@ -22,7 +22,8 @@ x509_key_parser-y := \
x509_cert_parser.o \
x509_loader.o \
x509_public_key.o
-x509_key_parser-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += selftest.o
obj-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += x509_selftest.o
x509_selftest-y += selftest.o
$(obj)/x509_cert_parser.o: \
$(obj)/x509.asn1.h \

View File

@@ -75,15 +75,6 @@ int mscode_note_digest_algo(void *context, size_t hdrlen,
oid = look_up_OID(value, vlen);
switch (oid) {
-case OID_md4:
-ctx->digest_algo = "md4";
-break;
-case OID_md5:
-ctx->digest_algo = "md5";
-break;
-case OID_sha1:
-ctx->digest_algo = "sha1";
-break;
case OID_sha256:
ctx->digest_algo = "sha256";
break;
@@ -93,8 +84,14 @@ int mscode_note_digest_algo(void *context, size_t hdrlen,
case OID_sha512:
ctx->digest_algo = "sha512";
break;
-case OID_sha224:
-ctx->digest_algo = "sha224";
case OID_sha3_256:
ctx->digest_algo = "sha3-256";
break;
case OID_sha3_384:
ctx->digest_algo = "sha3-384";
break;
case OID_sha3_512:
ctx->digest_algo = "sha3-512";
break;
case OID__NR:

View File

@@ -1,3 +1,10 @@
-- SPDX-License-Identifier: BSD-3-Clause
--
-- Copyright (C) 2009 IETF Trust and the persons identified as authors
-- of the code
--
-- https://www.rfc-editor.org/rfc/rfc5652#section-3
PKCS7ContentInfo ::= SEQUENCE {
contentType ContentType ({ pkcs7_check_content_type }),
content [0] EXPLICIT SignedData OPTIONAL

View File

@@ -227,15 +227,6 @@ int pkcs7_sig_note_digest_algo(void *context, size_t hdrlen,
struct pkcs7_parse_context *ctx = context;
switch (ctx->last_oid) {
-case OID_md4:
-ctx->sinfo->sig->hash_algo = "md4";
-break;
-case OID_md5:
-ctx->sinfo->sig->hash_algo = "md5";
-break;
-case OID_sha1:
-ctx->sinfo->sig->hash_algo = "sha1";
-break;
case OID_sha256:
ctx->sinfo->sig->hash_algo = "sha256";
break;
@@ -257,6 +248,15 @@ int pkcs7_sig_note_digest_algo(void *context, size_t hdrlen,
case OID_gost2012Digest512:
ctx->sinfo->sig->hash_algo = "streebog512";
break;
case OID_sha3_256:
ctx->sinfo->sig->hash_algo = "sha3-256";
break;
case OID_sha3_384:
ctx->sinfo->sig->hash_algo = "sha3-384";
break;
case OID_sha3_512:
ctx->sinfo->sig->hash_algo = "sha3-512";
break;
default:
printk("Unsupported digest algo: %u\n", ctx->last_oid);
return -ENOPKG;
@@ -278,11 +278,13 @@ int pkcs7_sig_note_pkey_algo(void *context, size_t hdrlen,
ctx->sinfo->sig->pkey_algo = "rsa";
ctx->sinfo->sig->encoding = "pkcs1";
break;
-case OID_id_ecdsa_with_sha1:
case OID_id_ecdsa_with_sha224:
case OID_id_ecdsa_with_sha256:
case OID_id_ecdsa_with_sha384:
case OID_id_ecdsa_with_sha512:
case OID_id_ecdsa_with_sha3_256:
case OID_id_ecdsa_with_sha3_384:
case OID_id_ecdsa_with_sha3_512:
ctx->sinfo->sig->pkey_algo = "ecdsa";
ctx->sinfo->sig->encoding = "x962";
break;

View File

@@ -1,3 +1,9 @@
-- SPDX-License-Identifier: BSD-3-Clause
--
-- Copyright (C) 2010 IETF Trust and the persons identified as authors
-- of the code
--
-- https://www.rfc-editor.org/rfc/rfc5958#section-2
--
-- This is the unencrypted variant
--

View File

@@ -115,11 +115,13 @@ software_key_determine_akcipher(const struct public_key *pkey,
*/
if (!hash_algo)
return -EINVAL;
-if (strcmp(hash_algo, "sha1") != 0 &&
-strcmp(hash_algo, "sha224") != 0 &&
if (strcmp(hash_algo, "sha224") != 0 &&
strcmp(hash_algo, "sha256") != 0 &&
strcmp(hash_algo, "sha384") != 0 &&
-strcmp(hash_algo, "sha512") != 0)
strcmp(hash_algo, "sha512") != 0 &&
strcmp(hash_algo, "sha3-256") != 0 &&
strcmp(hash_algo, "sha3-384") != 0 &&
strcmp(hash_algo, "sha3-512") != 0)
return -EINVAL;
} else if (strcmp(pkey->pkey_algo, "sm2") == 0) {
if (strcmp(encoding, "raw") != 0)

View File

@@ -4,10 +4,11 @@
* Written by David Howells (dhowells@redhat.com)
*/
-#include <linux/kernel.h>
-#include <linux/cred.h>
-#include <linux/key.h>
#include <crypto/pkcs7.h>
#include <linux/cred.h>
#include <linux/kernel.h>
#include <linux/key.h>
#include <linux/module.h>
#include "x509_parser.h"
struct certs_test {
@@ -175,7 +176,7 @@ static const struct certs_test certs_tests[] __initconst = {
TEST(certs_selftest_1_data, certs_selftest_1_pkcs7),
};
-int __init fips_signature_selftest(void)
static int __init fips_signature_selftest(void)
{
struct key *keyring;
int ret, i;
@@ -222,3 +223,9 @@ static int __init fips_signature_selftest(void)
key_put(keyring);
return 0;
}
late_initcall(fips_signature_selftest);
MODULE_DESCRIPTION("X.509 self tests");
MODULE_AUTHOR("Red Hat, Inc.");
MODULE_LICENSE("GPL");

View File

@@ -115,7 +115,7 @@ EXPORT_SYMBOL_GPL(decrypt_blob);
* Sign the specified data blob using the private key specified by params->key.
* The signature is wrapped in an encoding if params->encoding is specified
* (eg. "pkcs1"). If the encoding needs to know the digest type, this can be
-* passed through params->hash_algo (eg. "sha1").
* passed through params->hash_algo (eg. "sha512").
*
* Returns the length of the data placed in the signature buffer or an error.
*/

View File

@@ -1,3 +1,10 @@
-- SPDX-License-Identifier: BSD-3-Clause
--
-- Copyright (C) 2008 IETF Trust and the persons identified as authors
-- of the code
--
-- https://www.rfc-editor.org/rfc/rfc5280#section-4
Certificate ::= SEQUENCE {
tbsCertificate TBSCertificate ({ x509_note_tbs_certificate }),
signatureAlgorithm AlgorithmIdentifier,

View File

@@ -1,3 +1,8 @@
-- SPDX-License-Identifier: BSD-3-Clause
--
-- Copyright (C) 2008 IETF Trust and the persons identified as authors
-- of the code
--
-- X.509 AuthorityKeyIdentifier
-- rfc5280 section 4.2.1.1
@@ -14,15 +19,15 @@ CertificateSerialNumber ::= INTEGER ({ x509_akid_note_serial })
GeneralNames ::= SEQUENCE OF GeneralName
GeneralName ::= CHOICE {
-otherName [0] ANY,
otherName [0] IMPLICIT OtherName,
-rfc822Name [1] IA5String,
rfc822Name [1] IMPLICIT IA5String,
-dNSName [2] IA5String,
dNSName [2] IMPLICIT IA5String,
x400Address [3] ANY,
directoryName [4] Name ({ x509_akid_note_name }),
-ediPartyName [5] ANY,
ediPartyName [5] IMPLICIT EDIPartyName,
-uniformResourceIdentifier [6] IA5String,
uniformResourceIdentifier [6] IMPLICIT IA5String,
-iPAddress [7] OCTET STRING,
iPAddress [7] IMPLICIT OCTET STRING,
-registeredID [8] OBJECT IDENTIFIER
registeredID [8] IMPLICIT OBJECT IDENTIFIER
}
Name ::= SEQUENCE OF RelativeDistinguishedName
@@ -33,3 +38,13 @@ AttributeValueAssertion ::= SEQUENCE {
attributeType OBJECT IDENTIFIER ({ x509_note_OID }),
attributeValue ANY ({ x509_extract_name_segment })
}
OtherName ::= SEQUENCE {
type-id OBJECT IDENTIFIER,
value [0] ANY
}
EDIPartyName ::= SEQUENCE {
nameAssigner [0] ANY OPTIONAL,
partyName [1] ANY
}

View File

@@ -195,19 +195,9 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
pr_debug("PubKey Algo: %u\n", ctx->last_oid);
switch (ctx->last_oid) {
-case OID_md2WithRSAEncryption:
-case OID_md3WithRSAEncryption:
default:
return -ENOPKG; /* Unsupported combination */
-case OID_md4WithRSAEncryption:
-ctx->cert->sig->hash_algo = "md4";
-goto rsa_pkcs1;
-case OID_sha1WithRSAEncryption:
-ctx->cert->sig->hash_algo = "sha1";
-goto rsa_pkcs1;
case OID_sha256WithRSAEncryption:
ctx->cert->sig->hash_algo = "sha256";
goto rsa_pkcs1;
@@ -224,9 +214,17 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
ctx->cert->sig->hash_algo = "sha224";
goto rsa_pkcs1;
-case OID_id_ecdsa_with_sha1:
-ctx->cert->sig->hash_algo = "sha1";
-goto ecdsa;
case OID_id_rsassa_pkcs1_v1_5_with_sha3_256:
ctx->cert->sig->hash_algo = "sha3-256";
goto rsa_pkcs1;
case OID_id_rsassa_pkcs1_v1_5_with_sha3_384:
ctx->cert->sig->hash_algo = "sha3-384";
goto rsa_pkcs1;
case OID_id_rsassa_pkcs1_v1_5_with_sha3_512:
ctx->cert->sig->hash_algo = "sha3-512";
goto rsa_pkcs1;
case OID_id_ecdsa_with_sha224:
ctx->cert->sig->hash_algo = "sha224";
@@ -244,6 +242,18 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
ctx->cert->sig->hash_algo = "sha512";
goto ecdsa;
case OID_id_ecdsa_with_sha3_256:
ctx->cert->sig->hash_algo = "sha3-256";
goto ecdsa;
case OID_id_ecdsa_with_sha3_384:
ctx->cert->sig->hash_algo = "sha3-384";
goto ecdsa;
case OID_id_ecdsa_with_sha3_512:
ctx->cert->sig->hash_algo = "sha3-512";
goto ecdsa;
case OID_gost2012Signature256:
ctx->cert->sig->hash_algo = "streebog256";
goto ecrdsa;

View File

@@ -40,15 +40,6 @@ struct x509_certificate {
bool blacklisted;
};
-/*
-* selftest.c
-*/
-#ifdef CONFIG_FIPS_SIGNATURE_SELFTEST
-extern int __init fips_signature_selftest(void);
-#else
-static inline int fips_signature_selftest(void) { return 0; }
-#endif
/*
* x509_cert_parser.c
*/

View File

@@ -262,15 +262,9 @@ static struct asymmetric_key_parser x509_key_parser = {
/*
* Module stuff
*/
-extern int __init certs_selftest(void);
static int __init x509_key_init(void)
{
-int ret;
-ret = register_asymmetric_key_parser(&x509_key_parser);
-if (ret < 0)
-return ret;
-return fips_signature_selftest();
return register_asymmetric_key_parser(&x509_key_parser);
}
static void __exit x509_key_exit(void)

View File

@@ -141,9 +141,6 @@ static int crypto_authenc_genicv(struct aead_request *req, unsigned int flags)
u8 *hash = areq_ctx->tail;
int err;
-hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
-crypto_ahash_alignmask(auth) + 1);
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->dst, hash,
req->assoclen + req->cryptlen);
@@ -286,9 +283,6 @@ static int crypto_authenc_decrypt(struct aead_request *req)
u8 *hash = areq_ctx->tail;
int err;
-hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
-crypto_ahash_alignmask(auth) + 1);
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->src, hash,
req->assoclen + req->cryptlen - authsize);
@@ -373,9 +367,9 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
u32 mask;
struct aead_instance *inst;
struct authenc_instance_ctx *ctx;
struct skcipher_alg_common *enc;
struct hash_alg_common *auth;
struct crypto_alg *auth_base;
-struct skcipher_alg *enc;
int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
@@ -398,10 +392,9 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
crypto_attr_alg_name(tb[2]), 0, mask);
if (err)
goto err_free_inst;
-enc = crypto_spawn_skcipher_alg(&ctx->enc);
enc = crypto_spawn_skcipher_alg_common(&ctx->enc);
-ctx->reqoff = ALIGN(2 * auth->digestsize + auth_base->cra_alignmask,
-auth_base->cra_alignmask + 1);
ctx->reqoff = 2 * auth->digestsize;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
@@ -418,12 +411,11 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
-inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
-enc->base.cra_alignmask;
inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_ctx);
-inst->alg.ivsize = crypto_skcipher_alg_ivsize(enc);
inst->alg.ivsize = enc->ivsize;
-inst->alg.chunksize = crypto_skcipher_alg_chunksize(enc);
inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize;
inst->alg.init = crypto_authenc_init_tfm;

crypto/authenc_esn.c

@ -87,11 +87,8 @@ static int crypto_authenc_esn_genicv_tail(struct aead_request *req,
unsigned int flags) unsigned int flags)
{ {
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req); struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req); struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_ahash *auth = ctx->auth; u8 *hash = areq_ctx->tail;
u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
crypto_ahash_alignmask(auth) + 1);
unsigned int authsize = crypto_aead_authsize(authenc_esn); unsigned int authsize = crypto_aead_authsize(authenc_esn);
unsigned int assoclen = req->assoclen; unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen; unsigned int cryptlen = req->cryptlen;
@ -122,8 +119,7 @@ static int crypto_authenc_esn_genicv(struct aead_request *req,
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req); struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn); struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct crypto_ahash *auth = ctx->auth; struct crypto_ahash *auth = ctx->auth;
u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail, u8 *hash = areq_ctx->tail;
crypto_ahash_alignmask(auth) + 1);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff); struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
unsigned int authsize = crypto_aead_authsize(authenc_esn); unsigned int authsize = crypto_aead_authsize(authenc_esn);
unsigned int assoclen = req->assoclen; unsigned int assoclen = req->assoclen;
@ -224,8 +220,7 @@ static int crypto_authenc_esn_decrypt_tail(struct aead_request *req,
struct skcipher_request *skreq = (void *)(areq_ctx->tail + struct skcipher_request *skreq = (void *)(areq_ctx->tail +
ctx->reqoff); ctx->reqoff);
struct crypto_ahash *auth = ctx->auth; struct crypto_ahash *auth = ctx->auth;
u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail, u8 *ohash = areq_ctx->tail;
crypto_ahash_alignmask(auth) + 1);
unsigned int cryptlen = req->cryptlen - authsize; unsigned int cryptlen = req->cryptlen - authsize;
unsigned int assoclen = req->assoclen; unsigned int assoclen = req->assoclen;
struct scatterlist *dst = req->dst; struct scatterlist *dst = req->dst;
@ -272,8 +267,7 @@ static int crypto_authenc_esn_decrypt(struct aead_request *req)
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff); struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
unsigned int authsize = crypto_aead_authsize(authenc_esn); unsigned int authsize = crypto_aead_authsize(authenc_esn);
struct crypto_ahash *auth = ctx->auth; struct crypto_ahash *auth = ctx->auth;
u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail, u8 *ohash = areq_ctx->tail;
crypto_ahash_alignmask(auth) + 1);
unsigned int assoclen = req->assoclen; unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen; unsigned int cryptlen = req->cryptlen;
u8 *ihash = ohash + crypto_ahash_digestsize(auth); u8 *ihash = ohash + crypto_ahash_digestsize(auth);
@ -344,8 +338,7 @@ static int crypto_authenc_esn_init_tfm(struct crypto_aead *tfm)
ctx->enc = enc; ctx->enc = enc;
ctx->null = null; ctx->null = null;
ctx->reqoff = ALIGN(2 * crypto_ahash_digestsize(auth), ctx->reqoff = 2 * crypto_ahash_digestsize(auth);
crypto_ahash_alignmask(auth) + 1);
crypto_aead_set_reqsize( crypto_aead_set_reqsize(
tfm, tfm,
@ -390,9 +383,9 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
u32 mask; u32 mask;
struct aead_instance *inst; struct aead_instance *inst;
struct authenc_esn_instance_ctx *ctx; struct authenc_esn_instance_ctx *ctx;
struct skcipher_alg_common *enc;
struct hash_alg_common *auth; struct hash_alg_common *auth;
struct crypto_alg *auth_base; struct crypto_alg *auth_base;
struct skcipher_alg *enc;
int err; int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask); err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
@ -415,7 +408,7 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
crypto_attr_alg_name(tb[2]), 0, mask); crypto_attr_alg_name(tb[2]), 0, mask);
if (err) if (err)
goto err_free_inst; goto err_free_inst;
enc = crypto_spawn_skcipher_alg(&ctx->enc); enc = crypto_spawn_skcipher_alg_common(&ctx->enc);
err = -ENAMETOOLONG; err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
@ -431,12 +424,11 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
inst->alg.base.cra_priority = enc->base.cra_priority * 10 + inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority; auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize; inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
inst->alg.base.cra_alignmask = auth_base->cra_alignmask | inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_esn_ctx); inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_esn_ctx);
inst->alg.ivsize = crypto_skcipher_alg_ivsize(enc); inst->alg.ivsize = enc->ivsize;
inst->alg.chunksize = crypto_skcipher_alg_chunksize(enc); inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize; inst->alg.maxauthsize = auth->digestsize;
inst->alg.init = crypto_authenc_esn_init_tfm; inst->alg.init = crypto_authenc_esn_init_tfm;

crypto/cbc.c

@ -5,8 +5,6 @@
* Copyright (c) 2006-2016 Herbert Xu <herbert@gondor.apana.org.au> * Copyright (c) 2006-2016 Herbert Xu <herbert@gondor.apana.org.au>
*/ */
#include <crypto/algapi.h>
#include <crypto/internal/cipher.h>
#include <crypto/internal/skcipher.h> #include <crypto/internal/skcipher.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/init.h> #include <linux/init.h>
@ -14,99 +12,71 @@
#include <linux/log2.h> #include <linux/log2.h>
#include <linux/module.h> #include <linux/module.h>
static int crypto_cbc_encrypt_segment(struct skcipher_walk *walk, static int crypto_cbc_encrypt_segment(struct crypto_lskcipher *tfm,
struct crypto_skcipher *skcipher) const u8 *src, u8 *dst, unsigned nbytes,
u8 *iv)
{ {
unsigned int bsize = crypto_skcipher_blocksize(skcipher); unsigned int bsize = crypto_lskcipher_blocksize(tfm);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
u8 *iv = walk->iv;
cipher = skcipher_cipher_simple(skcipher); for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize) {
tfm = crypto_cipher_tfm(cipher);
fn = crypto_cipher_alg(cipher)->cia_encrypt;
do {
crypto_xor(iv, src, bsize); crypto_xor(iv, src, bsize);
fn(tfm, dst, iv); crypto_lskcipher_encrypt(tfm, iv, dst, bsize, NULL);
memcpy(iv, dst, bsize); memcpy(iv, dst, bsize);
}
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
return nbytes; return nbytes;
} }
static int crypto_cbc_encrypt_inplace(struct skcipher_walk *walk, static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
struct crypto_skcipher *skcipher) u8 *src, unsigned nbytes, u8 *oiv)
{ {
unsigned int bsize = crypto_skcipher_blocksize(skcipher); unsigned int bsize = crypto_lskcipher_blocksize(tfm);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *); u8 *iv = oiv;
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
u8 *iv = walk->iv;
cipher = skcipher_cipher_simple(skcipher); if (nbytes < bsize)
tfm = crypto_cipher_tfm(cipher); goto out;
fn = crypto_cipher_alg(cipher)->cia_encrypt;
do { do {
crypto_xor(src, iv, bsize); crypto_xor(src, iv, bsize);
fn(tfm, src, src); crypto_lskcipher_encrypt(tfm, src, src, bsize, NULL);
iv = src; iv = src;
src += bsize; src += bsize;
} while ((nbytes -= bsize) >= bsize); } while ((nbytes -= bsize) >= bsize);
memcpy(walk->iv, iv, bsize); memcpy(oiv, iv, bsize);
out:
return nbytes; return nbytes;
} }
static int crypto_cbc_encrypt(struct skcipher_request *req) static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv, bool final)
{ {
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
struct skcipher_walk walk; struct crypto_lskcipher *cipher = *ctx;
int err; int rem;
err = skcipher_walk_virt(&walk, req, false); if (src == dst)
rem = crypto_cbc_encrypt_inplace(cipher, dst, len, iv);
else
rem = crypto_cbc_encrypt_segment(cipher, src, dst, len, iv);
while (walk.nbytes) { return rem && final ? -EINVAL : rem;
if (walk.src.virt.addr == walk.dst.virt.addr)
err = crypto_cbc_encrypt_inplace(&walk, skcipher);
else
err = crypto_cbc_encrypt_segment(&walk, skcipher);
err = skcipher_walk_done(&walk, err);
}
return err;
} }
static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk, static int crypto_cbc_decrypt_segment(struct crypto_lskcipher *tfm,
struct crypto_skcipher *skcipher) const u8 *src, u8 *dst, unsigned nbytes,
u8 *oiv)
{ {
unsigned int bsize = crypto_skcipher_blocksize(skcipher); unsigned int bsize = crypto_lskcipher_blocksize(tfm);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *); const u8 *iv = oiv;
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
u8 *iv = walk->iv;
cipher = skcipher_cipher_simple(skcipher); if (nbytes < bsize)
tfm = crypto_cipher_tfm(cipher); goto out;
fn = crypto_cipher_alg(cipher)->cia_decrypt;
do { do {
fn(tfm, dst, src); crypto_lskcipher_decrypt(tfm, src, dst, bsize, NULL);
crypto_xor(dst, iv, bsize); crypto_xor(dst, iv, bsize);
iv = src; iv = src;
@ -114,83 +84,72 @@ static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
dst += bsize; dst += bsize;
} while ((nbytes -= bsize) >= bsize); } while ((nbytes -= bsize) >= bsize);
memcpy(walk->iv, iv, bsize); memcpy(oiv, iv, bsize);
out:
return nbytes; return nbytes;
} }
static int crypto_cbc_decrypt_inplace(struct skcipher_walk *walk, static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
struct crypto_skcipher *skcipher) u8 *src, unsigned nbytes, u8 *iv)
{ {
unsigned int bsize = crypto_skcipher_blocksize(skcipher); unsigned int bsize = crypto_lskcipher_blocksize(tfm);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 last_iv[MAX_CIPHER_BLOCKSIZE]; u8 last_iv[MAX_CIPHER_BLOCKSIZE];
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
cipher = skcipher_cipher_simple(skcipher); if (nbytes < bsize)
tfm = crypto_cipher_tfm(cipher); goto out;
fn = crypto_cipher_alg(cipher)->cia_decrypt;
/* Start of the last block. */ /* Start of the last block. */
src += nbytes - (nbytes & (bsize - 1)) - bsize; src += nbytes - (nbytes & (bsize - 1)) - bsize;
memcpy(last_iv, src, bsize); memcpy(last_iv, src, bsize);
for (;;) { for (;;) {
fn(tfm, src, src); crypto_lskcipher_decrypt(tfm, src, src, bsize, NULL);
if ((nbytes -= bsize) < bsize) if ((nbytes -= bsize) < bsize)
break; break;
crypto_xor(src, src - bsize, bsize); crypto_xor(src, src - bsize, bsize);
src -= bsize; src -= bsize;
} }
crypto_xor(src, walk->iv, bsize); crypto_xor(src, iv, bsize);
memcpy(walk->iv, last_iv, bsize); memcpy(iv, last_iv, bsize);
out:
return nbytes; return nbytes;
} }
static int crypto_cbc_decrypt(struct skcipher_request *req) static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv, bool final)
{ {
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
struct skcipher_walk walk; struct crypto_lskcipher *cipher = *ctx;
int err; int rem;
err = skcipher_walk_virt(&walk, req, false); if (src == dst)
rem = crypto_cbc_decrypt_inplace(cipher, dst, len, iv);
else
rem = crypto_cbc_decrypt_segment(cipher, src, dst, len, iv);
while (walk.nbytes) { return rem && final ? -EINVAL : rem;
if (walk.src.virt.addr == walk.dst.virt.addr)
err = crypto_cbc_decrypt_inplace(&walk, skcipher);
else
err = crypto_cbc_decrypt_segment(&walk, skcipher);
err = skcipher_walk_done(&walk, err);
}
return err;
} }
static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb) static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb)
{ {
struct skcipher_instance *inst; struct lskcipher_instance *inst;
struct crypto_alg *alg;
int err; int err;
inst = skcipher_alloc_instance_simple(tmpl, tb); inst = lskcipher_alloc_instance_simple(tmpl, tb);
if (IS_ERR(inst)) if (IS_ERR(inst))
return PTR_ERR(inst); return PTR_ERR(inst);
alg = skcipher_ialg_simple(inst);
err = -EINVAL; err = -EINVAL;
if (!is_power_of_2(alg->cra_blocksize)) if (!is_power_of_2(inst->alg.co.base.cra_blocksize))
goto out_free_inst; goto out_free_inst;
inst->alg.encrypt = crypto_cbc_encrypt; inst->alg.encrypt = crypto_cbc_encrypt;
inst->alg.decrypt = crypto_cbc_decrypt; inst->alg.decrypt = crypto_cbc_decrypt;
err = skcipher_register_instance(tmpl, inst); err = lskcipher_register_instance(tmpl, inst);
if (err) { if (err) {
out_free_inst: out_free_inst:
inst->free(inst); inst->free(inst);

crypto/ccm.c

@ -56,6 +56,7 @@ struct cbcmac_tfm_ctx {
struct cbcmac_desc_ctx { struct cbcmac_desc_ctx {
unsigned int len; unsigned int len;
u8 dg[];
}; };
static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx( static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx(
@ -447,10 +448,10 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
const char *ctr_name, const char *ctr_name,
const char *mac_name) const char *mac_name)
{ {
struct skcipher_alg_common *ctr;
u32 mask; u32 mask;
struct aead_instance *inst; struct aead_instance *inst;
struct ccm_instance_ctx *ictx; struct ccm_instance_ctx *ictx;
struct skcipher_alg *ctr;
struct hash_alg_common *mac; struct hash_alg_common *mac;
int err; int err;
@ -478,13 +479,12 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
ctr_name, 0, mask); ctr_name, 0, mask);
if (err) if (err)
goto err_free_inst; goto err_free_inst;
ctr = crypto_spawn_skcipher_alg(&ictx->ctr); ctr = crypto_spawn_skcipher_alg_common(&ictx->ctr);
/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */ /* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
err = -EINVAL; err = -EINVAL;
if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 || if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
crypto_skcipher_alg_ivsize(ctr) != 16 || ctr->ivsize != 16 || ctr->base.cra_blocksize != 1)
ctr->base.cra_blocksize != 1)
goto err_free_inst; goto err_free_inst;
/* ctr and cbcmac must use the same underlying block cipher. */ /* ctr and cbcmac must use the same underlying block cipher. */
@ -504,10 +504,9 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
inst->alg.base.cra_priority = (mac->base.cra_priority + inst->alg.base.cra_priority = (mac->base.cra_priority +
ctr->base.cra_priority) / 2; ctr->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1; inst->alg.base.cra_blocksize = 1;
inst->alg.base.cra_alignmask = mac->base.cra_alignmask | inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
ctr->base.cra_alignmask;
inst->alg.ivsize = 16; inst->alg.ivsize = 16;
inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr); inst->alg.chunksize = ctr->chunksize;
inst->alg.maxauthsize = 16; inst->alg.maxauthsize = 16;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_ccm_ctx); inst->alg.base.cra_ctxsize = sizeof(struct crypto_ccm_ctx);
inst->alg.init = crypto_ccm_init_tfm; inst->alg.init = crypto_ccm_init_tfm;
@ -786,10 +785,9 @@ static int crypto_cbcmac_digest_init(struct shash_desc *pdesc)
{ {
struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
int bs = crypto_shash_digestsize(pdesc->tfm); int bs = crypto_shash_digestsize(pdesc->tfm);
u8 *dg = (u8 *)ctx + crypto_shash_descsize(pdesc->tfm) - bs;
ctx->len = 0; ctx->len = 0;
memset(dg, 0, bs); memset(ctx->dg, 0, bs);
return 0; return 0;
} }
@ -802,18 +800,17 @@ static int crypto_cbcmac_digest_update(struct shash_desc *pdesc, const u8 *p,
struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child; struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_digestsize(parent); int bs = crypto_shash_digestsize(parent);
u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs;
while (len > 0) { while (len > 0) {
unsigned int l = min(len, bs - ctx->len); unsigned int l = min(len, bs - ctx->len);
crypto_xor(dg + ctx->len, p, l); crypto_xor(&ctx->dg[ctx->len], p, l);
ctx->len +=l; ctx->len +=l;
len -= l; len -= l;
p += l; p += l;
if (ctx->len == bs) { if (ctx->len == bs) {
crypto_cipher_encrypt_one(tfm, dg, dg); crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg);
ctx->len = 0; ctx->len = 0;
} }
} }
@ -828,12 +825,11 @@ static int crypto_cbcmac_digest_final(struct shash_desc *pdesc, u8 *out)
struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child; struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_digestsize(parent); int bs = crypto_shash_digestsize(parent);
u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs;
if (ctx->len) if (ctx->len)
crypto_cipher_encrypt_one(tfm, dg, dg); crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg);
memcpy(out, dg, bs); memcpy(out, ctx->dg, bs);
return 0; return 0;
} }
@ -890,8 +886,7 @@ static int cbcmac_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.base.cra_blocksize = 1; inst->alg.base.cra_blocksize = 1;
inst->alg.digestsize = alg->cra_blocksize; inst->alg.digestsize = alg->cra_blocksize;
inst->alg.descsize = ALIGN(sizeof(struct cbcmac_desc_ctx), inst->alg.descsize = sizeof(struct cbcmac_desc_ctx) +
alg->cra_alignmask + 1) +
alg->cra_blocksize; alg->cra_blocksize;
inst->alg.base.cra_ctxsize = sizeof(struct cbcmac_tfm_ctx); inst->alg.base.cra_ctxsize = sizeof(struct cbcmac_tfm_ctx);

crypto/chacha20poly1305.c

@@ -558,7 +558,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
 	u32 mask;
 	struct aead_instance *inst;
 	struct chachapoly_instance_ctx *ctx;
-	struct skcipher_alg *chacha;
+	struct skcipher_alg_common *chacha;
 	struct hash_alg_common *poly;
 	int err;
@@ -579,7 +579,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
 				   crypto_attr_alg_name(tb[1]), 0, mask);
 	if (err)
 		goto err_free_inst;
-	chacha = crypto_spawn_skcipher_alg(&ctx->chacha);
+	chacha = crypto_spawn_skcipher_alg_common(&ctx->chacha);
 	err = crypto_grab_ahash(&ctx->poly, aead_crypto_instance(inst),
 				crypto_attr_alg_name(tb[2]), 0, mask);
@@ -591,7 +591,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
 	if (poly->digestsize != POLY1305_DIGEST_SIZE)
 		goto err_free_inst;
 	/* Need 16-byte IV size, including Initial Block Counter value */
-	if (crypto_skcipher_alg_ivsize(chacha) != CHACHA_IV_SIZE)
+	if (chacha->ivsize != CHACHA_IV_SIZE)
 		goto err_free_inst;
 	/* Not a stream cipher? */
 	if (chacha->base.cra_blocksize != 1)
@@ -610,12 +610,11 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
 	inst->alg.base.cra_priority = (chacha->base.cra_priority +
 				       poly->base.cra_priority) / 2;
 	inst->alg.base.cra_blocksize = 1;
-	inst->alg.base.cra_alignmask = chacha->base.cra_alignmask |
-				       poly->base.cra_alignmask;
+	inst->alg.base.cra_alignmask = chacha->base.cra_alignmask;
 	inst->alg.base.cra_ctxsize = sizeof(struct chachapoly_ctx) +
 				     ctx->saltlen;
 	inst->alg.ivsize = ivsize;
-	inst->alg.chunksize = crypto_skcipher_alg_chunksize(chacha);
+	inst->alg.chunksize = chacha->chunksize;
 	inst->alg.maxauthsize = POLY1305_DIGEST_SIZE;
 	inst->alg.init = chachapoly_init;
 	inst->alg.exit = chachapoly_exit;

crypto/cmac.c

@ -28,7 +28,7 @@
*/ */
struct cmac_tfm_ctx { struct cmac_tfm_ctx {
struct crypto_cipher *child; struct crypto_cipher *child;
u8 ctx[]; __be64 consts[];
}; };
/* /*
@ -44,17 +44,15 @@ struct cmac_tfm_ctx {
*/ */
struct cmac_desc_ctx { struct cmac_desc_ctx {
unsigned int len; unsigned int len;
u8 ctx[]; u8 odds[];
}; };
static int crypto_cmac_digest_setkey(struct crypto_shash *parent, static int crypto_cmac_digest_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen) const u8 *inkey, unsigned int keylen)
{ {
unsigned long alignmask = crypto_shash_alignmask(parent);
struct cmac_tfm_ctx *ctx = crypto_shash_ctx(parent); struct cmac_tfm_ctx *ctx = crypto_shash_ctx(parent);
unsigned int bs = crypto_shash_blocksize(parent); unsigned int bs = crypto_shash_blocksize(parent);
__be64 *consts = PTR_ALIGN((void *)ctx->ctx, __be64 *consts = ctx->consts;
(alignmask | (__alignof__(__be64) - 1)) + 1);
u64 _const[2]; u64 _const[2];
int i, err = 0; int i, err = 0;
u8 msb_mask, gfmask; u8 msb_mask, gfmask;
@ -104,10 +102,9 @@ static int crypto_cmac_digest_setkey(struct crypto_shash *parent,
static int crypto_cmac_digest_init(struct shash_desc *pdesc) static int crypto_cmac_digest_init(struct shash_desc *pdesc)
{ {
unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm);
struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
int bs = crypto_shash_blocksize(pdesc->tfm); int bs = crypto_shash_blocksize(pdesc->tfm);
u8 *prev = PTR_ALIGN((void *)ctx->ctx, alignmask + 1) + bs; u8 *prev = &ctx->odds[bs];
ctx->len = 0; ctx->len = 0;
memset(prev, 0, bs); memset(prev, 0, bs);
@ -119,12 +116,11 @@ static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p,
unsigned int len) unsigned int len)
{ {
struct crypto_shash *parent = pdesc->tfm; struct crypto_shash *parent = pdesc->tfm;
unsigned long alignmask = crypto_shash_alignmask(parent);
struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent); struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child; struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent); int bs = crypto_shash_blocksize(parent);
u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1); u8 *odds = ctx->odds;
u8 *prev = odds + bs; u8 *prev = odds + bs;
/* checking the data can fill the block */ /* checking the data can fill the block */
@ -165,14 +161,11 @@ static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p,
static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out) static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out)
{ {
struct crypto_shash *parent = pdesc->tfm; struct crypto_shash *parent = pdesc->tfm;
unsigned long alignmask = crypto_shash_alignmask(parent);
struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent); struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child; struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent); int bs = crypto_shash_blocksize(parent);
u8 *consts = PTR_ALIGN((void *)tctx->ctx, u8 *odds = ctx->odds;
(alignmask | (__alignof__(__be64) - 1)) + 1);
u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1);
u8 *prev = odds + bs; u8 *prev = odds + bs;
unsigned int offset = 0; unsigned int offset = 0;
@ -191,7 +184,7 @@ static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out)
} }
crypto_xor(prev, odds, bs); crypto_xor(prev, odds, bs);
crypto_xor(prev, consts + offset, bs); crypto_xor(prev, (const u8 *)tctx->consts + offset, bs);
crypto_cipher_encrypt_one(tfm, out, prev); crypto_cipher_encrypt_one(tfm, out, prev);
@ -241,7 +234,6 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb)
struct shash_instance *inst; struct shash_instance *inst;
struct crypto_cipher_spawn *spawn; struct crypto_cipher_spawn *spawn;
struct crypto_alg *alg; struct crypto_alg *alg;
unsigned long alignmask;
u32 mask; u32 mask;
int err; int err;
@ -273,23 +265,14 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb)
if (err) if (err)
goto err_free_inst; goto err_free_inst;
alignmask = alg->cra_alignmask;
inst->alg.base.cra_alignmask = alignmask;
inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize; inst->alg.base.cra_blocksize = alg->cra_blocksize;
inst->alg.base.cra_ctxsize = sizeof(struct cmac_tfm_ctx) +
alg->cra_blocksize * 2;
inst->alg.digestsize = alg->cra_blocksize; inst->alg.digestsize = alg->cra_blocksize;
inst->alg.descsize = inst->alg.descsize = sizeof(struct cmac_desc_ctx) +
ALIGN(sizeof(struct cmac_desc_ctx), crypto_tfm_ctx_alignment()) alg->cra_blocksize * 2;
+ (alignmask & ~(crypto_tfm_ctx_alignment() - 1))
+ alg->cra_blocksize * 2;
inst->alg.base.cra_ctxsize =
ALIGN(sizeof(struct cmac_tfm_ctx), crypto_tfm_ctx_alignment())
+ ((alignmask | (__alignof__(__be64) - 1)) &
~(crypto_tfm_ctx_alignment() - 1))
+ alg->cra_blocksize * 2;
inst->alg.init = crypto_cmac_digest_init; inst->alg.init = crypto_cmac_digest_init;
inst->alg.update = crypto_cmac_digest_update; inst->alg.update = crypto_cmac_digest_update;
inst->alg.final = crypto_cmac_digest_final; inst->alg.final = crypto_cmac_digest_final;

crypto/cryptd.c

@@ -377,7 +377,7 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
 {
 	struct skcipherd_instance_ctx *ctx;
 	struct skcipher_instance *inst;
-	struct skcipher_alg *alg;
+	struct skcipher_alg_common *alg;
 	u32 type;
 	u32 mask;
 	int err;
@@ -396,17 +396,17 @@
 	if (err)
 		goto err_free_inst;
-	alg = crypto_spawn_skcipher_alg(&ctx->spawn);
+	alg = crypto_spawn_skcipher_alg_common(&ctx->spawn);
 	err = cryptd_init_instance(skcipher_crypto_instance(inst), &alg->base);
 	if (err)
 		goto err_free_inst;
 	inst->alg.base.cra_flags |= CRYPTO_ALG_ASYNC |
 				    (alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
-	inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
-	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
-	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
-	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+	inst->alg.ivsize = alg->ivsize;
+	inst->alg.chunksize = alg->chunksize;
+	inst->alg.min_keysize = alg->min_keysize;
+	inst->alg.max_keysize = alg->max_keysize;
 	inst->alg.base.cra_ctxsize = sizeof(struct cryptd_skcipher_ctx);
@@ -929,7 +929,7 @@ static int cryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
 		return PTR_ERR(algt);
 	switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
-	case CRYPTO_ALG_TYPE_SKCIPHER:
+	case CRYPTO_ALG_TYPE_LSKCIPHER:
 		return cryptd_create_skcipher(tmpl, tb, algt, &queue);
 	case CRYPTO_ALG_TYPE_HASH:
 		return cryptd_create_hash(tmpl, tb, algt, &queue);

crypto/crypto_engine.c

@@ -552,20 +552,16 @@ EXPORT_SYMBOL_GPL(crypto_engine_alloc_init);
 /**
  * crypto_engine_exit - free the resources of hardware engine when exit
  * @engine: the hardware engine need to be freed
- *
- * Return 0 for success.
  */
-int crypto_engine_exit(struct crypto_engine *engine)
+void crypto_engine_exit(struct crypto_engine *engine)
 {
 	int ret;
 	ret = crypto_engine_stop(engine);
 	if (ret)
-		return ret;
+		return;
 	kthread_destroy_worker(engine->kworker);
-	return 0;
 }
 EXPORT_SYMBOL_GPL(crypto_engine_exit);
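
With this change crypto_engine_exit() no longer reports a status, so callers simply stop checking it. A minimal, hypothetical teardown helper showing the updated call site (the structure and helper name below are invented for illustration):

    #include <crypto/engine.h>

    /* 'struct example_dev' is made up; it just holds the engine pointer. */
    struct example_dev {
        struct crypto_engine *engine;
    };

    static void example_teardown(struct example_dev *dd)
    {
        /* No return value to propagate any more. */
        crypto_engine_exit(dd->engine);
    }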

crypto/ctr.c

@@ -258,8 +258,8 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
 				 struct rtattr **tb)
 {
 	struct skcipher_instance *inst;
-	struct skcipher_alg *alg;
 	struct crypto_skcipher_spawn *spawn;
+	struct skcipher_alg_common *alg;
 	u32 mask;
 	int err;
@@ -278,11 +278,11 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
 	if (err)
 		goto err_free_inst;
-	alg = crypto_spawn_skcipher_alg(spawn);
+	alg = crypto_spawn_skcipher_alg_common(spawn);
 	/* We only support 16-byte blocks. */
 	err = -EINVAL;
-	if (crypto_skcipher_alg_ivsize(alg) != CTR_RFC3686_BLOCK_SIZE)
+	if (alg->ivsize != CTR_RFC3686_BLOCK_SIZE)
 		goto err_free_inst;
 	/* Not a stream cipher? */
@@ -303,11 +303,9 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
 	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
 	inst->alg.ivsize = CTR_RFC3686_IV_SIZE;
-	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
-	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
-				CTR_RFC3686_NONCE_SIZE;
-	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) +
-				CTR_RFC3686_NONCE_SIZE;
+	inst->alg.chunksize = alg->chunksize;
+	inst->alg.min_keysize = alg->min_keysize + CTR_RFC3686_NONCE_SIZE;
+	inst->alg.max_keysize = alg->max_keysize + CTR_RFC3686_NONCE_SIZE;
 	inst->alg.setkey = crypto_rfc3686_setkey;
 	inst->alg.encrypt = crypto_rfc3686_crypt;

crypto/cts.c

@@ -324,8 +324,8 @@ static void crypto_cts_free(struct skcipher_instance *inst)
 static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
 	struct crypto_skcipher_spawn *spawn;
+	struct skcipher_alg_common *alg;
 	struct skcipher_instance *inst;
-	struct skcipher_alg *alg;
 	u32 mask;
 	int err;
@@ -344,10 +344,10 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
 	if (err)
 		goto err_free_inst;
-	alg = crypto_spawn_skcipher_alg(spawn);
+	alg = crypto_spawn_skcipher_alg_common(spawn);
 	err = -EINVAL;
-	if (crypto_skcipher_alg_ivsize(alg) != alg->base.cra_blocksize)
+	if (alg->ivsize != alg->base.cra_blocksize)
 		goto err_free_inst;
 	if (strncmp(alg->base.cra_name, "cbc(", 4))
@@ -363,9 +363,9 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
 	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
 	inst->alg.ivsize = alg->base.cra_blocksize;
-	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
-	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
-	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+	inst->alg.chunksize = alg->chunksize;
+	inst->alg.min_keysize = alg->min_keysize;
+	inst->alg.max_keysize = alg->max_keysize;
 	inst->alg.base.cra_ctxsize = sizeof(struct crypto_cts_ctx);
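
The ctr and cts conversions above follow the same pattern as the other templates in this series: grab the spawn, then read ivsize, chunksize and the key limits straight from struct skcipher_alg_common instead of going through the crypto_skcipher_alg_*() accessors. A hypothetical helper illustrating only that pattern (the helper name is invented, it is not part of the diff):

    #include <crypto/internal/skcipher.h>

    /* Sketch: copy the underlying algorithm's parameters into an instance. */
    static void example_copy_skcipher_params(struct skcipher_instance *inst,
                                             struct crypto_skcipher_spawn *spawn)
    {
        struct skcipher_alg_common *alg = crypto_spawn_skcipher_alg_common(spawn);

        inst->alg.ivsize = alg->ivsize;
        inst->alg.chunksize = alg->chunksize;
        inst->alg.min_keysize = alg->min_keysize;
        inst->alg.max_keysize = alg->max_keysize;
        inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
    }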

crypto/deflate.c

@ -39,24 +39,20 @@ struct deflate_ctx {
struct z_stream_s decomp_stream; struct z_stream_s decomp_stream;
}; };
static int deflate_comp_init(struct deflate_ctx *ctx, int format) static int deflate_comp_init(struct deflate_ctx *ctx)
{ {
int ret = 0; int ret = 0;
struct z_stream_s *stream = &ctx->comp_stream; struct z_stream_s *stream = &ctx->comp_stream;
stream->workspace = vzalloc(zlib_deflate_workspacesize( stream->workspace = vzalloc(zlib_deflate_workspacesize(
MAX_WBITS, MAX_MEM_LEVEL)); -DEFLATE_DEF_WINBITS, MAX_MEM_LEVEL));
if (!stream->workspace) { if (!stream->workspace) {
ret = -ENOMEM; ret = -ENOMEM;
goto out; goto out;
} }
if (format) ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
ret = zlib_deflateInit(stream, 3); -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL,
else Z_DEFAULT_STRATEGY);
ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
-DEFLATE_DEF_WINBITS,
DEFLATE_DEF_MEMLEVEL,
Z_DEFAULT_STRATEGY);
if (ret != Z_OK) { if (ret != Z_OK) {
ret = -EINVAL; ret = -EINVAL;
goto out_free; goto out_free;
@ -68,7 +64,7 @@ out_free:
goto out; goto out;
} }
static int deflate_decomp_init(struct deflate_ctx *ctx, int format) static int deflate_decomp_init(struct deflate_ctx *ctx)
{ {
int ret = 0; int ret = 0;
struct z_stream_s *stream = &ctx->decomp_stream; struct z_stream_s *stream = &ctx->decomp_stream;
@ -78,10 +74,7 @@ static int deflate_decomp_init(struct deflate_ctx *ctx, int format)
ret = -ENOMEM; ret = -ENOMEM;
goto out; goto out;
} }
if (format) ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
ret = zlib_inflateInit(stream);
else
ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
if (ret != Z_OK) { if (ret != Z_OK) {
ret = -EINVAL; ret = -EINVAL;
goto out_free; goto out_free;
@ -105,21 +98,21 @@ static void deflate_decomp_exit(struct deflate_ctx *ctx)
vfree(ctx->decomp_stream.workspace); vfree(ctx->decomp_stream.workspace);
} }
static int __deflate_init(void *ctx, int format) static int __deflate_init(void *ctx)
{ {
int ret; int ret;
ret = deflate_comp_init(ctx, format); ret = deflate_comp_init(ctx);
if (ret) if (ret)
goto out; goto out;
ret = deflate_decomp_init(ctx, format); ret = deflate_decomp_init(ctx);
if (ret) if (ret)
deflate_comp_exit(ctx); deflate_comp_exit(ctx);
out: out:
return ret; return ret;
} }
static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format) static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
{ {
struct deflate_ctx *ctx; struct deflate_ctx *ctx;
int ret; int ret;
@ -128,7 +121,7 @@ static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format)
if (!ctx) if (!ctx)
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
ret = __deflate_init(ctx, format); ret = __deflate_init(ctx);
if (ret) { if (ret) {
kfree(ctx); kfree(ctx);
return ERR_PTR(ret); return ERR_PTR(ret);
@ -137,21 +130,11 @@ static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format)
return ctx; return ctx;
} }
static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
{
return gen_deflate_alloc_ctx(tfm, 0);
}
static void *zlib_deflate_alloc_ctx(struct crypto_scomp *tfm)
{
return gen_deflate_alloc_ctx(tfm, 1);
}
static int deflate_init(struct crypto_tfm *tfm) static int deflate_init(struct crypto_tfm *tfm)
{ {
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm); struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
return __deflate_init(ctx, 0); return __deflate_init(ctx);
} }
static void __deflate_exit(void *ctx) static void __deflate_exit(void *ctx)
@ -286,7 +269,7 @@ static struct crypto_alg alg = {
.coa_decompress = deflate_decompress } } .coa_decompress = deflate_decompress } }
}; };
static struct scomp_alg scomp[] = { { static struct scomp_alg scomp = {
.alloc_ctx = deflate_alloc_ctx, .alloc_ctx = deflate_alloc_ctx,
.free_ctx = deflate_free_ctx, .free_ctx = deflate_free_ctx,
.compress = deflate_scompress, .compress = deflate_scompress,
@ -296,17 +279,7 @@ static struct scomp_alg scomp[] = { {
.cra_driver_name = "deflate-scomp", .cra_driver_name = "deflate-scomp",
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
} }
}, { };
.alloc_ctx = zlib_deflate_alloc_ctx,
.free_ctx = deflate_free_ctx,
.compress = deflate_scompress,
.decompress = deflate_sdecompress,
.base = {
.cra_name = "zlib-deflate",
.cra_driver_name = "zlib-deflate-scomp",
.cra_module = THIS_MODULE,
}
} };
static int __init deflate_mod_init(void) static int __init deflate_mod_init(void)
{ {
@ -316,7 +289,7 @@ static int __init deflate_mod_init(void)
if (ret) if (ret)
return ret; return ret;
ret = crypto_register_scomps(scomp, ARRAY_SIZE(scomp)); ret = crypto_register_scomp(&scomp);
if (ret) { if (ret) {
crypto_unregister_alg(&alg); crypto_unregister_alg(&alg);
return ret; return ret;
@ -328,7 +301,7 @@ static int __init deflate_mod_init(void)
static void __exit deflate_mod_fini(void) static void __exit deflate_mod_fini(void)
{ {
crypto_unregister_alg(&alg); crypto_unregister_alg(&alg);
crypto_unregister_scomps(scomp, ARRAY_SIZE(scomp)); crypto_unregister_scomp(&scomp);
} }
subsys_initcall(deflate_mod_init); subsys_initcall(deflate_mod_init);

crypto/drbg.c

@@ -1698,7 +1698,7 @@ static int drbg_init_hash_kernel(struct drbg_state *drbg)
 	sdesc->shash.tfm = tfm;
 	drbg->priv_data = sdesc;
-	return crypto_shash_alignmask(tfm);
+	return 0;
 }
 static int drbg_fini_hash_kernel(struct drbg_state *drbg)
static int drbg_fini_hash_kernel(struct drbg_state *drbg) static int drbg_fini_hash_kernel(struct drbg_state *drbg)

crypto/ecb.c

@ -5,75 +5,196 @@
* Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au> * Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
*/ */
#include <crypto/algapi.h>
#include <crypto/internal/cipher.h> #include <crypto/internal/cipher.h>
#include <crypto/internal/skcipher.h> #include <crypto/internal/skcipher.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/slab.h>
static int crypto_ecb_crypt(struct skcipher_request *req, static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
struct crypto_cipher *cipher, u8 *dst, unsigned nbytes, bool final,
void (*fn)(struct crypto_tfm *, u8 *, const u8 *)) void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
{ {
const unsigned int bsize = crypto_cipher_blocksize(cipher); const unsigned int bsize = crypto_cipher_blocksize(cipher);
struct skcipher_walk walk;
unsigned int nbytes; while (nbytes >= bsize) {
fn(crypto_cipher_tfm(cipher), dst, src);
src += bsize;
dst += bsize;
nbytes -= bsize;
}
return nbytes && final ? -EINVAL : nbytes;
}
static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv, bool final)
{
struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
struct crypto_cipher *cipher = *ctx;
return crypto_ecb_crypt(cipher, src, dst, len, final,
crypto_cipher_alg(cipher)->cia_encrypt);
}
static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv, bool final)
{
struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
struct crypto_cipher *cipher = *ctx;
return crypto_ecb_crypt(cipher, src, dst, len, final,
crypto_cipher_alg(cipher)->cia_decrypt);
}
static int lskcipher_setkey_simple2(struct crypto_lskcipher *tfm,
const u8 *key, unsigned int keylen)
{
struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
struct crypto_cipher *cipher = *ctx;
crypto_cipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
return crypto_cipher_setkey(cipher, key, keylen);
}
static int lskcipher_init_tfm_simple2(struct crypto_lskcipher *tfm)
{
struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
struct crypto_cipher_spawn *spawn;
struct crypto_cipher *cipher;
spawn = lskcipher_instance_ctx(inst);
cipher = crypto_spawn_cipher(spawn);
if (IS_ERR(cipher))
return PTR_ERR(cipher);
*ctx = cipher;
return 0;
}
static void lskcipher_exit_tfm_simple2(struct crypto_lskcipher *tfm)
{
struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
crypto_free_cipher(*ctx);
}
static void lskcipher_free_instance_simple2(struct lskcipher_instance *inst)
{
crypto_drop_cipher(lskcipher_instance_ctx(inst));
kfree(inst);
}
static struct lskcipher_instance *lskcipher_alloc_instance_simple2(
struct crypto_template *tmpl, struct rtattr **tb)
{
struct crypto_cipher_spawn *spawn;
struct lskcipher_instance *inst;
struct crypto_alg *cipher_alg;
u32 mask;
int err; int err;
err = skcipher_walk_virt(&walk, req, false); err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
if (err)
return ERR_PTR(err);
while ((nbytes = walk.nbytes) != 0) { inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
const u8 *src = walk.src.virt.addr; if (!inst)
u8 *dst = walk.dst.virt.addr; return ERR_PTR(-ENOMEM);
spawn = lskcipher_instance_ctx(inst);
do { err = crypto_grab_cipher(spawn, lskcipher_crypto_instance(inst),
fn(crypto_cipher_tfm(cipher), dst, src); crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
goto err_free_inst;
cipher_alg = crypto_spawn_cipher_alg(spawn);
src += bsize; err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
dst += bsize; cipher_alg);
} while ((nbytes -= bsize) >= bsize); if (err)
goto err_free_inst;
err = skcipher_walk_done(&walk, nbytes); inst->free = lskcipher_free_instance_simple2;
}
/* Default algorithm properties, can be overridden */
inst->alg.co.base.cra_blocksize = cipher_alg->cra_blocksize;
inst->alg.co.base.cra_alignmask = cipher_alg->cra_alignmask;
inst->alg.co.base.cra_priority = cipher_alg->cra_priority;
inst->alg.co.min_keysize = cipher_alg->cra_cipher.cia_min_keysize;
inst->alg.co.max_keysize = cipher_alg->cra_cipher.cia_max_keysize;
inst->alg.co.ivsize = cipher_alg->cra_blocksize;
/* Use struct crypto_cipher * by default, can be overridden */
inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_cipher *);
inst->alg.setkey = lskcipher_setkey_simple2;
inst->alg.init = lskcipher_init_tfm_simple2;
inst->alg.exit = lskcipher_exit_tfm_simple2;
return inst;
err_free_inst:
lskcipher_free_instance_simple2(inst);
return ERR_PTR(err);
}
static int crypto_ecb_create2(struct crypto_template *tmpl, struct rtattr **tb)
{
struct lskcipher_instance *inst;
int err;
inst = lskcipher_alloc_instance_simple2(tmpl, tb);
if (IS_ERR(inst))
return PTR_ERR(inst);
/* ECB mode doesn't take an IV */
inst->alg.co.ivsize = 0;
inst->alg.encrypt = crypto_ecb_encrypt2;
inst->alg.decrypt = crypto_ecb_decrypt2;
err = lskcipher_register_instance(tmpl, inst);
if (err)
inst->free(inst);
return err; return err;
} }
static int crypto_ecb_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
return crypto_ecb_crypt(req, cipher,
crypto_cipher_alg(cipher)->cia_encrypt);
}
static int crypto_ecb_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
return crypto_ecb_crypt(req, cipher,
crypto_cipher_alg(cipher)->cia_decrypt);
}
static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb) static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
{ {
struct skcipher_instance *inst; struct crypto_lskcipher_spawn *spawn;
struct lskcipher_alg *cipher_alg;
struct lskcipher_instance *inst;
int err; int err;
inst = skcipher_alloc_instance_simple(tmpl, tb); inst = lskcipher_alloc_instance_simple(tmpl, tb);
if (IS_ERR(inst)) if (IS_ERR(inst)) {
return PTR_ERR(inst); err = crypto_ecb_create2(tmpl, tb);
return err;
}
inst->alg.ivsize = 0; /* ECB mode doesn't take an IV */ spawn = lskcipher_instance_ctx(inst);
cipher_alg = crypto_lskcipher_spawn_alg(spawn);
inst->alg.encrypt = crypto_ecb_encrypt; /* ECB mode doesn't take an IV */
inst->alg.decrypt = crypto_ecb_decrypt; inst->alg.co.ivsize = 0;
if (cipher_alg->co.ivsize)
return -EINVAL;
err = skcipher_register_instance(tmpl, inst); inst->alg.co.base.cra_ctxsize = cipher_alg->co.base.cra_ctxsize;
inst->alg.setkey = cipher_alg->setkey;
inst->alg.encrypt = cipher_alg->encrypt;
inst->alg.decrypt = cipher_alg->decrypt;
inst->alg.init = cipher_alg->init;
inst->alg.exit = cipher_alg->exit;
err = lskcipher_register_instance(tmpl, inst);
if (err) if (err)
inst->free(inst); inst->free(inst);
@ -102,3 +223,4 @@ module_exit(crypto_ecb_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("ECB block cipher mode of operation"); MODULE_DESCRIPTION("ECB block cipher mode of operation");
MODULE_ALIAS_CRYPTO("ecb"); MODULE_ALIAS_CRYPTO("ecb");
MODULE_IMPORT_NS(CRYPTO_INTERNAL);

crypto/essiv.c

@ -442,6 +442,7 @@ out:
static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb) static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
{ {
struct skcipher_alg_common *skcipher_alg = NULL;
struct crypto_attr_type *algt; struct crypto_attr_type *algt;
const char *inner_cipher_name; const char *inner_cipher_name;
const char *shash_name; const char *shash_name;
@ -450,7 +451,6 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
struct crypto_instance *inst; struct crypto_instance *inst;
struct crypto_alg *base, *block_base; struct crypto_alg *base, *block_base;
struct essiv_instance_ctx *ictx; struct essiv_instance_ctx *ictx;
struct skcipher_alg *skcipher_alg = NULL;
struct aead_alg *aead_alg = NULL; struct aead_alg *aead_alg = NULL;
struct crypto_alg *_hash_alg; struct crypto_alg *_hash_alg;
struct shash_alg *hash_alg; struct shash_alg *hash_alg;
@ -475,7 +475,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
mask = crypto_algt_inherited_mask(algt); mask = crypto_algt_inherited_mask(algt);
switch (type) { switch (type) {
case CRYPTO_ALG_TYPE_SKCIPHER: case CRYPTO_ALG_TYPE_LSKCIPHER:
skcipher_inst = kzalloc(sizeof(*skcipher_inst) + skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
sizeof(*ictx), GFP_KERNEL); sizeof(*ictx), GFP_KERNEL);
if (!skcipher_inst) if (!skcipher_inst)
@ -489,9 +489,10 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
inner_cipher_name, 0, mask); inner_cipher_name, 0, mask);
if (err) if (err)
goto out_free_inst; goto out_free_inst;
skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn); skcipher_alg = crypto_spawn_skcipher_alg_common(
&ictx->u.skcipher_spawn);
block_base = &skcipher_alg->base; block_base = &skcipher_alg->base;
ivsize = crypto_skcipher_alg_ivsize(skcipher_alg); ivsize = skcipher_alg->ivsize;
break; break;
case CRYPTO_ALG_TYPE_AEAD: case CRYPTO_ALG_TYPE_AEAD:
@ -574,18 +575,17 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
base->cra_alignmask = block_base->cra_alignmask; base->cra_alignmask = block_base->cra_alignmask;
base->cra_priority = block_base->cra_priority; base->cra_priority = block_base->cra_priority;
if (type == CRYPTO_ALG_TYPE_SKCIPHER) { if (type == CRYPTO_ALG_TYPE_LSKCIPHER) {
skcipher_inst->alg.setkey = essiv_skcipher_setkey; skcipher_inst->alg.setkey = essiv_skcipher_setkey;
skcipher_inst->alg.encrypt = essiv_skcipher_encrypt; skcipher_inst->alg.encrypt = essiv_skcipher_encrypt;
skcipher_inst->alg.decrypt = essiv_skcipher_decrypt; skcipher_inst->alg.decrypt = essiv_skcipher_decrypt;
skcipher_inst->alg.init = essiv_skcipher_init_tfm; skcipher_inst->alg.init = essiv_skcipher_init_tfm;
skcipher_inst->alg.exit = essiv_skcipher_exit_tfm; skcipher_inst->alg.exit = essiv_skcipher_exit_tfm;
skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg); skcipher_inst->alg.min_keysize = skcipher_alg->min_keysize;
skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg); skcipher_inst->alg.max_keysize = skcipher_alg->max_keysize;
skcipher_inst->alg.ivsize = ivsize; skcipher_inst->alg.ivsize = ivsize;
skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg); skcipher_inst->alg.chunksize = skcipher_alg->chunksize;
skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg);
skcipher_inst->free = essiv_skcipher_free_instance; skcipher_inst->free = essiv_skcipher_free_instance;
@ -616,7 +616,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
out_free_hash: out_free_hash:
crypto_mod_put(_hash_alg); crypto_mod_put(_hash_alg);
out_drop_skcipher: out_drop_skcipher:
if (type == CRYPTO_ALG_TYPE_SKCIPHER) if (type == CRYPTO_ALG_TYPE_LSKCIPHER)
crypto_drop_skcipher(&ictx->u.skcipher_spawn); crypto_drop_skcipher(&ictx->u.skcipher_spawn);
else else
crypto_drop_aead(&ictx->u.aead_spawn); crypto_drop_aead(&ictx->u.aead_spawn);

crypto/gcm.c

@@ -576,10 +576,10 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
 				    const char *ctr_name,
 				    const char *ghash_name)
 {
+	struct skcipher_alg_common *ctr;
 	u32 mask;
 	struct aead_instance *inst;
 	struct gcm_instance_ctx *ctx;
-	struct skcipher_alg *ctr;
 	struct hash_alg_common *ghash;
 	int err;
@@ -607,13 +607,12 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
 				   ctr_name, 0, mask);
 	if (err)
 		goto err_free_inst;
-	ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+	ctr = crypto_spawn_skcipher_alg_common(&ctx->ctr);
 	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
 	err = -EINVAL;
 	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
-	    crypto_skcipher_alg_ivsize(ctr) != 16 ||
-	    ctr->base.cra_blocksize != 1)
+	    ctr->ivsize != 16 || ctr->base.cra_blocksize != 1)
 		goto err_free_inst;
 	err = -ENAMETOOLONG;
@@ -630,11 +629,10 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
 	inst->alg.base.cra_priority = (ghash->base.cra_priority +
 				       ctr->base.cra_priority) / 2;
 	inst->alg.base.cra_blocksize = 1;
-	inst->alg.base.cra_alignmask = ghash->base.cra_alignmask |
-				       ctr->base.cra_alignmask;
+	inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
 	inst->alg.base.cra_ctxsize = sizeof(struct crypto_gcm_ctx);
 	inst->alg.ivsize = GCM_AES_IV_SIZE;
-	inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr);
+	inst->alg.chunksize = ctr->chunksize;
 	inst->alg.maxauthsize = 16;
 	inst->alg.init = crypto_gcm_init_tfm;
 	inst->alg.exit = crypto_gcm_exit_tfm;

crypto/hash.h

@@ -12,6 +12,16 @@
 #include "internal.h"
+static inline struct crypto_istat_hash *hash_get_stat(
+	struct hash_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+	return &alg->stat;
+#else
+	return NULL;
+#endif
+}
 static inline int crypto_hash_report_stat(struct sk_buff *skb,
 					  struct crypto_alg *alg,
 					  const char *type)
@@ -31,9 +41,7 @@ static inline int crypto_hash_report_stat(struct sk_buff *skb,
 	return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash);
 }
-int crypto_init_shash_ops_async(struct crypto_tfm *tfm);
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
-						  struct crypto_ahash *hash);
+extern const struct crypto_type crypto_shash_type;
 int hash_prepare_alg(struct hash_alg_common *alg);

crypto/hash_info.c

@@ -29,6 +29,9 @@ const char *const hash_algo_name[HASH_ALGO__LAST] = {
 	[HASH_ALGO_SM3_256]	= "sm3",
 	[HASH_ALGO_STREEBOG_256] = "streebog256",
 	[HASH_ALGO_STREEBOG_512] = "streebog512",
+	[HASH_ALGO_SHA3_256]	= "sha3-256",
+	[HASH_ALGO_SHA3_384]	= "sha3-384",
+	[HASH_ALGO_SHA3_512]	= "sha3-512",
 };
 EXPORT_SYMBOL_GPL(hash_algo_name);
@@ -53,5 +56,8 @@ const int hash_digest_size[HASH_ALGO__LAST] = {
 	[HASH_ALGO_SM3_256]	= SM3256_DIGEST_SIZE,
 	[HASH_ALGO_STREEBOG_256] = STREEBOG256_DIGEST_SIZE,
 	[HASH_ALGO_STREEBOG_512] = STREEBOG512_DIGEST_SIZE,
+	[HASH_ALGO_SHA3_256]	= SHA3_256_DIGEST_SIZE,
+	[HASH_ALGO_SHA3_384]	= SHA3_384_DIGEST_SIZE,
+	[HASH_ALGO_SHA3_512]	= SHA3_512_DIGEST_SIZE,
 };
 EXPORT_SYMBOL_GPL(hash_digest_size);
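
These table entries let hash_info users such as IMA and module signing resolve the FIPS 202 SHA-3 algorithms by enum. A small, illustrative-only lookup (the function name is invented, not from the diff):

    #include <crypto/hash_info.h>
    #include <linux/printk.h>

    static void example_show_sha3(void)
    {
        /* With the entries above, this resolves to "sha3-384" / 48 bytes. */
        const char *name = hash_algo_name[HASH_ALGO_SHA3_384];
        int dsize = hash_digest_size[HASH_ALGO_SHA3_384];

        pr_info("%s digest size: %d\n", name, dsize);
    }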

crypto/hctr2.c

@@ -406,10 +406,10 @@ static int hctr2_create_common(struct crypto_template *tmpl,
 			       const char *xctr_name,
 			       const char *polyval_name)
 {
+	struct skcipher_alg_common *xctr_alg;
 	u32 mask;
 	struct skcipher_instance *inst;
 	struct hctr2_instance_ctx *ictx;
-	struct skcipher_alg *xctr_alg;
 	struct crypto_alg *blockcipher_alg;
 	struct shash_alg *polyval_alg;
 	char blockcipher_name[CRYPTO_MAX_ALG_NAME];
@@ -431,7 +431,7 @@ static int hctr2_create_common(struct crypto_template *tmpl,
 			       xctr_name, 0, mask);
 	if (err)
 		goto err_free_inst;
-	xctr_alg = crypto_spawn_skcipher_alg(&ictx->xctr_spawn);
+	xctr_alg = crypto_spawn_skcipher_alg_common(&ictx->xctr_spawn);
 	err = -EINVAL;
 	if (strncmp(xctr_alg->base.cra_name, "xctr(", 5))
@@ -485,8 +485,7 @@ static int hctr2_create_common(struct crypto_template *tmpl,
 	inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
 	inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx) +
 				     polyval_alg->statesize * 2;
-	inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask |
-				       polyval_alg->base.cra_alignmask;
+	inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask;
 	/*
 	 * The hash function is called twice, so it is weighted higher than the
 	 * xctr and blockcipher.
@@ -500,8 +499,8 @@ static int hctr2_create_common(struct crypto_template *tmpl,
 	inst->alg.decrypt = hctr2_decrypt;
 	inst->alg.init = hctr2_init_tfm;
 	inst->alg.exit = hctr2_exit_tfm;
-	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(xctr_alg);
-	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(xctr_alg);
+	inst->alg.min_keysize = xctr_alg->min_keysize;
+	inst->alg.max_keysize = xctr_alg->max_keysize;
 	inst->alg.ivsize = TWEAK_SIZE;
 	inst->free = hctr2_free_instance;

crypto/hmac.c

@ -24,31 +24,20 @@
struct hmac_ctx { struct hmac_ctx {
struct crypto_shash *hash; struct crypto_shash *hash;
/* Contains 'u8 ipad[statesize];', then 'u8 opad[statesize];' */
u8 pads[];
}; };
static inline void *align_ptr(void *p, unsigned int align)
{
return (void *)ALIGN((unsigned long)p, align);
}
static inline struct hmac_ctx *hmac_ctx(struct crypto_shash *tfm)
{
return align_ptr(crypto_shash_ctx_aligned(tfm) +
crypto_shash_statesize(tfm) * 2,
crypto_tfm_ctx_alignment());
}
static int hmac_setkey(struct crypto_shash *parent, static int hmac_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen) const u8 *inkey, unsigned int keylen)
{ {
int bs = crypto_shash_blocksize(parent); int bs = crypto_shash_blocksize(parent);
int ds = crypto_shash_digestsize(parent); int ds = crypto_shash_digestsize(parent);
int ss = crypto_shash_statesize(parent); int ss = crypto_shash_statesize(parent);
char *ipad = crypto_shash_ctx_aligned(parent); struct hmac_ctx *tctx = crypto_shash_ctx(parent);
char *opad = ipad + ss; struct crypto_shash *hash = tctx->hash;
struct hmac_ctx *ctx = align_ptr(opad + ss, u8 *ipad = &tctx->pads[0];
crypto_tfm_ctx_alignment()); u8 *opad = &tctx->pads[ss];
struct crypto_shash *hash = ctx->hash;
SHASH_DESC_ON_STACK(shash, hash); SHASH_DESC_ON_STACK(shash, hash);
unsigned int i; unsigned int i;
@ -94,16 +83,18 @@ static int hmac_export(struct shash_desc *pdesc, void *out)
static int hmac_import(struct shash_desc *pdesc, const void *in) static int hmac_import(struct shash_desc *pdesc, const void *in)
{ {
struct shash_desc *desc = shash_desc_ctx(pdesc); struct shash_desc *desc = shash_desc_ctx(pdesc);
struct hmac_ctx *ctx = hmac_ctx(pdesc->tfm); const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
desc->tfm = ctx->hash; desc->tfm = tctx->hash;
return crypto_shash_import(desc, in); return crypto_shash_import(desc, in);
} }
static int hmac_init(struct shash_desc *pdesc) static int hmac_init(struct shash_desc *pdesc)
{ {
return hmac_import(pdesc, crypto_shash_ctx_aligned(pdesc->tfm)); const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
return hmac_import(pdesc, &tctx->pads[0]);
} }
static int hmac_update(struct shash_desc *pdesc, static int hmac_update(struct shash_desc *pdesc,
@ -119,7 +110,8 @@ static int hmac_final(struct shash_desc *pdesc, u8 *out)
struct crypto_shash *parent = pdesc->tfm; struct crypto_shash *parent = pdesc->tfm;
int ds = crypto_shash_digestsize(parent); int ds = crypto_shash_digestsize(parent);
int ss = crypto_shash_statesize(parent); int ss = crypto_shash_statesize(parent);
char *opad = crypto_shash_ctx_aligned(parent) + ss; const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
const u8 *opad = &tctx->pads[ss];
struct shash_desc *desc = shash_desc_ctx(pdesc); struct shash_desc *desc = shash_desc_ctx(pdesc);
return crypto_shash_final(desc, out) ?: return crypto_shash_final(desc, out) ?:
@ -134,7 +126,8 @@ static int hmac_finup(struct shash_desc *pdesc, const u8 *data,
struct crypto_shash *parent = pdesc->tfm; struct crypto_shash *parent = pdesc->tfm;
int ds = crypto_shash_digestsize(parent); int ds = crypto_shash_digestsize(parent);
int ss = crypto_shash_statesize(parent); int ss = crypto_shash_statesize(parent);
char *opad = crypto_shash_ctx_aligned(parent) + ss; const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
const u8 *opad = &tctx->pads[ss];
struct shash_desc *desc = shash_desc_ctx(pdesc); struct shash_desc *desc = shash_desc_ctx(pdesc);
return crypto_shash_finup(desc, data, nbytes, out) ?: return crypto_shash_finup(desc, data, nbytes, out) ?:
@ -147,7 +140,7 @@ static int hmac_init_tfm(struct crypto_shash *parent)
struct crypto_shash *hash; struct crypto_shash *hash;
struct shash_instance *inst = shash_alg_instance(parent); struct shash_instance *inst = shash_alg_instance(parent);
struct crypto_shash_spawn *spawn = shash_instance_ctx(inst); struct crypto_shash_spawn *spawn = shash_instance_ctx(inst);
struct hmac_ctx *ctx = hmac_ctx(parent); struct hmac_ctx *tctx = crypto_shash_ctx(parent);
hash = crypto_spawn_shash(spawn); hash = crypto_spawn_shash(spawn);
if (IS_ERR(hash)) if (IS_ERR(hash))
@ -156,14 +149,14 @@ static int hmac_init_tfm(struct crypto_shash *parent)
parent->descsize = sizeof(struct shash_desc) + parent->descsize = sizeof(struct shash_desc) +
crypto_shash_descsize(hash); crypto_shash_descsize(hash);
ctx->hash = hash; tctx->hash = hash;
return 0; return 0;
} }
static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src) static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src)
{ {
struct hmac_ctx *sctx = hmac_ctx(src); struct hmac_ctx *sctx = crypto_shash_ctx(src);
struct hmac_ctx *dctx = hmac_ctx(dst); struct hmac_ctx *dctx = crypto_shash_ctx(dst);
struct crypto_shash *hash; struct crypto_shash *hash;
hash = crypto_clone_shash(sctx->hash); hash = crypto_clone_shash(sctx->hash);
@ -176,9 +169,9 @@ static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src)
static void hmac_exit_tfm(struct crypto_shash *parent) static void hmac_exit_tfm(struct crypto_shash *parent)
{ {
struct hmac_ctx *ctx = hmac_ctx(parent); struct hmac_ctx *tctx = crypto_shash_ctx(parent);
crypto_free_shash(ctx->hash); crypto_free_shash(tctx->hash);
} }
static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb) static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
@ -225,15 +218,10 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize; inst->alg.base.cra_blocksize = alg->cra_blocksize;
inst->alg.base.cra_alignmask = alg->cra_alignmask; inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) + (ss * 2);
ss = ALIGN(ss, alg->cra_alignmask + 1);
inst->alg.digestsize = ds; inst->alg.digestsize = ds;
inst->alg.statesize = ss; inst->alg.statesize = ss;
inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) +
ALIGN(ss * 2, crypto_tfm_ctx_alignment());
inst->alg.init = hmac_init; inst->alg.init = hmac_init;
inst->alg.update = hmac_update; inst->alg.update = hmac_update;
inst->alg.final = hmac_final; inst->alg.final = hmac_final;
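
With crypto_shash_ctx_aligned() gone, the hmac context now keeps the precomputed ipad and opad hash states in a flexible pads[] array, sized at instance creation via cra_ctxsize = sizeof(struct hmac_ctx) + 2 * statesize. A small layout sketch, assuming ss = crypto_shash_statesize(parent) (helper name is illustrative):

/* Layout sketch: pads[0 .. ss-1] holds the exported ipad state,
 * pads[ss .. 2*ss-1] the exported opad state. */
static void example_hmac_pads(struct crypto_shash *parent, u8 **ipad, u8 **opad)
{
	struct hmac_ctx *tctx = crypto_shash_ctx(parent);
	unsigned int ss = crypto_shash_statesize(parent);

	*ipad = &tctx->pads[0];
	*opad = &tctx->pads[ss];
}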


@ -54,6 +54,17 @@
* Helper function * Helper function
***************************************************************************/ ***************************************************************************/
void *jent_kvzalloc(unsigned int len)
{
return kvzalloc(len, GFP_KERNEL);
}
void jent_kvzfree(void *ptr, unsigned int len)
{
memzero_explicit(ptr, len);
kvfree(ptr);
}
void *jent_zalloc(unsigned int len) void *jent_zalloc(unsigned int len)
{ {
return kzalloc(len, GFP_KERNEL); return kzalloc(len, GFP_KERNEL);
@ -245,7 +256,9 @@ static int jent_kcapi_init(struct crypto_tfm *tfm)
crypto_shash_init(sdesc); crypto_shash_init(sdesc);
rng->sdesc = sdesc; rng->sdesc = sdesc;
rng->entropy_collector = jent_entropy_collector_alloc(1, 0, sdesc); rng->entropy_collector =
jent_entropy_collector_alloc(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0,
sdesc);
if (!rng->entropy_collector) { if (!rng->entropy_collector) {
ret = -ENOMEM; ret = -ENOMEM;
goto err; goto err;
@ -334,7 +347,7 @@ static int __init jent_mod_init(void)
desc->tfm = tfm; desc->tfm = tfm;
crypto_shash_init(desc); crypto_shash_init(desc);
ret = jent_entropy_init(desc); ret = jent_entropy_init(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0, desc, NULL);
shash_desc_zero(desc); shash_desc_zero(desc);
crypto_free_shash(tfm); crypto_free_shash(tfm);
if (ret) { if (ret) {


@ -72,11 +72,13 @@ struct rand_data {
__u64 prev_time; /* SENSITIVE Previous time stamp */ __u64 prev_time; /* SENSITIVE Previous time stamp */
__u64 last_delta; /* SENSITIVE stuck test */ __u64 last_delta; /* SENSITIVE stuck test */
__s64 last_delta2; /* SENSITIVE stuck test */ __s64 last_delta2; /* SENSITIVE stuck test */
unsigned int flags; /* Flags used to initialize */
unsigned int osr; /* Oversample rate */ unsigned int osr; /* Oversample rate */
#define JENT_MEMORY_BLOCKS 64
#define JENT_MEMORY_BLOCKSIZE 32
#define JENT_MEMORY_ACCESSLOOPS 128 #define JENT_MEMORY_ACCESSLOOPS 128
#define JENT_MEMORY_SIZE (JENT_MEMORY_BLOCKS*JENT_MEMORY_BLOCKSIZE) #define JENT_MEMORY_SIZE \
(CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS * \
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE)
unsigned char *mem; /* Memory access location with size of unsigned char *mem; /* Memory access location with size of
* memblocks * memblocksize */ * memblocks * memblocksize */
unsigned int memlocation; /* Pointer to byte in *mem */ unsigned int memlocation; /* Pointer to byte in *mem */
@ -88,16 +90,9 @@ struct rand_data {
/* Repetition Count Test */ /* Repetition Count Test */
unsigned int rct_count; /* Number of stuck values */ unsigned int rct_count; /* Number of stuck values */
/* Intermittent health test failure threshold of 2^-30 */ /* Adaptive Proportion Test cutoff values */
/* From an SP800-90B perspective, this RCT cutoff value is equal to 31. */ unsigned int apt_cutoff; /* Intermittent health test failure */
/* However, our RCT implementation starts at 1, so we subtract 1 here. */ unsigned int apt_cutoff_permanent; /* Permanent health test failure */
#define JENT_RCT_CUTOFF (31 - 1) /* Taken from SP800-90B sec 4.4.1 */
#define JENT_APT_CUTOFF 325 /* Taken from SP800-90B sec 4.4.2 */
/* Permanent health test failure threshold of 2^-60 */
/* From an SP800-90B perspective, this RCT cutoff value is equal to 61. */
/* However, our RCT implementation starts at 1, so we subtract 1 here. */
#define JENT_RCT_CUTOFF_PERMANENT (61 - 1)
#define JENT_APT_CUTOFF_PERMANENT 355
#define JENT_APT_WINDOW_SIZE 512 /* Data window size */ #define JENT_APT_WINDOW_SIZE 512 /* Data window size */
/* LSB of time stamp to process */ /* LSB of time stamp to process */
#define JENT_APT_LSB 16 #define JENT_APT_LSB 16
@ -105,6 +100,8 @@ struct rand_data {
unsigned int apt_observations; /* Number of collected observations */ unsigned int apt_observations; /* Number of collected observations */
unsigned int apt_count; /* APT counter */ unsigned int apt_count; /* APT counter */
unsigned int apt_base; /* APT base reference */ unsigned int apt_base; /* APT base reference */
unsigned int health_failure; /* Record health failure */
unsigned int apt_base_set:1; /* APT base reference set? */ unsigned int apt_base_set:1; /* APT base reference set? */
}; };
@ -122,6 +119,16 @@ struct rand_data {
* zero). */ * zero). */
#define JENT_ESTUCK 8 /* Too many stuck results during init. */ #define JENT_ESTUCK 8 /* Too many stuck results during init. */
#define JENT_EHEALTH 9 /* Health test failed during initialization */ #define JENT_EHEALTH 9 /* Health test failed during initialization */
#define JENT_ERCT 10 /* RCT failed during initialization */
#define JENT_EHASH 11 /* Hash self test failed */
#define JENT_EMEM 12 /* Can't allocate memory for initialization */
#define JENT_RCT_FAILURE 1 /* Failure in RCT health test. */
#define JENT_APT_FAILURE 2 /* Failure in APT health test. */
#define JENT_PERMANENT_FAILURE_SHIFT 16
#define JENT_PERMANENT_FAILURE(x) (x << JENT_PERMANENT_FAILURE_SHIFT)
#define JENT_RCT_FAILURE_PERMANENT JENT_PERMANENT_FAILURE(JENT_RCT_FAILURE)
#define JENT_APT_FAILURE_PERMANENT JENT_PERMANENT_FAILURE(JENT_APT_FAILURE)
/* /*
* The output n bits can receive more than n bits of min entropy, of course, * The output n bits can receive more than n bits of min entropy, of course,
@ -147,6 +154,48 @@ struct rand_data {
* This test complies with SP800-90B section 4.4.2. * This test complies with SP800-90B section 4.4.2.
***************************************************************************/ ***************************************************************************/
/*
* See the SP 800-90B comment #10b for the corrected cutoff for the SP 800-90B
* APT.
* http://www.untruth.org/~josh/sp80090b/UL%20SP800-90B-final%20comments%20v1.9%2020191212.pdf
 * In the syntax of R, this is C = 2 + qbinom(1 - 2^(-30), 511, 2^(-1/osr)).
* (The original formula wasn't correct because the first symbol must
* necessarily have been observed, so there is no chance of observing 0 of these
* symbols.)
*
* For the alpha < 2^-53, R cannot be used as it uses a float data type without
* arbitrary precision. A SageMath script is used to calculate those cutoff
* values.
*
* For any value above 14, this yields the maximal allowable value of 512
* (by FIPS 140-2 IG 7.19 Resolution # 16, we cannot choose a cutoff value that
* renders the test unable to fail).
*/
static const unsigned int jent_apt_cutoff_lookup[15] = {
325, 422, 459, 477, 488, 494, 499, 502,
505, 507, 508, 509, 510, 511, 512 };
static const unsigned int jent_apt_cutoff_permanent_lookup[15] = {
355, 447, 479, 494, 502, 507, 510, 512,
512, 512, 512, 512, 512, 512, 512 };
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
static void jent_apt_init(struct rand_data *ec, unsigned int osr)
{
/*
* Establish the apt_cutoff based on the presumed entropy rate of
* 1/osr.
*/
if (osr >= ARRAY_SIZE(jent_apt_cutoff_lookup)) {
ec->apt_cutoff = jent_apt_cutoff_lookup[
ARRAY_SIZE(jent_apt_cutoff_lookup) - 1];
ec->apt_cutoff_permanent = jent_apt_cutoff_permanent_lookup[
ARRAY_SIZE(jent_apt_cutoff_permanent_lookup) - 1];
} else {
ec->apt_cutoff = jent_apt_cutoff_lookup[osr - 1];
ec->apt_cutoff_permanent =
jent_apt_cutoff_permanent_lookup[osr - 1];
}
}
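
As a quick worked example of the lookup above (illustrative calls against an existing collector, not from the patch):

jent_apt_init(ec, 1);	/* apt_cutoff == 325, apt_cutoff_permanent == 355 */
jent_apt_init(ec, 3);	/* apt_cutoff == 459, apt_cutoff_permanent == 479 */
jent_apt_init(ec, 20);	/* osr beyond the table clamps both cutoffs to 512 */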
/* /*
* Reset the APT counter * Reset the APT counter
* *
@ -175,26 +224,22 @@ static void jent_apt_insert(struct rand_data *ec, unsigned int delta_masked)
return; return;
} }
if (delta_masked == ec->apt_base) if (delta_masked == ec->apt_base) {
ec->apt_count++; ec->apt_count++;
/* Note, ec->apt_count starts with one. */
if (ec->apt_count >= ec->apt_cutoff_permanent)
ec->health_failure |= JENT_APT_FAILURE_PERMANENT;
else if (ec->apt_count >= ec->apt_cutoff)
ec->health_failure |= JENT_APT_FAILURE;
}
ec->apt_observations++; ec->apt_observations++;
if (ec->apt_observations >= JENT_APT_WINDOW_SIZE) if (ec->apt_observations >= JENT_APT_WINDOW_SIZE)
jent_apt_reset(ec, delta_masked); jent_apt_reset(ec, delta_masked);
} }
/* APT health test failure detection */
static int jent_apt_permanent_failure(struct rand_data *ec)
{
return (ec->apt_count >= JENT_APT_CUTOFF_PERMANENT) ? 1 : 0;
}
static int jent_apt_failure(struct rand_data *ec)
{
return (ec->apt_count >= JENT_APT_CUTOFF) ? 1 : 0;
}
/*************************************************************************** /***************************************************************************
* Stuck Test and its use as Repetition Count Test * Stuck Test and its use as Repetition Count Test
* *
@ -221,6 +266,30 @@ static void jent_rct_insert(struct rand_data *ec, int stuck)
{ {
if (stuck) { if (stuck) {
ec->rct_count++; ec->rct_count++;
/*
* The cutoff value is based on the following consideration:
* alpha = 2^-30 or 2^-60 as recommended in SP800-90B.
* In addition, we require an entropy value H of 1/osr as this
* is the minimum entropy required to provide full entropy.
* Note, we collect (DATA_SIZE_BITS + ENTROPY_SAFETY_FACTOR)*osr
* deltas for inserting them into the entropy pool which should
* then have (close to) DATA_SIZE_BITS bits of entropy in the
* conditioned output.
*
* Note, ec->rct_count (which equals to value B in the pseudo
* code of SP800-90B section 4.4.1) starts with zero. Hence
* we need to subtract one from the cutoff value as calculated
* following SP800-90B. Thus C = ceil(-log_2(alpha)/H) = 30*osr
* or 60*osr.
*/
if ((unsigned int)ec->rct_count >= (60 * ec->osr)) {
ec->rct_count = -1;
ec->health_failure |= JENT_RCT_FAILURE_PERMANENT;
} else if ((unsigned int)ec->rct_count >= (30 * ec->osr)) {
ec->rct_count = -1;
ec->health_failure |= JENT_RCT_FAILURE;
}
} else { } else {
/* Reset RCT */ /* Reset RCT */
ec->rct_count = 0; ec->rct_count = 0;
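
For an oversampling rate of osr = 1 this reproduces the previous fixed thresholds: C = ceil(-log2(2^-30)/1) = 30 for the intermittent failure (formerly JENT_RCT_CUTOFF = 31 - 1) and 60 for the permanent failure (formerly JENT_RCT_CUTOFF_PERMANENT = 61 - 1). With osr = 3 the cutoffs scale to 90 and 180.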
@ -275,26 +344,25 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
return 0; return 0;
} }
/* RCT health test failure detection */ /*
static int jent_rct_permanent_failure(struct rand_data *ec) * Report any health test failures
*
* @ec [in] Reference to entropy collector
*
* @return a bitmask indicating which tests failed
* 0 No health test failure
* 1 RCT failure
* 2 APT failure
* 1<<JENT_PERMANENT_FAILURE_SHIFT RCT permanent failure
* 2<<JENT_PERMANENT_FAILURE_SHIFT APT permanent failure
*/
static unsigned int jent_health_failure(struct rand_data *ec)
{ {
return (ec->rct_count >= JENT_RCT_CUTOFF_PERMANENT) ? 1 : 0; /* Test is only enabled in FIPS mode */
} if (!fips_enabled)
return 0;
static int jent_rct_failure(struct rand_data *ec) return ec->health_failure;
{
return (ec->rct_count >= JENT_RCT_CUTOFF) ? 1 : 0;
}
/* Report of health test failures */
static int jent_health_failure(struct rand_data *ec)
{
return jent_rct_failure(ec) | jent_apt_failure(ec);
}
static int jent_permanent_health_failure(struct rand_data *ec)
{
return jent_rct_permanent_failure(ec) | jent_apt_permanent_failure(ec);
} }
/*************************************************************************** /***************************************************************************
@ -448,7 +516,7 @@ static void jent_memaccess(struct rand_data *ec, __u64 loop_cnt)
* *
* @return result of stuck test * @return result of stuck test
*/ */
static int jent_measure_jitter(struct rand_data *ec) static int jent_measure_jitter(struct rand_data *ec, __u64 *ret_current_delta)
{ {
__u64 time = 0; __u64 time = 0;
__u64 current_delta = 0; __u64 current_delta = 0;
@ -472,6 +540,10 @@ static int jent_measure_jitter(struct rand_data *ec)
if (jent_condition_data(ec, current_delta, stuck)) if (jent_condition_data(ec, current_delta, stuck))
stuck = 1; stuck = 1;
/* return the raw entropy value */
if (ret_current_delta)
*ret_current_delta = current_delta;
return stuck; return stuck;
} }
@ -489,11 +561,11 @@ static void jent_gen_entropy(struct rand_data *ec)
safety_factor = JENT_ENTROPY_SAFETY_FACTOR; safety_factor = JENT_ENTROPY_SAFETY_FACTOR;
/* priming of the ->prev_time value */ /* priming of the ->prev_time value */
jent_measure_jitter(ec); jent_measure_jitter(ec, NULL);
while (!jent_health_failure(ec)) { while (!jent_health_failure(ec)) {
/* If a stuck measurement is received, repeat measurement */ /* If a stuck measurement is received, repeat measurement */
if (jent_measure_jitter(ec)) if (jent_measure_jitter(ec, NULL))
continue; continue;
/* /*
@ -537,11 +609,12 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
return -1; return -1;
while (len > 0) { while (len > 0) {
unsigned int tocopy; unsigned int tocopy, health_test_result;
jent_gen_entropy(ec); jent_gen_entropy(ec);
if (jent_permanent_health_failure(ec)) { health_test_result = jent_health_failure(ec);
if (health_test_result > JENT_PERMANENT_FAILURE_SHIFT) {
/* /*
* At this point, the Jitter RNG instance is considered * At this point, the Jitter RNG instance is considered
* as a failed instance. There is no rerun of the * as a failed instance. There is no rerun of the
@ -549,13 +622,18 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
* is assumed to not further use this instance. * is assumed to not further use this instance.
*/ */
return -3; return -3;
} else if (jent_health_failure(ec)) { } else if (health_test_result) {
/* /*
* Perform startup health tests and return permanent * Perform startup health tests and return permanent
* error if it fails. * error if it fails.
*/ */
if (jent_entropy_init(ec->hash_state)) if (jent_entropy_init(0, 0, NULL, ec)) {
/* Mark the permanent error */
ec->health_failure &=
JENT_RCT_FAILURE_PERMANENT |
JENT_APT_FAILURE_PERMANENT;
return -3; return -3;
}
return -2; return -2;
} }
@ -592,23 +670,29 @@ struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
/* Allocate memory for adding variations based on memory /* Allocate memory for adding variations based on memory
* access * access
*/ */
entropy_collector->mem = jent_zalloc(JENT_MEMORY_SIZE); entropy_collector->mem = jent_kvzalloc(JENT_MEMORY_SIZE);
if (!entropy_collector->mem) { if (!entropy_collector->mem) {
jent_zfree(entropy_collector); jent_zfree(entropy_collector);
return NULL; return NULL;
} }
entropy_collector->memblocksize = JENT_MEMORY_BLOCKSIZE; entropy_collector->memblocksize =
entropy_collector->memblocks = JENT_MEMORY_BLOCKS; CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE;
entropy_collector->memblocks =
CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS;
entropy_collector->memaccessloops = JENT_MEMORY_ACCESSLOOPS; entropy_collector->memaccessloops = JENT_MEMORY_ACCESSLOOPS;
} }
/* verify and set the oversampling rate */ /* verify and set the oversampling rate */
if (osr == 0) if (osr == 0)
osr = 1; /* minimum sampling rate is 1 */ osr = 1; /* H_submitter = 1 / osr */
entropy_collector->osr = osr; entropy_collector->osr = osr;
entropy_collector->flags = flags;
entropy_collector->hash_state = hash_state; entropy_collector->hash_state = hash_state;
/* Initialize the APT */
jent_apt_init(entropy_collector, osr);
/* fill the data pad with non-zero values */ /* fill the data pad with non-zero values */
jent_gen_entropy(entropy_collector); jent_gen_entropy(entropy_collector);
@ -617,25 +701,39 @@ struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
void jent_entropy_collector_free(struct rand_data *entropy_collector) void jent_entropy_collector_free(struct rand_data *entropy_collector)
{ {
jent_zfree(entropy_collector->mem); jent_kvzfree(entropy_collector->mem, JENT_MEMORY_SIZE);
entropy_collector->mem = NULL; entropy_collector->mem = NULL;
jent_zfree(entropy_collector); jent_zfree(entropy_collector);
} }
int jent_entropy_init(void *hash_state) int jent_entropy_init(unsigned int osr, unsigned int flags, void *hash_state,
struct rand_data *p_ec)
{ {
int i; /*
__u64 delta_sum = 0; * If caller provides an allocated ec, reuse it which implies that the
__u64 old_delta = 0; * health test entropy data is used to further still the available
unsigned int nonstuck = 0; * entropy pool.
int time_backwards = 0; */
int count_mod = 0; struct rand_data *ec = p_ec;
int count_stuck = 0; int i, time_backwards = 0, ret = 0, ec_free = 0;
struct rand_data ec = { 0 }; unsigned int health_test_result;
/* Required for RCT */ if (!ec) {
ec.osr = 1; ec = jent_entropy_collector_alloc(osr, flags, hash_state);
ec.hash_state = hash_state; if (!ec)
return JENT_EMEM;
ec_free = 1;
} else {
/* Reset the APT */
jent_apt_reset(ec, 0);
/* Ensure that a new APT base is obtained */
ec->apt_base_set = 0;
/* Reset the RCT */
ec->rct_count = 0;
/* Reset intermittent, leave permanent health test result */
ec->health_failure &= (~JENT_RCT_FAILURE);
ec->health_failure &= (~JENT_APT_FAILURE);
}
/* We could perform statistical tests here, but the problem is /* We could perform statistical tests here, but the problem is
* that we only have a few loop counts to do testing. These * that we only have a few loop counts to do testing. These
@ -664,31 +762,28 @@ int jent_entropy_init(void *hash_state)
#define TESTLOOPCOUNT 1024 #define TESTLOOPCOUNT 1024
#define CLEARCACHE 100 #define CLEARCACHE 100
for (i = 0; (TESTLOOPCOUNT + CLEARCACHE) > i; i++) { for (i = 0; (TESTLOOPCOUNT + CLEARCACHE) > i; i++) {
__u64 time = 0; __u64 start_time = 0, end_time = 0, delta = 0;
__u64 time2 = 0;
__u64 delta = 0;
unsigned int lowdelta = 0;
int stuck;
/* Invoke core entropy collection logic */ /* Invoke core entropy collection logic */
jent_get_nstime(&time); jent_measure_jitter(ec, &delta);
ec.prev_time = time; end_time = ec->prev_time;
jent_condition_data(&ec, time, 0); start_time = ec->prev_time - delta;
jent_get_nstime(&time2);
/* test whether timer works */ /* test whether timer works */
if (!time || !time2) if (!start_time || !end_time) {
return JENT_ENOTIME; ret = JENT_ENOTIME;
delta = jent_delta(time, time2); goto out;
}
/* /*
* test whether timer is fine grained enough to provide * test whether timer is fine grained enough to provide
* delta even when called shortly after each other -- this * delta even when called shortly after each other -- this
* implies that we also have a high resolution timer * implies that we also have a high resolution timer
*/ */
if (!delta) if (!delta || (end_time == start_time)) {
return JENT_ECOARSETIME; ret = JENT_ECOARSETIME;
goto out;
stuck = jent_stuck(&ec, delta); }
/* /*
* up to here we did not modify any variable that will be * up to here we did not modify any variable that will be
@ -700,49 +795,9 @@ int jent_entropy_init(void *hash_state)
if (i < CLEARCACHE) if (i < CLEARCACHE)
continue; continue;
if (stuck)
count_stuck++;
else {
nonstuck++;
/*
* Ensure that the APT succeeded.
*
* With the check below that count_stuck must be less
* than 10% of the overall generated raw entropy values
* it is guaranteed that the APT is invoked at
* floor((TESTLOOPCOUNT * 0.9) / 64) == 14 times.
*/
if ((nonstuck % JENT_APT_WINDOW_SIZE) == 0) {
jent_apt_reset(&ec,
delta & JENT_APT_WORD_MASK);
}
}
/* Validate health test result */
if (jent_health_failure(&ec))
return JENT_EHEALTH;
/* test whether we have an increasing timer */ /* test whether we have an increasing timer */
if (!(time2 > time)) if (!(end_time > start_time))
time_backwards++; time_backwards++;
/* use 32 bit value to ensure compilation on 32 bit arches */
lowdelta = time2 - time;
if (!(lowdelta % 100))
count_mod++;
/*
* ensure that we have a varying delta timer which is necessary
* for the calculation of entropy -- perform this check
* only after the first loop is executed as we need to prime
* the old_data value
*/
if (delta > old_delta)
delta_sum += (delta - old_delta);
else
delta_sum += (old_delta - delta);
old_delta = delta;
} }
/* /*
@ -752,31 +807,22 @@ int jent_entropy_init(void *hash_state)
* should not fail. The value of 3 should cover the NTP case being * should not fail. The value of 3 should cover the NTP case being
* performed during our test run. * performed during our test run.
*/ */
if (time_backwards > 3) if (time_backwards > 3) {
return JENT_ENOMONOTONIC; ret = JENT_ENOMONOTONIC;
goto out;
}
/* /* Did we encounter a health test failure? */
* Variations of deltas of time must on average be larger health_test_result = jent_health_failure(ec);
* than 1 to ensure the entropy estimation if (health_test_result) {
* implied with 1 is preserved ret = (health_test_result & JENT_RCT_FAILURE) ? JENT_ERCT :
*/ JENT_EHEALTH;
if ((delta_sum) <= 1) goto out;
return JENT_EVARVAR; }
/* out:
* Ensure that we have variations in the time stamp below 10 for at if (ec_free)
* least 10% of all checks -- on some platforms, the counter increments jent_entropy_collector_free(ec);
* in multiples of 100, but not always
*/
if ((TESTLOOPCOUNT/10 * 9) < count_mod)
return JENT_ECOARSETIME;
/* return ret;
* If we have more than 90% stuck results, then this Jitter RNG is
* likely to not work well.
*/
if ((TESTLOOPCOUNT/10 * 9) < count_stuck)
return JENT_ESTUCK;
return 0;
} }
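
Taken together with the jitterentropy-kcapi.c hunk earlier, jent_entropy_init() is now used in two modes; the calls below are lifted from this series and shown only for orientation:

/* 1) At module init: allocate a throw-away collector and run the startup
 *    health tests with the configured oversampling rate. */
ret = jent_entropy_init(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0, desc, NULL);

/* 2) After an intermittent runtime failure: rerun the tests on the existing
 *    collector so its health-test state (including permanent failures) is kept. */
ret = jent_entropy_init(0, 0, NULL, ec);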


@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-or-later // SPDX-License-Identifier: GPL-2.0-or-later
extern void *jent_kvzalloc(unsigned int len);
extern void jent_kvzfree(void *ptr, unsigned int len);
extern void *jent_zalloc(unsigned int len); extern void *jent_zalloc(unsigned int len);
extern void jent_zfree(void *ptr); extern void jent_zfree(void *ptr);
extern void jent_get_nstime(__u64 *out); extern void jent_get_nstime(__u64 *out);
@ -9,7 +11,8 @@ extern int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
int jent_read_random_block(void *hash_state, char *dst, unsigned int dst_len); int jent_read_random_block(void *hash_state, char *dst, unsigned int dst_len);
struct rand_data; struct rand_data;
extern int jent_entropy_init(void *hash_state); extern int jent_entropy_init(unsigned int osr, unsigned int flags,
void *hash_state, struct rand_data *p_ec);
extern int jent_read_entropy(struct rand_data *ec, unsigned char *data, extern int jent_read_entropy(struct rand_data *ec, unsigned char *data,
unsigned int len); unsigned int len);


@ -299,8 +299,8 @@ static void lrw_free_instance(struct skcipher_instance *inst)
static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb) static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
{ {
struct crypto_skcipher_spawn *spawn; struct crypto_skcipher_spawn *spawn;
struct skcipher_alg_common *alg;
struct skcipher_instance *inst; struct skcipher_instance *inst;
struct skcipher_alg *alg;
const char *cipher_name; const char *cipher_name;
char ecb_name[CRYPTO_MAX_ALG_NAME]; char ecb_name[CRYPTO_MAX_ALG_NAME];
u32 mask; u32 mask;
@ -336,13 +336,13 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
if (err) if (err)
goto err_free_inst; goto err_free_inst;
alg = crypto_skcipher_spawn_alg(spawn); alg = crypto_spawn_skcipher_alg_common(spawn);
err = -EINVAL; err = -EINVAL;
if (alg->base.cra_blocksize != LRW_BLOCK_SIZE) if (alg->base.cra_blocksize != LRW_BLOCK_SIZE)
goto err_free_inst; goto err_free_inst;
if (crypto_skcipher_alg_ivsize(alg)) if (alg->ivsize)
goto err_free_inst; goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "lrw", err = crypto_inst_setname(skcipher_crypto_instance(inst), "lrw",
@ -382,10 +382,8 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
(__alignof__(be128) - 1); (__alignof__(be128) - 1);
inst->alg.ivsize = LRW_BLOCK_SIZE; inst->alg.ivsize = LRW_BLOCK_SIZE;
inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) + inst->alg.min_keysize = alg->min_keysize + LRW_BLOCK_SIZE;
LRW_BLOCK_SIZE; inst->alg.max_keysize = alg->max_keysize + LRW_BLOCK_SIZE;
inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) +
LRW_BLOCK_SIZE;
inst->alg.base.cra_ctxsize = sizeof(struct lrw_tfm_ctx); inst->alg.base.cra_ctxsize = sizeof(struct lrw_tfm_ctx);

crypto/lskcipher.c (new file, 634 lines)

@ -0,0 +1,634 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Linear symmetric key cipher operations.
*
* Generic encrypt/decrypt wrapper for ciphers.
*
* Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
*/
#include <linux/cryptouser.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <net/netlink.h>
#include "skcipher.h"
static inline struct crypto_lskcipher *__crypto_lskcipher_cast(
struct crypto_tfm *tfm)
{
return container_of(tfm, struct crypto_lskcipher, base);
}
static inline struct lskcipher_alg *__crypto_lskcipher_alg(
struct crypto_alg *alg)
{
return container_of(alg, struct lskcipher_alg, co.base);
}
static inline struct crypto_istat_cipher *lskcipher_get_stat(
struct lskcipher_alg *alg)
{
return skcipher_get_stat_common(&alg->co);
}
static inline int crypto_lskcipher_errstat(struct lskcipher_alg *alg, int err)
{
struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
return err;
if (err)
atomic64_inc(&istat->err_cnt);
return err;
}
static int lskcipher_setkey_unaligned(struct crypto_lskcipher *tfm,
const u8 *key, unsigned int keylen)
{
unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
u8 *buffer, *alignbuffer;
unsigned long absize;
int ret;
absize = keylen + alignmask;
buffer = kmalloc(absize, GFP_ATOMIC);
if (!buffer)
return -ENOMEM;
alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
memcpy(alignbuffer, key, keylen);
ret = cipher->setkey(tfm, alignbuffer, keylen);
kfree_sensitive(buffer);
return ret;
}
int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm, const u8 *key,
unsigned int keylen)
{
unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
if (keylen < cipher->co.min_keysize || keylen > cipher->co.max_keysize)
return -EINVAL;
if ((unsigned long)key & alignmask)
return lskcipher_setkey_unaligned(tfm, key, keylen);
else
return cipher->setkey(tfm, key, keylen);
}
EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
static int crypto_lskcipher_crypt_unaligned(
struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv, bool final))
{
unsigned ivsize = crypto_lskcipher_ivsize(tfm);
unsigned bs = crypto_lskcipher_blocksize(tfm);
unsigned cs = crypto_lskcipher_chunksize(tfm);
int err;
u8 *tiv;
u8 *p;
BUILD_BUG_ON(MAX_CIPHER_BLOCKSIZE > PAGE_SIZE ||
MAX_CIPHER_ALIGNMASK >= PAGE_SIZE);
tiv = kmalloc(PAGE_SIZE, GFP_ATOMIC);
if (!tiv)
return -ENOMEM;
memcpy(tiv, iv, ivsize);
p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
err = -ENOMEM;
if (!p)
goto out;
while (len >= bs) {
unsigned chunk = min((unsigned)PAGE_SIZE, len);
int err;
if (chunk > cs)
chunk &= ~(cs - 1);
memcpy(p, src, chunk);
err = crypt(tfm, p, p, chunk, tiv, true);
if (err)
goto out;
memcpy(dst, p, chunk);
src += chunk;
dst += chunk;
len -= chunk;
}
err = len ? -EINVAL : 0;
out:
memcpy(iv, tiv, ivsize);
kfree_sensitive(p);
kfree_sensitive(tiv);
return err;
}
static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv,
int (*crypt)(struct crypto_lskcipher *tfm,
const u8 *src, u8 *dst,
unsigned len, u8 *iv,
bool final))
{
unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
int ret;
if (((unsigned long)src | (unsigned long)dst | (unsigned long)iv) &
alignmask) {
ret = crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
crypt);
goto out;
}
ret = crypt(tfm, src, dst, len, iv, true);
out:
return crypto_lskcipher_errstat(alg, ret);
}
int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv)
{
struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
atomic64_inc(&istat->encrypt_cnt);
atomic64_add(len, &istat->encrypt_tlen);
}
return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->encrypt);
}
EXPORT_SYMBOL_GPL(crypto_lskcipher_encrypt);
int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
u8 *dst, unsigned len, u8 *iv)
{
struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
atomic64_inc(&istat->decrypt_cnt);
atomic64_add(len, &istat->decrypt_tlen);
}
return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->decrypt);
}
EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
int (*crypt)(struct crypto_lskcipher *tfm,
const u8 *src, u8 *dst,
unsigned len, u8 *iv,
bool final))
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
struct crypto_lskcipher *tfm = *ctx;
struct skcipher_walk walk;
int err;
err = skcipher_walk_virt(&walk, req, false);
while (walk.nbytes) {
err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
walk.nbytes, walk.iv, walk.nbytes == walk.total);
err = skcipher_walk_done(&walk, err);
}
return err;
}
int crypto_lskcipher_encrypt_sg(struct skcipher_request *req)
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
return crypto_lskcipher_crypt_sg(req, alg->encrypt);
}
int crypto_lskcipher_decrypt_sg(struct skcipher_request *req)
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
return crypto_lskcipher_crypt_sg(req, alg->decrypt);
}
static void crypto_lskcipher_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
alg->exit(skcipher);
}
static int crypto_lskcipher_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
if (alg->exit)
skcipher->base.exit = crypto_lskcipher_exit_tfm;
if (alg->init)
return alg->init(skcipher);
return 0;
}
static void crypto_lskcipher_free_instance(struct crypto_instance *inst)
{
struct lskcipher_instance *skcipher =
container_of(inst, struct lskcipher_instance, s.base);
skcipher->free(skcipher);
}
static void __maybe_unused crypto_lskcipher_show(
struct seq_file *m, struct crypto_alg *alg)
{
struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
seq_printf(m, "type : lskcipher\n");
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
seq_printf(m, "min keysize : %u\n", skcipher->co.min_keysize);
seq_printf(m, "max keysize : %u\n", skcipher->co.max_keysize);
seq_printf(m, "ivsize : %u\n", skcipher->co.ivsize);
seq_printf(m, "chunksize : %u\n", skcipher->co.chunksize);
}
static int __maybe_unused crypto_lskcipher_report(
struct sk_buff *skb, struct crypto_alg *alg)
{
struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
struct crypto_report_blkcipher rblkcipher;
memset(&rblkcipher, 0, sizeof(rblkcipher));
strscpy(rblkcipher.type, "lskcipher", sizeof(rblkcipher.type));
strscpy(rblkcipher.geniv, "<none>", sizeof(rblkcipher.geniv));
rblkcipher.blocksize = alg->cra_blocksize;
rblkcipher.min_keysize = skcipher->co.min_keysize;
rblkcipher.max_keysize = skcipher->co.max_keysize;
rblkcipher.ivsize = skcipher->co.ivsize;
return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER,
sizeof(rblkcipher), &rblkcipher);
}
static int __maybe_unused crypto_lskcipher_report_stat(
struct sk_buff *skb, struct crypto_alg *alg)
{
struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
struct crypto_istat_cipher *istat;
struct crypto_stat_cipher rcipher;
istat = lskcipher_get_stat(skcipher);
memset(&rcipher, 0, sizeof(rcipher));
strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
rcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
rcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
}
static const struct crypto_type crypto_lskcipher_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_lskcipher_init_tfm,
.free = crypto_lskcipher_free_instance,
#ifdef CONFIG_PROC_FS
.show = crypto_lskcipher_show,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_USER)
.report = crypto_lskcipher_report,
#endif
#ifdef CONFIG_CRYPTO_STATS
.report_stat = crypto_lskcipher_report_stat,
#endif
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
.maskset = CRYPTO_ALG_TYPE_MASK,
.type = CRYPTO_ALG_TYPE_LSKCIPHER,
.tfmsize = offsetof(struct crypto_lskcipher, base),
};
static void crypto_lskcipher_exit_tfm_sg(struct crypto_tfm *tfm)
{
struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
crypto_free_lskcipher(*ctx);
}
int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm)
{
struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
struct crypto_alg *calg = tfm->__crt_alg;
struct crypto_lskcipher *skcipher;
if (!crypto_mod_get(calg))
return -EAGAIN;
skcipher = crypto_create_tfm(calg, &crypto_lskcipher_type);
if (IS_ERR(skcipher)) {
crypto_mod_put(calg);
return PTR_ERR(skcipher);
}
*ctx = skcipher;
tfm->exit = crypto_lskcipher_exit_tfm_sg;
return 0;
}
int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
struct crypto_instance *inst,
const char *name, u32 type, u32 mask)
{
spawn->base.frontend = &crypto_lskcipher_type;
return crypto_grab_spawn(&spawn->base, inst, name, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_grab_lskcipher);
struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
u32 type, u32 mask)
{
return crypto_alloc_tfm(alg_name, &crypto_lskcipher_type, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_alloc_lskcipher);
static int lskcipher_prepare_alg(struct lskcipher_alg *alg)
{
struct crypto_alg *base = &alg->co.base;
int err;
err = skcipher_prepare_alg_common(&alg->co);
if (err)
return err;
if (alg->co.chunksize & (alg->co.chunksize - 1))
return -EINVAL;
base->cra_type = &crypto_lskcipher_type;
base->cra_flags |= CRYPTO_ALG_TYPE_LSKCIPHER;
return 0;
}
int crypto_register_lskcipher(struct lskcipher_alg *alg)
{
struct crypto_alg *base = &alg->co.base;
int err;
err = lskcipher_prepare_alg(alg);
if (err)
return err;
return crypto_register_alg(base);
}
EXPORT_SYMBOL_GPL(crypto_register_lskcipher);
void crypto_unregister_lskcipher(struct lskcipher_alg *alg)
{
crypto_unregister_alg(&alg->co.base);
}
EXPORT_SYMBOL_GPL(crypto_unregister_lskcipher);
int crypto_register_lskciphers(struct lskcipher_alg *algs, int count)
{
int i, ret;
for (i = 0; i < count; i++) {
ret = crypto_register_lskcipher(&algs[i]);
if (ret)
goto err;
}
return 0;
err:
for (--i; i >= 0; --i)
crypto_unregister_lskcipher(&algs[i]);
return ret;
}
EXPORT_SYMBOL_GPL(crypto_register_lskciphers);
void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count)
{
int i;
for (i = count - 1; i >= 0; --i)
crypto_unregister_lskcipher(&algs[i]);
}
EXPORT_SYMBOL_GPL(crypto_unregister_lskciphers);
int lskcipher_register_instance(struct crypto_template *tmpl,
struct lskcipher_instance *inst)
{
int err;
if (WARN_ON(!inst->free))
return -EINVAL;
err = lskcipher_prepare_alg(&inst->alg);
if (err)
return err;
return crypto_register_instance(tmpl, lskcipher_crypto_instance(inst));
}
EXPORT_SYMBOL_GPL(lskcipher_register_instance);
static int lskcipher_setkey_simple(struct crypto_lskcipher *tfm, const u8 *key,
unsigned int keylen)
{
struct crypto_lskcipher *cipher = lskcipher_cipher_simple(tfm);
crypto_lskcipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
crypto_lskcipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
return crypto_lskcipher_setkey(cipher, key, keylen);
}
static int lskcipher_init_tfm_simple(struct crypto_lskcipher *tfm)
{
struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
struct crypto_lskcipher_spawn *spawn;
struct crypto_lskcipher *cipher;
spawn = lskcipher_instance_ctx(inst);
cipher = crypto_spawn_lskcipher(spawn);
if (IS_ERR(cipher))
return PTR_ERR(cipher);
*ctx = cipher;
return 0;
}
static void lskcipher_exit_tfm_simple(struct crypto_lskcipher *tfm)
{
struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
crypto_free_lskcipher(*ctx);
}
static void lskcipher_free_instance_simple(struct lskcipher_instance *inst)
{
crypto_drop_lskcipher(lskcipher_instance_ctx(inst));
kfree(inst);
}
/**
* lskcipher_alloc_instance_simple - allocate instance of simple block cipher
*
* Allocate an lskcipher_instance for a simple block cipher mode of operation,
* e.g. cbc or ecb. The instance context will have just a single crypto_spawn,
* that for the underlying cipher. The {min,max}_keysize, ivsize, blocksize,
* alignmask, and priority are set from the underlying cipher but can be
* overridden if needed. The tfm context defaults to
* struct crypto_lskcipher *, and default ->setkey(), ->init(), and
* ->exit() methods are installed.
*
* @tmpl: the template being instantiated
* @tb: the template parameters
*
* Return: a pointer to the new instance, or an ERR_PTR(). The caller still
* needs to register the instance.
*/
struct lskcipher_instance *lskcipher_alloc_instance_simple(
struct crypto_template *tmpl, struct rtattr **tb)
{
u32 mask;
struct lskcipher_instance *inst;
struct crypto_lskcipher_spawn *spawn;
char ecb_name[CRYPTO_MAX_ALG_NAME];
struct lskcipher_alg *cipher_alg;
const char *cipher_name;
int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
if (err)
return ERR_PTR(err);
cipher_name = crypto_attr_alg_name(tb[1]);
if (IS_ERR(cipher_name))
return ERR_CAST(cipher_name);
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return ERR_PTR(-ENOMEM);
spawn = lskcipher_instance_ctx(inst);
err = crypto_grab_lskcipher(spawn,
lskcipher_crypto_instance(inst),
cipher_name, 0, mask);
ecb_name[0] = 0;
if (err == -ENOENT && !!memcmp(tmpl->name, "ecb", 4)) {
err = -ENAMETOOLONG;
if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
cipher_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
err = crypto_grab_lskcipher(spawn,
lskcipher_crypto_instance(inst),
ecb_name, 0, mask);
}
if (err)
goto err_free_inst;
cipher_alg = crypto_lskcipher_spawn_alg(spawn);
err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
&cipher_alg->co.base);
if (err)
goto err_free_inst;
if (ecb_name[0]) {
int len;
err = -EINVAL;
len = strscpy(ecb_name, &cipher_alg->co.base.cra_name[4],
sizeof(ecb_name));
if (len < 2)
goto err_free_inst;
if (ecb_name[len - 1] != ')')
goto err_free_inst;
ecb_name[len - 1] = 0;
err = -ENAMETOOLONG;
if (snprintf(inst->alg.co.base.cra_name, CRYPTO_MAX_ALG_NAME,
"%s(%s)", tmpl->name, ecb_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
if (strcmp(ecb_name, cipher_name) &&
snprintf(inst->alg.co.base.cra_driver_name,
CRYPTO_MAX_ALG_NAME,
"%s(%s)", tmpl->name, cipher_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
} else {
/* Don't allow nesting. */
err = -ELOOP;
if ((cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE))
goto err_free_inst;
}
err = -EINVAL;
if (cipher_alg->co.ivsize)
goto err_free_inst;
inst->free = lskcipher_free_instance_simple;
/* Default algorithm properties, can be overridden */
inst->alg.co.base.cra_blocksize = cipher_alg->co.base.cra_blocksize;
inst->alg.co.base.cra_alignmask = cipher_alg->co.base.cra_alignmask;
inst->alg.co.base.cra_priority = cipher_alg->co.base.cra_priority;
inst->alg.co.min_keysize = cipher_alg->co.min_keysize;
inst->alg.co.max_keysize = cipher_alg->co.max_keysize;
inst->alg.co.ivsize = cipher_alg->co.base.cra_blocksize;
/* Use struct crypto_lskcipher * by default, can be overridden */
inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_lskcipher *);
inst->alg.setkey = lskcipher_setkey_simple;
inst->alg.init = lskcipher_init_tfm_simple;
inst->alg.exit = lskcipher_exit_tfm_simple;
return inst;
err_free_inst:
lskcipher_free_instance_simple(inst);
return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(lskcipher_alloc_instance_simple);
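
The new file implements the virtual-address based lskcipher interface called out in the pull request summary: callers pass plain pointers rather than scatterlists and request objects. A minimal, hypothetical usage sketch; algorithm name, key length and error handling are illustrative only:

/* Hypothetical example, not from the patch: one-shot CBC encryption over
 * virtual addresses. */
static int example_lskcipher(const u8 *key, const u8 *src, u8 *dst,
			     unsigned int len)
{
	struct crypto_lskcipher *tfm;
	u8 iv[16] = {};
	int err;

	tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, 16);
	if (!err)
		/* src and dst are plain pointers: no scatterlists, no requests */
		err = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);

	crypto_free_lskcipher(tfm);
	return err;
}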


@ -117,6 +117,8 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu); err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
if (!err) if (!err)
return -EINPROGRESS; return -EINPROGRESS;
if (err == -EBUSY)
return -EAGAIN;
return err; return err;
} }
@ -164,6 +166,8 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu); err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
if (!err) if (!err)
return -EINPROGRESS; return -EINPROGRESS;
if (err == -EBUSY)
return -EAGAIN;
return err; return err;
} }


@ -61,6 +61,24 @@ static const u8 rsa_digest_info_sha512[] = {
0x05, 0x00, 0x04, 0x40 0x05, 0x00, 0x04, 0x40
}; };
static const u8 rsa_digest_info_sha3_256[] = {
0x30, 0x31, 0x30, 0x0d, 0x06, 0x09,
0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x08,
0x05, 0x00, 0x04, 0x20
};
static const u8 rsa_digest_info_sha3_384[] = {
0x30, 0x41, 0x30, 0x0d, 0x06, 0x09,
0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x09,
0x05, 0x00, 0x04, 0x30
};
static const u8 rsa_digest_info_sha3_512[] = {
0x30, 0x51, 0x30, 0x0d, 0x06, 0x09,
0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x0A,
0x05, 0x00, 0x04, 0x40
};
static const struct rsa_asn1_template { static const struct rsa_asn1_template {
const char *name; const char *name;
const u8 *data; const u8 *data;
@ -74,8 +92,13 @@ static const struct rsa_asn1_template {
_(sha384), _(sha384),
_(sha512), _(sha512),
_(sha224), _(sha224),
{ NULL }
#undef _ #undef _
#define _(X) { "sha3-" #X, rsa_digest_info_sha3_##X, sizeof(rsa_digest_info_sha3_##X) }
_(256),
_(384),
_(512),
#undef _
{ NULL }
}; };
static const struct rsa_asn1_template *rsa_lookup_asn1(const char *name) static const struct rsa_asn1_template *rsa_lookup_asn1(const char *name)
@ -687,3 +710,5 @@ struct crypto_template rsa_pkcs1pad_tmpl = {
.create = pkcs1pad_create, .create = pkcs1pad_create,
.module = THIS_MODULE, .module = THIS_MODULE,
}; };
MODULE_ALIAS_CRYPTO("pkcs1pad");
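
For orientation, the blobs above are DER-encoded DigestInfo prefixes (an AlgorithmIdentifier carrying the NIST OIDs 2.16.840.1.101.3.4.2.8/.9/.10 followed by an OCTET STRING header); pkcs1pad prepends the matching prefix to the raw digest before applying PKCS #1 v1.5 padding. A hedged, illustrative sketch for sha3-256, not the kernel's actual code path:

/* Illustrative only: EM = DigestInfo(sha3-256) || H, before RSA padding. */
static size_t example_sha3_256_encode(u8 *em, const u8 *digest)
{
	memcpy(em, rsa_digest_info_sha3_256, sizeof(rsa_digest_info_sha3_256));
	memcpy(em + sizeof(rsa_digest_info_sha3_256), digest, 32);
	return sizeof(rsa_digest_info_sha3_256) + 32;	/* 19 + 32 bytes */
}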


@ -1,3 +1,10 @@
-- SPDX-License-Identifier: BSD-3-Clause
--
-- Copyright (C) 2016 IETF Trust and the persons identified as authors
-- of the code
--
-- https://www.rfc-editor.org/rfc/rfc8017#appendix-A.1.2
RsaPrivKey ::= SEQUENCE { RsaPrivKey ::= SEQUENCE {
version INTEGER, version INTEGER,
n INTEGER ({ rsa_get_n }), n INTEGER ({ rsa_get_n }),


@ -1,3 +1,10 @@
-- SPDX-License-Identifier: BSD-3-Clause
--
-- Copyright (C) 2016 IETF Trust and the persons identified as authors
-- of the code
--
-- https://www.rfc-editor.org/rfc/rfc8017#appendix-A.1.1
RsaPubKey ::= SEQUENCE { RsaPubKey ::= SEQUENCE {
n INTEGER ({ rsa_get_n }), n INTEGER ({ rsa_get_n }),
e INTEGER ({ rsa_get_e }) e INTEGER ({ rsa_get_e })


@ -10,17 +10,12 @@
#include <linux/err.h> #include <linux/err.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/slab.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/string.h> #include <linux/string.h>
#include <net/netlink.h> #include <net/netlink.h>
#include "hash.h" #include "hash.h"
#define MAX_SHASH_ALIGNMASK 63
static const struct crypto_type crypto_shash_type;
static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg) static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
{ {
return hash_get_stat(&alg->halg); return hash_get_stat(&alg->halg);
@ -28,7 +23,13 @@ static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
static inline int crypto_shash_errstat(struct shash_alg *alg, int err) static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
{ {
return crypto_hash_errstat(&alg->halg, err); if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
return err;
if (err && err != -EINPROGRESS && err != -EBUSY)
atomic64_inc(&shash_get_stat(alg)->err_cnt);
return err;
} }
int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
@ -38,27 +39,6 @@ int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
} }
EXPORT_SYMBOL_GPL(shash_no_setkey); EXPORT_SYMBOL_GPL(shash_no_setkey);
static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen)
{
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
unsigned long absize;
u8 *buffer, *alignbuffer;
int err;
absize = keylen + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
buffer = kmalloc(absize, GFP_ATOMIC);
if (!buffer)
return -ENOMEM;
alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
memcpy(alignbuffer, key, keylen);
err = shash->setkey(tfm, alignbuffer, keylen);
kfree_sensitive(buffer);
return err;
}
static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg) static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
{ {
if (crypto_shash_alg_needs_key(alg)) if (crypto_shash_alg_needs_key(alg))
@ -69,14 +49,9 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen) unsigned int keylen)
{ {
struct shash_alg *shash = crypto_shash_alg(tfm); struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
int err; int err;
if ((unsigned long)key & alignmask) err = shash->setkey(tfm, key, keylen);
err = shash_setkey_unaligned(tfm, key, keylen);
else
err = shash->setkey(tfm, key, keylen);
if (unlikely(err)) { if (unlikely(err)) {
shash_set_needkey(tfm, shash); shash_set_needkey(tfm, shash);
return err; return err;
@ -87,108 +62,42 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
} }
EXPORT_SYMBOL_GPL(crypto_shash_setkey); EXPORT_SYMBOL_GPL(crypto_shash_setkey);
static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
unsigned int unaligned_len = alignmask + 1 -
((unsigned long)data & alignmask);
/*
* We cannot count on __aligned() working for large values:
* https://patchwork.kernel.org/patch/9507697/
*/
u8 ubuf[MAX_SHASH_ALIGNMASK * 2];
u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
int err;
if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
return -EINVAL;
if (unaligned_len > len)
unaligned_len = len;
memcpy(buf, data, unaligned_len);
err = shash->update(desc, buf, unaligned_len);
memset(buf, 0, unaligned_len);
return err ?:
shash->update(desc, data + unaligned_len, len - unaligned_len);
}
int crypto_shash_update(struct shash_desc *desc, const u8 *data, int crypto_shash_update(struct shash_desc *desc, const u8 *data,
unsigned int len) unsigned int len)
{ {
struct crypto_shash *tfm = desc->tfm; struct shash_alg *shash = crypto_shash_alg(desc->tfm);
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
int err; int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) if (IS_ENABLED(CONFIG_CRYPTO_STATS))
atomic64_add(len, &shash_get_stat(shash)->hash_tlen); atomic64_add(len, &shash_get_stat(shash)->hash_tlen);
if ((unsigned long)data & alignmask) err = shash->update(desc, data, len);
err = shash_update_unaligned(desc, data, len);
else
err = shash->update(desc, data, len);
return crypto_shash_errstat(shash, err); return crypto_shash_errstat(shash, err);
} }
EXPORT_SYMBOL_GPL(crypto_shash_update); EXPORT_SYMBOL_GPL(crypto_shash_update);
static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
{
struct crypto_shash *tfm = desc->tfm;
unsigned long alignmask = crypto_shash_alignmask(tfm);
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned int ds = crypto_shash_digestsize(tfm);
/*
* We cannot count on __aligned() working for large values:
* https://patchwork.kernel.org/patch/9507697/
*/
u8 ubuf[MAX_SHASH_ALIGNMASK + HASH_MAX_DIGESTSIZE];
u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
int err;
if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
return -EINVAL;
err = shash->final(desc, buf);
if (err)
goto out;
memcpy(out, buf, ds);
out:
memset(buf, 0, ds);
return err;
}
int crypto_shash_final(struct shash_desc *desc, u8 *out) int crypto_shash_final(struct shash_desc *desc, u8 *out)
{ {
struct crypto_shash *tfm = desc->tfm; struct shash_alg *shash = crypto_shash_alg(desc->tfm);
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
int err; int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) if (IS_ENABLED(CONFIG_CRYPTO_STATS))
atomic64_inc(&shash_get_stat(shash)->hash_cnt); atomic64_inc(&shash_get_stat(shash)->hash_cnt);
if ((unsigned long)out & alignmask) err = shash->final(desc, out);
err = shash_final_unaligned(desc, out);
else
err = shash->final(desc, out);
return crypto_shash_errstat(shash, err); return crypto_shash_errstat(shash, err);
} }
EXPORT_SYMBOL_GPL(crypto_shash_final); EXPORT_SYMBOL_GPL(crypto_shash_final);
static int shash_finup_unaligned(struct shash_desc *desc, const u8 *data, static int shash_default_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out) unsigned int len, u8 *out)
{ {
return shash_update_unaligned(desc, data, len) ?: struct shash_alg *shash = crypto_shash_alg(desc->tfm);
shash_final_unaligned(desc, out);
return shash->update(desc, data, len) ?:
shash->final(desc, out);
} }
int crypto_shash_finup(struct shash_desc *desc, const u8 *data, int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
@ -196,7 +105,6 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
{ {
struct crypto_shash *tfm = desc->tfm; struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm); struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
int err; int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
@ -206,22 +114,19 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
atomic64_add(len, &istat->hash_tlen); atomic64_add(len, &istat->hash_tlen);
} }
if (((unsigned long)data | (unsigned long)out) & alignmask) err = shash->finup(desc, data, len, out);
err = shash_finup_unaligned(desc, data, len, out);
else
err = shash->finup(desc, data, len, out);
return crypto_shash_errstat(shash, err); return crypto_shash_errstat(shash, err);
} }
EXPORT_SYMBOL_GPL(crypto_shash_finup); EXPORT_SYMBOL_GPL(crypto_shash_finup);
static int shash_digest_unaligned(struct shash_desc *desc, const u8 *data, static int shash_default_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out) unsigned int len, u8 *out)
{ {
return crypto_shash_init(desc) ?: struct shash_alg *shash = crypto_shash_alg(desc->tfm);
shash_update_unaligned(desc, data, len) ?:
shash_final_unaligned(desc, out); return shash->init(desc) ?:
shash->finup(desc, data, len, out);
} }
int crypto_shash_digest(struct shash_desc *desc, const u8 *data, int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
@ -229,7 +134,6 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
{ {
struct crypto_shash *tfm = desc->tfm; struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm); struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned long alignmask = crypto_shash_alignmask(tfm);
int err; int err;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
@ -241,8 +145,6 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
err = -ENOKEY; err = -ENOKEY;
else if (((unsigned long)data | (unsigned long)out) & alignmask)
err = shash_digest_unaligned(desc, data, len, out);
else else
err = shash->digest(desc, data, len, out); err = shash->digest(desc, data, len, out);
@ -266,202 +168,34 @@ int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
} }
EXPORT_SYMBOL_GPL(crypto_shash_tfm_digest); EXPORT_SYMBOL_GPL(crypto_shash_tfm_digest);
static int shash_default_export(struct shash_desc *desc, void *out) int crypto_shash_export(struct shash_desc *desc, void *out)
{ {
memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(desc->tfm)); struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm);
if (shash->export)
return shash->export(desc, out);
memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(crypto_shash_export);
static int shash_default_import(struct shash_desc *desc, const void *in) int crypto_shash_import(struct shash_desc *desc, const void *in)
{ {
memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(desc->tfm)); struct crypto_shash *tfm = desc->tfm;
struct shash_alg *shash = crypto_shash_alg(tfm);
if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
if (shash->import)
return shash->import(desc, in);
memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(tfm));
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(crypto_shash_import);
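
As an illustrative aside (not part of this diff): crypto_shash_export() and crypto_shash_import() now fall back to a plain memcpy of the descriptor context when an algorithm leaves ->export/->import NULL, so callers can keep checkpointing and resuming partial hashes as before. A minimal sketch under assumptions (a "sha256" tfm, invented names, error handling trimmed):

#include <crypto/hash.h>
#include <linux/err.h>

/* Hypothetical helper: hash p1 then p2, snapshotting the state in between. */
static int sha256_checkpoint_demo(const u8 *p1, unsigned int l1,
				  const u8 *p2, unsigned int l2, u8 *digest)
{
	struct crypto_shash *tfm = crypto_alloc_shash("sha256", 0, 0);
	u8 state[HASH_MAX_STATESIZE];
	SHASH_DESC_ON_STACK(desc, tfm);

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	desc->tfm = tfm;

	crypto_shash_init(desc);
	crypto_shash_update(desc, p1, l1);
	crypto_shash_export(desc, state);	/* memcpy of the desc ctx when ->export is NULL */

	crypto_shash_import(desc, state);	/* fails with -ENOKEY if a keyed tfm has no key set */
	crypto_shash_update(desc, p2, l2);
	crypto_shash_final(desc, digest);	/* digest must hold 32 bytes here */

	crypto_free_shash(tfm);
	return 0;
}
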
static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
struct crypto_shash **ctx = crypto_ahash_ctx(tfm);
return crypto_shash_setkey(*ctx, key, keylen);
}
static int shash_async_init(struct ahash_request *req)
{
struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
struct shash_desc *desc = ahash_request_ctx(req);
desc->tfm = *ctx;
return crypto_shash_init(desc);
}
int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
{
struct crypto_hash_walk walk;
int nbytes;
for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
nbytes = crypto_hash_walk_done(&walk, nbytes))
nbytes = crypto_shash_update(desc, walk.data, nbytes);
return nbytes;
}
EXPORT_SYMBOL_GPL(shash_ahash_update);
static int shash_async_update(struct ahash_request *req)
{
return shash_ahash_update(req, ahash_request_ctx(req));
}
static int shash_async_final(struct ahash_request *req)
{
return crypto_shash_final(ahash_request_ctx(req), req->result);
}
int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
{
struct crypto_hash_walk walk;
int nbytes;
nbytes = crypto_hash_walk_first(req, &walk);
if (!nbytes)
return crypto_shash_final(desc, req->result);
do {
nbytes = crypto_hash_walk_last(&walk) ?
crypto_shash_finup(desc, walk.data, nbytes,
req->result) :
crypto_shash_update(desc, walk.data, nbytes);
nbytes = crypto_hash_walk_done(&walk, nbytes);
} while (nbytes > 0);
return nbytes;
}
EXPORT_SYMBOL_GPL(shash_ahash_finup);
static int shash_async_finup(struct ahash_request *req)
{
struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
struct shash_desc *desc = ahash_request_ctx(req);
desc->tfm = *ctx;
return shash_ahash_finup(req, desc);
}
int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
{
unsigned int nbytes = req->nbytes;
struct scatterlist *sg;
unsigned int offset;
int err;
if (nbytes &&
(sg = req->src, offset = sg->offset,
nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
void *data;
data = kmap_local_page(sg_page(sg));
err = crypto_shash_digest(desc, data + offset, nbytes,
req->result);
kunmap_local(data);
} else
err = crypto_shash_init(desc) ?:
shash_ahash_finup(req, desc);
return err;
}
EXPORT_SYMBOL_GPL(shash_ahash_digest);
static int shash_async_digest(struct ahash_request *req)
{
struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
struct shash_desc *desc = ahash_request_ctx(req);
desc->tfm = *ctx;
return shash_ahash_digest(req, desc);
}
static int shash_async_export(struct ahash_request *req, void *out)
{
return crypto_shash_export(ahash_request_ctx(req), out);
}
static int shash_async_import(struct ahash_request *req, const void *in)
{
struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
struct shash_desc *desc = ahash_request_ctx(req);
desc->tfm = *ctx;
return crypto_shash_import(desc, in);
}
static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm)
{
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
crypto_free_shash(*ctx);
}
int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
{
struct crypto_alg *calg = tfm->__crt_alg;
struct shash_alg *alg = __crypto_shash_alg(calg);
struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
struct crypto_shash *shash;
if (!crypto_mod_get(calg))
return -EAGAIN;
shash = crypto_create_tfm(calg, &crypto_shash_type);
if (IS_ERR(shash)) {
crypto_mod_put(calg);
return PTR_ERR(shash);
}
*ctx = shash;
tfm->exit = crypto_exit_shash_ops_async;
crt->init = shash_async_init;
crt->update = shash_async_update;
crt->final = shash_async_final;
crt->finup = shash_async_finup;
crt->digest = shash_async_digest;
if (crypto_shash_alg_has_setkey(alg))
crt->setkey = shash_async_setkey;
crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
CRYPTO_TFM_NEED_KEY);
crt->export = shash_async_export;
crt->import = shash_async_import;
crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
return 0;
}
struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
struct crypto_ahash *hash)
{
struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
struct crypto_shash **ctx = crypto_ahash_ctx(hash);
struct crypto_shash *shash;
shash = crypto_clone_shash(*ctx);
if (IS_ERR(shash)) {
crypto_free_ahash(nhash);
return ERR_CAST(shash);
}
*nctx = shash;
return nhash;
}
static void crypto_shash_exit_tfm(struct crypto_tfm *tfm) static void crypto_shash_exit_tfm(struct crypto_tfm *tfm)
{ {
@ -541,7 +275,7 @@ static int __maybe_unused crypto_shash_report_stat(
return crypto_hash_report_stat(skb, alg, "shash"); return crypto_hash_report_stat(skb, alg, "shash");
} }
static const struct crypto_type crypto_shash_type = { const struct crypto_type crypto_shash_type = {
.extsize = crypto_alg_extsize, .extsize = crypto_alg_extsize,
.init_tfm = crypto_shash_init_tfm, .init_tfm = crypto_shash_init_tfm,
.free = crypto_shash_free_instance, .free = crypto_shash_free_instance,
@ -626,6 +360,10 @@ int hash_prepare_alg(struct hash_alg_common *alg)
if (alg->digestsize > HASH_MAX_DIGESTSIZE) if (alg->digestsize > HASH_MAX_DIGESTSIZE)
return -EINVAL; return -EINVAL;
/* alignmask is not useful for hashes, so it is not supported. */
if (base->cra_alignmask)
return -EINVAL;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK; base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) if (IS_ENABLED(CONFIG_CRYPTO_STATS))
@ -642,9 +380,6 @@ static int shash_prepare_alg(struct shash_alg *alg)
if (alg->descsize > HASH_MAX_DESCSIZE) if (alg->descsize > HASH_MAX_DESCSIZE)
return -EINVAL; return -EINVAL;
if (base->cra_alignmask > MAX_SHASH_ALIGNMASK)
return -EINVAL;
if ((alg->export && !alg->import) || (alg->import && !alg->export)) if ((alg->export && !alg->import) || (alg->import && !alg->export))
return -EINVAL; return -EINVAL;
@ -655,15 +390,23 @@ static int shash_prepare_alg(struct shash_alg *alg)
base->cra_type = &crypto_shash_type; base->cra_type = &crypto_shash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_SHASH; base->cra_flags |= CRYPTO_ALG_TYPE_SHASH;
/*
* Handle missing optional functions. For each one we can either
* install a default here, or we can leave the pointer as NULL and check
* the pointer for NULL in crypto_shash_*(), avoiding an indirect call
* when the default behavior is desired. For ->finup and ->digest we
* install defaults, since for optimal performance algorithms should
* implement these anyway. On the other hand, for ->import and
* ->export the common case and best performance comes from the simple
* memcpy of the shash_desc_ctx, so when those pointers are NULL we
* leave them NULL and provide the memcpy with no indirect call.
*/
if (!alg->finup) if (!alg->finup)
alg->finup = shash_finup_unaligned; alg->finup = shash_default_finup;
if (!alg->digest) if (!alg->digest)
alg->digest = shash_digest_unaligned; alg->digest = shash_default_digest;
if (!alg->export) { if (!alg->export)
alg->export = shash_default_export;
alg->import = shash_default_import;
alg->halg.statesize = alg->descsize; alg->halg.statesize = alg->descsize;
}
if (!alg->setkey) if (!alg->setkey)
alg->setkey = shash_no_setkey; alg->setkey = shash_no_setkey;
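
As an illustrative aside (not part of this diff): under the rules spelled out in the comment above, only ->init/->update/->final are mandatory for an shash driver; ->finup and ->digest get defaults at registration time, ->export and ->import fall back to the descriptor-context memcpy (with statesize defaulting to descsize), and a non-zero cra_alignmask is now rejected for hashes. A hypothetical minimal driver ("foo", a toy byte-sum, not a real algorithm) could therefore look like this:

#include <crypto/internal/hash.h>
#include <linux/module.h>
#include <asm/unaligned.h>

struct foo_desc_ctx {
	u32 sum;
};

static int foo_init(struct shash_desc *desc)
{
	struct foo_desc_ctx *ctx = shash_desc_ctx(desc);

	ctx->sum = 0;
	return 0;
}

static int foo_update(struct shash_desc *desc, const u8 *data, unsigned int len)
{
	struct foo_desc_ctx *ctx = shash_desc_ctx(desc);

	while (len--)
		ctx->sum += *data++;
	return 0;
}

static int foo_final(struct shash_desc *desc, u8 *out)
{
	struct foo_desc_ctx *ctx = shash_desc_ctx(desc);

	put_unaligned_le32(ctx->sum, out);
	return 0;
}

static struct shash_alg foo_alg = {
	.digestsize	= 4,
	.init		= foo_init,
	.update		= foo_update,
	.final		= foo_final,
	/* no finup/digest/export/import: the defaults above cover them */
	.descsize	= sizeof(struct foo_desc_ctx),
	.base		= {
		.cra_name	= "foo",
		.cra_driver_name = "foo-generic",
		.cra_blocksize	= 1,
		.cra_module	= THIS_MODULE,
		/* cra_alignmask must stay 0: hash_prepare_alg() rejects it */
	},
};

Registration would then just be crypto_register_shash(&foo_alg) and crypto_unregister_shash(&foo_alg) in the module init/exit paths.
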

crypto/skcipher.c

@ -24,8 +24,9 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/string.h> #include <linux/string.h>
#include <net/netlink.h> #include <net/netlink.h>
#include "skcipher.h"
#include "internal.h" #define CRYPTO_ALG_TYPE_SKCIPHER_MASK 0x0000000e
enum { enum {
SKCIPHER_WALK_PHYS = 1 << 0, SKCIPHER_WALK_PHYS = 1 << 0,
@ -43,6 +44,8 @@ struct skcipher_walk_buffer {
u8 buffer[]; u8 buffer[];
}; };
static const struct crypto_type crypto_skcipher_type;
static int skcipher_walk_next(struct skcipher_walk *walk); static int skcipher_walk_next(struct skcipher_walk *walk);
static inline void skcipher_map_src(struct skcipher_walk *walk) static inline void skcipher_map_src(struct skcipher_walk *walk)
@ -89,11 +92,7 @@ static inline struct skcipher_alg *__crypto_skcipher_alg(
static inline struct crypto_istat_cipher *skcipher_get_stat( static inline struct crypto_istat_cipher *skcipher_get_stat(
struct skcipher_alg *alg) struct skcipher_alg *alg)
{ {
#ifdef CONFIG_CRYPTO_STATS return skcipher_get_stat_common(&alg->co);
return &alg->stat;
#else
return NULL;
#endif
} }
static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err) static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err)
@ -468,6 +467,7 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
struct skcipher_request *req) struct skcipher_request *req)
{ {
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
walk->total = req->cryptlen; walk->total = req->cryptlen;
walk->nbytes = 0; walk->nbytes = 0;
@ -485,10 +485,14 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
SKCIPHER_WALK_SLEEP : 0; SKCIPHER_WALK_SLEEP : 0;
walk->blocksize = crypto_skcipher_blocksize(tfm); walk->blocksize = crypto_skcipher_blocksize(tfm);
walk->stride = crypto_skcipher_walksize(tfm);
walk->ivsize = crypto_skcipher_ivsize(tfm); walk->ivsize = crypto_skcipher_ivsize(tfm);
walk->alignmask = crypto_skcipher_alignmask(tfm); walk->alignmask = crypto_skcipher_alignmask(tfm);
if (alg->co.base.cra_type != &crypto_skcipher_type)
walk->stride = alg->co.chunksize;
else
walk->stride = alg->walksize;
return skcipher_walk_first(walk); return skcipher_walk_first(walk);
} }
@ -616,6 +620,17 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned long alignmask = crypto_skcipher_alignmask(tfm); unsigned long alignmask = crypto_skcipher_alignmask(tfm);
int err; int err;
if (cipher->co.base.cra_type != &crypto_skcipher_type) {
struct crypto_lskcipher **ctx = crypto_skcipher_ctx(tfm);
crypto_lskcipher_clear_flags(*ctx, CRYPTO_TFM_REQ_MASK);
crypto_lskcipher_set_flags(*ctx,
crypto_skcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
err = crypto_lskcipher_setkey(*ctx, key, keylen);
goto out;
}
if (keylen < cipher->min_keysize || keylen > cipher->max_keysize) if (keylen < cipher->min_keysize || keylen > cipher->max_keysize)
return -EINVAL; return -EINVAL;
@ -624,6 +639,7 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
else else
err = cipher->setkey(tfm, key, keylen); err = cipher->setkey(tfm, key, keylen);
out:
if (unlikely(err)) { if (unlikely(err)) {
skcipher_set_needkey(tfm); skcipher_set_needkey(tfm);
return err; return err;
@ -649,6 +665,8 @@ int crypto_skcipher_encrypt(struct skcipher_request *req)
if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
ret = -ENOKEY; ret = -ENOKEY;
else if (alg->co.base.cra_type != &crypto_skcipher_type)
ret = crypto_lskcipher_encrypt_sg(req);
else else
ret = alg->encrypt(req); ret = alg->encrypt(req);
@ -671,6 +689,8 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
ret = -ENOKEY; ret = -ENOKEY;
else if (alg->co.base.cra_type != &crypto_skcipher_type)
ret = crypto_lskcipher_decrypt_sg(req);
else else
ret = alg->decrypt(req); ret = alg->decrypt(req);
@ -693,6 +713,9 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
skcipher_set_needkey(skcipher); skcipher_set_needkey(skcipher);
if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
return crypto_init_lskcipher_ops_sg(tfm);
if (alg->exit) if (alg->exit)
skcipher->base.exit = crypto_skcipher_exit_tfm; skcipher->base.exit = crypto_skcipher_exit_tfm;
@ -702,6 +725,14 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
return 0; return 0;
} }
static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
{
if (alg->cra_type != &crypto_skcipher_type)
return sizeof(struct crypto_lskcipher *);
return crypto_alg_extsize(alg);
}
static void crypto_skcipher_free_instance(struct crypto_instance *inst) static void crypto_skcipher_free_instance(struct crypto_instance *inst)
{ {
struct skcipher_instance *skcipher = struct skcipher_instance *skcipher =
@ -770,7 +801,7 @@ static int __maybe_unused crypto_skcipher_report_stat(
} }
static const struct crypto_type crypto_skcipher_type = { static const struct crypto_type crypto_skcipher_type = {
.extsize = crypto_alg_extsize, .extsize = crypto_skcipher_extsize,
.init_tfm = crypto_skcipher_init_tfm, .init_tfm = crypto_skcipher_init_tfm,
.free = crypto_skcipher_free_instance, .free = crypto_skcipher_free_instance,
#ifdef CONFIG_PROC_FS #ifdef CONFIG_PROC_FS
@ -783,7 +814,7 @@ static const struct crypto_type crypto_skcipher_type = {
.report_stat = crypto_skcipher_report_stat, .report_stat = crypto_skcipher_report_stat,
#endif #endif
.maskclear = ~CRYPTO_ALG_TYPE_MASK, .maskclear = ~CRYPTO_ALG_TYPE_MASK,
.maskset = CRYPTO_ALG_TYPE_MASK, .maskset = CRYPTO_ALG_TYPE_SKCIPHER_MASK,
.type = CRYPTO_ALG_TYPE_SKCIPHER, .type = CRYPTO_ALG_TYPE_SKCIPHER,
.tfmsize = offsetof(struct crypto_skcipher, base), .tfmsize = offsetof(struct crypto_skcipher, base),
}; };
@ -834,27 +865,43 @@ int crypto_has_skcipher(const char *alg_name, u32 type, u32 mask)
} }
EXPORT_SYMBOL_GPL(crypto_has_skcipher); EXPORT_SYMBOL_GPL(crypto_has_skcipher);
static int skcipher_prepare_alg(struct skcipher_alg *alg) int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
{ {
struct crypto_istat_cipher *istat = skcipher_get_stat(alg); struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
struct crypto_alg *base = &alg->base; struct crypto_alg *base = &alg->base;
if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 || if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
alg->walksize > PAGE_SIZE / 8)
return -EINVAL; return -EINVAL;
if (!alg->chunksize) if (!alg->chunksize)
alg->chunksize = base->cra_blocksize; alg->chunksize = base->cra_blocksize;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
memset(istat, 0, sizeof(*istat));
return 0;
}
static int skcipher_prepare_alg(struct skcipher_alg *alg)
{
struct crypto_alg *base = &alg->base;
int err;
err = skcipher_prepare_alg_common(&alg->co);
if (err)
return err;
if (alg->walksize > PAGE_SIZE / 8)
return -EINVAL;
if (!alg->walksize) if (!alg->walksize)
alg->walksize = alg->chunksize; alg->walksize = alg->chunksize;
base->cra_type = &crypto_skcipher_type; base->cra_type = &crypto_skcipher_type;
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER; base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
memset(istat, 0, sizeof(*istat));
return 0; return 0;
} }

crypto/skcipher.h (new file, 28 lines)

@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Cryptographic API.
*
* Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
*/
#ifndef _LOCAL_CRYPTO_SKCIPHER_H
#define _LOCAL_CRYPTO_SKCIPHER_H
#include <crypto/internal/skcipher.h>
#include "internal.h"
static inline struct crypto_istat_cipher *skcipher_get_stat_common(
struct skcipher_alg_common *alg)
{
#ifdef CONFIG_CRYPTO_STATS
return &alg->stat;
#else
return NULL;
#endif
}
int crypto_lskcipher_encrypt_sg(struct skcipher_request *req);
int crypto_lskcipher_decrypt_sg(struct skcipher_request *req);
int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm);
int skcipher_prepare_alg_common(struct skcipher_alg_common *alg);
#endif /* _LOCAL_CRYPTO_SKCIPHER_H */

crypto/testmgr.c

@ -408,17 +408,15 @@ static const struct testvec_config default_hash_testvec_configs[] = {
.finalization_type = FINALIZATION_TYPE_FINAL, .finalization_type = FINALIZATION_TYPE_FINAL,
.key_offset = 1, .key_offset = 1,
}, { }, {
.name = "digest buffer aligned only to alignmask", .name = "digest misaligned buffer",
.src_divs = { .src_divs = {
{ {
.proportion_of_total = 10000, .proportion_of_total = 10000,
.offset = 1, .offset = 1,
.offset_relative_to_alignmask = true,
}, },
}, },
.finalization_type = FINALIZATION_TYPE_DIGEST, .finalization_type = FINALIZATION_TYPE_DIGEST,
.key_offset = 1, .key_offset = 1,
.key_offset_relative_to_alignmask = true,
}, { }, {
.name = "init+update+update+final two even splits", .name = "init+update+update+final two even splits",
.src_divs = { .src_divs = {
@ -1275,7 +1273,6 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
u8 *hashstate) u8 *hashstate)
{ {
struct crypto_shash *tfm = desc->tfm; struct crypto_shash *tfm = desc->tfm;
const unsigned int alignmask = crypto_shash_alignmask(tfm);
const unsigned int digestsize = crypto_shash_digestsize(tfm); const unsigned int digestsize = crypto_shash_digestsize(tfm);
const unsigned int statesize = crypto_shash_statesize(tfm); const unsigned int statesize = crypto_shash_statesize(tfm);
const char *driver = crypto_shash_driver_name(tfm); const char *driver = crypto_shash_driver_name(tfm);
@ -1287,7 +1284,7 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
/* Set the key, if specified */ /* Set the key, if specified */
if (vec->ksize) { if (vec->ksize) {
err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize, err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize,
cfg, alignmask); cfg, 0);
if (err) { if (err) {
if (err == vec->setkey_error) if (err == vec->setkey_error)
return 0; return 0;
@ -1304,7 +1301,7 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
} }
/* Build the scatterlist for the source data */ /* Build the scatterlist for the source data */
err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs); err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
if (err) { if (err) {
pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n", pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
driver, vec_name, cfg->name); driver, vec_name, cfg->name);
@ -1459,7 +1456,6 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
u8 *hashstate) u8 *hashstate)
{ {
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
const unsigned int alignmask = crypto_ahash_alignmask(tfm);
const unsigned int digestsize = crypto_ahash_digestsize(tfm); const unsigned int digestsize = crypto_ahash_digestsize(tfm);
const unsigned int statesize = crypto_ahash_statesize(tfm); const unsigned int statesize = crypto_ahash_statesize(tfm);
const char *driver = crypto_ahash_driver_name(tfm); const char *driver = crypto_ahash_driver_name(tfm);
@ -1475,7 +1471,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
/* Set the key, if specified */ /* Set the key, if specified */
if (vec->ksize) { if (vec->ksize) {
err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize, err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize,
cfg, alignmask); cfg, 0);
if (err) { if (err) {
if (err == vec->setkey_error) if (err == vec->setkey_error)
return 0; return 0;
@ -1492,7 +1488,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
} }
/* Build the scatterlist for the source data */ /* Build the scatterlist for the source data */
err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs); err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
if (err) { if (err) {
pr_err("alg: ahash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n", pr_err("alg: ahash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
driver, vec_name, cfg->name); driver, vec_name, cfg->name);
@ -4963,7 +4959,7 @@ static const struct alg_test_desc alg_test_descs[] = {
} }
}, { }, {
.alg = "ecb(arc4)", .alg = "ecb(arc4)",
.generic_driver = "ecb(arc4)-generic", .generic_driver = "arc4-generic",
.test = alg_test_skcipher, .test = alg_test_skcipher,
.suite = { .suite = {
.cipher = __VECS(arc4_tv_template) .cipher = __VECS(arc4_tv_template)
@ -5460,6 +5456,18 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = { .suite = {
.akcipher = __VECS(pkcs1pad_rsa_tv_template) .akcipher = __VECS(pkcs1pad_rsa_tv_template)
} }
}, {
.alg = "pkcs1pad(rsa,sha3-256)",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "pkcs1pad(rsa,sha3-384)",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "pkcs1pad(rsa,sha3-512)",
.test = alg_test_null,
.fips_allowed = 1,
}, { }, {
.alg = "pkcs1pad(rsa,sha384)", .alg = "pkcs1pad(rsa,sha384)",
.test = alg_test_null, .test = alg_test_null,
@ -5772,16 +5780,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = { .suite = {
.hash = __VECS(xxhash64_tv_template) .hash = __VECS(xxhash64_tv_template)
} }
}, {
.alg = "zlib-deflate",
.test = alg_test_comp,
.fips_allowed = 1,
.suite = {
.comp = {
.comp = __VECS(zlib_deflate_comp_tv_template),
.decomp = __VECS(zlib_deflate_decomp_tv_template)
}
}
}, { }, {
.alg = "zstd", .alg = "zstd",
.test = alg_test_comp, .test = alg_test_comp,
@ -5945,6 +5943,25 @@ test_done:
return rc; return rc;
notest: notest:
if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_LSKCIPHER) {
char nalg[CRYPTO_MAX_ALG_NAME];
if (snprintf(nalg, sizeof(nalg), "ecb(%s)", alg) >=
sizeof(nalg))
goto notest2;
i = alg_find_test(nalg);
if (i < 0)
goto notest2;
if (fips_enabled && !alg_test_descs[i].fips_allowed)
goto non_fips_alg;
rc = alg_test_skcipher(alg_test_descs + i, driver, type, mask);
goto test_done;
}
notest2:
printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver); printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver);
if (type & CRYPTO_ALG_FIPS_INTERNAL) if (type & CRYPTO_ALG_FIPS_INTERNAL)

crypto/testmgr.h

@ -653,30 +653,6 @@ static const struct akcipher_testvec rsa_tv_template[] = {
static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = { static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
{ {
.key = .key =
"\x04\xf7\x46\xf8\x2f\x15\xf6\x22\x8e\xd7\x57\x4f\xcc\xe7\xbb\xc1"
"\xd4\x09\x73\xcf\xea\xd0\x15\x07\x3d\xa5\x8a\x8a\x95\x43\xe4\x68"
"\xea\xc6\x25\xc1\xc1\x01\x25\x4c\x7e\xc3\x3c\xa6\x04\x0a\xe7\x08"
"\x98",
.key_len = 49,
.params =
"\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
"\xce\x3d\x03\x01\x01",
.param_len = 21,
.m =
"\xcd\xb9\xd2\x1c\xb7\x6f\xcd\x44\xb3\xfd\x63\xea\xa3\x66\x7f\xae"
"\x63\x85\xe7\x82",
.m_size = 20,
.algo = OID_id_ecdsa_with_sha1,
.c =
"\x30\x35\x02\x19\x00\xba\xe5\x93\x83\x6e\xb6\x3b\x63\xa0\x27\x91"
"\xc6\xf6\x7f\xc3\x09\xad\x59\xad\x88\x27\xd6\x92\x6b\x02\x18\x10"
"\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86"
"\x80\x6f\xa5\x79\x77\xda\xd0",
.c_size = 55,
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
.key =
"\x04\xb6\x4b\xb1\xd1\xac\xba\x24\x8f\x65\xb2\x60\x00\x90\xbf\xbd" "\x04\xb6\x4b\xb1\xd1\xac\xba\x24\x8f\x65\xb2\x60\x00\x90\xbf\xbd"
"\x78\x05\x73\xe9\x79\x1d\x6f\x7c\x0b\xd2\xc3\x93\xa7\x28\xe1\x75" "\x78\x05\x73\xe9\x79\x1d\x6f\x7c\x0b\xd2\xc3\x93\xa7\x28\xe1\x75"
"\xf7\xd5\x95\x1d\x28\x10\xc0\x75\x50\x5c\x1a\x4f\x3f\x8f\xa5\xee" "\xf7\xd5\x95\x1d\x28\x10\xc0\x75\x50\x5c\x1a\x4f\x3f\x8f\xa5\xee"
@ -780,32 +756,6 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = { static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
{ {
.key = .key =
"\x04\xb9\x7b\xbb\xd7\x17\x64\xd2\x7e\xfc\x81\x5d\x87\x06\x83\x41"
"\x22\xd6\x9a\xaa\x87\x17\xec\x4f\x63\x55\x2f\x94\xba\xdd\x83\xe9"
"\x34\x4b\xf3\xe9\x91\x13\x50\xb6\xcb\xca\x62\x08\xe7\x3b\x09\xdc"
"\xc3\x63\x4b\x2d\xb9\x73\x53\xe4\x45\xe6\x7c\xad\xe7\x6b\xb0\xe8"
"\xaf",
.key_len = 65,
.params =
"\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
"\xce\x3d\x03\x01\x07",
.param_len = 21,
.m =
"\xc2\x2b\x5f\x91\x78\x34\x26\x09\x42\x8d\x6f\x51\xb2\xc5\xaf\x4c"
"\x0b\xde\x6a\x42",
.m_size = 20,
.algo = OID_id_ecdsa_with_sha1,
.c =
"\x30\x46\x02\x21\x00\xf9\x25\xce\x9f\x3a\xa6\x35\x81\xcf\xd4\xe7"
"\xb7\xf0\x82\x56\x41\xf7\xd4\xad\x8d\x94\x5a\x69\x89\xee\xca\x6a"
"\x52\x0e\x48\x4d\xcc\x02\x21\x00\xd7\xe4\xef\x52\x66\xd3\x5b\x9d"
"\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7"
"\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad",
.c_size = 72,
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
.key =
"\x04\x8b\x6d\xc0\x33\x8e\x2d\x8b\x67\xf5\xeb\xc4\x7f\xa0\xf5\xd9" "\x04\x8b\x6d\xc0\x33\x8e\x2d\x8b\x67\xf5\xeb\xc4\x7f\xa0\xf5\xd9"
"\x7b\x03\xa5\x78\x9a\xb5\xea\x14\xe4\x23\xd0\xaf\xd7\x0e\x2e\xa0" "\x7b\x03\xa5\x78\x9a\xb5\xea\x14\xe4\x23\xd0\xaf\xd7\x0e\x2e\xa0"
"\xc9\x8b\xdb\x95\xf8\xb3\xaf\xac\x00\x2c\x2c\x1f\x7a\xfd\x95\x88" "\xc9\x8b\xdb\x95\xf8\xb3\xaf\xac\x00\x2c\x2c\x1f\x7a\xfd\x95\x88"
@ -916,36 +866,6 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = { static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
{ {
.key = /* secp384r1(sha1) */
"\x04\x89\x25\xf3\x97\x88\xcb\xb0\x78\xc5\x72\x9a\x14\x6e\x7a\xb1"
"\x5a\xa5\x24\xf1\x95\x06\x9e\x28\xfb\xc4\xb9\xbe\x5a\x0d\xd9\x9f"
"\xf3\xd1\x4d\x2d\x07\x99\xbd\xda\xa7\x66\xec\xbb\xea\xba\x79\x42"
"\xc9\x34\x89\x6a\xe7\x0b\xc3\xf2\xfe\x32\x30\xbe\xba\xf9\xdf\x7e"
"\x4b\x6a\x07\x8e\x26\x66\x3f\x1d\xec\xa2\x57\x91\x51\xdd\x17\x0e"
"\x0b\x25\xd6\x80\x5c\x3b\xe6\x1a\x98\x48\x91\x45\x7a\x73\xb0\xc3"
"\xf1",
.key_len = 97,
.params =
"\x30\x10\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x05\x2b\x81\x04"
"\x00\x22",
.param_len = 18,
.m =
"\x12\x55\x28\xf0\x77\xd5\xb6\x21\x71\x32\x48\xcd\x28\xa8\x25\x22"
"\x3a\x69\xc1\x93",
.m_size = 20,
.algo = OID_id_ecdsa_with_sha1,
.c =
"\x30\x66\x02\x31\x00\xf5\x0f\x24\x4c\x07\x93\x6f\x21\x57\x55\x07"
"\x20\x43\x30\xde\xa0\x8d\x26\x8e\xae\x63\x3f\xbc\x20\x3a\xc6\xf1"
"\x32\x3c\xce\x70\x2b\x78\xf1\x4c\x26\xe6\x5b\x86\xcf\xec\x7c\x7e"
"\xd0\x87\xd7\xd7\x6e\x02\x31\x00\xcd\xbb\x7e\x81\x5d\x8f\x63\xc0"
"\x5f\x63\xb1\xbe\x5e\x4c\x0e\xa1\xdf\x28\x8c\x1b\xfa\xf9\x95\x88"
"\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26"
"\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
.c_size = 104,
.public_key_vec = true,
.siggen_sigver_test = true,
}, {
.key = /* secp384r1(sha224) */ .key = /* secp384r1(sha224) */
"\x04\x69\x6c\xcf\x62\xee\xd0\x0d\xe5\xb5\x2f\x70\x54\xcf\x26\xa0" "\x04\x69\x6c\xcf\x62\xee\xd0\x0d\xe5\xb5\x2f\x70\x54\xcf\x26\xa0"
"\xd9\x98\x8d\x92\x2a\xab\x9b\x11\xcb\x48\x18\xa1\xa9\x0d\xd5\x18" "\xd9\x98\x8d\x92\x2a\xab\x9b\x11\xcb\x48\x18\xa1\xa9\x0d\xd5\x18"
@ -35754,81 +35674,6 @@ static const struct comp_testvec deflate_decomp_tv_template[] = {
}, },
}; };
static const struct comp_testvec zlib_deflate_comp_tv_template[] = {
{
.inlen = 70,
.outlen = 44,
.input = "Join us now and share the software "
"Join us now and share the software ",
.output = "\x78\x5e\xf3\xca\xcf\xcc\x53\x28"
"\x2d\x56\xc8\xcb\x2f\x57\x48\xcc"
"\x4b\x51\x28\xce\x48\x2c\x4a\x55"
"\x28\xc9\x48\x55\x28\xce\x4f\x2b"
"\x29\x07\x71\xbc\x08\x2b\x01\x00"
"\x7c\x65\x19\x3d",
}, {
.inlen = 191,
.outlen = 129,
.input = "This document describes a compression method based on the DEFLATE"
"compression algorithm. This document defines the application of "
"the DEFLATE algorithm to the IP Payload Compression Protocol.",
.output = "\x78\x5e\x5d\xce\x41\x0a\xc3\x30"
"\x0c\x04\xc0\xaf\xec\x0b\xf2\x87"
"\xd2\xa6\x50\xe8\xc1\x07\x7f\x40"
"\xb1\x95\x5a\x60\x5b\xc6\x56\x0f"
"\xfd\x7d\x93\x1e\x42\xe8\x51\xec"
"\xee\x20\x9f\x64\x20\x6a\x78\x17"
"\xae\x86\xc8\x23\x74\x59\x78\x80"
"\x10\xb4\xb4\xce\x63\x88\x56\x14"
"\xb6\xa4\x11\x0b\x0d\x8e\xd8\x6e"
"\x4b\x8c\xdb\x7c\x7f\x5e\xfc\x7c"
"\xae\x51\x7e\x69\x17\x4b\x65\x02"
"\xfc\x1f\xbc\x4a\xdd\xd8\x7d\x48"
"\xad\x65\x09\x64\x3b\xac\xeb\xd9"
"\xc2\x01\xc0\xf4\x17\x3c\x1c\x1c"
"\x7d\xb2\x52\xc4\xf5\xf4\x8f\xeb"
"\x6a\x1a\x34\x4f\x5f\x2e\x32\x45"
"\x4e",
},
};
static const struct comp_testvec zlib_deflate_decomp_tv_template[] = {
{
.inlen = 128,
.outlen = 191,
.input = "\x78\x9c\x5d\x8d\x31\x0e\xc2\x30"
"\x10\x04\xbf\xb2\x2f\xc8\x1f\x10"
"\x04\x09\x89\xc2\x85\x3f\x70\xb1"
"\x2f\xf8\x24\xdb\x67\xd9\x47\xc1"
"\xef\x49\x68\x12\x51\xae\x76\x67"
"\xd6\x27\x19\x88\x1a\xde\x85\xab"
"\x21\xf2\x08\x5d\x16\x1e\x20\x04"
"\x2d\xad\xf3\x18\xa2\x15\x85\x2d"
"\x69\xc4\x42\x83\x23\xb6\x6c\x89"
"\x71\x9b\xef\xcf\x8b\x9f\xcf\x33"
"\xca\x2f\xed\x62\xa9\x4c\x80\xff"
"\x13\xaf\x52\x37\xed\x0e\x52\x6b"
"\x59\x02\xd9\x4e\xe8\x7a\x76\x1d"
"\x02\x98\xfe\x8a\x87\x83\xa3\x4f"
"\x56\x8a\xb8\x9e\x8e\x5c\x57\xd3"
"\xa0\x79\xfa\x02\x2e\x32\x45\x4e",
.output = "This document describes a compression method based on the DEFLATE"
"compression algorithm. This document defines the application of "
"the DEFLATE algorithm to the IP Payload Compression Protocol.",
}, {
.inlen = 44,
.outlen = 70,
.input = "\x78\x9c\xf3\xca\xcf\xcc\x53\x28"
"\x2d\x56\xc8\xcb\x2f\x57\x48\xcc"
"\x4b\x51\x28\xce\x48\x2c\x4a\x55"
"\x28\xc9\x48\x55\x28\xce\x4f\x2b"
"\x29\x07\x71\xbc\x08\x2b\x01\x00"
"\x7c\x65\x19\x3d",
.output = "Join us now and share the software "
"Join us now and share the software ",
},
};
/* /*
* LZO test vectors (null-terminated strings). * LZO test vectors (null-terminated strings).
*/ */

crypto/vmac.c

@ -649,7 +649,6 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize; inst->alg.base.cra_blocksize = alg->cra_blocksize;
inst->alg.base.cra_alignmask = alg->cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx); inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
inst->alg.base.cra_init = vmac_init_tfm; inst->alg.base.cra_init = vmac_init_tfm;

crypto/xcbc.c

@ -27,7 +27,7 @@ static u_int32_t ks[12] = {0x01010101, 0x01010101, 0x01010101, 0x01010101,
*/ */
struct xcbc_tfm_ctx { struct xcbc_tfm_ctx {
struct crypto_cipher *child; struct crypto_cipher *child;
u8 ctx[]; u8 consts[];
}; };
/* /*
@ -43,7 +43,7 @@ struct xcbc_tfm_ctx {
*/ */
struct xcbc_desc_ctx { struct xcbc_desc_ctx {
unsigned int len; unsigned int len;
u8 ctx[]; u8 odds[];
}; };
#define XCBC_BLOCKSIZE 16 #define XCBC_BLOCKSIZE 16
@ -51,9 +51,8 @@ struct xcbc_desc_ctx {
static int crypto_xcbc_digest_setkey(struct crypto_shash *parent, static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen) const u8 *inkey, unsigned int keylen)
{ {
unsigned long alignmask = crypto_shash_alignmask(parent);
struct xcbc_tfm_ctx *ctx = crypto_shash_ctx(parent); struct xcbc_tfm_ctx *ctx = crypto_shash_ctx(parent);
u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1); u8 *consts = ctx->consts;
int err = 0; int err = 0;
u8 key1[XCBC_BLOCKSIZE]; u8 key1[XCBC_BLOCKSIZE];
int bs = sizeof(key1); int bs = sizeof(key1);
@ -71,10 +70,9 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
static int crypto_xcbc_digest_init(struct shash_desc *pdesc) static int crypto_xcbc_digest_init(struct shash_desc *pdesc)
{ {
unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm);
struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc); struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
int bs = crypto_shash_blocksize(pdesc->tfm); int bs = crypto_shash_blocksize(pdesc->tfm);
u8 *prev = PTR_ALIGN(&ctx->ctx[0], alignmask + 1) + bs; u8 *prev = &ctx->odds[bs];
ctx->len = 0; ctx->len = 0;
memset(prev, 0, bs); memset(prev, 0, bs);
@ -86,12 +84,11 @@ static int crypto_xcbc_digest_update(struct shash_desc *pdesc, const u8 *p,
unsigned int len) unsigned int len)
{ {
struct crypto_shash *parent = pdesc->tfm; struct crypto_shash *parent = pdesc->tfm;
unsigned long alignmask = crypto_shash_alignmask(parent);
struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent); struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc); struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child; struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent); int bs = crypto_shash_blocksize(parent);
u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1); u8 *odds = ctx->odds;
u8 *prev = odds + bs; u8 *prev = odds + bs;
/* checking the data can fill the block */ /* checking the data can fill the block */
@ -132,13 +129,11 @@ static int crypto_xcbc_digest_update(struct shash_desc *pdesc, const u8 *p,
static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out) static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out)
{ {
struct crypto_shash *parent = pdesc->tfm; struct crypto_shash *parent = pdesc->tfm;
unsigned long alignmask = crypto_shash_alignmask(parent);
struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent); struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent);
struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc); struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
struct crypto_cipher *tfm = tctx->child; struct crypto_cipher *tfm = tctx->child;
int bs = crypto_shash_blocksize(parent); int bs = crypto_shash_blocksize(parent);
u8 *consts = PTR_ALIGN(&tctx->ctx[0], alignmask + 1); u8 *odds = ctx->odds;
u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
u8 *prev = odds + bs; u8 *prev = odds + bs;
unsigned int offset = 0; unsigned int offset = 0;
@ -157,7 +152,7 @@ static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out)
} }
crypto_xor(prev, odds, bs); crypto_xor(prev, odds, bs);
crypto_xor(prev, consts + offset, bs); crypto_xor(prev, &tctx->consts[offset], bs);
crypto_cipher_encrypt_one(tfm, out, prev); crypto_cipher_encrypt_one(tfm, out, prev);
@ -191,7 +186,6 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
struct shash_instance *inst; struct shash_instance *inst;
struct crypto_cipher_spawn *spawn; struct crypto_cipher_spawn *spawn;
struct crypto_alg *alg; struct crypto_alg *alg;
unsigned long alignmask;
u32 mask; u32 mask;
int err; int err;
@ -218,21 +212,15 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
if (err) if (err)
goto err_free_inst; goto err_free_inst;
alignmask = alg->cra_alignmask | 3;
inst->alg.base.cra_alignmask = alignmask;
inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_priority = alg->cra_priority;
inst->alg.base.cra_blocksize = alg->cra_blocksize; inst->alg.base.cra_blocksize = alg->cra_blocksize;
inst->alg.base.cra_ctxsize = sizeof(struct xcbc_tfm_ctx) +
alg->cra_blocksize * 2;
inst->alg.digestsize = alg->cra_blocksize; inst->alg.digestsize = alg->cra_blocksize;
inst->alg.descsize = ALIGN(sizeof(struct xcbc_desc_ctx), inst->alg.descsize = sizeof(struct xcbc_desc_ctx) +
crypto_tfm_ctx_alignment()) +
(alignmask &
~(crypto_tfm_ctx_alignment() - 1)) +
alg->cra_blocksize * 2; alg->cra_blocksize * 2;
inst->alg.base.cra_ctxsize = ALIGN(sizeof(struct xcbc_tfm_ctx),
alignmask + 1) +
alg->cra_blocksize * 2;
inst->alg.base.cra_init = xcbc_init_tfm; inst->alg.base.cra_init = xcbc_init_tfm;
inst->alg.base.cra_exit = xcbc_exit_tfm; inst->alg.base.cra_exit = xcbc_exit_tfm;

crypto/xts.c

@ -28,7 +28,7 @@ struct xts_tfm_ctx {
struct xts_instance_ctx { struct xts_instance_ctx {
struct crypto_skcipher_spawn spawn; struct crypto_skcipher_spawn spawn;
char name[CRYPTO_MAX_ALG_NAME]; struct crypto_cipher_spawn tweak_spawn;
}; };
struct xts_request_ctx { struct xts_request_ctx {
@ -306,7 +306,7 @@ static int xts_init_tfm(struct crypto_skcipher *tfm)
ctx->child = child; ctx->child = child;
tweak = crypto_alloc_cipher(ictx->name, 0, 0); tweak = crypto_spawn_cipher(&ictx->tweak_spawn);
if (IS_ERR(tweak)) { if (IS_ERR(tweak)) {
crypto_free_skcipher(ctx->child); crypto_free_skcipher(ctx->child);
return PTR_ERR(tweak); return PTR_ERR(tweak);
@ -333,14 +333,16 @@ static void xts_free_instance(struct skcipher_instance *inst)
struct xts_instance_ctx *ictx = skcipher_instance_ctx(inst); struct xts_instance_ctx *ictx = skcipher_instance_ctx(inst);
crypto_drop_skcipher(&ictx->spawn); crypto_drop_skcipher(&ictx->spawn);
crypto_drop_cipher(&ictx->tweak_spawn);
kfree(inst); kfree(inst);
} }
static int xts_create(struct crypto_template *tmpl, struct rtattr **tb) static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
{ {
struct skcipher_alg_common *alg;
char name[CRYPTO_MAX_ALG_NAME];
struct skcipher_instance *inst; struct skcipher_instance *inst;
struct xts_instance_ctx *ctx; struct xts_instance_ctx *ctx;
struct skcipher_alg *alg;
const char *cipher_name; const char *cipher_name;
u32 mask; u32 mask;
int err; int err;
@ -363,25 +365,25 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
cipher_name, 0, mask); cipher_name, 0, mask);
if (err == -ENOENT) { if (err == -ENOENT) {
err = -ENAMETOOLONG; err = -ENAMETOOLONG;
if (snprintf(ctx->name, CRYPTO_MAX_ALG_NAME, "ecb(%s)", if (snprintf(name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
cipher_name) >= CRYPTO_MAX_ALG_NAME) cipher_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst; goto err_free_inst;
err = crypto_grab_skcipher(&ctx->spawn, err = crypto_grab_skcipher(&ctx->spawn,
skcipher_crypto_instance(inst), skcipher_crypto_instance(inst),
ctx->name, 0, mask); name, 0, mask);
} }
if (err) if (err)
goto err_free_inst; goto err_free_inst;
alg = crypto_skcipher_spawn_alg(&ctx->spawn); alg = crypto_spawn_skcipher_alg_common(&ctx->spawn);
err = -EINVAL; err = -EINVAL;
if (alg->base.cra_blocksize != XTS_BLOCK_SIZE) if (alg->base.cra_blocksize != XTS_BLOCK_SIZE)
goto err_free_inst; goto err_free_inst;
if (crypto_skcipher_alg_ivsize(alg)) if (alg->ivsize)
goto err_free_inst; goto err_free_inst;
err = crypto_inst_setname(skcipher_crypto_instance(inst), "xts", err = crypto_inst_setname(skcipher_crypto_instance(inst), "xts",
@ -398,31 +400,36 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
if (!strncmp(cipher_name, "ecb(", 4)) { if (!strncmp(cipher_name, "ecb(", 4)) {
int len; int len;
len = strscpy(ctx->name, cipher_name + 4, sizeof(ctx->name)); len = strscpy(name, cipher_name + 4, sizeof(name));
if (len < 2) if (len < 2)
goto err_free_inst; goto err_free_inst;
if (ctx->name[len - 1] != ')') if (name[len - 1] != ')')
goto err_free_inst; goto err_free_inst;
ctx->name[len - 1] = 0; name[len - 1] = 0;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"xts(%s)", ctx->name) >= CRYPTO_MAX_ALG_NAME) { "xts(%s)", name) >= CRYPTO_MAX_ALG_NAME) {
err = -ENAMETOOLONG; err = -ENAMETOOLONG;
goto err_free_inst; goto err_free_inst;
} }
} else } else
goto err_free_inst; goto err_free_inst;
err = crypto_grab_cipher(&ctx->tweak_spawn,
skcipher_crypto_instance(inst), name, 0, mask);
if (err)
goto err_free_inst;
inst->alg.base.cra_priority = alg->base.cra_priority; inst->alg.base.cra_priority = alg->base.cra_priority;
inst->alg.base.cra_blocksize = XTS_BLOCK_SIZE; inst->alg.base.cra_blocksize = XTS_BLOCK_SIZE;
inst->alg.base.cra_alignmask = alg->base.cra_alignmask | inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
(__alignof__(u64) - 1); (__alignof__(u64) - 1);
inst->alg.ivsize = XTS_BLOCK_SIZE; inst->alg.ivsize = XTS_BLOCK_SIZE;
inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) * 2; inst->alg.min_keysize = alg->min_keysize * 2;
inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) * 2; inst->alg.max_keysize = alg->max_keysize * 2;
inst->alg.base.cra_ctxsize = sizeof(struct xts_tfm_ctx); inst->alg.base.cra_ctxsize = sizeof(struct xts_tfm_ctx);

drivers/char/hw_random/bcm2835-rng.c

@ -70,7 +70,7 @@ static int bcm2835_rng_read(struct hwrng *rng, void *buf, size_t max,
while ((rng_readl(priv, RNG_STATUS) >> 24) == 0) { while ((rng_readl(priv, RNG_STATUS) >> 24) == 0) {
if (!wait) if (!wait)
return 0; return 0;
hwrng_msleep(rng, 1000); hwrng_yield(rng);
} }
num_words = rng_readl(priv, RNG_STATUS) >> 24; num_words = rng_readl(priv, RNG_STATUS) >> 24;
@ -149,8 +149,6 @@ static int bcm2835_rng_probe(struct platform_device *pdev)
if (!priv) if (!priv)
return -ENOMEM; return -ENOMEM;
platform_set_drvdata(pdev, priv);
/* map peripheral */ /* map peripheral */
priv->base = devm_platform_ioremap_resource(pdev, 0); priv->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(priv->base)) if (IS_ERR(priv->base))

drivers/char/hw_random/core.c

@ -678,6 +678,12 @@ long hwrng_msleep(struct hwrng *rng, unsigned int msecs)
} }
EXPORT_SYMBOL_GPL(hwrng_msleep); EXPORT_SYMBOL_GPL(hwrng_msleep);
long hwrng_yield(struct hwrng *rng)
{
return wait_for_completion_interruptible_timeout(&rng->dying, 1);
}
EXPORT_SYMBOL_GPL(hwrng_yield);
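
As an illustrative aside (not part of this diff): hwrng_yield() waits one jiffy on the core's rng->dying completion, so a polling driver can back off briefly yet still return promptly when the device is being torn down; the bcm2835 hunk above switches to exactly this. A hypothetical read hook using the same pattern (the private struct and register offsets are invented):

#include <linux/bits.h>
#include <linux/hw_random.h>
#include <linux/io.h>

#define DEMO_RNG_STATUS		0x04
#define DEMO_RNG_DATA		0x08
#define DEMO_RNG_DATA_READY	BIT(0)

struct demo_rng_priv {
	void __iomem *base;
	struct hwrng rng;
};

static int demo_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
	struct demo_rng_priv *priv = container_of(rng, struct demo_rng_priv, rng);

	while (!(readl(priv->base + DEMO_RNG_STATUS) & DEMO_RNG_DATA_READY)) {
		if (!wait)
			return 0;
		/* Back off until data arrives or the core signals shutdown. */
		hwrng_yield(rng);
	}

	*(u32 *)buf = readl(priv->base + DEMO_RNG_DATA);
	return sizeof(u32);
}
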
static int __init hwrng_modinit(void) static int __init hwrng_modinit(void)
{ {
int ret; int ret;

drivers/char/hw_random/geode-rng.c

@ -58,7 +58,8 @@ struct amd_geode_priv {
static int geode_rng_data_read(struct hwrng *rng, u32 *data) static int geode_rng_data_read(struct hwrng *rng, u32 *data)
{ {
void __iomem *mem = (void __iomem *)rng->priv; struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
void __iomem *mem = priv->membase;
*data = readl(mem + GEODE_RNG_DATA_REG); *data = readl(mem + GEODE_RNG_DATA_REG);
@ -67,7 +68,8 @@ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
static int geode_rng_data_present(struct hwrng *rng, int wait) static int geode_rng_data_present(struct hwrng *rng, int wait)
{ {
void __iomem *mem = (void __iomem *)rng->priv; struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
void __iomem *mem = priv->membase;
int data, i; int data, i;
for (i = 0; i < 20; i++) { for (i = 0; i < 20; i++) {

drivers/char/hw_random/hisi-rng.c

@ -79,8 +79,6 @@ static int hisi_rng_probe(struct platform_device *pdev)
if (!rng) if (!rng)
return -ENOMEM; return -ENOMEM;
platform_set_drvdata(pdev, rng);
rng->base = devm_platform_ioremap_resource(pdev, 0); rng->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(rng->base)) if (IS_ERR(rng->base))
return PTR_ERR(rng->base); return PTR_ERR(rng->base);

drivers/char/hw_random/imx-rngc.c

@ -51,8 +51,8 @@
#define RNGC_ERROR_STATUS_STAT_ERR 0x00000008 #define RNGC_ERROR_STATUS_STAT_ERR 0x00000008
#define RNGC_TIMEOUT 3000 /* 3 sec */ #define RNGC_SELFTEST_TIMEOUT 2500 /* us */
#define RNGC_SEED_TIMEOUT 200 /* ms */
static bool self_test = true; static bool self_test = true;
module_param(self_test, bool, 0); module_param(self_test, bool, 0);
@ -110,7 +110,8 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
cmd = readl(rngc->base + RNGC_COMMAND); cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND); writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT)); ret = wait_for_completion_timeout(&rngc->rng_op_done,
usecs_to_jiffies(RNGC_SELFTEST_TIMEOUT));
imx_rngc_irq_mask_clear(rngc); imx_rngc_irq_mask_clear(rngc);
if (!ret) if (!ret)
return -ETIMEDOUT; return -ETIMEDOUT;
@ -182,7 +183,8 @@ static int imx_rngc_init(struct hwrng *rng)
cmd = readl(rngc->base + RNGC_COMMAND); cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND); writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT)); ret = wait_for_completion_timeout(&rngc->rng_op_done,
msecs_to_jiffies(RNGC_SEED_TIMEOUT));
if (!ret) { if (!ret) {
ret = -ETIMEDOUT; ret = -ETIMEDOUT;
goto err; goto err;

View File

@ -81,7 +81,6 @@ struct trng_regs {
}; };
struct ks_sa_rng { struct ks_sa_rng {
struct device *dev;
struct hwrng rng; struct hwrng rng;
struct clk *clk; struct clk *clk;
struct regmap *regmap_cfg; struct regmap *regmap_cfg;
@ -113,8 +112,7 @@ static unsigned int refill_delay_ns(unsigned long clk_rate)
static int ks_sa_rng_init(struct hwrng *rng) static int ks_sa_rng_init(struct hwrng *rng)
{ {
u32 value; u32 value;
struct device *dev = (struct device *)rng->priv; struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
unsigned long clk_rate = clk_get_rate(ks_sa_rng->clk); unsigned long clk_rate = clk_get_rate(ks_sa_rng->clk);
/* Enable RNG module */ /* Enable RNG module */
@ -153,8 +151,7 @@ static int ks_sa_rng_init(struct hwrng *rng)
static void ks_sa_rng_cleanup(struct hwrng *rng) static void ks_sa_rng_cleanup(struct hwrng *rng)
{ {
struct device *dev = (struct device *)rng->priv; struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
/* Disable RNG */ /* Disable RNG */
writel(0, &ks_sa_rng->reg_rng->control); writel(0, &ks_sa_rng->reg_rng->control);
@ -164,8 +161,7 @@ static void ks_sa_rng_cleanup(struct hwrng *rng)
static int ks_sa_rng_data_read(struct hwrng *rng, u32 *data) static int ks_sa_rng_data_read(struct hwrng *rng, u32 *data)
{ {
struct device *dev = (struct device *)rng->priv; struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
/* Read random data */ /* Read random data */
data[0] = readl(&ks_sa_rng->reg_rng->output_l); data[0] = readl(&ks_sa_rng->reg_rng->output_l);
@ -179,8 +175,7 @@ static int ks_sa_rng_data_read(struct hwrng *rng, u32 *data)
static int ks_sa_rng_data_present(struct hwrng *rng, int wait) static int ks_sa_rng_data_present(struct hwrng *rng, int wait)
{ {
struct device *dev = (struct device *)rng->priv; struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
u64 now = ktime_get_ns(); u64 now = ktime_get_ns();
u32 ready; u32 ready;
@ -217,7 +212,6 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
if (!ks_sa_rng) if (!ks_sa_rng)
return -ENOMEM; return -ENOMEM;
ks_sa_rng->dev = dev;
ks_sa_rng->rng = (struct hwrng) { ks_sa_rng->rng = (struct hwrng) {
.name = "ks_sa_hwrng", .name = "ks_sa_hwrng",
.init = ks_sa_rng_init, .init = ks_sa_rng_init,
@ -225,7 +219,6 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
.data_present = ks_sa_rng_data_present, .data_present = ks_sa_rng_data_present,
.cleanup = ks_sa_rng_cleanup, .cleanup = ks_sa_rng_cleanup,
}; };
ks_sa_rng->rng.priv = (unsigned long)dev;
ks_sa_rng->reg_rng = devm_platform_ioremap_resource(pdev, 0); ks_sa_rng->reg_rng = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(ks_sa_rng->reg_rng)) if (IS_ERR(ks_sa_rng->reg_rng))
@ -235,21 +228,16 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
syscon_regmap_lookup_by_phandle(dev->of_node, syscon_regmap_lookup_by_phandle(dev->of_node,
"ti,syscon-sa-cfg"); "ti,syscon-sa-cfg");
if (IS_ERR(ks_sa_rng->regmap_cfg)) { if (IS_ERR(ks_sa_rng->regmap_cfg))
dev_err(dev, "syscon_node_to_regmap failed\n"); return dev_err_probe(dev, -EINVAL, "syscon_node_to_regmap failed\n");
return -EINVAL;
}
pm_runtime_enable(dev); pm_runtime_enable(dev);
ret = pm_runtime_resume_and_get(dev); ret = pm_runtime_resume_and_get(dev);
if (ret < 0) { if (ret < 0) {
dev_err(dev, "Failed to enable SA power-domain\n");
pm_runtime_disable(dev); pm_runtime_disable(dev);
return ret; return dev_err_probe(dev, ret, "Failed to enable SA power-domain\n");
} }
platform_set_drvdata(pdev, ks_sa_rng);
return devm_hwrng_register(&pdev->dev, &ks_sa_rng->rng); return devm_hwrng_register(&pdev->dev, &ks_sa_rng->rng);
} }

drivers/char/hw_random/meson-rng.c

@ -13,12 +13,23 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/iopoll.h>
#define RNG_DATA 0x00 #define RNG_DATA 0x00
#define RNG_S4_DATA 0x08
#define RNG_S4_CFG 0x00
#define RUN_BIT BIT(0)
#define SEED_READY_STS_BIT BIT(31)
struct meson_rng_priv {
int (*read)(struct hwrng *rng, void *buf, size_t max, bool wait);
};
struct meson_rng_data { struct meson_rng_data {
void __iomem *base; void __iomem *base;
struct hwrng rng; struct hwrng rng;
struct device *dev;
}; };
static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait) static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
@ -31,16 +42,62 @@ static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
return sizeof(u32); return sizeof(u32);
} }
static int meson_rng_wait_status(void __iomem *cfg_addr, int bit)
{
u32 status = 0;
int ret;
ret = readl_relaxed_poll_timeout_atomic(cfg_addr,
status, !(status & bit),
10, 10000);
if (ret)
return -EBUSY;
return 0;
}
static int meson_s4_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
struct meson_rng_data *data =
container_of(rng, struct meson_rng_data, rng);
void __iomem *cfg_addr = data->base + RNG_S4_CFG;
int err;
writel_relaxed(readl_relaxed(cfg_addr) | SEED_READY_STS_BIT, cfg_addr);
err = meson_rng_wait_status(cfg_addr, SEED_READY_STS_BIT);
if (err) {
dev_err(data->dev, "Seed isn't ready, try again\n");
return err;
}
err = meson_rng_wait_status(cfg_addr, RUN_BIT);
if (err) {
dev_err(data->dev, "Can't get random number, try again\n");
return err;
}
*(u32 *)buf = readl_relaxed(data->base + RNG_S4_DATA);
return sizeof(u32);
}
static int meson_rng_probe(struct platform_device *pdev) static int meson_rng_probe(struct platform_device *pdev)
{ {
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct meson_rng_data *data; struct meson_rng_data *data;
struct clk *core_clk; struct clk *core_clk;
const struct meson_rng_priv *priv;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data) if (!data)
return -ENOMEM; return -ENOMEM;
priv = device_get_match_data(&pdev->dev);
if (!priv)
return -ENODEV;
data->base = devm_platform_ioremap_resource(pdev, 0); data->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(data->base)) if (IS_ERR(data->base))
return PTR_ERR(data->base); return PTR_ERR(data->base);
@ -51,13 +108,30 @@ static int meson_rng_probe(struct platform_device *pdev)
"Failed to get core clock\n"); "Failed to get core clock\n");
data->rng.name = pdev->name; data->rng.name = pdev->name;
data->rng.read = meson_rng_read; data->rng.read = priv->read;
data->dev = &pdev->dev;
return devm_hwrng_register(dev, &data->rng); return devm_hwrng_register(dev, &data->rng);
} }
static const struct meson_rng_priv meson_rng_priv = {
.read = meson_rng_read,
};
static const struct meson_rng_priv meson_rng_priv_s4 = {
.read = meson_s4_rng_read,
};
static const struct of_device_id meson_rng_of_match[] = { static const struct of_device_id meson_rng_of_match[] = {
{ .compatible = "amlogic,meson-rng", }, {
.compatible = "amlogic,meson-rng",
.data = (void *)&meson_rng_priv,
},
{
.compatible = "amlogic,meson-s4-rng",
.data = (void *)&meson_rng_priv_s4,
},
{}, {},
}; };
MODULE_DEVICE_TABLE(of, meson_rng_of_match); MODULE_DEVICE_TABLE(of, meson_rng_of_match);

drivers/char/hw_random/mpfs-rng.c

@ -79,8 +79,6 @@ static int mpfs_rng_probe(struct platform_device *pdev)
rng_priv->rng.read = mpfs_rng_read; rng_priv->rng.read = mpfs_rng_read;
rng_priv->rng.name = pdev->name; rng_priv->rng.name = pdev->name;
platform_set_drvdata(pdev, rng_priv);
ret = devm_hwrng_register(&pdev->dev, &rng_priv->rng); ret = devm_hwrng_register(&pdev->dev, &rng_priv->rng);
if (ret) if (ret)
return dev_err_probe(&pdev->dev, ret, "Failed to register MPFS hwrng\n"); return dev_err_probe(&pdev->dev, ret, "Failed to register MPFS hwrng\n");

drivers/char/hw_random/n2-drv.c

@ -14,7 +14,8 @@
#include <linux/hw_random.h> #include <linux/hw_random.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_device.h> #include <linux/platform_device.h>
#include <linux/property.h>
#include <asm/hypervisor.h> #include <asm/hypervisor.h>
@ -695,20 +696,15 @@ static void n2rng_driver_version(void)
static const struct of_device_id n2rng_match[]; static const struct of_device_id n2rng_match[];
static int n2rng_probe(struct platform_device *op) static int n2rng_probe(struct platform_device *op)
{ {
const struct of_device_id *match;
int err = -ENOMEM; int err = -ENOMEM;
struct n2rng *np; struct n2rng *np;
match = of_match_device(n2rng_match, &op->dev);
if (!match)
return -EINVAL;
n2rng_driver_version(); n2rng_driver_version();
np = devm_kzalloc(&op->dev, sizeof(*np), GFP_KERNEL); np = devm_kzalloc(&op->dev, sizeof(*np), GFP_KERNEL);
if (!np) if (!np)
goto out; goto out;
np->op = op; np->op = op;
np->data = (struct n2rng_template *)match->data; np->data = (struct n2rng_template *)device_get_match_data(&op->dev);
INIT_DELAYED_WORK(&np->work, n2rng_work); INIT_DELAYED_WORK(&np->work, n2rng_work);

drivers/char/hw_random/nomadik-rng.c

@ -88,4 +88,5 @@ static struct amba_driver nmk_rng_driver = {
module_amba_driver(nmk_rng_driver); module_amba_driver(nmk_rng_driver);
MODULE_DESCRIPTION("ST-Ericsson Nomadik Random Number Generator");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

Some files were not shown because too many files have changed in this diff.