linux-stable/lib/lz4/lz4defs.h
Kyungsik Lee cffb78b0e0 decompressor: add LZ4 decompressor module
Add support for LZ4 decompression in the Linux kernel.  The LZ4 decompression
APIs for the kernel are based on the LZ4 implementation by Yann Collet.

Benchmark Results (PATCH v3)
Compiler: Linaro ARM gcc 4.6.2

1. ARMv7, 1.5GHz based board
   Kernel: linux 3.4
   Uncompressed Kernel Size: 14MB
        Compressed Size  Decompression Speed
   LZO  6.7MB            20.1MB/s, 25.2MB/s(UA)
   LZ4  7.3MB            29.1MB/s, 45.6MB/s(UA)

2. ARMv7, 1.7GHz based board
   Kernel: linux 3.7
   Uncompressed Kernel Size: 14MB
        Compressed Size  Decompression Speed
   LZO  6.0MB            34.1MB/s, 52.2MB/s(UA)
   LZ4  6.5MB            86.7MB/s
- UA: Unaligned memory access support
- Latest patch set for LZO applied

This patch set adds support for an LZ4-compressed kernel.  LZ4 is a very
fast lossless compression algorithm and it also features an extremely fast
decoder [1].

But we already have five decompressors, and one question that does arise
is where we should stop adding new ones.  This issue has been discussed
and a conclusion was reached [2].

Russell King said that we should have:

 - one decompressor which is the fastest
 - one decompressor for the highest compression ratio
 - one popular decompressor (eg conventional gzip)

If we have a replacement for one of these, then it should do exactly
that: replace it.

The benchmark shows an 8% increase in image size versus a 66% increase
in decompression speed compared to LZO (which has been known as the
fastest decompressor in the kernel).  Therefore the "fast but may not be
small" compression title has clearly been taken by LZ4 [3].

[1] http://code.google.com/p/lz4/
[2] http://thread.gmane.org/gmane.linux.kbuild.devel/9157
[3] http://thread.gmane.org/gmane.linux.kbuild.devel/9347

LZ4 homepage: http://fastcompression.blogspot.com/p/lz4.html
LZ4 source repository: http://code.google.com/p/lz4/

Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
Signed-off-by: Yann Collet <yann.collet.73@gmail.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Florian Fainelli <florian@openwrt.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-09 10:33:30 -07:00

/*
 * lz4defs.h -- architecture specific defines
 *
 * Copyright (C) 2013, LG Electronics, Kyungsik Lee <kyungsik.lee@lge.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

/*
 * Detects 64-bit mode
 */
#if (defined(__x86_64__) || defined(__x86_64) || defined(__amd64__) \
        || defined(__ppc64__) || defined(__LP64__))
#define LZ4_ARCH64 1
#else
#define LZ4_ARCH64 0
#endif
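
/*
 * LZ4_ARCH64 selects the word size used by the copy helpers below:
 * 8-byte copy steps on 64-bit targets, 4-byte steps on 32-bit ones.
 */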

/*
 * Architecture-specific macros
 */
#define BYTE u8
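
/*
 * PUT4/PUT8 copy 4 or 8 bytes between possibly unaligned addresses.  When
 * the architecture handles unaligned accesses efficiently they are plain
 * loads/stores through the U32_S/U64_S wrapper structs; otherwise they go
 * through the kernel's get_unaligned()/put_unaligned() helpers.
 */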
#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)            \
        || defined(CONFIG_ARM) && __LINUX_ARM_ARCH__ >= 6      \
        && defined(ARM_EFFICIENT_UNALIGNED_ACCESS)
typedef struct _U32_S { u32 v; } U32_S;
typedef struct _U64_S { u64 v; } U64_S;

#define A32(x) (((U32_S *)(x))->v)
#define A64(x) (((U64_S *)(x))->v)

#define PUT4(s, d) (A32(d) = A32(s))
#define PUT8(s, d) (A64(d) = A64(s))
#else /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
#define PUT4(s, d) \
        put_unaligned(get_unaligned((const u32 *) s), (u32 *) d)
#define PUT8(s, d) \
        put_unaligned(get_unaligned((const u64 *) s), (u64 *) d)
#endif
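
/*
 * An LZ4 sequence starts with a one-byte token: its high RUN_BITS encode
 * the literal run length and its low ML_BITS encode the match length; a
 * field equal to RUN_MASK or ML_MASK means additional length bytes follow.
 * COPYLENGTH (8) is the number of bytes each wild-copy packet moves.
 */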
#define COPYLENGTH 8
#define ML_BITS 4
#define ML_MASK ((1U << ML_BITS) - 1)
#define RUN_BITS (8 - ML_BITS)
#define RUN_MASK ((1U << RUN_BITS) - 1)
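
/*
 * Copy helpers: LZ4_COPYSTEP moves one STEPSIZE-byte word and advances
 * both pointers; LZ4_COPYPACKET moves COPYLENGTH (8) bytes per call on
 * both 32-bit and 64-bit builds (one 8-byte step or two 4-byte steps).
 */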
#if LZ4_ARCH64 /* 64-bit */
#define STEPSIZE 8

#define LZ4_COPYSTEP(s, d)      \
        do {                    \
                PUT8(s, d);     \
                d += 8;         \
                s += 8;         \
        } while (0)

#define LZ4_COPYPACKET(s, d)    LZ4_COPYSTEP(s, d)

#define LZ4_SECURECOPY(s, d, e)                 \
        do {                                    \
                if (d < e) {                    \
                        LZ4_WILDCOPY(s, d, e);  \
                }                               \
        } while (0)
#else /* 32-bit */
#define STEPSIZE 4

#define LZ4_COPYSTEP(s, d)      \
        do {                    \
                PUT4(s, d);     \
                d += 4;         \
                s += 4;         \
        } while (0)

#define LZ4_COPYPACKET(s, d)            \
        do {                            \
                LZ4_COPYSTEP(s, d);     \
                LZ4_COPYSTEP(s, d);     \
        } while (0)

#define LZ4_SECURECOPY LZ4_WILDCOPY
#endif
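
/*
 * Read a 16-bit little-endian match offset from p and point d at
 * s - offset, i.e. back into the already-decoded output.
 */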
#define LZ4_READ_LITTLEENDIAN_16(d, s, p)       \
        (d = s - get_unaligned_le16(p))
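
/*
 * LZ4_WILDCOPY copies COPYLENGTH-byte packets until d has reached e.  It
 * always copies at least one packet and may write up to COPYLENGTH - 1
 * bytes past e, so callers must leave that much slack in the destination.
 * LZ4_SECURECOPY (defined above) is the same except that, on 64-bit, it
 * skips the copy entirely when d >= e.
 */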
#define LZ4_WILDCOPY(s, d, e)           \
        do {                            \
                LZ4_COPYPACKET(s, d);   \
        } while (d < e)
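
/*
 * Illustrative sketch only (not from the kernel decompressor itself) of how
 * a match copy can combine the macros above.  The pointers ip (input), op
 * (output), ref (match source), cpy (copy end) and the variable length are
 * hypothetical, and the match offset is assumed to be at least STEPSIZE so
 * source and destination do not overlap within a single copy step:
 *
 *      LZ4_READ_LITTLEENDIAN_16(ref, op, ip);  ref = op - 16-bit offset
 *      ip += 2;                                 consume the offset bytes
 *      cpy = op + length;                       end of the match copy
 *      LZ4_SECURECOPY(ref, op, cpy);            bulk copy in whole packets
 *      op = cpy;                                resynchronize the pointer
 */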