Make AARCH64 harder, better, faster, stronger

- Perform some housekeeping on scalar math function code
- Import ARM's Optimized Routines for SIMD string processing
- Upgrade to latest Chromium zlib and enable more SIMD optimizations
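
As a quick illustration of the zlib interface touched by this change, here is a minimal usage sketch of the crc32_z() declaration shown in the diff below. The include path, buffer contents, and output formatting are assumptions made for the example, not part of this commit:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include "third_party/zlib/zlib.h"  /* assumed include path for this tree */

    int main(void) {
      const char *msg = "hello world";
      size_t len = strlen(msg);

      /* classic interface: length is a uInt, so large buffers must be chunked */
      uLong a = crc32(0L, (const Bytef *)msg, (uInt)len);

      /* crc32_z() takes a size_t length, per the declaration in the diff below */
      uint32_t b = crc32_z(0, msg, len);

      printf("crc32   = %08lx\n", (unsigned long)a);
      printf("crc32_z = %08x\n", (unsigned)b);
      return 0;
    }

Both calls should produce the same check value for the same bytes; the size_t variant simply avoids the uInt length limit of the classic entry point.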
Justine Tunney 2023-05-15 01:51:29 -07:00
parent 550b52abf6
commit cc1732bc42
GPG key ID: BE714B4575D6E328
143 changed files with 15661 additions and 1329 deletions


@@ -1696,6 +1696,11 @@ uLong adler32_combine(uLong adler1, uLong adler2, int64_t len2);
  */
 uLong crc32(uLong crc, const Bytef *buf, uInt len);
+/**
+ * Same as crc32(), but with a size_t length.
+ */
+uint32_t crc32_z(uint32_t crc, const void *buf, size_t len);
 /**
  * Combine two CRC-32 check values into one. For two sequences of bytes,
  * seq1 and seq2 with lengths len1 and len2, CRC-32 check values were