vfs-6.9.ntfs

-----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQRAhzRXHqcMeLMyaSiRxhvAZXjcogUCZem42QAKCRCRxhvAZXjc
 opOtAQDUkiJNaOu3fR6ENLvDZSFmaI2jQXIL8ulHYpEiFrXmKwD9EZQ8bmEYU7uO
 WN4VM8p8UwQ7BmIV9b+jvwciF8Qi8QI=
 =T03q
 -----END PGP SIGNATURE-----

Merge tag 'vfs-6.9.ntfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs

Pull ntfs update from Christian Brauner:
 "This removes the old ntfs driver. The new ntfs3 driver is a full
  replacement that was merged over two years ago. We've gone through
  various userspace consumers and they either use ntfs3 or the FUSE
  version of ntfs and thus build neither ntfs nor ntfs3. I think that's
  a clear sign that we should risk removing the legacy ntfs driver.

  Quoting from Arch Linux and Debian:

   - Debian builds neither the legacy ntfs nor the new ntfs3:

     "Not currently built with Debian's kernel packages, 'ntfs' has been
      symlinked to 'ntfs-3g' as it relates to fstab and mount commands.

      Debian kernels are built without support of the ntfs3 driver
      developed by Paragon Software."  (cf. [2])

   - Arch Linux has provided ntfs3 as its default since 5.15:

     "All officially supported kernels with versions 5.15 or newer are
      built with CONFIG_NTFS3_FS=m and thus support it. Before 5.15,
      NTFS read and write support is provided by the NTFS-3G FUSE file
      system."  (cf. [1]).

  It's also unmaintained apart from the occasional odd fix. Worst case, we
  have to reintroduce it if someone really has a valid dependency on it.
  But it's worth trying to see whether we can remove it"

Link: https://wiki.archlinux.org/title/NTFS [1]
Link: https://wiki.debian.org/NTFS [2]

* tag 'vfs-6.9.ntfs' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  fs: remove NTFS classic from docum. index
  fs: Remove NTFS classic
Linus Torvalds 2024-03-11 09:55:17 -07:00
commit 77417942e4
52 changed files with 5 additions and 29303 deletions


@@ -63,6 +63,11 @@ D: dosfs, LILO, some fd features, ATM, various other hacks here and there
S: Buenos Aires
S: Argentina
NTFS FILESYSTEM
N: Anton Altaparmakov
E: anton@tuxera.com
D: NTFS filesystem
N: Tim Alpaerts
E: tim_alpaerts@toyota-motor-europe.com
D: 802.2 class II logical link control layer,


@@ -98,7 +98,6 @@ Documentation for filesystem implementations.
isofs
nilfs2
nfs/index
ntfs
ntfs3
ocfs2
ocfs2-online-filecheck


@@ -1,466 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
================================
The Linux NTFS filesystem driver
================================
.. Table of contents
- Overview
- Web site
- Features
- Supported mount options
- Known bugs and (mis-)features
- Using NTFS volume and stripe sets
- The Device-Mapper driver
- The Software RAID / MD driver
- Limitations when using the MD driver
Overview
========
Linux-NTFS comes with a number of user-space programs known as ntfsprogs.
These include mkntfs, a full-featured ntfs filesystem format utility,
ntfsundelete used for recovering files that were unintentionally deleted
from an NTFS volume and ntfsresize which is used to resize an NTFS partition.
See the web site for more information.
To mount an NTFS 1.2/3.x (Windows NT4/2000/XP/2003) volume, use the file
system type 'ntfs'. The driver currently supports read-only mode (with no
fault-tolerance, encryption or journalling) and very limited, but safe, write
support.
For fault tolerance and raid support (i.e. volume and stripe sets), you can
use the kernel's Device-Mapper driver or its Software RAID / MD driver. See
section "Using NTFS volume and stripe sets" for details.
Web site
========
There is plenty of additional information on the linux-ntfs web site
at http://www.linux-ntfs.org/
The web site has a lot of additional information, such as a comprehensive
FAQ, documentation on the NTFS on-disk format, information on the Linux-NTFS
userspace utilities, etc.
Features
========
- This is a complete rewrite of the NTFS driver that used to be in the 2.4 and
earlier kernels. This new driver implements NTFS read support and is
functionally equivalent to the old ntfs driver and it also implements limited
write support. The biggest limitation at present is that files/directories
cannot be created or deleted. See below for the list of write features that
are so far supported. Another limitation is that writing to compressed files
is not implemented at all. Also, neither read nor write access to encrypted
files is so far implemented.
- The new driver has full support for sparse files on NTFS 3.x volumes which
the old driver isn't happy with.
- The new driver supports execution of binaries due to mmap() now being
supported.
- The new driver supports loopback mounting of files on NTFS which is used by
some Linux distributions to enable the user to run Linux from an NTFS
partition by creating a large file while in Windows and then loopback
mounting the file while in Linux and creating a Linux filesystem on it that
is used to install Linux on it.
- A comparison of the two drivers using::
time find . -type f -exec md5sum "{}" \;
run three times in sequence with each driver (after a reboot) on a 1.4GiB
NTFS partition, showed the new driver to be 20% faster in total time elapsed
(from 9:43 minutes on average down to 7:53). The time spent in user space
was unchanged but the time spent in the kernel was decreased by a factor of
2.5 (from 85 CPU seconds down to 33).
- The driver does not support short file names in general. For backwards
compatibility, we implement access to files using their short file names if
they exist. The driver will not create short file names however, and a
rename will discard any existing short file name.
- The new driver supports exporting of mounted NTFS volumes via NFS.
- The new driver supports async io (aio).
- The new driver supports fsync(2), fdatasync(2), and msync(2).
- The new driver supports readv(2) and writev(2).
- The new driver supports access time updates (including mtime and ctime).
- The new driver supports truncate(2) and open(2) with O_TRUNC. But at present
only very limited support for highly fragmented files, i.e. ones which have
their data attribute split across multiple extents, is included. Another
limitation is that at present truncate(2) will never create sparse files,
since to mark a file sparse we need to modify the directory entry for the
file and we do not implement directory modifications yet.
- The new driver supports write(2) which can both overwrite existing data and
extend the file size so that you can write beyond the existing data. Also,
writing into sparse regions is supported and the holes are filled in with
clusters. But at present only limited support for highly fragmented files,
i.e. ones which have their data attribute split across multiple extents, is
included. Another limitation is that write(2) will never create sparse
files, since to mark a file sparse we need to modify the directory entry for
the file and we do not implement directory modifications yet.
Supported mount options
=======================
In addition to the generic mount options described by the manual page for the
mount command (man 8 mount, also see man 5 fstab), the NTFS driver supports the
following mount options:
======================= =======================================================
iocharset=name Deprecated option. Still supported but please use
nls=name in the future. See description for nls=name.
nls=name Character set to use when returning file names.
Unlike VFAT, NTFS suppresses names that contain
unconvertible characters. Note that most character
sets contain insufficient characters to represent all
possible Unicode characters that can exist on NTFS.
To be sure you are not missing any files, you are
advised to use nls=utf8 which is capable of
representing all Unicode characters.
utf8=<bool> Option no longer supported. Currently mapped to
nls=utf8 but please use nls=utf8 in the future and
make sure utf8 is compiled either as module or into
the kernel. See description for nls=name.
uid=
gid=
umask= Provide default owner, group, and access mode mask.
These options work as documented in mount(8). By
default, the files/directories are owned by root and
he/she has read and write permissions, as well as
browse permission for directories. No one else has any
access permissions. I.e. the mode on all files is by
default rw------- and for directories rwx------, a
consequence of the default fmask=0177 and dmask=0077.
Using a umask of zero will grant all permissions to
everyone, i.e. all files and directories will have mode
rwxrwxrwx.
fmask=
dmask= Instead of specifying umask which applies both to
files and directories, fmask applies only to files and
dmask only to directories.
sloppy=<BOOL> If sloppy is specified, ignore unknown mount options.
Otherwise the default behaviour is to abort mount if
any unknown options are found.
show_sys_files=<BOOL> If show_sys_files is specified, show the system files
in directory listings. Otherwise the default behaviour
is to hide the system files.
Note that even when show_sys_files is specified, "$MFT"
will not be visible due to bugs/mis-features in glibc.
Further, note that irrespective of show_sys_files, all
files are accessible by name, i.e. you can always do
"ls -l \$UpCase" for example to specifically show the
system file containing the Unicode upcase table.
case_sensitive=<BOOL> If case_sensitive is specified, treat all file names as
case sensitive and create file names in the POSIX
namespace. Otherwise the default behaviour is to treat
file names as case insensitive and to create file names
in the WIN32/LONG name space. Note, the Linux NTFS
driver will never create short file names and will
remove them on rename/delete of the corresponding long
file name.
Note that files remain accessible via their short file
name, if it exists. If case_sensitive, you will need
to provide the correct case of the short file name.
disable_sparse=<BOOL> If disable_sparse is specified, creation of sparse
regions, i.e. holes, inside files is disabled for the
volume (for the duration of this mount only). By
default, creation of sparse regions is enabled, which
is consistent with the behaviour of traditional Unix
filesystems.
errors=opt What to do when critical filesystem errors are found.
Following values can be used for "opt":
======== =========================================
continue DEFAULT, try to clean-up as much as
possible, e.g. marking a corrupt inode as
bad so it is no longer accessed, and then
continue.
recover At present only supported is recovery of
the boot sector from the backup copy.
If read-only mount, the recovery is done
in memory only and not written to disk.
======== =========================================
Note that the options are additive, i.e. specifying::
errors=continue,errors=recover
means the driver will attempt to recover and if that
fails it will clean-up as much as possible and
continue.
mft_zone_multiplier= Set the MFT zone multiplier for the volume (this
setting is not persistent across mounts and can be
changed from mount to mount but cannot be changed on
remount). Values of 1 to 4 are allowed, 1 being the
default. The MFT zone multiplier determines how much
space is reserved for the MFT on the volume. If all
other space is used up, then the MFT zone will be
shrunk dynamically, so this has no impact on the
amount of free space. However, it can have an impact
on performance by affecting fragmentation of the MFT.
In general use the default. If you have a lot of small
files then use a higher value. The values have the
following meaning:
===== =================================
Value MFT zone size (% of volume size)
===== =================================
1 12.5%
2 25%
3 37.5%
4 50%
===== =================================
Note this option is irrelevant for read-only mounts.
======================= =======================================================
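As an illustration of combining these options (a sketch only; the device node
/dev/sda2, the mount point /mnt/windows, and the uid/gid value 1000 are
placeholders), a read-only mount with UTF-8 name translation and files owned
by a normal user could be requested with::
$ mount -t ntfs -o ro,nls=utf8,uid=1000,gid=1000,umask=0022 /dev/sda2 /mnt/windows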
Known bugs and (mis-)features
=============================
- The link count on each directory inode entry is set to 1, due to Linux not
supporting directory hard links. This may well confuse some user space
applications, since the directory names will have the same inode numbers.
This also speeds up ntfs_read_inode() immensely. And we haven't found any
problems with this approach so far. If you find a problem with this, please
let us know.
Please send bug reports/comments/feedback/abuse to the Linux-NTFS development
list at sourceforge: linux-ntfs-dev@lists.sourceforge.net
Using NTFS volume and stripe sets
=================================
For support of volume and stripe sets, you can either use the kernel's
Device-Mapper driver or the kernel's Software RAID / MD driver. The former is
the recommended one to use for linear raid. But the latter is required for
raid level 5. For striping and mirroring, either driver should work fine.
The Device-Mapper driver
------------------------
You will need to create a table of the components of the volume/stripe set and
how they fit together and load this into the kernel using the dmsetup utility
(see man 8 dmsetup).
Linear volume sets, i.e. linear raid, have been tested and work fine. Even
though untested, there is no reason why stripe sets, i.e. raid level 0, and
mirrors, i.e. raid level 1, should not work, too. Stripes with parity, i.e.
raid level 5, unfortunately cannot work yet because the current version of the
Device-Mapper driver does not support raid level 5. You may be able to use the
Software RAID / MD driver for raid level 5, see the next section for details.
To create the table describing your volume you will need to know each of its
components and their sizes in sectors, i.e. multiples of 512-byte blocks.
For NT4 fault tolerant volumes you can obtain the sizes using fdisk. So for
example if one of your partitions is /dev/hda2 you would do::
$ fdisk -ul /dev/hda
Disk /dev/hda: 81.9 GB, 81964302336 bytes
255 heads, 63 sectors/track, 9964 cylinders, total 160086528 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 63 4209029 2104483+ 83 Linux
/dev/hda2 4209030 37768814 16779892+ 86 NTFS
/dev/hda3 37768815 46170809 4200997+ 83 Linux
And you would know that /dev/hda2 has a size of 37768814 - 4209030 + 1 =
33559785 sectors.
For Win2k and later dynamic disks, you can for example use the ldminfo utility
which is part of the Linux LDM tools (the latest version at the time of
writing is linux-ldm-0.0.8.tar.bz2). You can download it from:
http://www.linux-ntfs.org/
Simply extract the downloaded archive (tar xvjf linux-ldm-0.0.8.tar.bz2), go
into it (cd linux-ldm-0.0.8) and change to the test directory (cd test). You
will find the precompiled (i386) ldminfo utility there. NOTE: You will not be
able to compile this yourself easily so use the binary version!
Then you would use ldminfo in dump mode to obtain the necessary information::
$ ./ldminfo --dump /dev/hda
This would dump the LDM database found on /dev/hda which describes all of your
dynamic disks and all the volumes on them. At the bottom you will see the
VOLUME DEFINITIONS section which is all you really need. You may need to look
further above to determine which of the disks in the volume definitions is
which device in Linux. Hint: Run ldminfo on each of your dynamic disks and
look at the Disk Id close to the top of the output for each (the PRIVATE HEADER
section). You can then find these Disk Ids in the VBLK DATABASE section in the
<Disk> components where you will get the LDM Name for the disk that is found in
the VOLUME DEFINITIONS section.
Note you will also need to enable the LDM driver in the Linux kernel. If your
distribution did not enable it, you will need to recompile the kernel with it
enabled. This will create the LDM partitions on each device at boot time. You
would then use those devices (for /dev/hda they would be /dev/hda1, 2, 3, etc)
in the Device-Mapper table.
You can also bypass using the LDM driver by using the main device (e.g.
/dev/hda) and then using the offsets of the LDM partitions into this device as
the "Start sector of device" when creating the table. Once again ldminfo would
give you the correct information to do this.
Assuming you know all your devices and their sizes things are easy.
For a linear raid the table would look like this (note all values are in
512-byte sectors)::
# Offset into Size of this Raid type Device Start sector
# volume device of device
0 1028161 linear /dev/hda1 0
1028161 3903762 linear /dev/hdb2 0
4931923 2103211 linear /dev/hdc1 0
For a striped volume, i.e. raid level 0, you will need to know the chunk size
you used when creating the volume. Windows uses 64kiB as the default, so it
will probably be this unless you changed the defaults when creating the array.
For a raid level 0 the table would look like this (note all values are in
512-byte sectors)::
# Offset Size Raid Number Chunk 1st Start 2nd Start
# into of the type of size Device in Device in
# volume volume stripes device device
0 2056320 striped 2 128 /dev/hda1 0 /dev/hdb1 0
If there are more than two devices, just add each of them to the end of the
line.
Finally, for a mirrored volume, i.e. raid level 1, the table would look like
this (note all values are in 512-byte sectors)::
# Ofs Size Raid Log Number Region Should Number Source Start Target Start
# in of the type type of log size sync? of Device in Device in
# vol volume params mirrors Device Device
0 2056320 mirror core 2 16 nosync 2 /dev/hda1 0 /dev/hdb1 0
If you are mirroring to multiple devices you can specify further targets at the
end of the line.
Note the "Should sync?" parameter "nosync" means that the two mirrors are
already in sync which will be the case on a clean shutdown of Windows. If the
mirrors are not clean, you can specify the "sync" option instead of "nosync"
and the Device-Mapper driver will then copy the entirety of the "Source Device"
to the "Target Device" or if you specified multiple target devices to all of
them.
Once you have your table, save it in a file somewhere (e.g. /etc/ntfsvolume1),
and hand it over to dmsetup to work with, like so::
$ dmsetup create myvolume1 /etc/ntfsvolume1
You can obviously replace "myvolume1" with whatever name you like.
If it all worked, you will now have the device /dev/device-mapper/myvolume1
which you can then just use as an argument to the mount command as usual to
mount the ntfs volume. For example::
$ mount -t ntfs -o ro /dev/device-mapper/myvolume1 /mnt/myvol1
(You need to create the directory /mnt/myvol1 first and of course you can use
anything you like instead of /mnt/myvol1 as long as it is an existing
directory.)
It is advisable to do the mount read-only to see if the volume has been setup
correctly to avoid the possibility of causing damage to the data on the ntfs
volume.
The Software RAID / MD driver
-----------------------------
An alternative to using the Device-Mapper driver is to use the kernel's
Software RAID / MD driver. For which you need to set up your /etc/raidtab
appropriately (see man 5 raidtab).
Linear volume sets, i.e. linear raid, as well as stripe sets, i.e. raid level
0, have been tested and work fine (though see section "Limitations when using
the MD driver with NTFS volumes" especially if you want to use linear raid).
Even though untested, there is no reason why mirrors, i.e. raid level 1, and
stripes with parity, i.e. raid level 5, should not work, too.
You have to use the "persistent-superblock 0" option for each raid-disk in the
NTFS volume/stripe you are configuring in /etc/raidtab as the persistent
superblock used by the MD driver would damage the NTFS volume.
Windows by default uses a stripe chunk size of 64k, so you probably want the
"chunk-size 64k" option for each raid-disk, too.
For example, if you have a stripe set consisting of two partitions /dev/hda5
and /dev/hdb1 your /etc/raidtab would look like this::
raiddev /dev/md0
raid-level 0
nr-raid-disks 2
nr-spare-disks 0
persistent-superblock 0
chunk-size 64k
device /dev/hda5
raid-disk 0
device /dev/hdb1
raid-disk 1
For linear raid, just change the raid-level above to "raid-level linear", for
mirrors, change it to "raid-level 1", and for stripe sets with parity, change
it to "raid-level 5".
Note for stripe sets with parity you will also need to tell the MD driver
which parity algorithm to use by specifying the option "parity-algorithm
which", where you need to replace "which" with the name of the algorithm to
use (see man 5 raidtab for available algorithms) and you will have to try the
different available algorithms until you find one that works. Make sure you
are working read-only when playing with this as you may damage your data
otherwise. If you find which algorithm works please let us know (email the
linux-ntfs developers list linux-ntfs-dev@lists.sourceforge.net or drop in on
IRC in channel #ntfs on the irc.freenode.net network) so we can update this
documentation.
Once the raidtab is setup, run for example raid0run -a to start all devices or
raid0run /dev/md0 to start a particular md device, in this case /dev/md0.
Then just use the mount command as usual to mount the ntfs volume using for
example::
mount -t ntfs -o ro /dev/md0 /mnt/myntfsvolume
It is advisable to do the mount read-only to see if the md volume has been
setup correctly to avoid the possibility of causing damage to the data on the
ntfs volume.
Limitations when using the Software RAID / MD driver
-----------------------------------------------------
Using the md driver will not work properly if any of your NTFS partitions have
an odd number of sectors. This is especially important for linear raid as all
data after the first partition with an odd number of sectors will be offset by
one or more sectors so if you mount such a partition with write support you
will cause massive damage to the data on the volume which will only become
apparent when you try to use the volume again under Windows.
So when using linear raid, make sure that all your partitions have an even
number of sectors BEFORE attempting to use it. You have been warned!
Even better is to simply use the Device-Mapper for linear raid and then you do
not have this problem with odd numbers of sectors.
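To check this quickly (a sketch only; substitute your own partitions), note
that blockdev reports sizes in 512-byte sectors, so an odd result from::
$ blockdev --getsz /dev/hda5
$ blockdev --getsz /dev/hdb1
means the partition in question should not be used in a linear MD set.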


@@ -15589,16 +15589,6 @@ W: https://github.com/davejiang/linux/wiki
T: git https://github.com/davejiang/linux.git
F: drivers/ntb/hw/intel/
NTFS FILESYSTEM
M: Anton Altaparmakov <anton@tuxera.com>
R: Namjae Jeon <linkinjeon@kernel.org>
L: linux-ntfs-dev@lists.sourceforge.net
S: Supported
W: http://www.tuxera.com/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/aia21/ntfs.git
F: Documentation/filesystems/ntfs.rst
F: fs/ntfs/
NTFS3 FILESYSTEM
M: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
L: ntfs3@lists.linux.dev


@@ -162,7 +162,6 @@ menu "DOS/FAT/EXFAT/NT Filesystems"
source "fs/fat/Kconfig"
source "fs/exfat/Kconfig"
source "fs/ntfs/Kconfig"
source "fs/ntfs3/Kconfig"
endmenu


@@ -91,7 +91,6 @@ obj-y += unicode/
obj-$(CONFIG_SYSV_FS) += sysv/
obj-$(CONFIG_SMBFS) += smb/
obj-$(CONFIG_HPFS_FS) += hpfs/
obj-$(CONFIG_NTFS_FS) += ntfs/
obj-$(CONFIG_NTFS3_FS) += ntfs3/
obj-$(CONFIG_UFS_FS) += ufs/
obj-$(CONFIG_EFS_FS) += efs/


@@ -1,81 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config NTFS_FS
tristate "NTFS file system support"
select BUFFER_HEAD
select NLS
help
NTFS is the file system of Microsoft Windows NT, 2000, XP and 2003.
Saying Y or M here enables read support. There is partial, but
safe, write support available. For write support you must also
say Y to "NTFS write support" below.
There are also a number of user-space tools available, called
ntfsprogs. These include ntfsundelete and ntfsresize, that work
without NTFS support enabled in the kernel.
This is a rewrite from scratch of Linux NTFS support and replaced
the old NTFS code starting with Linux 2.5.11. A backport to
the Linux 2.4 kernel series is separately available as a patch
from the project web site.
For more information see <file:Documentation/filesystems/ntfs.rst>
and <http://www.linux-ntfs.org/>.
To compile this file system support as a module, choose M here: the
module will be called ntfs.
If you are not using Windows NT, 2000, XP or 2003 in addition to
Linux on your computer it is safe to say N.
config NTFS_DEBUG
bool "NTFS debugging support"
depends on NTFS_FS
help
If you are experiencing any problems with the NTFS file system, say
Y here. This will result in additional consistency checks to be
performed by the driver as well as additional debugging messages to
be written to the system log. Note that debugging messages are
disabled by default. To enable them, supply the option debug_msgs=1
at the kernel command line when booting the kernel or as an option
to insmod when loading the ntfs module. Once the driver is active,
you can enable debugging messages by doing (as root):
echo 1 > /proc/sys/fs/ntfs-debug
Replacing the "1" with "0" would disable debug messages.
If you leave debugging messages disabled, this results in little
overhead, but enabling debug messages results in very significant
slowdown of the system.
When reporting bugs, please try to have available a full dump of
debugging messages while the misbehaviour was occurring.
config NTFS_RW
bool "NTFS write support"
depends on NTFS_FS
depends on PAGE_SIZE_LESS_THAN_64KB
help
This enables the partial, but safe, write support in the NTFS driver.
The only supported operation is overwriting existing files, without
changing the file length. No file or directory creation, deletion or
renaming is possible. Note only non-resident files can be written to
so you may find that some very small files (<500 bytes or so) cannot
be written to.
While we cannot guarantee that it will not damage any data, we have
so far not received a single report where the driver would have
damaged someone's data so we assume it is perfectly safe to use.
Note: While write support is safe in this version (a rewrite from
scratch of the NTFS support), it should be noted that the old NTFS
write support, included in Linux 2.5.10 and before (since 1997),
is not safe.
This is currently useful with TopologiLinux. TopologiLinux is run
on top of any DOS/Microsoft Windows system without partitioning your
hard disk. Unlike other Linux distributions TopologiLinux does not
need its own partition. For more information see
<http://topologi-linux.sourceforge.net/>
It is perfectly safe to say N here.


@@ -1,15 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
# Rules for making the NTFS driver.
obj-$(CONFIG_NTFS_FS) += ntfs.o
ntfs-y := aops.o attrib.o collate.o compress.o debug.o dir.o file.o \
index.o inode.o mft.o mst.o namei.o runlist.o super.o sysctl.o \
unistr.o upcase.o
ntfs-$(CONFIG_NTFS_RW) += bitmap.o lcnalloc.o logfile.o quota.o usnjrnl.o
ccflags-y := -DNTFS_VERSION=\"2.1.32\"
ccflags-$(CONFIG_NTFS_DEBUG) += -DDEBUG
ccflags-$(CONFIG_NTFS_RW) += -DNTFS_RW

File diff suppressed because it is too large.


@@ -1,88 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* aops.h - Defines for NTFS kernel address space operations and page cache
* handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
* Copyright (c) 2002 Richard Russon
*/
#ifndef _LINUX_NTFS_AOPS_H
#define _LINUX_NTFS_AOPS_H
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/fs.h>
#include "inode.h"
/**
* ntfs_unmap_page - release a page that was mapped using ntfs_map_page()
* @page: the page to release
*
* Unpin, unmap and release a page that was obtained from ntfs_map_page().
*/
static inline void ntfs_unmap_page(struct page *page)
{
kunmap(page);
put_page(page);
}
/**
* ntfs_map_page - map a page into accessible memory, reading it if necessary
* @mapping: address space for which to obtain the page
* @index: index into the page cache for @mapping of the page to map
*
* Read a page from the page cache of the address space @mapping at position
* @index, where @index is in units of PAGE_SIZE, and not in bytes.
*
* If the page is not in memory it is loaded from disk first using the
* read_folio method defined in the address space operations of @mapping
* and the page is added to the page cache of @mapping in the process.
*
* If the page belongs to an mst protected attribute and it is marked as such
* in its ntfs inode (NInoMstProtected()) the mst fixups are applied but no
* error checking is performed. This means the caller has to verify whether
* the ntfs record(s) contained in the page are valid or not using one of the
* ntfs_is_XXXX_record{,p}() macros, where XXXX is the record type you are
* expecting to see. (For details of the macros, see fs/ntfs/layout.h.)
*
* If the page is in high memory it is mapped into memory directly addressable
* by the kernel.
*
* Finally the page count is incremented, thus pinning the page into place.
*
* The above means that page_address(page) can be used on all pages obtained
* with ntfs_map_page() to get the kernel virtual address of the page.
*
* When finished with the page, the caller has to call ntfs_unmap_page() to
* unpin, unmap and release the page.
*
* Note this does not grant exclusive access. If such is desired, the caller
* must provide it independently of the ntfs_{un}map_page() calls by using
* a {rw_}semaphore or other means of serialization. A spin lock cannot be
* used as ntfs_map_page() can block.
*
* The unlocked and uptodate page is returned on success or an encoded error
* on failure. Caller has to test for error using the IS_ERR() macro on the
* return value. If that evaluates to 'true', the negative error code can be
* obtained using PTR_ERR() on the return value of ntfs_map_page().
*/
static inline struct page *ntfs_map_page(struct address_space *mapping,
unsigned long index)
{
struct page *page = read_mapping_page(mapping, index, NULL);
if (!IS_ERR(page))
kmap(page);
return page;
}
#ifdef NTFS_RW
extern void mark_ntfs_record_dirty(struct page *page, const unsigned int ofs);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_AOPS_H */
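As a usage illustration of the pairing described above (a hypothetical helper,
not part of this header; it assumes the caller has a VFS inode whose page at
index idx should be examined):

static int example_peek_page(struct inode *vi, unsigned long idx)
{
	struct page *page = ntfs_map_page(vi->i_mapping, idx);
	u8 *kaddr;

	if (IS_ERR(page))
		return PTR_ERR(page);	/* read failed, page was not mapped */
	kaddr = page_address(page);	/* valid: the page is kmapped and pinned */
	/* ... read kaddr[0 .. PAGE_SIZE - 1] as needed ... */
	ntfs_unmap_page(page);		/* kunmap() and drop the pinning reference */
	return 0;
}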

File diff suppressed because it is too large.


@@ -1,102 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* attrib.h - Defines for attribute handling in NTFS Linux kernel driver.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2005 Anton Altaparmakov
* Copyright (c) 2002 Richard Russon
*/
#ifndef _LINUX_NTFS_ATTRIB_H
#define _LINUX_NTFS_ATTRIB_H
#include "endian.h"
#include "types.h"
#include "layout.h"
#include "inode.h"
#include "runlist.h"
#include "volume.h"
/**
* ntfs_attr_search_ctx - used in attribute search functions
* @mrec: buffer containing mft record to search
* @attr: attribute record in @mrec where to begin/continue search
* @is_first: if true ntfs_attr_lookup() begins search with @attr, else after
*
* Structure must be initialized to zero before the first call to one of the
* attribute search functions. Initialize @mrec to point to the mft record to
* search, and @attr to point to the first attribute within @mrec (not necessary
* if calling the _first() functions), and set @is_first to 'true' (not necessary
* if calling the _first() functions).
*
* If @is_first is 'true', the search begins with @attr. If @is_first is 'false',
* the search begins after @attr. This is so that, after the first call to one
* of the search attribute functions, we can call the function again, without
* any modification of the search context, to automagically get the next
* matching attribute.
*/
typedef struct {
MFT_RECORD *mrec;
ATTR_RECORD *attr;
bool is_first;
ntfs_inode *ntfs_ino;
ATTR_LIST_ENTRY *al_entry;
ntfs_inode *base_ntfs_ino;
MFT_RECORD *base_mrec;
ATTR_RECORD *base_attr;
} ntfs_attr_search_ctx;
extern int ntfs_map_runlist_nolock(ntfs_inode *ni, VCN vcn,
ntfs_attr_search_ctx *ctx);
extern int ntfs_map_runlist(ntfs_inode *ni, VCN vcn);
extern LCN ntfs_attr_vcn_to_lcn_nolock(ntfs_inode *ni, const VCN vcn,
const bool write_locked);
extern runlist_element *ntfs_attr_find_vcn_nolock(ntfs_inode *ni,
const VCN vcn, ntfs_attr_search_ctx *ctx);
int ntfs_attr_lookup(const ATTR_TYPE type, const ntfschar *name,
const u32 name_len, const IGNORE_CASE_BOOL ic,
const VCN lowest_vcn, const u8 *val, const u32 val_len,
ntfs_attr_search_ctx *ctx);
extern int load_attribute_list(ntfs_volume *vol, runlist *rl, u8 *al_start,
const s64 size, const s64 initialized_size);
static inline s64 ntfs_attr_size(const ATTR_RECORD *a)
{
if (!a->non_resident)
return (s64)le32_to_cpu(a->data.resident.value_length);
return sle64_to_cpu(a->data.non_resident.data_size);
}
extern void ntfs_attr_reinit_search_ctx(ntfs_attr_search_ctx *ctx);
extern ntfs_attr_search_ctx *ntfs_attr_get_search_ctx(ntfs_inode *ni,
MFT_RECORD *mrec);
extern void ntfs_attr_put_search_ctx(ntfs_attr_search_ctx *ctx);
#ifdef NTFS_RW
extern int ntfs_attr_size_bounds_check(const ntfs_volume *vol,
const ATTR_TYPE type, const s64 size);
extern int ntfs_attr_can_be_non_resident(const ntfs_volume *vol,
const ATTR_TYPE type);
extern int ntfs_attr_can_be_resident(const ntfs_volume *vol,
const ATTR_TYPE type);
extern int ntfs_attr_record_resize(MFT_RECORD *m, ATTR_RECORD *a, u32 new_size);
extern int ntfs_resident_attr_value_resize(MFT_RECORD *m, ATTR_RECORD *a,
const u32 new_size);
extern int ntfs_attr_make_non_resident(ntfs_inode *ni, const u32 data_size);
extern s64 ntfs_attr_extend_allocation(ntfs_inode *ni, s64 new_alloc_size,
const s64 new_data_size, const s64 data_start);
extern int ntfs_attr_set(ntfs_inode *ni, const s64 ofs, const s64 cnt,
const u8 val);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_ATTRIB_H */
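A sketch of the lookup pattern the ntfs_attr_search_ctx comment above
describes (illustrative only; the ntfs inode ni and its mapped mft record m
are assumed to have been obtained elsewhere via the driver's mft helpers):

static int example_count_data_attrs(ntfs_inode *ni, MFT_RECORD *m)
{
	ntfs_attr_search_ctx *ctx;
	int nr = 0;

	ctx = ntfs_attr_get_search_ctx(ni, m);
	if (!ctx)
		return -ENOMEM;
	/* Each successful call continues the search after the previous match. */
	while (!ntfs_attr_lookup(AT_DATA, NULL, 0, CASE_SENSITIVE, 0, NULL, 0,
			ctx))
		nr++;
	ntfs_attr_put_search_ctx(ctx);
	return nr;
}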


@@ -1,179 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* bitmap.c - NTFS kernel bitmap handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2004-2005 Anton Altaparmakov
*/
#ifdef NTFS_RW
#include <linux/pagemap.h>
#include "bitmap.h"
#include "debug.h"
#include "aops.h"
#include "ntfs.h"
/**
* __ntfs_bitmap_set_bits_in_run - set a run of bits in a bitmap to a value
* @vi: vfs inode describing the bitmap
* @start_bit: first bit to set
* @count: number of bits to set
* @value: value to set the bits to (i.e. 0 or 1)
* @is_rollback: if 'true' this is a rollback operation
*
* Set @count bits starting at bit @start_bit in the bitmap described by the
* vfs inode @vi to @value, where @value is either 0 or 1.
*
* @is_rollback should always be 'false', it is for internal use to rollback
* errors. You probably want to use ntfs_bitmap_set_bits_in_run() instead.
*
* Return 0 on success and -errno on error.
*/
int __ntfs_bitmap_set_bits_in_run(struct inode *vi, const s64 start_bit,
const s64 count, const u8 value, const bool is_rollback)
{
s64 cnt = count;
pgoff_t index, end_index;
struct address_space *mapping;
struct page *page;
u8 *kaddr;
int pos, len;
u8 bit;
BUG_ON(!vi);
ntfs_debug("Entering for i_ino 0x%lx, start_bit 0x%llx, count 0x%llx, "
"value %u.%s", vi->i_ino, (unsigned long long)start_bit,
(unsigned long long)cnt, (unsigned int)value,
is_rollback ? " (rollback)" : "");
BUG_ON(start_bit < 0);
BUG_ON(cnt < 0);
BUG_ON(value > 1);
/*
* Calculate the indices for the pages containing the first and last
* bits, i.e. @start_bit and @start_bit + @cnt - 1, respectively.
*/
index = start_bit >> (3 + PAGE_SHIFT);
end_index = (start_bit + cnt - 1) >> (3 + PAGE_SHIFT);
/* Get the page containing the first bit (@start_bit). */
mapping = vi->i_mapping;
page = ntfs_map_page(mapping, index);
if (IS_ERR(page)) {
if (!is_rollback)
ntfs_error(vi->i_sb, "Failed to map first page (error "
"%li), aborting.", PTR_ERR(page));
return PTR_ERR(page);
}
kaddr = page_address(page);
/* Set @pos to the position of the byte containing @start_bit. */
pos = (start_bit >> 3) & ~PAGE_MASK;
/* Calculate the position of @start_bit in the first byte. */
bit = start_bit & 7;
/* If the first byte is partial, modify the appropriate bits in it. */
if (bit) {
u8 *byte = kaddr + pos;
while ((bit & 7) && cnt) {
cnt--;
if (value)
*byte |= 1 << bit++;
else
*byte &= ~(1 << bit++);
}
/* If we are done, unmap the page and return success. */
if (!cnt)
goto done;
/* Update @pos to the new position. */
pos++;
}
/*
* Depending on @value, modify all remaining whole bytes in the page up
* to @cnt.
*/
len = min_t(s64, cnt >> 3, PAGE_SIZE - pos);
memset(kaddr + pos, value ? 0xff : 0, len);
cnt -= len << 3;
/* Update @len to point to the first not-done byte in the page. */
if (cnt < 8)
len += pos;
/* If we are not in the last page, deal with all subsequent pages. */
while (index < end_index) {
BUG_ON(cnt <= 0);
/* Update @index and get the next page. */
flush_dcache_page(page);
set_page_dirty(page);
ntfs_unmap_page(page);
page = ntfs_map_page(mapping, ++index);
if (IS_ERR(page))
goto rollback;
kaddr = page_address(page);
/*
* Depending on @value, modify all remaining whole bytes in the
* page up to @cnt.
*/
len = min_t(s64, cnt >> 3, PAGE_SIZE);
memset(kaddr, value ? 0xff : 0, len);
cnt -= len << 3;
}
/*
* The currently mapped page is the last one. If the last byte is
* partial, modify the appropriate bits in it. Note, @len is the
* position of the last byte inside the page.
*/
if (cnt) {
u8 *byte;
BUG_ON(cnt > 7);
bit = cnt;
byte = kaddr + len;
while (bit--) {
if (value)
*byte |= 1 << bit;
else
*byte &= ~(1 << bit);
}
}
done:
/* We are done. Unmap the page and return success. */
flush_dcache_page(page);
set_page_dirty(page);
ntfs_unmap_page(page);
ntfs_debug("Done.");
return 0;
rollback:
/*
* Current state:
* - no pages are mapped
* - @count - @cnt is the number of bits that have been modified
*/
if (is_rollback)
return PTR_ERR(page);
if (count != cnt)
pos = __ntfs_bitmap_set_bits_in_run(vi, start_bit, count - cnt,
value ? 0 : 1, true);
else
pos = 0;
if (!pos) {
/* Rollback was successful. */
ntfs_error(vi->i_sb, "Failed to map subsequent page (error "
"%li), aborting.", PTR_ERR(page));
} else {
/* Rollback failed. */
ntfs_error(vi->i_sb, "Failed to map subsequent page (error "
"%li) and rollback failed (error %i). "
"Aborting and leaving inconsistent metadata. "
"Unmount and run chkdsk.", PTR_ERR(page), pos);
NVolSetErrors(NTFS_SB(vi->i_sb));
}
return PTR_ERR(page);
}
#endif /* NTFS_RW */


@@ -1,104 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* bitmap.h - Defines for NTFS kernel bitmap handling. Part of the Linux-NTFS
* project.
*
* Copyright (c) 2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_BITMAP_H
#define _LINUX_NTFS_BITMAP_H
#ifdef NTFS_RW
#include <linux/fs.h>
#include "types.h"
extern int __ntfs_bitmap_set_bits_in_run(struct inode *vi, const s64 start_bit,
const s64 count, const u8 value, const bool is_rollback);
/**
* ntfs_bitmap_set_bits_in_run - set a run of bits in a bitmap to a value
* @vi: vfs inode describing the bitmap
* @start_bit: first bit to set
* @count: number of bits to set
* @value: value to set the bits to (i.e. 0 or 1)
*
* Set @count bits starting at bit @start_bit in the bitmap described by the
* vfs inode @vi to @value, where @value is either 0 or 1.
*
* Return 0 on success and -errno on error.
*/
static inline int ntfs_bitmap_set_bits_in_run(struct inode *vi,
const s64 start_bit, const s64 count, const u8 value)
{
return __ntfs_bitmap_set_bits_in_run(vi, start_bit, count, value,
false);
}
/**
* ntfs_bitmap_set_run - set a run of bits in a bitmap
* @vi: vfs inode describing the bitmap
* @start_bit: first bit to set
* @count: number of bits to set
*
* Set @count bits starting at bit @start_bit in the bitmap described by the
* vfs inode @vi.
*
* Return 0 on success and -errno on error.
*/
static inline int ntfs_bitmap_set_run(struct inode *vi, const s64 start_bit,
const s64 count)
{
return ntfs_bitmap_set_bits_in_run(vi, start_bit, count, 1);
}
/**
* ntfs_bitmap_clear_run - clear a run of bits in a bitmap
* @vi: vfs inode describing the bitmap
* @start_bit: first bit to clear
* @count: number of bits to clear
*
* Clear @count bits starting at bit @start_bit in the bitmap described by the
* vfs inode @vi.
*
* Return 0 on success and -errno on error.
*/
static inline int ntfs_bitmap_clear_run(struct inode *vi, const s64 start_bit,
const s64 count)
{
return ntfs_bitmap_set_bits_in_run(vi, start_bit, count, 0);
}
/**
* ntfs_bitmap_set_bit - set a bit in a bitmap
* @vi: vfs inode describing the bitmap
* @bit: bit to set
*
* Set bit @bit in the bitmap described by the vfs inode @vi.
*
* Return 0 on success and -errno on error.
*/
static inline int ntfs_bitmap_set_bit(struct inode *vi, const s64 bit)
{
return ntfs_bitmap_set_run(vi, bit, 1);
}
/**
* ntfs_bitmap_clear_bit - clear a bit in a bitmap
* @vi: vfs inode describing the bitmap
* @bit: bit to clear
*
* Clear bit @bit in the bitmap described by the vfs inode @vi.
*
* Return 0 on success and -errno on error.
*/
static inline int ntfs_bitmap_clear_bit(struct inode *vi, const s64 bit)
{
return ntfs_bitmap_clear_run(vi, bit, 1);
}
#endif /* NTFS_RW */
#endif /* defined _LINUX_NTFS_BITMAP_H */
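For illustration, the wrappers above compose like this (a sketch only; bmp_vi
stands for whatever bitmap inode the caller holds, and the bit numbers are
made up):

static int example_mark_and_release(struct inode *bmp_vi)
{
	int err;

	/* Set bits 100..107, e.g. to mark eight clusters as allocated. */
	err = ntfs_bitmap_set_run(bmp_vi, 100, 8);
	if (err)
		return err;
	/* Clear a single bit again, e.g. to free one of them. */
	return ntfs_bitmap_clear_bit(bmp_vi, 100);
}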


@@ -1,110 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* collate.c - NTFS kernel collation handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2004 Anton Altaparmakov
*/
#include "collate.h"
#include "debug.h"
#include "ntfs.h"
static int ntfs_collate_binary(ntfs_volume *vol,
const void *data1, const int data1_len,
const void *data2, const int data2_len)
{
int rc;
ntfs_debug("Entering.");
rc = memcmp(data1, data2, min(data1_len, data2_len));
if (!rc && (data1_len != data2_len)) {
if (data1_len < data2_len)
rc = -1;
else
rc = 1;
}
ntfs_debug("Done, returning %i", rc);
return rc;
}
static int ntfs_collate_ntofs_ulong(ntfs_volume *vol,
const void *data1, const int data1_len,
const void *data2, const int data2_len)
{
int rc;
u32 d1, d2;
ntfs_debug("Entering.");
// FIXME: We don't really want to bug here.
BUG_ON(data1_len != data2_len);
BUG_ON(data1_len != 4);
d1 = le32_to_cpup(data1);
d2 = le32_to_cpup(data2);
if (d1 < d2)
rc = -1;
else {
if (d1 == d2)
rc = 0;
else
rc = 1;
}
ntfs_debug("Done, returning %i", rc);
return rc;
}
typedef int (*ntfs_collate_func_t)(ntfs_volume *, const void *, const int,
const void *, const int);
static ntfs_collate_func_t ntfs_do_collate0x0[3] = {
ntfs_collate_binary,
NULL/*ntfs_collate_file_name*/,
NULL/*ntfs_collate_unicode_string*/,
};
static ntfs_collate_func_t ntfs_do_collate0x1[4] = {
ntfs_collate_ntofs_ulong,
NULL/*ntfs_collate_ntofs_sid*/,
NULL/*ntfs_collate_ntofs_security_hash*/,
NULL/*ntfs_collate_ntofs_ulongs*/,
};
/**
* ntfs_collate - collate two data items using a specified collation rule
* @vol: ntfs volume to which the data items belong
* @cr: collation rule to use when comparing the items
* @data1: first data item to collate
* @data1_len: length in bytes of @data1
* @data2: second data item to collate
* @data2_len: length in bytes of @data2
*
* Collate the two data items @data1 and @data2 using the collation rule @cr
* and return -1, 0, or 1 if @data1 is found, respectively, to collate before,
* to match, or to collate after @data2.
*
* For speed we use the collation rule @cr as an index into two tables of
* function pointers to call the appropriate collation function.
*/
int ntfs_collate(ntfs_volume *vol, COLLATION_RULE cr,
const void *data1, const int data1_len,
const void *data2, const int data2_len) {
int i;
ntfs_debug("Entering.");
/*
* FIXME: At the moment we only support COLLATION_BINARY and
* COLLATION_NTOFS_ULONG, so we BUG() for everything else for now.
*/
BUG_ON(cr != COLLATION_BINARY && cr != COLLATION_NTOFS_ULONG);
i = le32_to_cpu(cr);
BUG_ON(i < 0);
if (i <= 0x02)
return ntfs_do_collate0x0[i](vol, data1, data1_len,
data2, data2_len);
BUG_ON(i < 0x10);
i -= 0x10;
if (likely(i <= 3))
return ntfs_do_collate0x1[i](vol, data1, data1_len,
data2, data2_len);
BUG();
return 0;
}
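A minimal illustration of the calling convention (hypothetical helper; the two
keys are assumed to be little-endian 32-bit values, as the
COLLATION_NTOFS_ULONG rule requires):

static int example_compare_ulong_keys(ntfs_volume *vol, const le32 *k1,
		const le32 *k2)
{
	/* Returns -1, 0, or 1, just like ntfs_collate() itself. */
	return ntfs_collate(vol, COLLATION_NTOFS_ULONG, k1, 4, k2, 4);
}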


@@ -1,36 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* collate.h - Defines for NTFS kernel collation handling. Part of the
* Linux-NTFS project.
*
* Copyright (c) 2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_COLLATE_H
#define _LINUX_NTFS_COLLATE_H
#include "types.h"
#include "volume.h"
static inline bool ntfs_is_collation_rule_supported(COLLATION_RULE cr) {
int i;
/*
* FIXME: At the moment we only support COLLATION_BINARY and
* COLLATION_NTOFS_ULONG, so we return false for everything else for
* now.
*/
if (unlikely(cr != COLLATION_BINARY && cr != COLLATION_NTOFS_ULONG))
return false;
i = le32_to_cpu(cr);
if (likely(((i >= 0) && (i <= 0x02)) ||
((i >= 0x10) && (i <= 0x13))))
return true;
return false;
}
extern int ntfs_collate(ntfs_volume *vol, COLLATION_RULE cr,
const void *data1, const int data1_len,
const void *data2, const int data2_len);
#endif /* _LINUX_NTFS_COLLATE_H */


@@ -1,950 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* compress.c - NTFS kernel compressed attributes handling.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
* Copyright (c) 2002 Richard Russon
*/
#include <linux/fs.h>
#include <linux/buffer_head.h>
#include <linux/blkdev.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include "attrib.h"
#include "inode.h"
#include "debug.h"
#include "ntfs.h"
/**
* ntfs_compression_constants - enum of constants used in the compression code
*/
typedef enum {
/* Token types and access mask. */
NTFS_SYMBOL_TOKEN = 0,
NTFS_PHRASE_TOKEN = 1,
NTFS_TOKEN_MASK = 1,
/* Compression sub-block constants. */
NTFS_SB_SIZE_MASK = 0x0fff,
NTFS_SB_SIZE = 0x1000,
NTFS_SB_IS_COMPRESSED = 0x8000,
/*
* The maximum compression block size is by definition 16 * the cluster
* size, with the maximum supported cluster size being 4kiB. Thus the
* maximum compression buffer size is 64kiB, so we use this when
* initializing the compression buffer.
*/
NTFS_MAX_CB_SIZE = 64 * 1024,
} ntfs_compression_constants;
/*
* ntfs_compression_buffer - one buffer for the decompression engine
*/
static u8 *ntfs_compression_buffer;
/*
* ntfs_cb_lock - spinlock which protects ntfs_compression_buffer
*/
static DEFINE_SPINLOCK(ntfs_cb_lock);
/**
* allocate_compression_buffers - allocate the decompression buffers
*
* Caller has to hold the ntfs_lock mutex.
*
* Return 0 on success or -ENOMEM if the allocations failed.
*/
int allocate_compression_buffers(void)
{
BUG_ON(ntfs_compression_buffer);
ntfs_compression_buffer = vmalloc(NTFS_MAX_CB_SIZE);
if (!ntfs_compression_buffer)
return -ENOMEM;
return 0;
}
/**
* free_compression_buffers - free the decompression buffers
*
* Caller has to hold the ntfs_lock mutex.
*/
void free_compression_buffers(void)
{
BUG_ON(!ntfs_compression_buffer);
vfree(ntfs_compression_buffer);
ntfs_compression_buffer = NULL;
}
/**
* zero_partial_compressed_page - zero out of bounds compressed page region
*/
static void zero_partial_compressed_page(struct page *page,
const s64 initialized_size)
{
u8 *kp = page_address(page);
unsigned int kp_ofs;
ntfs_debug("Zeroing page region outside initialized size.");
if (((s64)page->index << PAGE_SHIFT) >= initialized_size) {
clear_page(kp);
return;
}
kp_ofs = initialized_size & ~PAGE_MASK;
memset(kp + kp_ofs, 0, PAGE_SIZE - kp_ofs);
return;
}
/**
* handle_bounds_compressed_page - test for&handle out of bounds compressed page
*/
static inline void handle_bounds_compressed_page(struct page *page,
const loff_t i_size, const s64 initialized_size)
{
if ((page->index >= (initialized_size >> PAGE_SHIFT)) &&
(initialized_size < i_size))
zero_partial_compressed_page(page, initialized_size);
return;
}
/**
* ntfs_decompress - decompress a compression block into an array of pages
* @dest_pages: destination array of pages
* @completed_pages: scratch space to track completed pages
* @dest_index: current index into @dest_pages (IN/OUT)
* @dest_ofs: current offset within @dest_pages[@dest_index] (IN/OUT)
* @dest_max_index: maximum index into @dest_pages (IN)
* @dest_max_ofs: maximum offset within @dest_pages[@dest_max_index] (IN)
* @xpage: the target page (-1 if none) (IN)
* @xpage_done: set to 1 if xpage was completed successfully (IN/OUT)
* @cb_start: compression block to decompress (IN)
* @cb_size: size of compression block @cb_start in bytes (IN)
* @i_size: file size when we started the read (IN)
* @initialized_size: initialized file size when we started the read (IN)
*
* The caller must have disabled preemption. ntfs_decompress() reenables it when
* the critical section is finished.
*
* This decompresses the compression block @cb_start into the array of
* destination pages @dest_pages starting at index @dest_index into @dest_pages
* and at offset @dest_ofs into the page @dest_pages[@dest_index].
*
* When the page @dest_pages[@xpage] is completed, @xpage_done is set to 1.
* If xpage is -1 or @xpage has not been completed, @xpage_done is not modified.
*
* @cb_start is a pointer to the compression block which needs decompressing
* and @cb_size is the size of @cb_start in bytes (8-64kiB).
*
* Return 0 if success or -EOVERFLOW on error in the compressed stream.
* @xpage_done indicates whether the target page (@dest_pages[@xpage]) was
* completed during the decompression of the compression block (@cb_start).
*
* Warning: This function *REQUIRES* PAGE_SIZE >= 4096 or it will blow up
* unpredictably! You have been warned!
*
* Note to hackers: This function may not sleep until it has finished accessing
* the compression block @cb_start as it is a per-CPU buffer.
*/
static int ntfs_decompress(struct page *dest_pages[], int completed_pages[],
int *dest_index, int *dest_ofs, const int dest_max_index,
const int dest_max_ofs, const int xpage, char *xpage_done,
u8 *const cb_start, const u32 cb_size, const loff_t i_size,
const s64 initialized_size)
{
/*
* Pointers into the compressed data, i.e. the compression block (cb),
* and the therein contained sub-blocks (sb).
*/
u8 *cb_end = cb_start + cb_size; /* End of cb. */
u8 *cb = cb_start; /* Current position in cb. */
u8 *cb_sb_start; /* Beginning of the current sb in the cb. */
u8 *cb_sb_end; /* End of current sb / beginning of next sb. */
/* Variables for uncompressed data / destination. */
struct page *dp; /* Current destination page being worked on. */
u8 *dp_addr; /* Current pointer into dp. */
u8 *dp_sb_start; /* Start of current sub-block in dp. */
u8 *dp_sb_end; /* End of current sb in dp (dp_sb_start +
NTFS_SB_SIZE). */
u16 do_sb_start; /* @dest_ofs when starting this sub-block. */
u16 do_sb_end; /* @dest_ofs of end of this sb (do_sb_start +
NTFS_SB_SIZE). */
/* Variables for tag and token parsing. */
u8 tag; /* Current tag. */
int token; /* Loop counter for the eight tokens in tag. */
int nr_completed_pages = 0;
/* Default error code. */
int err = -EOVERFLOW;
ntfs_debug("Entering, cb_size = 0x%x.", cb_size);
do_next_sb:
ntfs_debug("Beginning sub-block at offset = 0x%zx in the cb.",
cb - cb_start);
/*
* Have we reached the end of the compression block or the end of the
* decompressed data? The latter can happen for example if the current
* position in the compression block is one byte before its end so the
* first two checks do not detect it.
*/
if (cb == cb_end || !le16_to_cpup((le16*)cb) ||
(*dest_index == dest_max_index &&
*dest_ofs == dest_max_ofs)) {
int i;
ntfs_debug("Completed. Returning success (0).");
err = 0;
return_error:
/* We can sleep from now on, so we drop lock. */
spin_unlock(&ntfs_cb_lock);
/* Second stage: finalize completed pages. */
if (nr_completed_pages > 0) {
for (i = 0; i < nr_completed_pages; i++) {
int di = completed_pages[i];
dp = dest_pages[di];
/*
* If we are outside the initialized size, zero
* the out of bounds page range.
*/
handle_bounds_compressed_page(dp, i_size,
initialized_size);
flush_dcache_page(dp);
kunmap(dp);
SetPageUptodate(dp);
unlock_page(dp);
if (di == xpage)
*xpage_done = 1;
else
put_page(dp);
dest_pages[di] = NULL;
}
}
return err;
}
/* Setup offsets for the current sub-block destination. */
do_sb_start = *dest_ofs;
do_sb_end = do_sb_start + NTFS_SB_SIZE;
/* Check that we are still within allowed boundaries. */
if (*dest_index == dest_max_index && do_sb_end > dest_max_ofs)
goto return_overflow;
/* Does the minimum size of a compressed sb overflow valid range? */
if (cb + 6 > cb_end)
goto return_overflow;
/* Setup the current sub-block source pointers and validate range. */
cb_sb_start = cb;
cb_sb_end = cb_sb_start + (le16_to_cpup((le16*)cb) & NTFS_SB_SIZE_MASK)
+ 3;
if (cb_sb_end > cb_end)
goto return_overflow;
/* Get the current destination page. */
dp = dest_pages[*dest_index];
if (!dp) {
/* No page present. Skip decompression of this sub-block. */
cb = cb_sb_end;
/* Advance destination position to next sub-block. */
*dest_ofs = (*dest_ofs + NTFS_SB_SIZE) & ~PAGE_MASK;
if (!*dest_ofs && (++*dest_index > dest_max_index))
goto return_overflow;
goto do_next_sb;
}
/* We have a valid destination page. Setup the destination pointers. */
dp_addr = (u8*)page_address(dp) + do_sb_start;
/* Now, we are ready to process the current sub-block (sb). */
if (!(le16_to_cpup((le16*)cb) & NTFS_SB_IS_COMPRESSED)) {
ntfs_debug("Found uncompressed sub-block.");
/* This sb is not compressed, just copy it into destination. */
/* Advance source position to first data byte. */
cb += 2;
/* An uncompressed sb must be full size. */
if (cb_sb_end - cb != NTFS_SB_SIZE)
goto return_overflow;
/* Copy the block and advance the source position. */
memcpy(dp_addr, cb, NTFS_SB_SIZE);
cb += NTFS_SB_SIZE;
/* Advance destination position to next sub-block. */
*dest_ofs += NTFS_SB_SIZE;
if (!(*dest_ofs &= ~PAGE_MASK)) {
finalize_page:
/*
* First stage: add current page index to array of
* completed pages.
*/
completed_pages[nr_completed_pages++] = *dest_index;
if (++*dest_index > dest_max_index)
goto return_overflow;
}
goto do_next_sb;
}
ntfs_debug("Found compressed sub-block.");
/* This sb is compressed, decompress it into destination. */
/* Setup destination pointers. */
dp_sb_start = dp_addr;
dp_sb_end = dp_sb_start + NTFS_SB_SIZE;
/* Forward to the first tag in the sub-block. */
cb += 2;
do_next_tag:
if (cb == cb_sb_end) {
/* Check if the decompressed sub-block was not full-length. */
if (dp_addr < dp_sb_end) {
int nr_bytes = do_sb_end - *dest_ofs;
ntfs_debug("Filling incomplete sub-block with "
"zeroes.");
/* Zero remainder and update destination position. */
memset(dp_addr, 0, nr_bytes);
*dest_ofs += nr_bytes;
}
/* We have finished the current sub-block. */
if (!(*dest_ofs &= ~PAGE_MASK))
goto finalize_page;
goto do_next_sb;
}
/* Check we are still in range. */
if (cb > cb_sb_end || dp_addr > dp_sb_end)
goto return_overflow;
/* Get the next tag and advance to first token. */
tag = *cb++;
/* Parse the eight tokens described by the tag. */
for (token = 0; token < 8; token++, tag >>= 1) {
u16 lg, pt, length, max_non_overlap;
register u16 i;
u8 *dp_back_addr;
/* Check if we are done / still in range. */
if (cb >= cb_sb_end || dp_addr > dp_sb_end)
break;
/* Determine token type and parse appropriately.*/
if ((tag & NTFS_TOKEN_MASK) == NTFS_SYMBOL_TOKEN) {
/*
* We have a symbol token, copy the symbol across, and
* advance the source and destination positions.
*/
*dp_addr++ = *cb++;
++*dest_ofs;
/* Continue with the next token. */
continue;
}
/*
* We have a phrase token. Make sure it is not the first tag in
* the sb as this is illegal and would confuse the code below.
*/
if (dp_addr == dp_sb_start)
goto return_overflow;
/*
* Determine the number of bytes to go back (p) and the number
* of bytes to copy (l). We use an optimized algorithm in which
* we first calculate log2(current destination position in sb),
* which allows determination of l and p in O(1) rather than
* O(n). We just need an arch-optimized log2() function now.
*/
lg = 0;
for (i = *dest_ofs - do_sb_start - 1; i >= 0x10; i >>= 1)
lg++;
/* Get the phrase token into i. */
pt = le16_to_cpup((le16*)cb);
/*
* Calculate starting position of the byte sequence in
* the destination using the fact that p = (pt >> (12 - lg)) + 1
* and make sure we don't go too far back.
*/
dp_back_addr = dp_addr - (pt >> (12 - lg)) - 1;
if (dp_back_addr < dp_sb_start)
goto return_overflow;
/* Now calculate the length of the byte sequence. */
length = (pt & (0xfff >> lg)) + 3;
/* Advance destination position and verify it is in range. */
*dest_ofs += length;
if (*dest_ofs > do_sb_end)
goto return_overflow;
/* The number of non-overlapping bytes. */
max_non_overlap = dp_addr - dp_back_addr;
if (length <= max_non_overlap) {
/* The byte sequence doesn't overlap, just copy it. */
memcpy(dp_addr, dp_back_addr, length);
/* Advance destination pointer. */
dp_addr += length;
} else {
/*
* The byte sequence does overlap, copy non-overlapping
* part and then do a slow byte by byte copy for the
* overlapping part. Also, advance the destination
* pointer.
*/
memcpy(dp_addr, dp_back_addr, max_non_overlap);
dp_addr += max_non_overlap;
dp_back_addr += max_non_overlap;
length -= max_non_overlap;
while (length--)
*dp_addr++ = *dp_back_addr++;
}
/* Advance source position and continue with the next token. */
cb += 2;
}
/* No tokens left in the current tag. Continue with the next tag. */
goto do_next_tag;
return_overflow:
ntfs_error(NULL, "Failed. Returning -EOVERFLOW.");
goto return_error;
}
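/*
 * Illustrative sketch, not part of the original driver: how a 16-bit phrase
 * token is split into a back-distance p and a copy length l, mirroring the
 * lg/pt arithmetic in ntfs_decompress() above. The helper name and its
 * standalone form are assumptions made purely for illustration; it assumes
 * the 4096 byte NTFS sub-block size and that the destination position within
 * the sub-block is at least 1 (a phrase token is never the first token).
 */
static inline void example_decode_phrase_token(u16 pt, unsigned int pos_in_sb,
		unsigned int *p, unsigned int *l)
{
	unsigned int lg = 0, i;

	/* Same loop as above: how many bits beyond 4 does (pos_in_sb - 1) need? */
	for (i = pos_in_sb - 1; i >= 0x10; i >>= 1)
		lg++;
	/* The top (4 + lg) bits of the token encode the distance back, minus 1. */
	*p = (pt >> (12 - lg)) + 1;
	/* The remaining low bits encode the length of the copy, minus 3. */
	*l = (pt & (0xfff >> lg)) + 3;
}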
/**
* ntfs_read_compressed_block - read a compressed block into the page cache
* @page: locked page in the compression block(s) we need to read
*
* When we are called the page has already been verified to be locked and the
* attribute is known to be non-resident, not encrypted, but compressed.
*
* 1. Determine which compression block(s) @page is in.
* 2. Get hold of all pages corresponding to this/these compression block(s).
* 3. Read the (first) compression block.
* 4. Decompress it into the corresponding pages.
* 5. Throw the compressed data away and proceed to 3. for the next compression
* block or return success if no more compression blocks left.
*
* Warning: We have to be careful what we do about existing pages. They might
* have been written to so that we would lose data if we were to just overwrite
* them with the out-of-date uncompressed data.
*
* FIXME: For PAGE_SIZE > cb_size we are not doing the Right Thing(TM) at
* the end of the file I think. We need to detect this case and zero the out
* of bounds remainder of the page in question and mark it as handled. At the
* moment we would just return -EIO on such a page. This bug will only become
* apparent if pages are above 8kiB and the NTFS volume only uses 512 byte
* clusters so is probably not going to be seen by anyone. Still this should
* be fixed. (AIA)
*
* FIXME: Again for PAGE_SIZE > cb_size we are screwing up both in
* handling sparse and compressed cbs. (AIA)
*
* FIXME: At the moment we don't do any zeroing out in the case that
* initialized_size is less than data_size. This should be safe because of the
* nature of the compression algorithm used. Just in case we check and output
* an error message in read inode if the two sizes are not equal for a
* compressed file. (AIA)
*/
int ntfs_read_compressed_block(struct page *page)
{
loff_t i_size;
s64 initialized_size;
struct address_space *mapping = page->mapping;
ntfs_inode *ni = NTFS_I(mapping->host);
ntfs_volume *vol = ni->vol;
struct super_block *sb = vol->sb;
runlist_element *rl;
unsigned long flags, block_size = sb->s_blocksize;
unsigned char block_size_bits = sb->s_blocksize_bits;
u8 *cb, *cb_pos, *cb_end;
struct buffer_head **bhs;
unsigned long offset, index = page->index;
u32 cb_size = ni->itype.compressed.block_size;
u64 cb_size_mask = cb_size - 1UL;
VCN vcn;
LCN lcn;
/* The first wanted vcn (minimum alignment is PAGE_SIZE). */
VCN start_vcn = (((s64)index << PAGE_SHIFT) & ~cb_size_mask) >>
vol->cluster_size_bits;
/*
* The first vcn after the last wanted vcn (minimum alignment is again
* PAGE_SIZE).
*/
VCN end_vcn = ((((s64)(index + 1UL) << PAGE_SHIFT) + cb_size - 1)
& ~cb_size_mask) >> vol->cluster_size_bits;
/* Number of compression blocks (cbs) in the wanted vcn range. */
unsigned int nr_cbs = (end_vcn - start_vcn) << vol->cluster_size_bits
>> ni->itype.compressed.block_size_bits;
/*
* Number of pages required to store the uncompressed data from all
* compression blocks (cbs) overlapping @page. Due to alignment
* guarantees of start_vcn and end_vcn, no need to round up here.
*/
unsigned int nr_pages = (end_vcn - start_vcn) <<
vol->cluster_size_bits >> PAGE_SHIFT;
unsigned int xpage, max_page, cur_page, cur_ofs, i;
unsigned int cb_clusters, cb_max_ofs;
int block, max_block, cb_max_page, bhs_size, nr_bhs, err = 0;
struct page **pages;
int *completed_pages;
unsigned char xpage_done = 0;
ntfs_debug("Entering, page->index = 0x%lx, cb_size = 0x%x, nr_pages = "
"%i.", index, cb_size, nr_pages);
/*
* Bad things happen if we get here for anything that is not an
* unnamed $DATA attribute.
*/
BUG_ON(ni->type != AT_DATA);
BUG_ON(ni->name_len);
pages = kmalloc_array(nr_pages, sizeof(struct page *), GFP_NOFS);
completed_pages = kmalloc_array(nr_pages + 1, sizeof(int), GFP_NOFS);
/* Allocate memory to store the buffer heads we need. */
bhs_size = cb_size / block_size * sizeof(struct buffer_head *);
bhs = kmalloc(bhs_size, GFP_NOFS);
if (unlikely(!pages || !bhs || !completed_pages)) {
kfree(bhs);
kfree(pages);
kfree(completed_pages);
unlock_page(page);
ntfs_error(vol->sb, "Failed to allocate internal buffers.");
return -ENOMEM;
}
/*
* We have already been given one page, this is the one we must do.
* Once again, the alignment guarantees keep it simple.
*/
offset = start_vcn << vol->cluster_size_bits >> PAGE_SHIFT;
xpage = index - offset;
pages[xpage] = page;
/*
* The remaining pages need to be allocated and inserted into the page
* cache, alignment guarantees keep all the below much simpler. (-8
*/
read_lock_irqsave(&ni->size_lock, flags);
i_size = i_size_read(VFS_I(ni));
initialized_size = ni->initialized_size;
read_unlock_irqrestore(&ni->size_lock, flags);
max_page = ((i_size + PAGE_SIZE - 1) >> PAGE_SHIFT) -
offset;
/* Is the page fully outside i_size? (truncate in progress) */
if (xpage >= max_page) {
kfree(bhs);
kfree(pages);
kfree(completed_pages);
zero_user(page, 0, PAGE_SIZE);
ntfs_debug("Compressed read outside i_size - truncated?");
SetPageUptodate(page);
unlock_page(page);
return 0;
}
if (nr_pages < max_page)
max_page = nr_pages;
for (i = 0; i < max_page; i++, offset++) {
if (i != xpage)
pages[i] = grab_cache_page_nowait(mapping, offset);
page = pages[i];
if (page) {
/*
* We only (re)read the page if it isn't already read
* in and/or dirty or we would be losing data or at
* least wasting our time.
*/
if (!PageDirty(page) && (!PageUptodate(page) ||
PageError(page))) {
ClearPageError(page);
kmap(page);
continue;
}
unlock_page(page);
put_page(page);
pages[i] = NULL;
}
}
/*
* We have the runlist, and all the destination pages we need to fill.
* Now read the first compression block.
*/
cur_page = 0;
cur_ofs = 0;
cb_clusters = ni->itype.compressed.block_clusters;
do_next_cb:
nr_cbs--;
nr_bhs = 0;
/* Read all cb buffer heads one cluster at a time. */
rl = NULL;
for (vcn = start_vcn, start_vcn += cb_clusters; vcn < start_vcn;
vcn++) {
bool is_retry = false;
if (!rl) {
lock_retry_remap:
down_read(&ni->runlist.lock);
rl = ni->runlist.rl;
}
if (likely(rl != NULL)) {
/* Seek to element containing target vcn. */
while (rl->length && rl[1].vcn <= vcn)
rl++;
lcn = ntfs_rl_vcn_to_lcn(rl, vcn);
} else
lcn = LCN_RL_NOT_MAPPED;
ntfs_debug("Reading vcn = 0x%llx, lcn = 0x%llx.",
(unsigned long long)vcn,
(unsigned long long)lcn);
if (lcn < 0) {
/*
* When we reach the first sparse cluster we have
* finished with the cb.
*/
if (lcn == LCN_HOLE)
break;
if (is_retry || lcn != LCN_RL_NOT_MAPPED)
goto rl_err;
is_retry = true;
/*
* Attempt to map runlist, dropping lock for the
* duration.
*/
up_read(&ni->runlist.lock);
if (!ntfs_map_runlist(ni, vcn))
goto lock_retry_remap;
goto map_rl_err;
}
block = lcn << vol->cluster_size_bits >> block_size_bits;
/* Read the lcn from device in chunks of block_size bytes. */
max_block = block + (vol->cluster_size >> block_size_bits);
do {
ntfs_debug("block = 0x%x.", block);
if (unlikely(!(bhs[nr_bhs] = sb_getblk(sb, block))))
goto getblk_err;
nr_bhs++;
} while (++block < max_block);
}
/* Release the lock if we took it. */
if (rl)
up_read(&ni->runlist.lock);
/* Setup and initiate io on all buffer heads. */
for (i = 0; i < nr_bhs; i++) {
struct buffer_head *tbh = bhs[i];
if (!trylock_buffer(tbh))
continue;
if (unlikely(buffer_uptodate(tbh))) {
unlock_buffer(tbh);
continue;
}
get_bh(tbh);
tbh->b_end_io = end_buffer_read_sync;
submit_bh(REQ_OP_READ, tbh);
}
/* Wait for io completion on all buffer heads. */
for (i = 0; i < nr_bhs; i++) {
struct buffer_head *tbh = bhs[i];
if (buffer_uptodate(tbh))
continue;
wait_on_buffer(tbh);
/*
* We need an optimization barrier here, otherwise we start
* hitting the below fixup code when accessing a loopback
* mounted ntfs partition. This indicates either there is a
* race condition in the loop driver or, more likely, gcc
* overoptimises the code without the barrier and it doesn't
* do the Right Thing(TM).
*/
barrier();
if (unlikely(!buffer_uptodate(tbh))) {
ntfs_warning(vol->sb, "Buffer is unlocked but not "
"uptodate! Unplugging the disk queue "
"and rescheduling.");
get_bh(tbh);
io_schedule();
put_bh(tbh);
if (unlikely(!buffer_uptodate(tbh)))
goto read_err;
ntfs_warning(vol->sb, "Buffer is now uptodate. Good.");
}
}
/*
* Get the compression buffer. We must not sleep any more
* until we are finished with it.
*/
spin_lock(&ntfs_cb_lock);
cb = ntfs_compression_buffer;
BUG_ON(!cb);
cb_pos = cb;
cb_end = cb + cb_size;
/* Copy the buffer heads into the contiguous buffer. */
for (i = 0; i < nr_bhs; i++) {
memcpy(cb_pos, bhs[i]->b_data, block_size);
cb_pos += block_size;
}
/* Just a precaution. */
if (cb_pos + 2 <= cb + cb_size)
*(u16*)cb_pos = 0;
/* Reset cb_pos back to the beginning. */
cb_pos = cb;
/* We now have both source (if present) and destination. */
ntfs_debug("Successfully read the compression block.");
/* The last page and maximum offset within it for the current cb. */
cb_max_page = (cur_page << PAGE_SHIFT) + cur_ofs + cb_size;
cb_max_ofs = cb_max_page & ~PAGE_MASK;
cb_max_page >>= PAGE_SHIFT;
/* Catch end of file inside a compression block. */
if (cb_max_page > max_page)
cb_max_page = max_page;
if (vcn == start_vcn - cb_clusters) {
/* Sparse cb, zero out page range overlapping the cb. */
ntfs_debug("Found sparse compression block.");
/* We can sleep from now on, so we drop lock. */
spin_unlock(&ntfs_cb_lock);
if (cb_max_ofs)
cb_max_page--;
for (; cur_page < cb_max_page; cur_page++) {
page = pages[cur_page];
if (page) {
if (likely(!cur_ofs))
clear_page(page_address(page));
else
memset(page_address(page) + cur_ofs, 0,
PAGE_SIZE -
cur_ofs);
flush_dcache_page(page);
kunmap(page);
SetPageUptodate(page);
unlock_page(page);
if (cur_page == xpage)
xpage_done = 1;
else
put_page(page);
pages[cur_page] = NULL;
}
cb_pos += PAGE_SIZE - cur_ofs;
cur_ofs = 0;
if (cb_pos >= cb_end)
break;
}
/* If we have a partial final page, deal with it now. */
if (cb_max_ofs && cb_pos < cb_end) {
page = pages[cur_page];
if (page)
memset(page_address(page) + cur_ofs, 0,
cb_max_ofs - cur_ofs);
/*
* No need to update cb_pos at this stage:
* cb_pos += cb_max_ofs - cur_ofs;
*/
cur_ofs = cb_max_ofs;
}
} else if (vcn == start_vcn) {
/* We can't sleep so we need two stages. */
unsigned int cur2_page = cur_page;
unsigned int cur_ofs2 = cur_ofs;
u8 *cb_pos2 = cb_pos;
ntfs_debug("Found uncompressed compression block.");
/* Uncompressed cb, copy it to the destination pages. */
/*
* TODO: As a big optimization, we could detect this case
* before we read all the pages and use block_read_full_folio()
* on all full pages instead (we still have to treat partial
* pages especially but at least we are getting rid of the
* synchronous io for the majority of pages).
* Or if we choose not to do the read-ahead/-behind stuff, we
* could just return block_read_full_folio(pages[xpage]) as long
* as PAGE_SIZE <= cb_size.
*/
if (cb_max_ofs)
cb_max_page--;
/* First stage: copy data into destination pages. */
for (; cur_page < cb_max_page; cur_page++) {
page = pages[cur_page];
if (page)
memcpy(page_address(page) + cur_ofs, cb_pos,
PAGE_SIZE - cur_ofs);
cb_pos += PAGE_SIZE - cur_ofs;
cur_ofs = 0;
if (cb_pos >= cb_end)
break;
}
/* If we have a partial final page, deal with it now. */
if (cb_max_ofs && cb_pos < cb_end) {
page = pages[cur_page];
if (page)
memcpy(page_address(page) + cur_ofs, cb_pos,
cb_max_ofs - cur_ofs);
cb_pos += cb_max_ofs - cur_ofs;
cur_ofs = cb_max_ofs;
}
/* We can sleep from now on, so drop lock. */
spin_unlock(&ntfs_cb_lock);
/* Second stage: finalize pages. */
for (; cur2_page < cb_max_page; cur2_page++) {
page = pages[cur2_page];
if (page) {
/*
* If we are outside the initialized size, zero
* the out of bounds page range.
*/
handle_bounds_compressed_page(page, i_size,
initialized_size);
flush_dcache_page(page);
kunmap(page);
SetPageUptodate(page);
unlock_page(page);
if (cur2_page == xpage)
xpage_done = 1;
else
put_page(page);
pages[cur2_page] = NULL;
}
cb_pos2 += PAGE_SIZE - cur_ofs2;
cur_ofs2 = 0;
if (cb_pos2 >= cb_end)
break;
}
} else {
/* Compressed cb, decompress it into the destination page(s). */
unsigned int prev_cur_page = cur_page;
ntfs_debug("Found compressed compression block.");
err = ntfs_decompress(pages, completed_pages, &cur_page,
&cur_ofs, cb_max_page, cb_max_ofs, xpage,
&xpage_done, cb_pos, cb_size - (cb_pos - cb),
i_size, initialized_size);
/*
* We can sleep from now on, lock already dropped by
* ntfs_decompress().
*/
if (err) {
ntfs_error(vol->sb, "ntfs_decompress() failed in inode "
"0x%lx with error code %i. Skipping "
"this compression block.",
ni->mft_no, -err);
/* Release the unfinished pages. */
for (; prev_cur_page < cur_page; prev_cur_page++) {
page = pages[prev_cur_page];
if (page) {
flush_dcache_page(page);
kunmap(page);
unlock_page(page);
if (prev_cur_page != xpage)
put_page(page);
pages[prev_cur_page] = NULL;
}
}
}
}
/* Release the buffer heads. */
for (i = 0; i < nr_bhs; i++)
brelse(bhs[i]);
/* Do we have more work to do? */
if (nr_cbs)
goto do_next_cb;
/* We no longer need the list of buffer heads. */
kfree(bhs);
/* Clean up if we have any pages left. Should never happen. */
for (cur_page = 0; cur_page < max_page; cur_page++) {
page = pages[cur_page];
if (page) {
ntfs_error(vol->sb, "Still have pages left! "
"Terminating them with extreme "
"prejudice. Inode 0x%lx, page index "
"0x%lx.", ni->mft_no, page->index);
flush_dcache_page(page);
kunmap(page);
unlock_page(page);
if (cur_page != xpage)
put_page(page);
pages[cur_page] = NULL;
}
}
/* We no longer need the list of pages. */
kfree(pages);
kfree(completed_pages);
/* If we have completed the requested page, we return success. */
if (likely(xpage_done))
return 0;
ntfs_debug("Failed. Returning error code %s.", err == -EOVERFLOW ?
"EOVERFLOW" : (!err ? "EIO" : "unknown error"));
return err < 0 ? err : -EIO;
read_err:
ntfs_error(vol->sb, "IO error while reading compressed data.");
/* Release the buffer heads. */
for (i = 0; i < nr_bhs; i++)
brelse(bhs[i]);
goto err_out;
map_rl_err:
ntfs_error(vol->sb, "ntfs_map_runlist() failed. Cannot read "
"compression block.");
goto err_out;
rl_err:
up_read(&ni->runlist.lock);
ntfs_error(vol->sb, "ntfs_rl_vcn_to_lcn() failed. Cannot read "
"compression block.");
goto err_out;
getblk_err:
up_read(&ni->runlist.lock);
ntfs_error(vol->sb, "getblk() failed. Cannot read compression block.");
err_out:
kfree(bhs);
for (i = cur_page; i < max_page; i++) {
page = pages[i];
if (page) {
flush_dcache_page(page);
kunmap(page);
unlock_page(page);
if (i != xpage)
put_page(page);
}
}
kfree(pages);
kfree(completed_pages);
return -EIO;
}
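/*
 * Worked example, not from the original source, of the page-to-compression-
 * block mapping performed at the top of ntfs_read_compressed_block() (step 1
 * of the algorithm described above). Assume 4096 byte clusters, 65536 byte
 * compression blocks (16 clusters per cb) and a PAGE_SIZE of 4096 bytes:
 *
 *	page index 5  =>  byte offset 0x5000
 *	start_vcn = (0x5000 & ~0xffff) >> 12            = 0
 *	end_vcn   = ((0x6000 + 0xffff) & ~0xffff) >> 12 = 16
 *	nr_cbs    = (16 - 0) << 12 >> 16                = 1
 *	nr_pages  = (16 - 0) << 12 >> 12                = 16
 *
 * i.e. the single 64kiB compression block covering clusters 0 to 15 has to be
 * read and decompressed into 16 destination pages, of which page 5 is the one
 * the caller actually asked for (xpage).
 */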

View File

@ -1,159 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* debug.c - NTFS kernel debug support. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "debug.h"
/**
* __ntfs_warning - output a warning to the syslog
* @function: name of function outputting the warning
* @sb: super block of mounted ntfs filesystem
* @fmt: warning string containing format specifications
* @...: a variable number of arguments specified in @fmt
*
* Outputs a warning to the syslog for the mounted ntfs filesystem described
* by @sb.
*
* @fmt and the corresponding @... are the printf style format string containing
* the warning string and the corresponding format arguments, respectively.
*
* @function is the name of the function from which __ntfs_warning is being
* called.
*
* Note, you should be using debug.h::ntfs_warning(@sb, @fmt, @...) instead
* as this provides the @function parameter automatically.
*/
void __ntfs_warning(const char *function, const struct super_block *sb,
const char *fmt, ...)
{
struct va_format vaf;
va_list args;
int flen = 0;
#ifndef DEBUG
if (!printk_ratelimit())
return;
#endif
if (function)
flen = strlen(function);
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
if (sb)
pr_warn("(device %s): %s(): %pV\n",
sb->s_id, flen ? function : "", &vaf);
else
pr_warn("%s(): %pV\n", flen ? function : "", &vaf);
va_end(args);
}
/**
* __ntfs_error - output an error to the syslog
* @function: name of function outputting the error
* @sb: super block of mounted ntfs filesystem
* @fmt: error string containing format specifications
* @...: a variable number of arguments specified in @fmt
*
* Outputs an error to the syslog for the mounted ntfs filesystem described
* by @sb.
*
* @fmt and the corresponding @... are the printf style format string containing
* the error string and the corresponding format arguments, respectively.
*
* @function is the name of the function from which __ntfs_error is being
* called.
*
* Note, you should be using debug.h::ntfs_error(@sb, @fmt, @...) instead
* as this provides the @function parameter automatically.
*/
void __ntfs_error(const char *function, const struct super_block *sb,
const char *fmt, ...)
{
struct va_format vaf;
va_list args;
int flen = 0;
#ifndef DEBUG
if (!printk_ratelimit())
return;
#endif
if (function)
flen = strlen(function);
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
if (sb)
pr_err("(device %s): %s(): %pV\n",
sb->s_id, flen ? function : "", &vaf);
else
pr_err("%s(): %pV\n", flen ? function : "", &vaf);
va_end(args);
}
#ifdef DEBUG
/* If 1, output debug messages, and if 0, don't. */
int debug_msgs = 0;
void __ntfs_debug(const char *file, int line, const char *function,
const char *fmt, ...)
{
struct va_format vaf;
va_list args;
int flen = 0;
if (!debug_msgs)
return;
if (function)
flen = strlen(function);
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
pr_debug("(%s, %d): %s(): %pV", file, line, flen ? function : "", &vaf);
va_end(args);
}
/* Dump a runlist. Caller has to provide synchronisation for @rl. */
void ntfs_debug_dump_runlist(const runlist_element *rl)
{
int i;
const char *lcn_str[5] = { "LCN_HOLE ", "LCN_RL_NOT_MAPPED",
"LCN_ENOENT ", "LCN_unknown " };
if (!debug_msgs)
return;
pr_debug("Dumping runlist (values in hex):\n");
if (!rl) {
pr_debug("Run list not present.\n");
return;
}
pr_debug("VCN LCN Run length\n");
for (i = 0; ; i++) {
LCN lcn = (rl + i)->lcn;
if (lcn < (LCN)0) {
int index = -lcn - 1;
if (index > -LCN_ENOENT - 1)
index = 3;
pr_debug("%-16Lx %s %-16Lx%s\n",
(long long)(rl + i)->vcn, lcn_str[index],
(long long)(rl + i)->length,
(rl + i)->length ? "" :
" (runlist end)");
} else
pr_debug("%-16Lx %-16Lx %-16Lx%s\n",
(long long)(rl + i)->vcn,
(long long)(rl + i)->lcn,
(long long)(rl + i)->length,
(rl + i)->length ? "" :
" (runlist end)");
if (!(rl + i)->length)
break;
}
}
#endif

View File

@ -1,57 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* debug.h - NTFS kernel debug support. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_DEBUG_H
#define _LINUX_NTFS_DEBUG_H
#include <linux/fs.h>
#include "runlist.h"
#ifdef DEBUG
extern int debug_msgs;
extern __printf(4, 5)
void __ntfs_debug(const char *file, int line, const char *function,
const char *format, ...);
/**
* ntfs_debug - write a debug level message to syslog
* @f: a printf format string containing the message
* @...: the variables to substitute into @f
*
* ntfs_debug() writes a DEBUG level message to the syslog but only if the
* driver was compiled with -DDEBUG. Otherwise, the call turns into a NOP.
*/
#define ntfs_debug(f, a...) \
__ntfs_debug(__FILE__, __LINE__, __func__, f, ##a)
extern void ntfs_debug_dump_runlist(const runlist_element *rl);
#else /* !DEBUG */
#define ntfs_debug(fmt, ...) \
do { \
if (0) \
no_printk(fmt, ##__VA_ARGS__); \
} while (0)
#define ntfs_debug_dump_runlist(rl) do {} while (0)
#endif /* !DEBUG */
extern __printf(3, 4)
void __ntfs_warning(const char *function, const struct super_block *sb,
const char *fmt, ...);
#define ntfs_warning(sb, f, a...) __ntfs_warning(__func__, sb, f, ##a)
extern __printf(3, 4)
void __ntfs_error(const char *function, const struct super_block *sb,
const char *fmt, ...);
#define ntfs_error(sb, f, a...) __ntfs_error(__func__, sb, f, ##a)
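/*
 * Illustrative usage sketch, not part of the original header: driver code is
 * expected to call the ntfs_debug(), ntfs_warning() and ntfs_error() wrappers
 * above rather than the __ntfs_*() functions, since the wrappers supply
 * __func__ (and file/line for the debug variant) automatically. The function
 * and the messages below are hypothetical.
 */
static inline void ntfs_debug_example_usage(struct super_block *sb)
{
	ntfs_debug("Entering.");
	ntfs_warning(sb, "Unexpected attribute flags 0x%x, ignoring.", 0x40);
	ntfs_error(sb, "Failed to map runlist, error %i.", -EIO);
}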
#endif /* _LINUX_NTFS_DEBUG_H */

File diff suppressed because it is too large

View File

@ -1,34 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* dir.h - Defines for directory handling in NTFS Linux kernel driver. Part of
* the Linux-NTFS project.
*
* Copyright (c) 2002-2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_DIR_H
#define _LINUX_NTFS_DIR_H
#include "layout.h"
#include "inode.h"
#include "types.h"
/*
* ntfs_name is used to return the file name to the caller of
* ntfs_lookup_inode_by_name() in order for the caller (namei.c::ntfs_lookup())
* to be able to deal with dcache aliasing issues.
*/
typedef struct {
MFT_REF mref;
FILE_NAME_TYPE_FLAGS type;
u8 len;
ntfschar name[0];
} __attribute__ ((__packed__)) ntfs_name;
/* The little endian Unicode string $I30 as a global constant. */
extern ntfschar I30[5];
extern MFT_REF ntfs_lookup_inode_by_name(ntfs_inode *dir_ni,
const ntfschar *uname, const int uname_len, ntfs_name **res);
#endif /* _LINUX_NTFS_DIR_H */

View File

@ -1,79 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* endian.h - Defines for endianness handling in NTFS Linux kernel driver.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_ENDIAN_H
#define _LINUX_NTFS_ENDIAN_H
#include <asm/byteorder.h>
#include "types.h"
/*
* Signed endianness conversion functions.
*/
static inline s16 sle16_to_cpu(sle16 x)
{
return le16_to_cpu((__force le16)x);
}
static inline s32 sle32_to_cpu(sle32 x)
{
return le32_to_cpu((__force le32)x);
}
static inline s64 sle64_to_cpu(sle64 x)
{
return le64_to_cpu((__force le64)x);
}
static inline s16 sle16_to_cpup(sle16 *x)
{
return le16_to_cpu(*(__force le16*)x);
}
static inline s32 sle32_to_cpup(sle32 *x)
{
return le32_to_cpu(*(__force le32*)x);
}
static inline s64 sle64_to_cpup(sle64 *x)
{
return le64_to_cpu(*(__force le64*)x);
}
static inline sle16 cpu_to_sle16(s16 x)
{
return (__force sle16)cpu_to_le16(x);
}
static inline sle32 cpu_to_sle32(s32 x)
{
return (__force sle32)cpu_to_le32(x);
}
static inline sle64 cpu_to_sle64(s64 x)
{
return (__force sle64)cpu_to_le64(x);
}
static inline sle16 cpu_to_sle16p(s16 *x)
{
return (__force sle16)cpu_to_le16(*x);
}
static inline sle32 cpu_to_sle32p(s32 *x)
{
return (__force sle32)cpu_to_le32(*x);
}
static inline sle64 cpu_to_sle64p(s64 *x)
{
return (__force sle64)cpu_to_le64(*x);
}
#endif /* _LINUX_NTFS_ENDIAN_H */

File diff suppressed because it is too large

View File

@ -1,440 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* index.c - NTFS kernel index handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2004-2005 Anton Altaparmakov
*/
#include <linux/slab.h>
#include "aops.h"
#include "collate.h"
#include "debug.h"
#include "index.h"
#include "ntfs.h"
/**
* ntfs_index_ctx_get - allocate and initialize a new index context
* @idx_ni: ntfs index inode with which to initialize the context
*
* Allocate a new index context, initialize it with @idx_ni and return it.
* Return NULL if allocation failed.
*
* Locking: Caller must hold i_mutex on the index inode.
*/
ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni)
{
ntfs_index_context *ictx;
ictx = kmem_cache_alloc(ntfs_index_ctx_cache, GFP_NOFS);
if (ictx)
*ictx = (ntfs_index_context){ .idx_ni = idx_ni };
return ictx;
}
/**
* ntfs_index_ctx_put - release an index context
* @ictx: index context to free
*
* Release the index context @ictx, releasing all associated resources.
*
* Locking: Caller must hold i_mutex on the index inode.
*/
void ntfs_index_ctx_put(ntfs_index_context *ictx)
{
if (ictx->entry) {
if (ictx->is_in_root) {
if (ictx->actx)
ntfs_attr_put_search_ctx(ictx->actx);
if (ictx->base_ni)
unmap_mft_record(ictx->base_ni);
} else {
struct page *page = ictx->page;
if (page) {
BUG_ON(!PageLocked(page));
unlock_page(page);
ntfs_unmap_page(page);
}
}
}
kmem_cache_free(ntfs_index_ctx_cache, ictx);
return;
}
/**
* ntfs_index_lookup - find a key in an index and return its index entry
* @key: [IN] key for which to search in the index
* @key_len: [IN] length of @key in bytes
* @ictx: [IN/OUT] context describing the index and the returned entry
*
* Before calling ntfs_index_lookup(), @ictx must have been obtained from a
* call to ntfs_index_ctx_get().
*
* Look for the @key in the index specified by the index lookup context @ictx.
* ntfs_index_lookup() walks the contents of the index looking for the @key.
*
* If the @key is found in the index, 0 is returned and @ictx is setup to
* describe the index entry containing the matching @key. @ictx->entry is the
* index entry and @ictx->data and @ictx->data_len are the index entry data and
* its length in bytes, respectively.
*
* If the @key is not found in the index, -ENOENT is returned and @ictx is
* setup to describe the index entry whose key collates immediately after the
* search @key, i.e. this is the position in the index at which an index entry
* with a key of @key would need to be inserted.
*
* If an error occurs return the negative error code and @ictx is left
* untouched.
*
* When finished with the entry and its data, call ntfs_index_ctx_put() to free
* the context and other associated resources.
*
* If the index entry was modified, call flush_dcache_index_entry_page()
* immediately after the modification and either ntfs_index_entry_mark_dirty()
* or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
* ensure that the changes are written to disk.
*
* Locking: - Caller must hold i_mutex on the index inode.
* - Each page cache page in the index allocation mapping must be
* locked whilst being accessed otherwise we may find a corrupt
* page due to it being under ->writepage at the moment which
* applies the mst protection fixups before writing out and then
* removes them again after the write is complete after which it
* unlocks the page.
*/
int ntfs_index_lookup(const void *key, const int key_len,
ntfs_index_context *ictx)
{
VCN vcn, old_vcn;
ntfs_inode *idx_ni = ictx->idx_ni;
ntfs_volume *vol = idx_ni->vol;
struct super_block *sb = vol->sb;
ntfs_inode *base_ni = idx_ni->ext.base_ntfs_ino;
MFT_RECORD *m;
INDEX_ROOT *ir;
INDEX_ENTRY *ie;
INDEX_ALLOCATION *ia;
u8 *index_end, *kaddr;
ntfs_attr_search_ctx *actx;
struct address_space *ia_mapping;
struct page *page;
int rc, err = 0;
ntfs_debug("Entering.");
BUG_ON(!NInoAttr(idx_ni));
BUG_ON(idx_ni->type != AT_INDEX_ALLOCATION);
BUG_ON(idx_ni->nr_extents != -1);
BUG_ON(!base_ni);
BUG_ON(!key);
BUG_ON(key_len <= 0);
if (!ntfs_is_collation_rule_supported(
idx_ni->itype.index.collation_rule)) {
ntfs_error(sb, "Index uses unsupported collation rule 0x%x. "
"Aborting lookup.", le32_to_cpu(
idx_ni->itype.index.collation_rule));
return -EOPNOTSUPP;
}
/* Get hold of the mft record for the index inode. */
m = map_mft_record(base_ni);
if (IS_ERR(m)) {
ntfs_error(sb, "map_mft_record() failed with error code %ld.",
-PTR_ERR(m));
return PTR_ERR(m);
}
actx = ntfs_attr_get_search_ctx(base_ni, m);
if (unlikely(!actx)) {
err = -ENOMEM;
goto err_out;
}
/* Find the index root attribute in the mft record. */
err = ntfs_attr_lookup(AT_INDEX_ROOT, idx_ni->name, idx_ni->name_len,
CASE_SENSITIVE, 0, NULL, 0, actx);
if (unlikely(err)) {
if (err == -ENOENT) {
ntfs_error(sb, "Index root attribute missing in inode "
"0x%lx.", idx_ni->mft_no);
err = -EIO;
}
goto err_out;
}
/* Get to the index root value (it has been verified in read_inode). */
ir = (INDEX_ROOT*)((u8*)actx->attr +
le16_to_cpu(actx->attr->data.resident.value_offset));
index_end = (u8*)&ir->index + le32_to_cpu(ir->index.index_length);
/* The first index entry. */
ie = (INDEX_ENTRY*)((u8*)&ir->index +
le32_to_cpu(ir->index.entries_offset));
/*
* Loop until we exceed valid memory (corruption case) or until we
* reach the last entry.
*/
for (;; ie = (INDEX_ENTRY*)((u8*)ie + le16_to_cpu(ie->length))) {
/* Bounds checks. */
if ((u8*)ie < (u8*)actx->mrec || (u8*)ie +
sizeof(INDEX_ENTRY_HEADER) > index_end ||
(u8*)ie + le16_to_cpu(ie->length) > index_end)
goto idx_err_out;
/*
* The last entry cannot contain a key. It can however contain
* a pointer to a child node in the B+tree so we just break out.
*/
if (ie->flags & INDEX_ENTRY_END)
break;
/* Further bounds checks. */
if ((u32)sizeof(INDEX_ENTRY_HEADER) +
le16_to_cpu(ie->key_length) >
le16_to_cpu(ie->data.vi.data_offset) ||
(u32)le16_to_cpu(ie->data.vi.data_offset) +
le16_to_cpu(ie->data.vi.data_length) >
le16_to_cpu(ie->length))
goto idx_err_out;
/* If the keys match perfectly, we setup @ictx and return 0. */
if ((key_len == le16_to_cpu(ie->key_length)) && !memcmp(key,
&ie->key, key_len)) {
ir_done:
ictx->is_in_root = true;
ictx->ir = ir;
ictx->actx = actx;
ictx->base_ni = base_ni;
ictx->ia = NULL;
ictx->page = NULL;
done:
ictx->entry = ie;
ictx->data = (u8*)ie +
le16_to_cpu(ie->data.vi.data_offset);
ictx->data_len = le16_to_cpu(ie->data.vi.data_length);
ntfs_debug("Done.");
return err;
}
/*
* Not a perfect match, need to do full blown collation so we
* know which way in the B+tree we have to go.
*/
rc = ntfs_collate(vol, idx_ni->itype.index.collation_rule, key,
key_len, &ie->key, le16_to_cpu(ie->key_length));
/*
* If @key collates before the key of the current entry, there
* is definitely no such key in this index but we might need to
* descend into the B+tree so we just break out of the loop.
*/
if (rc == -1)
break;
/*
* A match should never happen as the memcmp() call should have
* caught it, but we still treat it correctly.
*/
if (!rc)
goto ir_done;
/* The keys are not equal, continue the search. */
}
/*
* We have finished with this index without success. Check for the
* presence of a child node and if not present setup @ictx and return
* -ENOENT.
*/
if (!(ie->flags & INDEX_ENTRY_NODE)) {
ntfs_debug("Entry not found.");
err = -ENOENT;
goto ir_done;
} /* Child node present, descend into it. */
/* Consistency check: Verify that an index allocation exists. */
if (!NInoIndexAllocPresent(idx_ni)) {
ntfs_error(sb, "No index allocation attribute but index entry "
"requires one. Inode 0x%lx is corrupt or "
"driver bug.", idx_ni->mft_no);
goto err_out;
}
/* Get the starting vcn of the index_block holding the child node. */
vcn = sle64_to_cpup((sle64*)((u8*)ie + le16_to_cpu(ie->length) - 8));
ia_mapping = VFS_I(idx_ni)->i_mapping;
/*
* We are done with the index root and the mft record. Release them,
* otherwise we deadlock with ntfs_map_page().
*/
ntfs_attr_put_search_ctx(actx);
unmap_mft_record(base_ni);
m = NULL;
actx = NULL;
descend_into_child_node:
/*
* Convert vcn to index into the index allocation attribute in units
* of PAGE_SIZE and map the page cache page, reading it from
* disk if necessary.
*/
page = ntfs_map_page(ia_mapping, vcn <<
idx_ni->itype.index.vcn_size_bits >> PAGE_SHIFT);
if (IS_ERR(page)) {
ntfs_error(sb, "Failed to map index page, error %ld.",
-PTR_ERR(page));
err = PTR_ERR(page);
goto err_out;
}
lock_page(page);
kaddr = (u8*)page_address(page);
fast_descend_into_child_node:
/* Get to the index allocation block. */
ia = (INDEX_ALLOCATION*)(kaddr + ((vcn <<
idx_ni->itype.index.vcn_size_bits) & ~PAGE_MASK));
/* Bounds checks. */
if ((u8*)ia < kaddr || (u8*)ia > kaddr + PAGE_SIZE) {
ntfs_error(sb, "Out of bounds check failed. Corrupt inode "
"0x%lx or driver bug.", idx_ni->mft_no);
goto unm_err_out;
}
/* Catch multi sector transfer fixup errors. */
if (unlikely(!ntfs_is_indx_record(ia->magic))) {
ntfs_error(sb, "Index record with vcn 0x%llx is corrupt. "
"Corrupt inode 0x%lx. Run chkdsk.",
(long long)vcn, idx_ni->mft_no);
goto unm_err_out;
}
if (sle64_to_cpu(ia->index_block_vcn) != vcn) {
ntfs_error(sb, "Actual VCN (0x%llx) of index buffer is "
"different from expected VCN (0x%llx). Inode "
"0x%lx is corrupt or driver bug.",
(unsigned long long)
sle64_to_cpu(ia->index_block_vcn),
(unsigned long long)vcn, idx_ni->mft_no);
goto unm_err_out;
}
if (le32_to_cpu(ia->index.allocated_size) + 0x18 !=
idx_ni->itype.index.block_size) {
ntfs_error(sb, "Index buffer (VCN 0x%llx) of inode 0x%lx has "
"a size (%u) differing from the index "
"specified size (%u). Inode is corrupt or "
"driver bug.", (unsigned long long)vcn,
idx_ni->mft_no,
le32_to_cpu(ia->index.allocated_size) + 0x18,
idx_ni->itype.index.block_size);
goto unm_err_out;
}
index_end = (u8*)ia + idx_ni->itype.index.block_size;
if (index_end > kaddr + PAGE_SIZE) {
ntfs_error(sb, "Index buffer (VCN 0x%llx) of inode 0x%lx "
"crosses page boundary. Impossible! Cannot "
"access! This is probably a bug in the "
"driver.", (unsigned long long)vcn,
idx_ni->mft_no);
goto unm_err_out;
}
index_end = (u8*)&ia->index + le32_to_cpu(ia->index.index_length);
if (index_end > (u8*)ia + idx_ni->itype.index.block_size) {
ntfs_error(sb, "Size of index buffer (VCN 0x%llx) of inode "
"0x%lx exceeds maximum size.",
(unsigned long long)vcn, idx_ni->mft_no);
goto unm_err_out;
}
/* The first index entry. */
ie = (INDEX_ENTRY*)((u8*)&ia->index +
le32_to_cpu(ia->index.entries_offset));
/*
* Iterate similar to above big loop but applied to index buffer, thus
* loop until we exceed valid memory (corruption case) or until we
* reach the last entry.
*/
for (;; ie = (INDEX_ENTRY*)((u8*)ie + le16_to_cpu(ie->length))) {
/* Bounds checks. */
if ((u8*)ie < (u8*)ia || (u8*)ie +
sizeof(INDEX_ENTRY_HEADER) > index_end ||
(u8*)ie + le16_to_cpu(ie->length) > index_end) {
ntfs_error(sb, "Index entry out of bounds in inode "
"0x%lx.", idx_ni->mft_no);
goto unm_err_out;
}
/*
* The last entry cannot contain a key. It can however contain
* a pointer to a child node in the B+tree so we just break out.
*/
if (ie->flags & INDEX_ENTRY_END)
break;
/* Further bounds checks. */
if ((u32)sizeof(INDEX_ENTRY_HEADER) +
le16_to_cpu(ie->key_length) >
le16_to_cpu(ie->data.vi.data_offset) ||
(u32)le16_to_cpu(ie->data.vi.data_offset) +
le16_to_cpu(ie->data.vi.data_length) >
le16_to_cpu(ie->length)) {
ntfs_error(sb, "Index entry out of bounds in inode "
"0x%lx.", idx_ni->mft_no);
goto unm_err_out;
}
/* If the keys match perfectly, we setup @ictx and return 0. */
if ((key_len == le16_to_cpu(ie->key_length)) && !memcmp(key,
&ie->key, key_len)) {
ia_done:
ictx->is_in_root = false;
ictx->actx = NULL;
ictx->base_ni = NULL;
ictx->ia = ia;
ictx->page = page;
goto done;
}
/*
* Not a perfect match, need to do full blown collation so we
* know which way in the B+tree we have to go.
*/
rc = ntfs_collate(vol, idx_ni->itype.index.collation_rule, key,
key_len, &ie->key, le16_to_cpu(ie->key_length));
/*
* If @key collates before the key of the current entry, there
* is definitely no such key in this index but we might need to
* descend into the B+tree so we just break out of the loop.
*/
if (rc == -1)
break;
/*
* A match should never happen as the memcmp() call should have
* caught it, but we still treat it correctly.
*/
if (!rc)
goto ia_done;
/* The keys are not equal, continue the search. */
}
/*
* We have finished with this index buffer without success. Check for
* the presence of a child node and if not present return -ENOENT.
*/
if (!(ie->flags & INDEX_ENTRY_NODE)) {
ntfs_debug("Entry not found.");
err = -ENOENT;
goto ia_done;
}
if ((ia->index.flags & NODE_MASK) == LEAF_NODE) {
ntfs_error(sb, "Index entry with child node found in a leaf "
"node in inode 0x%lx.", idx_ni->mft_no);
goto unm_err_out;
}
/* Child node present, descend into it. */
old_vcn = vcn;
vcn = sle64_to_cpup((sle64*)((u8*)ie + le16_to_cpu(ie->length) - 8));
if (vcn >= 0) {
/*
* If vcn is in the same page cache page as old_vcn we recycle
* the mapped page.
*/
if (old_vcn << vol->cluster_size_bits >>
PAGE_SHIFT == vcn <<
vol->cluster_size_bits >>
PAGE_SHIFT)
goto fast_descend_into_child_node;
unlock_page(page);
ntfs_unmap_page(page);
goto descend_into_child_node;
}
ntfs_error(sb, "Negative child node vcn in inode 0x%lx.",
idx_ni->mft_no);
unm_err_out:
unlock_page(page);
ntfs_unmap_page(page);
err_out:
if (!err)
err = -EIO;
if (actx)
ntfs_attr_put_search_ctx(actx);
if (m)
unmap_mft_record(base_ni);
return err;
idx_err_out:
ntfs_error(sb, "Corrupt index. Aborting lookup.");
goto err_out;
}
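/*
 * Illustrative usage sketch, not part of the original file: the lookup
 * pattern described in the kernel-doc of ntfs_index_lookup() above. The
 * caller, the opaque key and the way the result is consumed are assumptions
 * made for illustration only. The caller must hold i_mutex on @idx_ni.
 */
static int example_index_lookup(ntfs_inode *idx_ni, const void *key,
		const int key_len)
{
	ntfs_index_context *ictx;
	int err;

	ictx = ntfs_index_ctx_get(idx_ni);
	if (!ictx)
		return -ENOMEM;
	err = ntfs_index_lookup(key, key_len, ictx);
	if (!err) {
		/* Exact match: ictx->data / ictx->data_len describe the data. */
		ntfs_debug("Found entry, data length 0x%x.",
				(unsigned int)ictx->data_len);
	} else if (err == -ENOENT) {
		/* ictx->entry is where an entry with @key would be inserted. */
		ntfs_debug("Entry not present.");
	}
	/* Releases the search context, mft record or page as appropriate. */
	ntfs_index_ctx_put(ictx);
	return err;
}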

View File

@ -1,134 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* index.h - Defines for NTFS kernel index handling. Part of the Linux-NTFS
* project.
*
* Copyright (c) 2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_INDEX_H
#define _LINUX_NTFS_INDEX_H
#include <linux/fs.h>
#include "types.h"
#include "layout.h"
#include "inode.h"
#include "attrib.h"
#include "mft.h"
#include "aops.h"
/**
* @idx_ni: index inode containing the @entry described by this context
* @entry: index entry (points into @ir or @ia)
* @data: index entry data (points into @entry)
* @data_len: length in bytes of @data
* @is_in_root: 'true' if @entry is in @ir and 'false' if it is in @ia
* @ir: index root if @is_in_root and NULL otherwise
* @actx: attribute search context if @is_in_root and NULL otherwise
* @base_ni: base inode if @is_in_root and NULL otherwise
* @ia: index block if @is_in_root is 'false' and NULL otherwise
* @page: page if @is_in_root is 'false' and NULL otherwise
*
* @idx_ni is the index inode this context belongs to.
*
* @entry is the index entry described by this context. @data and @data_len
* are the index entry data and its length in bytes, respectively. @data
* simply points into @entry. This is probably what the user is interested in.
*
* If @is_in_root is 'true', @entry is in the index root attribute @ir described
* by the attribute search context @actx and the base inode @base_ni. @ia and
* @page are NULL in this case.
*
* If @is_in_root is 'false', @entry is in the index allocation attribute and @ia
* and @page point to the index allocation block and the mapped, locked page it
* is in, respectively. @ir, @actx and @base_ni are NULL in this case.
*
* To obtain a context call ntfs_index_ctx_get().
*
* We use this context to allow ntfs_index_lookup() to return the found index
* @entry and its @data without having to allocate a buffer and copy the @entry
* and/or its @data into it.
*
* When finished with the @entry and its @data, call ntfs_index_ctx_put() to
* free the context and other associated resources.
*
* If the index entry was modified, call flush_dcache_index_entry_page()
* immediately after the modification and either ntfs_index_entry_mark_dirty()
* or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
* ensure that the changes are written to disk.
*/
typedef struct {
ntfs_inode *idx_ni;
INDEX_ENTRY *entry;
void *data;
u16 data_len;
bool is_in_root;
INDEX_ROOT *ir;
ntfs_attr_search_ctx *actx;
ntfs_inode *base_ni;
INDEX_ALLOCATION *ia;
struct page *page;
} ntfs_index_context;
extern ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni);
extern void ntfs_index_ctx_put(ntfs_index_context *ictx);
extern int ntfs_index_lookup(const void *key, const int key_len,
ntfs_index_context *ictx);
#ifdef NTFS_RW
/**
* ntfs_index_entry_flush_dcache_page - flush_dcache_page() for index entries
* @ictx: ntfs index context describing the index entry
*
* Call flush_dcache_page() for the page in which an index entry resides.
*
* This must be called every time an index entry is modified, just after the
* modification.
*
* If the index entry is in the index root attribute, simply flush the page
* containing the mft record containing the index root attribute.
*
* If the index entry is in an index block belonging to the index allocation
* attribute, simply flush the page cache page containing the index block.
*/
static inline void ntfs_index_entry_flush_dcache_page(ntfs_index_context *ictx)
{
if (ictx->is_in_root)
flush_dcache_mft_record_page(ictx->actx->ntfs_ino);
else
flush_dcache_page(ictx->page);
}
/**
* ntfs_index_entry_mark_dirty - mark an index entry dirty
* @ictx: ntfs index context describing the index entry
*
* Mark the index entry described by the index entry context @ictx dirty.
*
* If the index entry is in the index root attribute, simply mark the mft
* record containing the index root attribute dirty. This ensures the mft
* record, and hence the index root attribute, will be written out to disk
* later.
*
* If the index entry is in an index block belonging to the index allocation
* attribute, mark the buffers belonging to the index record as well as the
* page cache page the index block is in dirty. This automatically marks the
* VFS inode of the ntfs index inode to which the index entry belongs dirty,
* too (I_DIRTY_PAGES) and this in turn ensures the page buffers, and hence the
* dirty index block, will be written out to disk later.
*/
static inline void ntfs_index_entry_mark_dirty(ntfs_index_context *ictx)
{
if (ictx->is_in_root)
mark_mft_record_dirty(ictx->actx->ntfs_ino);
else
mark_ntfs_record_dirty(ictx->page,
(u8*)ictx->ia - (u8*)page_address(ictx->page));
}
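/*
 * Illustrative sketch, not part of the original header: the modify, flush,
 * mark dirty, release sequence described in the comments above. Assumes
 * @ictx was returned by a successful ntfs_index_lookup() and that the new
 * data does not change the size of the entry (new_data_len <= ictx->data_len);
 * the helper name is an assumption made for illustration only.
 */
static inline void example_update_index_entry_data(ntfs_index_context *ictx,
		const void *new_data, u16 new_data_len)
{
	memcpy(ictx->data, new_data, new_data_len);
	ntfs_index_entry_flush_dcache_page(ictx);
	ntfs_index_entry_mark_dirty(ictx);
	ntfs_index_ctx_put(ictx);
}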
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_INDEX_H */

File diff suppressed because it is too large

View File

@ -1,310 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* inode.h - Defines for inode structures NTFS Linux kernel driver. Part of
* the Linux-NTFS project.
*
* Copyright (c) 2001-2007 Anton Altaparmakov
* Copyright (c) 2002 Richard Russon
*/
#ifndef _LINUX_NTFS_INODE_H
#define _LINUX_NTFS_INODE_H
#include <linux/atomic.h>
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/seq_file.h>
#include "layout.h"
#include "volume.h"
#include "types.h"
#include "runlist.h"
#include "debug.h"
typedef struct _ntfs_inode ntfs_inode;
/*
* The NTFS in-memory inode structure. It is just used as an extension to the
* fields already provided in the VFS inode.
*/
struct _ntfs_inode {
rwlock_t size_lock; /* Lock serializing access to inode sizes. */
s64 initialized_size; /* Copy from the attribute record. */
s64 allocated_size; /* Copy from the attribute record. */
unsigned long state; /* NTFS specific flags describing this inode.
See ntfs_inode_state_bits below. */
unsigned long mft_no; /* Number of the mft record / inode. */
u16 seq_no; /* Sequence number of the mft record. */
atomic_t count; /* Inode reference count for book keeping. */
ntfs_volume *vol; /* Pointer to the ntfs volume of this inode. */
/*
* If NInoAttr() is true, the below fields describe the attribute which
* this fake inode belongs to. The actual inode of this attribute is
* pointed to by base_ntfs_ino and nr_extents is always set to -1 (see
* below). For real inodes, we also set the type (AT_DATA for files and
* AT_INDEX_ALLOCATION for directories), with the name = NULL and
* name_len = 0 for files and name = I30 (global constant) and
* name_len = 4 for directories.
*/
ATTR_TYPE type; /* Attribute type of this fake inode. */
ntfschar *name; /* Attribute name of this fake inode. */
u32 name_len; /* Attribute name length of this fake inode. */
runlist runlist; /* If state has the NI_NonResident bit set,
the runlist of the unnamed data attribute
(if a file) or of the index allocation
attribute (directory) or of the attribute
described by the fake inode (if NInoAttr()).
If runlist.rl is NULL, the runlist has not
been read in yet or has been unmapped. If
NI_NonResident is clear, the attribute is
resident (file and fake inode) or there is
no $I30 index allocation attribute
(small directory). In the latter case
runlist.rl is always NULL.*/
/*
* The following fields are only valid for real inodes and extent
* inodes.
*/
struct mutex mrec_lock; /* Lock for serializing access to the
mft record belonging to this inode. */
struct page *page; /* The page containing the mft record of the
inode. This should only be touched by the
(un)map_mft_record*() functions. */
int page_ofs; /* Offset into the page at which the mft record
begins. This should only be touched by the
(un)map_mft_record*() functions. */
/*
* Attribute list support (only for use by the attribute lookup
* functions). Setup during read_inode for all inodes with attribute
* lists. Only valid if NI_AttrList is set in state, and attr_list_rl is
* further only valid if NI_AttrListNonResident is set.
*/
u32 attr_list_size; /* Length of attribute list value in bytes. */
u8 *attr_list; /* Attribute list value itself. */
runlist attr_list_rl; /* Run list for the attribute list value. */
union {
struct { /* It is a directory, $MFT, or an index inode. */
u32 block_size; /* Size of an index block. */
u32 vcn_size; /* Size of a vcn in this
index. */
COLLATION_RULE collation_rule; /* The collation rule
for the index. */
u8 block_size_bits; /* Log2 of the above. */
u8 vcn_size_bits; /* Log2 of the above. */
} index;
struct { /* It is a compressed/sparse file/attribute inode. */
s64 size; /* Copy of compressed_size from
$DATA. */
u32 block_size; /* Size of a compression block
(cb). */
u8 block_size_bits; /* Log2 of the size of a cb. */
u8 block_clusters; /* Number of clusters per cb. */
} compressed;
} itype;
struct mutex extent_lock; /* Lock for accessing/modifying the
below. */
s32 nr_extents; /* For a base mft record, the number of attached extent
inodes (0 if none), for extent records and for fake
inodes describing an attribute this is -1. */
union { /* This union is only used if nr_extents != 0. */
ntfs_inode **extent_ntfs_inos; /* For nr_extents > 0, array of
the ntfs inodes of the extent
mft records belonging to
this base inode which have
been loaded. */
ntfs_inode *base_ntfs_ino; /* For nr_extents == -1, the
ntfs inode of the base mft
record. For fake inodes, the
real (base) inode to which
the attribute belongs. */
} ext;
};
/*
* Defined bits for the state field in the ntfs_inode structure.
* (f) = files only, (d) = directories only, (a) = attributes/fake inodes only
*/
typedef enum {
NI_Dirty, /* 1: Mft record needs to be written to disk. */
NI_AttrList, /* 1: Mft record contains an attribute list. */
NI_AttrListNonResident, /* 1: Attribute list is non-resident. Implies
NI_AttrList is set. */
NI_Attr, /* 1: Fake inode for attribute i/o.
0: Real inode or extent inode. */
NI_MstProtected, /* 1: Attribute is protected by MST fixups.
0: Attribute is not protected by fixups. */
NI_NonResident, /* 1: Unnamed data attr is non-resident (f).
1: Attribute is non-resident (a). */
NI_IndexAllocPresent = NI_NonResident, /* 1: $I30 index alloc attr is
present (d). */
NI_Compressed, /* 1: Unnamed data attr is compressed (f).
1: Create compressed files by default (d).
1: Attribute is compressed (a). */
NI_Encrypted, /* 1: Unnamed data attr is encrypted (f).
1: Create encrypted files by default (d).
1: Attribute is encrypted (a). */
NI_Sparse, /* 1: Unnamed data attr is sparse (f).
1: Create sparse files by default (d).
1: Attribute is sparse (a). */
NI_SparseDisabled, /* 1: May not create sparse regions. */
NI_TruncateFailed, /* 1: Last ntfs_truncate() call failed. */
} ntfs_inode_state_bits;
/*
* NOTE: We should be adding dirty mft records to a list somewhere and they
* should be independent of the (ntfs/vfs) inode structure so that an inode can
* be removed but the record can be left dirty for syncing later.
*/
/*
* Macro tricks to expand the NInoFoo(), NInoSetFoo(), and NInoClearFoo()
* functions.
*/
#define NINO_FNS(flag) \
static inline int NIno##flag(ntfs_inode *ni) \
{ \
return test_bit(NI_##flag, &(ni)->state); \
} \
static inline void NInoSet##flag(ntfs_inode *ni) \
{ \
set_bit(NI_##flag, &(ni)->state); \
} \
static inline void NInoClear##flag(ntfs_inode *ni) \
{ \
clear_bit(NI_##flag, &(ni)->state); \
}
/*
* As above for NInoTestSetFoo() and NInoTestClearFoo().
*/
#define TAS_NINO_FNS(flag) \
static inline int NInoTestSet##flag(ntfs_inode *ni) \
{ \
return test_and_set_bit(NI_##flag, &(ni)->state); \
} \
static inline int NInoTestClear##flag(ntfs_inode *ni) \
{ \
return test_and_clear_bit(NI_##flag, &(ni)->state); \
}
/* Emit the ntfs inode bitops functions. */
NINO_FNS(Dirty)
TAS_NINO_FNS(Dirty)
NINO_FNS(AttrList)
NINO_FNS(AttrListNonResident)
NINO_FNS(Attr)
NINO_FNS(MstProtected)
NINO_FNS(NonResident)
NINO_FNS(IndexAllocPresent)
NINO_FNS(Compressed)
NINO_FNS(Encrypted)
NINO_FNS(Sparse)
NINO_FNS(SparseDisabled)
NINO_FNS(TruncateFailed)
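/*
 * For illustration (not in the original header), NINO_FNS(Dirty) together
 * with TAS_NINO_FNS(Dirty) above expands to the following helpers:
 *
 *	NInoDirty(ni)		-> test_bit(NI_Dirty, &(ni)->state)
 *	NInoSetDirty(ni)	-> set_bit(NI_Dirty, &(ni)->state)
 *	NInoClearDirty(ni)	-> clear_bit(NI_Dirty, &(ni)->state)
 *	NInoTestSetDirty(ni)	-> test_and_set_bit(NI_Dirty, &(ni)->state)
 *	NInoTestClearDirty(ni)	-> test_and_clear_bit(NI_Dirty, &(ni)->state)
 *
 * The remaining NINO_FNS() lines emit the same trio of helpers for each of
 * the other state bits.
 */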
/*
* The full structure containing a ntfs_inode and a vfs struct inode. Used for
* all real and fake inodes but not for extent inodes which lack the vfs struct
* inode.
*/
typedef struct {
ntfs_inode ntfs_inode;
struct inode vfs_inode; /* The vfs inode structure. */
} big_ntfs_inode;
/**
* NTFS_I - return the ntfs inode given a vfs inode
* @inode: VFS inode
*
* NTFS_I() returns the ntfs inode associated with the VFS @inode.
*/
static inline ntfs_inode *NTFS_I(struct inode *inode)
{
return (ntfs_inode *)container_of(inode, big_ntfs_inode, vfs_inode);
}
static inline struct inode *VFS_I(ntfs_inode *ni)
{
return &((big_ntfs_inode *)ni)->vfs_inode;
}
/**
* ntfs_attr - ntfs in memory attribute structure
* @mft_no: mft record number of the base mft record of this attribute
* @name: Unicode name of the attribute (NULL if unnamed)
* @name_len: length of @name in Unicode characters (0 if unnamed)
* @type: attribute type (see layout.h)
*
* This structure exists only to provide a small structure for the
* ntfs_{attr_}iget()/ntfs_test_inode()/ntfs_init_locked_inode() mechanism.
*
* NOTE: Elements are ordered by size to make the structure as compact as
* possible on all architectures.
*/
typedef struct {
unsigned long mft_no;
ntfschar *name;
u32 name_len;
ATTR_TYPE type;
} ntfs_attr;
extern int ntfs_test_inode(struct inode *vi, void *data);
extern struct inode *ntfs_iget(struct super_block *sb, unsigned long mft_no);
extern struct inode *ntfs_attr_iget(struct inode *base_vi, ATTR_TYPE type,
ntfschar *name, u32 name_len);
extern struct inode *ntfs_index_iget(struct inode *base_vi, ntfschar *name,
u32 name_len);
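/*
 * Illustrative sketch, not part of the original header: opening the inode of
 * a named $DATA attribute (a named stream) via ntfs_attr_iget(), which is the
 * mechanism the small ntfs_attr structure above exists to support. The helper
 * name and the assumption that failures are reported as an ERR_PTR() encoded
 * inode are illustrative; check the returned pointer with IS_ERR() accordingly.
 */
static inline struct inode *example_open_named_stream(struct inode *base_vi,
		ntfschar *stream_name, u32 stream_name_len)
{
	return ntfs_attr_iget(base_vi, AT_DATA, stream_name, stream_name_len);
}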
extern struct inode *ntfs_alloc_big_inode(struct super_block *sb);
extern void ntfs_free_big_inode(struct inode *inode);
extern void ntfs_evict_big_inode(struct inode *vi);
extern void __ntfs_init_inode(struct super_block *sb, ntfs_inode *ni);
static inline void ntfs_init_big_inode(struct inode *vi)
{
ntfs_inode *ni = NTFS_I(vi);
ntfs_debug("Entering.");
__ntfs_init_inode(vi->i_sb, ni);
ni->mft_no = vi->i_ino;
}
extern ntfs_inode *ntfs_new_extent_inode(struct super_block *sb,
unsigned long mft_no);
extern void ntfs_clear_extent_inode(ntfs_inode *ni);
extern int ntfs_read_inode_mount(struct inode *vi);
extern int ntfs_show_options(struct seq_file *sf, struct dentry *root);
#ifdef NTFS_RW
extern int ntfs_truncate(struct inode *vi);
extern void ntfs_truncate_vfs(struct inode *vi);
extern int ntfs_setattr(struct mnt_idmap *idmap,
struct dentry *dentry, struct iattr *attr);
extern int __ntfs_write_inode(struct inode *vi, int sync);
static inline void ntfs_commit_inode(struct inode *vi)
{
if (!is_bad_inode(vi))
__ntfs_write_inode(vi, 1);
return;
}
#else
static inline void ntfs_truncate_vfs(struct inode *vi) {}
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_INODE_H */

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,131 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* lcnalloc.h - Exports for NTFS kernel cluster (de)allocation. Part of the
* Linux-NTFS project.
*
* Copyright (c) 2004-2005 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_LCNALLOC_H
#define _LINUX_NTFS_LCNALLOC_H
#ifdef NTFS_RW
#include <linux/fs.h>
#include "attrib.h"
#include "types.h"
#include "inode.h"
#include "runlist.h"
#include "volume.h"
typedef enum {
FIRST_ZONE = 0, /* For sanity checking. */
MFT_ZONE = 0, /* Allocate from $MFT zone. */
DATA_ZONE = 1, /* Allocate from $DATA zone. */
LAST_ZONE = 1, /* For sanity checking. */
} NTFS_CLUSTER_ALLOCATION_ZONES;
extern runlist_element *ntfs_cluster_alloc(ntfs_volume *vol,
const VCN start_vcn, const s64 count, const LCN start_lcn,
const NTFS_CLUSTER_ALLOCATION_ZONES zone,
const bool is_extension);
extern s64 __ntfs_cluster_free(ntfs_inode *ni, const VCN start_vcn,
s64 count, ntfs_attr_search_ctx *ctx, const bool is_rollback);
/**
* ntfs_cluster_free - free clusters on an ntfs volume
* @ni: ntfs inode whose runlist describes the clusters to free
* @start_vcn: vcn in the runlist of @ni at which to start freeing clusters
* @count: number of clusters to free or -1 for all clusters
* @ctx: active attribute search context if present or NULL if not
*
* Free @count clusters starting at the cluster @start_vcn in the runlist
* described by the ntfs inode @ni.
*
* If @count is -1, all clusters from @start_vcn to the end of the runlist are
* deallocated. Thus, to completely free all clusters in a runlist, use
* @start_vcn = 0 and @count = -1.
*
* If @ctx is specified, it is an active search context of @ni and its base mft
* record. This is needed when ntfs_cluster_free() encounters unmapped runlist
* fragments and allows their mapping. If you do not have the mft record
* mapped, you can specify @ctx as NULL and ntfs_cluster_free() will perform
* the necessary mapping and unmapping.
*
* Note, ntfs_cluster_free() saves the state of @ctx on entry and restores it
* before returning. Thus, @ctx will be left pointing to the same attribute on
* return as on entry. However, the actual pointers in @ctx may point to
* different memory locations on return, so you must remember to reset any
* cached pointers from the @ctx, i.e. after the call to ntfs_cluster_free(),
* you will probably want to do:
* m = ctx->mrec;
* a = ctx->attr;
* Assuming you cache ctx->attr in a variable @a of type ATTR_RECORD * and that
* you cache ctx->mrec in a variable @m of type MFT_RECORD *.
*
* Note, ntfs_cluster_free() does not modify the runlist, so you have to remove
* from the runlist or mark sparse the freed runs later.
*
* Return the number of deallocated clusters (not counting sparse ones) on
* success and -errno on error.
*
* WARNING: If @ctx is supplied, regardless of whether success or failure is
* returned, you need to check IS_ERR(@ctx->mrec) and if 'true' the @ctx
* is no longer valid, i.e. you need to either call
* ntfs_attr_reinit_search_ctx() or ntfs_attr_put_search_ctx() on it.
* In that case PTR_ERR(@ctx->mrec) will give you the error code for
* why the mapping of the old inode failed.
*
* Locking: - The runlist described by @ni must be locked for writing on entry
* and is locked on return. Note the runlist may be modified when
* needed runlist fragments need to be mapped.
* - The volume lcn bitmap must be unlocked on entry and is unlocked
* on return.
* - This function takes the volume lcn bitmap lock for writing and
* modifies the bitmap contents.
* - If @ctx is NULL, the base mft record of @ni must not be mapped on
* entry and it will be left unmapped on return.
* - If @ctx is not NULL, the base mft record must be mapped on entry
* and it will be left mapped on return.
*/
static inline s64 ntfs_cluster_free(ntfs_inode *ni, const VCN start_vcn,
s64 count, ntfs_attr_search_ctx *ctx)
{
return __ntfs_cluster_free(ni, start_vcn, count, ctx, false);
}
extern int ntfs_cluster_free_from_rl_nolock(ntfs_volume *vol,
const runlist_element *rl);
/**
* ntfs_cluster_free_from_rl - free clusters from runlist
* @vol: mounted ntfs volume on which to free the clusters
* @rl: runlist describing the clusters to free
*
* Free all the clusters described by the runlist @rl on the volume @vol. In
* the case of an error being returned, at least some of the clusters were not
* freed.
*
* Return 0 on success and -errno on error.
*
* Locking: - This function takes the volume lcn bitmap lock for writing and
* modifies the bitmap contents.
* - The caller must have locked the runlist @rl for reading or
* writing.
*/
static inline int ntfs_cluster_free_from_rl(ntfs_volume *vol,
const runlist_element *rl)
{
int ret;
down_write(&vol->lcnbmp_lock);
ret = ntfs_cluster_free_from_rl_nolock(vol, rl);
up_write(&vol->lcnbmp_lock);
return ret;
}
#endif /* NTFS_RW */
#endif /* defined _LINUX_NTFS_LCNALLOC_H */

View File

@ -1,849 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* logfile.c - NTFS kernel journal handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2002-2007 Anton Altaparmakov
*/
#ifdef NTFS_RW
#include <linux/types.h>
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/buffer_head.h>
#include <linux/bitops.h>
#include <linux/log2.h>
#include <linux/bio.h>
#include "attrib.h"
#include "aops.h"
#include "debug.h"
#include "logfile.h"
#include "malloc.h"
#include "volume.h"
#include "ntfs.h"
/**
* ntfs_check_restart_page_header - check the page header for consistency
* @vi: $LogFile inode to which the restart page header belongs
* @rp: restart page header to check
* @pos: position in @vi at which the restart page header resides
*
* Check the restart page header @rp for consistency and return 'true' if it is
* consistent and 'false' otherwise.
*
* This function only needs NTFS_BLOCK_SIZE bytes in @rp, i.e. it does not
* require the full restart page.
*/
static bool ntfs_check_restart_page_header(struct inode *vi,
RESTART_PAGE_HEADER *rp, s64 pos)
{
u32 logfile_system_page_size, logfile_log_page_size;
u16 ra_ofs, usa_count, usa_ofs, usa_end = 0;
bool have_usa = true;
ntfs_debug("Entering.");
/*
* If the system or log page sizes are smaller than the ntfs block size
* or either is not a power of 2 we cannot handle this log file.
*/
logfile_system_page_size = le32_to_cpu(rp->system_page_size);
logfile_log_page_size = le32_to_cpu(rp->log_page_size);
if (logfile_system_page_size < NTFS_BLOCK_SIZE ||
logfile_log_page_size < NTFS_BLOCK_SIZE ||
logfile_system_page_size &
(logfile_system_page_size - 1) ||
!is_power_of_2(logfile_log_page_size)) {
ntfs_error(vi->i_sb, "$LogFile uses unsupported page size.");
return false;
}
/*
* We must be either at !pos (1st restart page) or at pos = system page
* size (2nd restart page).
*/
if (pos && pos != logfile_system_page_size) {
ntfs_error(vi->i_sb, "Found restart area in incorrect "
"position in $LogFile.");
return false;
}
/* We only know how to handle version 1.1. */
if (sle16_to_cpu(rp->major_ver) != 1 ||
sle16_to_cpu(rp->minor_ver) != 1) {
ntfs_error(vi->i_sb, "$LogFile version %i.%i is not "
"supported. (This driver supports version "
"1.1 only.)", (int)sle16_to_cpu(rp->major_ver),
(int)sle16_to_cpu(rp->minor_ver));
return false;
}
/*
* If chkdsk has been run the restart page may not be protected by an
* update sequence array.
*/
if (ntfs_is_chkd_record(rp->magic) && !le16_to_cpu(rp->usa_count)) {
have_usa = false;
goto skip_usa_checks;
}
/* Verify the size of the update sequence array. */
usa_count = 1 + (logfile_system_page_size >> NTFS_BLOCK_SIZE_BITS);
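/*
 * E.g. a 4096 byte system page with 512 byte NTFS blocks yields
 * 1 + (4096 >> 9) = 9 update sequence array entries.
 */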
if (usa_count != le16_to_cpu(rp->usa_count)) {
ntfs_error(vi->i_sb, "$LogFile restart page specifies "
"inconsistent update sequence array count.");
return false;
}
/* Verify the position of the update sequence array. */
usa_ofs = le16_to_cpu(rp->usa_ofs);
usa_end = usa_ofs + usa_count * sizeof(u16);
if (usa_ofs < sizeof(RESTART_PAGE_HEADER) ||
usa_end > NTFS_BLOCK_SIZE - sizeof(u16)) {
ntfs_error(vi->i_sb, "$LogFile restart page specifies "
"inconsistent update sequence array offset.");
return false;
}
skip_usa_checks:
/*
* Verify the position of the restart area. It must be:
* - aligned to 8-byte boundary,
* - after the update sequence array, and
* - within the system page size.
*/
ra_ofs = le16_to_cpu(rp->restart_area_offset);
if (ra_ofs & 7 || (have_usa ? ra_ofs < usa_end :
ra_ofs < sizeof(RESTART_PAGE_HEADER)) ||
ra_ofs > logfile_system_page_size) {
ntfs_error(vi->i_sb, "$LogFile restart page specifies "
"inconsistent restart area offset.");
return false;
}
/*
* Only restart pages modified by chkdsk are allowed to have chkdsk_lsn
* set.
*/
if (!ntfs_is_chkd_record(rp->magic) && sle64_to_cpu(rp->chkdsk_lsn)) {
ntfs_error(vi->i_sb, "$LogFile restart page is not modified "
"by chkdsk but a chkdsk LSN is specified.");
return false;
}
ntfs_debug("Done.");
return true;
}
/**
* ntfs_check_restart_area - check the restart area for consistency
* @vi: $LogFile inode to which the restart page belongs
* @rp: restart page whose restart area to check
*
* Check the restart area of the restart page @rp for consistency and return
* 'true' if it is consistent and 'false' otherwise.
*
* This function assumes that the restart page header has already been
* consistency checked.
*
* This function only needs NTFS_BLOCK_SIZE bytes in @rp, i.e. it does not
* require the full restart page.
*/
static bool ntfs_check_restart_area(struct inode *vi, RESTART_PAGE_HEADER *rp)
{
u64 file_size;
RESTART_AREA *ra;
u16 ra_ofs, ra_len, ca_ofs;
u8 fs_bits;
ntfs_debug("Entering.");
ra_ofs = le16_to_cpu(rp->restart_area_offset);
ra = (RESTART_AREA*)((u8*)rp + ra_ofs);
/*
* Everything before ra->file_size must be before the first word
* protected by an update sequence number. This ensures that it is
* safe to access ra->client_array_offset.
*/
if (ra_ofs + offsetof(RESTART_AREA, file_size) >
NTFS_BLOCK_SIZE - sizeof(u16)) {
ntfs_error(vi->i_sb, "$LogFile restart area specifies "
"inconsistent file offset.");
return false;
}
/*
* Now that we can access ra->client_array_offset, make sure everything
* up to the log client array is before the first word protected by an
* update sequence number. This ensures we can access all of the
* restart area elements safely. Also, the client array offset must be
* aligned to an 8-byte boundary.
*/
ca_ofs = le16_to_cpu(ra->client_array_offset);
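/*
 * Note that ((x + 7) & ~7) rounds x up to the next multiple of 8, so it
 * equals x only when x is already 8-byte aligned.
 */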
if (((ca_ofs + 7) & ~7) != ca_ofs ||
ra_ofs + ca_ofs > NTFS_BLOCK_SIZE - sizeof(u16)) {
ntfs_error(vi->i_sb, "$LogFile restart area specifies "
"inconsistent client array offset.");
return false;
}
/*
* The restart area must end within the system page size both when
* calculated manually and as specified by ra->restart_area_length.
* Also, the calculated length must not exceed the specified length.
*/
ra_len = ca_ofs + le16_to_cpu(ra->log_clients) *
sizeof(LOG_CLIENT_RECORD);
if (ra_ofs + ra_len > le32_to_cpu(rp->system_page_size) ||
ra_ofs + le16_to_cpu(ra->restart_area_length) >
le32_to_cpu(rp->system_page_size) ||
ra_len > le16_to_cpu(ra->restart_area_length)) {
ntfs_error(vi->i_sb, "$LogFile restart area is out of bounds "
"of the system page size specified by the "
"restart page header and/or the specified "
"restart area length is inconsistent.");
return false;
}
/*
* The ra->client_free_list and ra->client_in_use_list must be either
* LOGFILE_NO_CLIENT or less than ra->log_clients or they are
* overflowing the client array.
*/
if ((ra->client_free_list != LOGFILE_NO_CLIENT &&
le16_to_cpu(ra->client_free_list) >=
le16_to_cpu(ra->log_clients)) ||
(ra->client_in_use_list != LOGFILE_NO_CLIENT &&
le16_to_cpu(ra->client_in_use_list) >=
le16_to_cpu(ra->log_clients))) {
ntfs_error(vi->i_sb, "$LogFile restart area specifies "
"overflowing client free and/or in use lists.");
return false;
}
/*
* Check ra->seq_number_bits against ra->file_size for consistency.
* We cannot just use ffs() because the file size is not a power of 2.
*/
file_size = (u64)sle64_to_cpu(ra->file_size);
fs_bits = 0;
while (file_size) {
file_size >>= 1;
fs_bits++;
}
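/*
 * E.g. a 64 MiB (2^26 byte) $LogFile gives fs_bits = 27, so a consistent
 * restart area must specify seq_number_bits = 67 - 27 = 40.
 */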
if (le32_to_cpu(ra->seq_number_bits) != 67 - fs_bits) {
ntfs_error(vi->i_sb, "$LogFile restart area specifies "
"inconsistent sequence number bits.");
return false;
}
/* The log record header length must be a multiple of 8. */
if (((le16_to_cpu(ra->log_record_header_length) + 7) & ~7) !=
le16_to_cpu(ra->log_record_header_length)) {
ntfs_error(vi->i_sb, "$LogFile restart area specifies "
"inconsistent log record header length.");
return false;
}
/* Ditto for the log page data offset. */
if (((le16_to_cpu(ra->log_page_data_offset) + 7) & ~7) !=
le16_to_cpu(ra->log_page_data_offset)) {
ntfs_error(vi->i_sb, "$LogFile restart area specifies "
"inconsistent log page data offset.");
return false;
}
ntfs_debug("Done.");
return true;
}
/**
* ntfs_check_log_client_array - check the log client array for consistency
* @vi: $LogFile inode to which the restart page belongs
* @rp: restart page whose log client array to check
*
* Check the log client array of the restart page @rp for consistency and
* return 'true' if it is consistent and 'false' otherwise.
*
* This function assumes that the restart page header and the restart area have
* already been consistency checked.
*
* Unlike ntfs_check_restart_page_header() and ntfs_check_restart_area(), this
* function needs @rp->system_page_size bytes in @rp, i.e. it requires the full
* restart page and the page must be multi sector transfer deprotected.
*/
static bool ntfs_check_log_client_array(struct inode *vi,
RESTART_PAGE_HEADER *rp)
{
RESTART_AREA *ra;
LOG_CLIENT_RECORD *ca, *cr;
u16 nr_clients, idx;
bool in_free_list, idx_is_first;
ntfs_debug("Entering.");
ra = (RESTART_AREA*)((u8*)rp + le16_to_cpu(rp->restart_area_offset));
ca = (LOG_CLIENT_RECORD*)((u8*)ra +
le16_to_cpu(ra->client_array_offset));
/*
* Check the ra->client_free_list first and then check the
* ra->client_in_use_list. Check each of the log client records in
* each of the lists and check that the array does not overflow the
* ra->log_clients value. Also keep track of the number of records
* visited as there cannot be more than ra->log_clients records and
* that way we detect eventual loops within a list.
*/
nr_clients = le16_to_cpu(ra->log_clients);
idx = le16_to_cpu(ra->client_free_list);
in_free_list = true;
check_list:
for (idx_is_first = true; idx != LOGFILE_NO_CLIENT_CPU; nr_clients--,
idx = le16_to_cpu(cr->next_client)) {
if (!nr_clients || idx >= le16_to_cpu(ra->log_clients))
goto err_out;
/* Set @cr to the current log client record. */
cr = ca + idx;
/* The first log client record must not have a prev_client. */
if (idx_is_first) {
if (cr->prev_client != LOGFILE_NO_CLIENT)
goto err_out;
idx_is_first = false;
}
}
/* Switch to and check the in use list if we just did the free list. */
if (in_free_list) {
in_free_list = false;
idx = le16_to_cpu(ra->client_in_use_list);
goto check_list;
}
ntfs_debug("Done.");
return true;
err_out:
ntfs_error(vi->i_sb, "$LogFile log client array is corrupt.");
return false;
}
/**
* ntfs_check_and_load_restart_page - check the restart page for consistency
* @vi: $LogFile inode to which the restart page belongs
* @rp: restart page to check
* @pos: position in @vi at which the restart page resides
* @wrp: [OUT] copy of the multi sector transfer deprotected restart page
* @lsn: [OUT] set to the current logfile lsn on success
*
* Check the restart page @rp for consistency and return 0 if it is consistent
* and -errno otherwise. The restart page may have been modified by chkdsk in
* which case its magic is CHKD instead of RSTR.
*
* This function only needs NTFS_BLOCK_SIZE bytes in @rp, i.e. it does not
* require the full restart page.
*
* If @wrp is not NULL, on success, *@wrp will point to a buffer containing a
* copy of the complete multi sector transfer deprotected page. On failure,
* *@wrp is undefined.
*
* Similarly, if @lsn is not NULL, on success *@lsn will be set to the current
* logfile lsn according to this restart page. On failure, *@lsn is undefined.
*
* The following error codes are defined:
* -EINVAL - The restart page is inconsistent.
* -ENOMEM - Not enough memory to load the restart page.
* -EIO - Failed to read from $LogFile.
*/
static int ntfs_check_and_load_restart_page(struct inode *vi,
RESTART_PAGE_HEADER *rp, s64 pos, RESTART_PAGE_HEADER **wrp,
LSN *lsn)
{
RESTART_AREA *ra;
RESTART_PAGE_HEADER *trp;
int size, err;
ntfs_debug("Entering.");
/* Check the restart page header for consistency. */
if (!ntfs_check_restart_page_header(vi, rp, pos)) {
/* Error output already done inside the function. */
return -EINVAL;
}
/* Check the restart area for consistency. */
if (!ntfs_check_restart_area(vi, rp)) {
/* Error output already done inside the function. */
return -EINVAL;
}
ra = (RESTART_AREA*)((u8*)rp + le16_to_cpu(rp->restart_area_offset));
/*
* Allocate a buffer to store the whole restart page so we can multi
* sector transfer deprotect it.
*/
trp = ntfs_malloc_nofs(le32_to_cpu(rp->system_page_size));
if (!trp) {
ntfs_error(vi->i_sb, "Failed to allocate memory for $LogFile "
"restart page buffer.");
return -ENOMEM;
}
/*
* Read the whole of the restart page into the buffer. If it fits
* completely inside @rp, just copy it from there. Otherwise map all
* the required pages and copy the data from them.
*/
size = PAGE_SIZE - (pos & ~PAGE_MASK);
if (size >= le32_to_cpu(rp->system_page_size)) {
memcpy(trp, rp, le32_to_cpu(rp->system_page_size));
} else {
pgoff_t idx;
struct page *page;
int have_read, to_read;
/* First copy what we already have in @rp. */
memcpy(trp, rp, size);
/* Copy the remaining data one page at a time. */
have_read = size;
to_read = le32_to_cpu(rp->system_page_size) - size;
idx = (pos + size) >> PAGE_SHIFT;
BUG_ON((pos + size) & ~PAGE_MASK);
do {
page = ntfs_map_page(vi->i_mapping, idx);
if (IS_ERR(page)) {
ntfs_error(vi->i_sb, "Error mapping $LogFile "
"page (index %lu).", idx);
err = PTR_ERR(page);
if (err != -EIO && err != -ENOMEM)
err = -EIO;
goto err_out;
}
size = min_t(int, to_read, PAGE_SIZE);
memcpy((u8*)trp + have_read, page_address(page), size);
ntfs_unmap_page(page);
have_read += size;
to_read -= size;
idx++;
} while (to_read > 0);
}
/*
* Perform the multi sector transfer deprotection on the buffer if the
* restart page is protected.
*/
if ((!ntfs_is_chkd_record(trp->magic) || le16_to_cpu(trp->usa_count))
&& post_read_mst_fixup((NTFS_RECORD*)trp,
le32_to_cpu(rp->system_page_size))) {
/*
* A multi sector transfer error was detected. We only need to
* abort if the restart page contents exceed the multi sector
* transfer fixup of the first sector.
*/
if (le16_to_cpu(rp->restart_area_offset) +
le16_to_cpu(ra->restart_area_length) >
NTFS_BLOCK_SIZE - sizeof(u16)) {
ntfs_error(vi->i_sb, "Multi sector transfer error "
"detected in $LogFile restart page.");
err = -EINVAL;
goto err_out;
}
}
/*
* If the restart page is modified by chkdsk or there are no active
* logfile clients, the logfile is consistent. Otherwise, we need to
* check the log client records for consistency, too.
*/
err = 0;
if (ntfs_is_rstr_record(rp->magic) &&
ra->client_in_use_list != LOGFILE_NO_CLIENT) {
if (!ntfs_check_log_client_array(vi, trp)) {
err = -EINVAL;
goto err_out;
}
}
if (lsn) {
if (ntfs_is_rstr_record(rp->magic))
*lsn = sle64_to_cpu(ra->current_lsn);
else /* if (ntfs_is_chkd_record(rp->magic)) */
*lsn = sle64_to_cpu(rp->chkdsk_lsn);
}
ntfs_debug("Done.");
if (wrp)
*wrp = trp;
else {
err_out:
ntfs_free(trp);
}
return err;
}
/**
* ntfs_check_logfile - check the journal for consistency
* @log_vi: struct inode of loaded journal $LogFile to check
* @rp: [OUT] on success this is a copy of the current restart page
*
* Check the $LogFile journal for consistency and return 'true' if it is
* consistent and 'false' if not. On success, the current restart page is
* returned in *@rp. Caller must call ntfs_free(*@rp) when finished with it.
*
* At present we only check the two restart pages and ignore the log record
* pages.
*
* Note that the MstProtected flag is not set on the $LogFile inode and hence
* when reading pages they are not deprotected. This is because we do not know
* if the $LogFile was created on a system with a different page size to ours
* yet and mst deprotection would fail if our page size is smaller.
*/
bool ntfs_check_logfile(struct inode *log_vi, RESTART_PAGE_HEADER **rp)
{
s64 size, pos;
LSN rstr1_lsn, rstr2_lsn;
ntfs_volume *vol = NTFS_SB(log_vi->i_sb);
struct address_space *mapping = log_vi->i_mapping;
struct page *page = NULL;
u8 *kaddr = NULL;
RESTART_PAGE_HEADER *rstr1_ph = NULL;
RESTART_PAGE_HEADER *rstr2_ph = NULL;
int log_page_size, err;
bool logfile_is_empty = true;
u8 log_page_bits;
ntfs_debug("Entering.");
/* An empty $LogFile must have been clean before it got emptied. */
if (NVolLogFileEmpty(vol))
goto is_empty;
size = i_size_read(log_vi);
/* Make sure the file doesn't exceed the maximum allowed size. */
if (size > MaxLogFileSize)
size = MaxLogFileSize;
/*
* Truncate size to a multiple of the page cache size or the default
* log page size if the page cache size is between the default log page
* size and twice that.
*/
if (PAGE_SIZE >= DefaultLogPageSize && PAGE_SIZE <=
DefaultLogPageSize * 2)
log_page_size = DefaultLogPageSize;
else
log_page_size = PAGE_SIZE;
/*
* Use ntfs_ffs() instead of ffs() to enable the compiler to
* optimize log_page_size and log_page_bits into constants.
*/
log_page_bits = ntfs_ffs(log_page_size) - 1;
size &= ~(s64)(log_page_size - 1);
/*
* Ensure the log file is big enough to store at least the two restart
* pages and the minimum number of log record pages.
*/
if (size < log_page_size * 2 || (size - log_page_size * 2) >>
log_page_bits < MinLogRecordPages) {
ntfs_error(vol->sb, "$LogFile is too small.");
return false;
}
/*
* Read through the file looking for a restart page. Since the restart
* page header is at the beginning of a page we only need to search at
* what could be the beginning of a page (for each page size) rather
* than scanning the whole file byte by byte. If all potential places
* contain empty and uninitialized records, the log file can be assumed
* to be empty.
*/
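/*
 * The loop below therefore probes byte offsets 0, 512, 1024, 2048, ...:
 * whenever @pos is still zero it is bumped to NTFS_BLOCK_SIZE >> 1 so that
 * the "pos <<= 1" update yields NTFS_BLOCK_SIZE, and each later iteration
 * doubles the offset.
 */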
for (pos = 0; pos < size; pos <<= 1) {
pgoff_t idx = pos >> PAGE_SHIFT;
if (!page || page->index != idx) {
if (page)
ntfs_unmap_page(page);
page = ntfs_map_page(mapping, idx);
if (IS_ERR(page)) {
ntfs_error(vol->sb, "Error mapping $LogFile "
"page (index %lu).", idx);
goto err_out;
}
}
kaddr = (u8*)page_address(page) + (pos & ~PAGE_MASK);
/*
* A non-empty block means the logfile is not empty, while an
* empty block encountered after a non-empty block means we are
* done.
*/
if (!ntfs_is_empty_recordp((le32*)kaddr))
logfile_is_empty = false;
else if (!logfile_is_empty)
break;
/*
* A log record page means there cannot be a restart page after
* this so no need to continue searching.
*/
if (ntfs_is_rcrd_recordp((le32*)kaddr))
break;
/* If not a (modified by chkdsk) restart page, continue. */
if (!ntfs_is_rstr_recordp((le32*)kaddr) &&
!ntfs_is_chkd_recordp((le32*)kaddr)) {
if (!pos)
pos = NTFS_BLOCK_SIZE >> 1;
continue;
}
/*
* Check the (modified by chkdsk) restart page for consistency
* and get a copy of the complete multi sector transfer
* deprotected restart page.
*/
err = ntfs_check_and_load_restart_page(log_vi,
(RESTART_PAGE_HEADER*)kaddr, pos,
!rstr1_ph ? &rstr1_ph : &rstr2_ph,
!rstr1_ph ? &rstr1_lsn : &rstr2_lsn);
if (!err) {
/*
* If we have now found the first (modified by chkdsk)
* restart page, continue looking for the second one.
*/
if (!pos) {
pos = NTFS_BLOCK_SIZE >> 1;
continue;
}
/*
* We have now found the second (modified by chkdsk)
* restart page, so we can stop looking.
*/
break;
}
/*
* Error output already done inside the function. Note, we do
* not abort if the restart page was invalid as we might still
* find a valid one further in the file.
*/
if (err != -EINVAL) {
ntfs_unmap_page(page);
goto err_out;
}
/* Continue looking. */
if (!pos)
pos = NTFS_BLOCK_SIZE >> 1;
}
if (page)
ntfs_unmap_page(page);
if (logfile_is_empty) {
NVolSetLogFileEmpty(vol);
is_empty:
ntfs_debug("Done. ($LogFile is empty.)");
return true;
}
if (!rstr1_ph) {
BUG_ON(rstr2_ph);
ntfs_error(vol->sb, "Did not find any restart pages in "
"$LogFile and it was not empty.");
return false;
}
/* If both restart pages were found, use the more recent one. */
if (rstr2_ph) {
/*
* If the second restart area is more recent, switch to it.
* Otherwise just throw it away.
*/
if (rstr2_lsn > rstr1_lsn) {
ntfs_debug("Using second restart page as it is more "
"recent.");
ntfs_free(rstr1_ph);
rstr1_ph = rstr2_ph;
/* rstr1_lsn = rstr2_lsn; */
} else {
ntfs_debug("Using first restart page as it is more "
"recent.");
ntfs_free(rstr2_ph);
}
rstr2_ph = NULL;
}
/* All consistency checks passed. */
if (rp)
*rp = rstr1_ph;
else
ntfs_free(rstr1_ph);
ntfs_debug("Done.");
return true;
err_out:
if (rstr1_ph)
ntfs_free(rstr1_ph);
return false;
}
/**
* ntfs_is_logfile_clean - check in the journal if the volume is clean
* @log_vi: struct inode of loaded journal $LogFile to check
* @rp: copy of the current restart page
*
* Analyze the $LogFile journal and return 'true' if it indicates the volume was
* shutdown cleanly and 'false' if not.
*
* At present we only look at the two restart pages and ignore the log record
* pages. This is a little bit crude in that there will be a very small number
* of cases where we think that a volume is dirty when in fact it is clean.
* This should only affect volumes that have not been shutdown cleanly but did
* not have any pending, non-check-pointed i/o, i.e. they were completely idle
* at least for the five seconds preceding the unclean shutdown.
*
* This function assumes that the $LogFile journal has already been consistency
* checked by a call to ntfs_check_logfile() and in particular if the $LogFile
* is empty this function requires that NVolLogFileEmpty() is true otherwise an
* empty volume will be reported as dirty.
*/
bool ntfs_is_logfile_clean(struct inode *log_vi, const RESTART_PAGE_HEADER *rp)
{
ntfs_volume *vol = NTFS_SB(log_vi->i_sb);
RESTART_AREA *ra;
ntfs_debug("Entering.");
/* An empty $LogFile must have been clean before it got emptied. */
if (NVolLogFileEmpty(vol)) {
ntfs_debug("Done. ($LogFile is empty.)");
return true;
}
BUG_ON(!rp);
if (!ntfs_is_rstr_record(rp->magic) &&
!ntfs_is_chkd_record(rp->magic)) {
ntfs_error(vol->sb, "Restart page buffer is invalid. This is "
"probably a bug in that the $LogFile should "
"have been consistency checked before calling "
"this function.");
return false;
}
ra = (RESTART_AREA*)((u8*)rp + le16_to_cpu(rp->restart_area_offset));
/*
* If the $LogFile has active clients, i.e. it is open, and we do not
* have the RESTART_VOLUME_IS_CLEAN bit set in the restart area flags,
* we assume there was an unclean shutdown.
*/
if (ra->client_in_use_list != LOGFILE_NO_CLIENT &&
!(ra->flags & RESTART_VOLUME_IS_CLEAN)) {
ntfs_debug("Done. $LogFile indicates a dirty shutdown.");
return false;
}
/* $LogFile indicates a clean shutdown. */
ntfs_debug("Done. $LogFile indicates a clean shutdown.");
return true;
}
/**
* ntfs_empty_logfile - empty the contents of the $LogFile journal
* @log_vi: struct inode of loaded journal $LogFile to empty
*
* Empty the contents of the $LogFile journal @log_vi and return 'true' on
* success and 'false' on error.
*
* This function assumes that the $LogFile journal has already been consistency
* checked by a call to ntfs_check_logfile() and that ntfs_is_logfile_clean()
* has been used to ensure that the $LogFile is clean.
*/
bool ntfs_empty_logfile(struct inode *log_vi)
{
VCN vcn, end_vcn;
ntfs_inode *log_ni = NTFS_I(log_vi);
ntfs_volume *vol = log_ni->vol;
struct super_block *sb = vol->sb;
runlist_element *rl;
unsigned long flags;
unsigned block_size, block_size_bits;
int err;
bool should_wait = true;
ntfs_debug("Entering.");
if (NVolLogFileEmpty(vol)) {
ntfs_debug("Done.");
return true;
}
/*
* We cannot use ntfs_attr_set() because we may still be in the middle
* of a mount operation. Thus we do the emptying by hand by first
* zapping the page cache pages for the $LogFile/$DATA attribute and
* then emptying each of the buffers in each of the clusters specified
* by the runlist by hand.
*/
block_size = sb->s_blocksize;
block_size_bits = sb->s_blocksize_bits;
vcn = 0;
read_lock_irqsave(&log_ni->size_lock, flags);
end_vcn = (log_ni->initialized_size + vol->cluster_size_mask) >>
vol->cluster_size_bits;
read_unlock_irqrestore(&log_ni->size_lock, flags);
truncate_inode_pages(log_vi->i_mapping, 0);
down_write(&log_ni->runlist.lock);
rl = log_ni->runlist.rl;
if (unlikely(!rl || vcn < rl->vcn || !rl->length)) {
map_vcn:
err = ntfs_map_runlist_nolock(log_ni, vcn, NULL);
if (err) {
ntfs_error(sb, "Failed to map runlist fragment (error "
"%d).", -err);
goto err;
}
rl = log_ni->runlist.rl;
BUG_ON(!rl || vcn < rl->vcn || !rl->length);
}
/* Seek to the runlist element containing @vcn. */
while (rl->length && vcn >= rl[1].vcn)
rl++;
do {
LCN lcn;
sector_t block, end_block;
s64 len;
/*
* If this run is not mapped, map it now and start again as the
* runlist will have been updated.
*/
lcn = rl->lcn;
if (unlikely(lcn == LCN_RL_NOT_MAPPED)) {
vcn = rl->vcn;
goto map_vcn;
}
/* If this run is not valid abort with an error. */
if (unlikely(!rl->length || lcn < LCN_HOLE))
goto rl_err;
/* Skip holes. */
if (lcn == LCN_HOLE)
continue;
block = lcn << vol->cluster_size_bits >> block_size_bits;
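/*
 * E.g. with 4096 byte clusters and 512 byte device blocks each cluster
 * spans eight blocks, so lcn 10 maps to device block 80.
 */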
len = rl->length;
if (rl[1].vcn > end_vcn)
len = end_vcn - rl->vcn;
end_block = (lcn + len) << vol->cluster_size_bits >>
block_size_bits;
/* Iterate over the blocks in the run and empty them. */
do {
struct buffer_head *bh;
/* Obtain the buffer, possibly not uptodate. */
bh = sb_getblk(sb, block);
BUG_ON(!bh);
/* Setup buffer i/o submission. */
lock_buffer(bh);
bh->b_end_io = end_buffer_write_sync;
get_bh(bh);
/* Set the entire contents of the buffer to 0xff. */
memset(bh->b_data, -1, block_size);
if (!buffer_uptodate(bh))
set_buffer_uptodate(bh);
if (buffer_dirty(bh))
clear_buffer_dirty(bh);
/*
* Submit the buffer and wait for i/o to complete but
* only for the first buffer so we do not miss really
* serious i/o errors. Once the first buffer has
* completed ignore errors afterwards as we can assume
* that if one buffer worked all of them will work.
*/
submit_bh(REQ_OP_WRITE, bh);
if (should_wait) {
should_wait = false;
wait_on_buffer(bh);
if (unlikely(!buffer_uptodate(bh)))
goto io_err;
}
brelse(bh);
} while (++block < end_block);
} while ((++rl)->vcn < end_vcn);
up_write(&log_ni->runlist.lock);
/*
* Zap the pages again just in case any got instantiated whilst we were
* emptying the blocks by hand. FIXME: We may not have completed
* writing to all the buffer heads yet so this may happen too early.
* We really should use a kernel thread to do the emptying
* asynchronously and then we can also set the volume dirty and output
* an error message if emptying should fail.
*/
truncate_inode_pages(log_vi->i_mapping, 0);
/* Set the flag so we do not have to do it again on remount. */
NVolSetLogFileEmpty(vol);
ntfs_debug("Done.");
return true;
io_err:
ntfs_error(sb, "Failed to write buffer. Unmount and run chkdsk.");
goto dirty_err;
rl_err:
ntfs_error(sb, "Runlist is corrupt. Unmount and run chkdsk.");
dirty_err:
NVolSetErrors(vol);
err = -EIO;
err:
up_write(&log_ni->runlist.lock);
ntfs_error(sb, "Failed to fill $LogFile with 0xff bytes (error %d).",
-err);
return false;
}
#endif /* NTFS_RW */

View File

@ -1,295 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* logfile.h - Defines for NTFS kernel journal ($LogFile) handling. Part of
* the Linux-NTFS project.
*
* Copyright (c) 2000-2005 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_LOGFILE_H
#define _LINUX_NTFS_LOGFILE_H
#ifdef NTFS_RW
#include <linux/fs.h>
#include "types.h"
#include "endian.h"
#include "layout.h"
/*
* Journal ($LogFile) organization:
*
* Two restart areas present in the first two pages (restart pages, one restart
* area in each page). When the volume is dismounted they should be identical,
* except for the update sequence array which usually has a different update
* sequence number.
*
* These are followed by log records organized in pages headed by a log record
* header going up to log file size. Not all pages contain log records when a
* volume is first formatted, but as the volume ages, all records will be used.
* When the log file fills up, the records at the beginning are purged (by
* modifying the oldest_lsn to a higher value presumably) and writing begins
* at the beginning of the file. Effectively, the log file is viewed as a
* circular entity.
*
* NOTE: Windows NT, 2000, and XP all use log file version 1.1 but they accept
* versions <= 1.x, including 0.-1. (Yes, that is a minus one in there!) We
* probably only want to support 1.1 as this seems to be the current version
* and we don't know how that differs from the older versions. The only
* exception is if the journal is clean as marked by the two restart pages,
* in which case it doesn't matter whether we are on an earlier version. We
* can just reinitialize the logfile and start again with version 1.1.
*/
/* Some $LogFile related constants. */
#define MaxLogFileSize 0x100000000ULL
#define DefaultLogPageSize 4096
#define MinLogRecordPages 48
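/*
 * Taken together these constants imply a minimum usable $LogFile size:
 * with the default 4096 byte log page size, the two restart pages plus
 * MinLogRecordPages (48) log record pages need at least 50 * 4096 =
 * 204800 bytes.
 */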
/*
* Log file restart page header (begins the restart area).
*/
typedef struct {
/*Ofs*/
/* 0 NTFS_RECORD; -- Unfolded here as gcc doesn't like unnamed structs. */
/* 0*/ NTFS_RECORD_TYPE magic; /* The magic is "RSTR". */
/* 4*/ le16 usa_ofs; /* See NTFS_RECORD definition in layout.h.
When creating, set this to be immediately
after this header structure (without any
alignment). */
/* 6*/ le16 usa_count; /* See NTFS_RECORD definition in layout.h. */
/* 8*/ leLSN chkdsk_lsn; /* The last log file sequence number found by
chkdsk. Only used when the magic is changed
to "CHKD". Otherwise this is zero. */
/* 16*/ le32 system_page_size; /* Byte size of system pages when the log file
was created, has to be >= 512 and a power of
2. Use this to calculate the required size
of the usa (usa_count) and add it to usa_ofs.
Then verify that the result is less than the
value of the restart_area_offset. */
/* 20*/ le32 log_page_size; /* Byte size of log file pages, has to be >=
512 and a power of 2. The default is 4096
and is used when the system page size is
between 4096 and 8192. Otherwise this is
set to the system page size instead. */
/* 24*/ le16 restart_area_offset;/* Byte offset from the start of this header to
the RESTART_AREA. Value has to be aligned
to 8-byte boundary. When creating, set this
to be after the usa. */
/* 26*/ sle16 minor_ver; /* Log file minor version. Only check if major
version is 1. */
/* 28*/ sle16 major_ver; /* Log file major version. We only support
version 1.1. */
/* sizeof() = 30 (0x1e) bytes */
} __attribute__ ((__packed__)) RESTART_PAGE_HEADER;
/*
* Constant for the log client indices meaning that there are no client records
* in this particular client array. Also inside the client records themselves,
* this means that there are no client records preceding or following this one.
*/
#define LOGFILE_NO_CLIENT cpu_to_le16(0xffff)
#define LOGFILE_NO_CLIENT_CPU 0xffff
/*
* These are the so far known RESTART_AREA_* flags (16-bit) which contain
* information about the log file in which they are present.
*/
enum {
RESTART_VOLUME_IS_CLEAN = cpu_to_le16(0x0002),
RESTART_SPACE_FILLER = cpu_to_le16(0xffff), /* gcc: Force enum bit width to 16. */
} __attribute__ ((__packed__));
typedef le16 RESTART_AREA_FLAGS;
/*
* Log file restart area record. The offset of this record is found by adding
* the offset of the RESTART_PAGE_HEADER to the restart_area_offset value found
* in it. See notes at restart_area_offset above.
*/
typedef struct {
/*Ofs*/
/* 0*/ leLSN current_lsn; /* The current, i.e. last LSN inside the log
when the restart area was last written.
This happens often but what is the interval?
Is it just fixed time or is it every time a
check point is written or something else?
On create set to 0. */
/* 8*/ le16 log_clients; /* Number of log client records in the array of
log client records which follows this
restart area. Must be 1. */
/* 10*/ le16 client_free_list; /* The index of the first free log client record
in the array of log client records.
LOGFILE_NO_CLIENT means that there are no
free log client records in the array.
If != LOGFILE_NO_CLIENT, check that
log_clients > client_free_list. On Win2k
and presumably earlier, on a clean volume
this is != LOGFILE_NO_CLIENT, and it should
be 0, i.e. the first (and only) client
record is free and thus the logfile is
closed and hence clean. A dirty volume
would have left the logfile open and hence
this would be LOGFILE_NO_CLIENT. On WinXP
and presumably later, the logfile is always
open, even on clean shutdown so this should
always be LOGFILE_NO_CLIENT. */
/* 12*/ le16 client_in_use_list;/* The index of the first in-use log client
record in the array of log client records.
LOGFILE_NO_CLIENT means that there are no
in-use log client records in the array. If
!= LOGFILE_NO_CLIENT check that log_clients
> client_in_use_list. On Win2k and
presumably earlier, on a clean volume this
is LOGFILE_NO_CLIENT, i.e. there are no
client records in use and thus the logfile
is closed and hence clean. A dirty volume
would have left the logfile open and hence
this would be != LOGFILE_NO_CLIENT, and it
should be 0, i.e. the first (and only)
client record is in use. On WinXP and
presumably later, the logfile is always
open, even on clean shutdown so this should
always be 0. */
/* 14*/ RESTART_AREA_FLAGS flags;/* Flags modifying LFS behaviour. On Win2k
and presumably earlier this is always 0. On
WinXP and presumably later, if the logfile
was shutdown cleanly, the second bit,
RESTART_VOLUME_IS_CLEAN, is set. This bit
is cleared when the volume is mounted by
WinXP and set when the volume is dismounted,
thus if the logfile is dirty, this bit is
clear. Thus we don't need to check the
Windows version to determine if the logfile
is clean. Instead if the logfile is closed,
we know it must be clean. If it is open and
this bit is set, we also know it must be
clean. If on the other hand the logfile is
open and this bit is clear, we can be almost
certain that the logfile is dirty. */
/* 16*/ le32 seq_number_bits; /* How many bits to use for the sequence
number. This is calculated as 67 - the
number of bits required to store the logfile
size in bytes and this can be used with
the specified file_size as a consistency
check. */
/* 20*/ le16 restart_area_length;/* Length of the restart area including the
client array. Following checks required if
version matches. Otherwise, skip them.
restart_area_offset + restart_area_length
has to be <= system_page_size. Also,
restart_area_length has to be >=
client_array_offset + (log_clients *
sizeof(log client record)). */
/* 22*/ le16 client_array_offset;/* Offset from the start of this record to
the first log client record if versions are
matched. When creating, set this to be
after this restart area structure, aligned
to 8-bytes boundary. If the versions do not
match, this is ignored and the offset is
assumed to be (sizeof(RESTART_AREA) + 7) &
~7, i.e. rounded up to first 8-byte
boundary. Either way, client_array_offset
has to be aligned to an 8-byte boundary.
Also, restart_area_offset +
client_array_offset has to be <= 510.
Finally, client_array_offset + (log_clients
* sizeof(log client record)) has to be <=
system_page_size. On Win2k and presumably
earlier, this is 0x30, i.e. immediately
following this record. On WinXP and
presumably later, this is 0x40, i.e. there
are 16 extra bytes between this record and
the client array. This probably means that
the RESTART_AREA record is actually bigger
in WinXP and later. */
/* 24*/ sle64 file_size; /* Usable byte size of the log file. If the
restart_area_offset + the offset of the
file_size are > 510 then corruption has
occurred. This is the very first check when
starting with the restart_area as if it
fails it means that some of the above values
will be corrupted by the multi sector
transfer protection. The file_size has to
be rounded down to be a multiple of the
log_page_size in the RESTART_PAGE_HEADER and
then it has to be at least big enough to
store the two restart pages and 48 (0x30)
log record pages. */
/* 32*/ le32 last_lsn_data_length;/* Length of data of last LSN, not including
the log record header. On create set to
0. */
/* 36*/ le16 log_record_header_length;/* Byte size of the log record header.
If the version matches then check that the
value of log_record_header_length is a
multiple of 8, i.e.
(log_record_header_length + 7) & ~7 ==
log_record_header_length. When creating set
it to sizeof(LOG_RECORD_HEADER), aligned to
8 bytes. */
/* 38*/ le16 log_page_data_offset;/* Offset to the start of data in a log record
page. Must be a multiple of 8. On create
set it to immediately after the update
sequence array of the log record page. */
/* 40*/ le32 restart_log_open_count;/* A counter that gets incremented every
time the logfile is restarted which happens
at mount time when the logfile is opened.
When creating set to a random value. Win2k
sets it to the low 32 bits of the current
system time in NTFS format (see time.h). */
/* 44*/ le32 reserved; /* Reserved/alignment to 8-byte boundary. */
/* sizeof() = 48 (0x30) bytes */
} __attribute__ ((__packed__)) RESTART_AREA;
/*
* Log client record. The offset of this record is found by adding the offset
* of the RESTART_AREA to the client_array_offset value found in it.
*/
typedef struct {
/*Ofs*/
/* 0*/ leLSN oldest_lsn; /* Oldest LSN needed by this client. On create
set to 0. */
/* 8*/ leLSN client_restart_lsn;/* LSN at which this client needs to restart
the volume, i.e. the current position within
the log file. At present, if clean this
should = current_lsn in restart area but it
probably also = current_lsn when dirty most
of the time. At create set to 0. */
/* 16*/ le16 prev_client; /* The offset to the previous log client record
in the array of log client records.
LOGFILE_NO_CLIENT means there is no previous
client record, i.e. this is the first one.
This is always LOGFILE_NO_CLIENT. */
/* 18*/ le16 next_client; /* The offset to the next log client record in
the array of log client records.
LOGFILE_NO_CLIENT means there are no next
client records, i.e. this is the last one.
This is always LOGFILE_NO_CLIENT. */
/* 20*/ le16 seq_number; /* On Win2k and presumably earlier, this is set
to zero every time the logfile is restarted
and it is incremented when the logfile is
closed at dismount time. Thus it is 0 when
dirty and 1 when clean. On WinXP and
presumably later, this is always 0. */
/* 22*/ u8 reserved[6]; /* Reserved/alignment. */
/* 28*/ le32 client_name_length;/* Length of client name in bytes. Should
always be 8. */
/* 32*/ ntfschar client_name[64];/* Name of the client in Unicode. Should
always be "NTFS" with the remaining bytes
set to 0. */
/* sizeof() = 160 (0xa0) bytes */
} __attribute__ ((__packed__)) LOG_CLIENT_RECORD;
extern bool ntfs_check_logfile(struct inode *log_vi,
RESTART_PAGE_HEADER **rp);
extern bool ntfs_is_logfile_clean(struct inode *log_vi,
const RESTART_PAGE_HEADER *rp);
extern bool ntfs_empty_logfile(struct inode *log_vi);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_LOGFILE_H */

View File

@ -1,77 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* malloc.h - NTFS kernel memory handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2005 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_MALLOC_H
#define _LINUX_NTFS_MALLOC_H
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/highmem.h>
/**
* __ntfs_malloc - allocate memory in multiples of pages
* @size: number of bytes to allocate
* @gfp_mask: extra flags for the allocator
*
* Internal function. You probably want ntfs_malloc_nofs()...
*
* Allocates @size bytes of memory, rounded up to multiples of PAGE_SIZE and
* returns a pointer to the allocated memory.
*
* If there was insufficient memory to complete the request, return NULL.
* Depending on @gfp_mask the allocation may be guaranteed to succeed.
*/
static inline void *__ntfs_malloc(unsigned long size, gfp_t gfp_mask)
{
if (likely(size <= PAGE_SIZE)) {
BUG_ON(!size);
/* kmalloc() has per-CPU caches so is faster for now. */
return kmalloc(PAGE_SIZE, gfp_mask & ~__GFP_HIGHMEM);
/* return (void *)__get_free_page(gfp_mask); */
}
if (likely((size >> PAGE_SHIFT) < totalram_pages()))
return __vmalloc(size, gfp_mask);
return NULL;
}
/**
* ntfs_malloc_nofs - allocate memory in multiples of pages
* @size: number of bytes to allocate
*
* Allocates @size bytes of memory, rounded up to multiples of PAGE_SIZE and
* returns a pointer to the allocated memory.
*
* If there was insufficient memory to complete the request, return NULL.
*/
static inline void *ntfs_malloc_nofs(unsigned long size)
{
return __ntfs_malloc(size, GFP_NOFS | __GFP_HIGHMEM);
}
/**
* ntfs_malloc_nofs_nofail - allocate memory in multiples of pages
* @size: number of bytes to allocate
*
* Allocates @size bytes of memory, rounded up to multiples of PAGE_SIZE and
* returns a pointer to the allocated memory.
*
* This function guarantees that the allocation will succeed. It will sleep
* for as long as it takes to complete the allocation.
*
* If there was insufficient memory to complete the request, return NULL.
*/
static inline void *ntfs_malloc_nofs_nofail(unsigned long size)
{
return __ntfs_malloc(size, GFP_NOFS | __GFP_HIGHMEM | __GFP_NOFAIL);
}
static inline void ntfs_free(void *addr)
{
kvfree(addr);
}
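/*
 * A minimal, hypothetical usage sketch (not part of the driver API): a
 * run-time sized scratch buffer is obtained with ntfs_malloc_nofs() and is
 * always released with ntfs_free(), which copes with both kmalloc()ed and
 * vmalloc()ed memory via kvfree().
 */
static inline int ntfs_example_with_scratch_buffer(unsigned long size)
{
	u8 *buf = ntfs_malloc_nofs(size);

	if (!buf)
		return -ENOMEM;
	/* ... use the scratch buffer of @size bytes here ... */
	ntfs_free(buf);
	return 0;
}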
#endif /* _LINUX_NTFS_MALLOC_H */

File diff suppressed because it is too large

View File

@ -1,110 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* mft.h - Defines for mft record handling in NTFS Linux kernel driver.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_MFT_H
#define _LINUX_NTFS_MFT_H
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include "inode.h"
extern MFT_RECORD *map_mft_record(ntfs_inode *ni);
extern void unmap_mft_record(ntfs_inode *ni);
extern MFT_RECORD *map_extent_mft_record(ntfs_inode *base_ni, MFT_REF mref,
ntfs_inode **ntfs_ino);
static inline void unmap_extent_mft_record(ntfs_inode *ni)
{
unmap_mft_record(ni);
return;
}
#ifdef NTFS_RW
/**
* flush_dcache_mft_record_page - flush_dcache_page() for mft records
* @ni: ntfs inode structure of mft record
*
* Call flush_dcache_page() for the page in which an mft record resides.
*
* This must be called every time an mft record is modified, just after the
* modification.
*/
static inline void flush_dcache_mft_record_page(ntfs_inode *ni)
{
flush_dcache_page(ni->page);
}
extern void __mark_mft_record_dirty(ntfs_inode *ni);
/**
* mark_mft_record_dirty - set the mft record and the page containing it dirty
* @ni: ntfs inode describing the mapped mft record
*
* Set the mapped (extent) mft record of the (base or extent) ntfs inode @ni,
* as well as the page containing the mft record, dirty. Also, mark the base
* vfs inode dirty. This ensures that any changes to the mft record are
* written out to disk.
*
* NOTE: Do not do anything if the mft record is already marked dirty.
*/
static inline void mark_mft_record_dirty(ntfs_inode *ni)
{
if (!NInoTestSetDirty(ni))
__mark_mft_record_dirty(ni);
}
extern int ntfs_sync_mft_mirror(ntfs_volume *vol, const unsigned long mft_no,
MFT_RECORD *m, int sync);
extern int write_mft_record_nolock(ntfs_inode *ni, MFT_RECORD *m, int sync);
/**
* write_mft_record - write out a mapped (extent) mft record
* @ni: ntfs inode describing the mapped (extent) mft record
* @m: mapped (extent) mft record to write
* @sync: if true, wait for i/o completion
*
* This is just a wrapper for write_mft_record_nolock() (see mft.c), which
* locks the page for the duration of the write. This ensures that there are
* no race conditions between writing the mft record via the dirty inode code
* paths and via the page cache write back code paths or between writing
* neighbouring mft records residing in the same page.
*
* Locking the page also serializes us against ->read_folio() if the page is not
* uptodate.
*
* On success, clean the mft record and return 0. On error, leave the mft
* record dirty and return -errno.
*/
static inline int write_mft_record(ntfs_inode *ni, MFT_RECORD *m, int sync)
{
struct page *page = ni->page;
int err;
BUG_ON(!page);
lock_page(page);
err = write_mft_record_nolock(ni, m, sync);
unlock_page(page);
return err;
}
extern bool ntfs_may_write_mft_record(ntfs_volume *vol,
const unsigned long mft_no, const MFT_RECORD *m,
ntfs_inode **locked_ni);
extern ntfs_inode *ntfs_mft_record_alloc(ntfs_volume *vol, const int mode,
ntfs_inode *base_ni, MFT_RECORD **mrec);
extern int ntfs_extent_mft_record_free(ntfs_inode *ni, MFT_RECORD *m);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_MFT_H */

View File

@ -1,189 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* mst.c - NTFS multi sector transfer protection handling code. Part of the
* Linux-NTFS project.
*
* Copyright (c) 2001-2004 Anton Altaparmakov
*/
#include "ntfs.h"
/**
* post_read_mst_fixup - deprotect multi sector transfer protected data
* @b: pointer to the data to deprotect
* @size: size in bytes of @b
*
* Perform the necessary post read multi sector transfer fixup and detect the
* presence of incomplete multi sector transfers. - In that case, overwrite the
* magic of the ntfs record header being processed with "BAAD" (in memory only!)
* and abort processing.
*
* Return 0 on success and -EINVAL on error ("BAAD" magic will be present).
*
* NOTE: We consider the absence / invalidity of an update sequence array to
* mean that the structure is not protected at all and hence doesn't need to
* be fixed up. Thus, we return success and not failure in this case. This is
* in contrast to pre_write_mst_fixup(), see below.
*/
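/*
 * Worked example: for a 1024 byte record the update sequence array holds
 * three le16 values [usn, orig0, orig1]. On disk the last two bytes of
 * each of the two 512 byte blocks contain usn; this function verifies that
 * and then restores orig0 and orig1 to those positions in memory.
 */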
int post_read_mst_fixup(NTFS_RECORD *b, const u32 size)
{
u16 usa_ofs, usa_count, usn;
u16 *usa_pos, *data_pos;
/* Setup the variables. */
usa_ofs = le16_to_cpu(b->usa_ofs);
/* Decrement usa_count to get number of fixups. */
usa_count = le16_to_cpu(b->usa_count) - 1;
/* Size and alignment checks. */
if ( size & (NTFS_BLOCK_SIZE - 1) ||
usa_ofs & 1 ||
usa_ofs + (usa_count * 2) > size ||
(size >> NTFS_BLOCK_SIZE_BITS) != usa_count)
return 0;
/* Position of usn in update sequence array. */
usa_pos = (u16*)b + usa_ofs/sizeof(u16);
/*
* The update sequence number which has to be equal to each of the
* u16 values before they are fixed up. Note no need to care for
* endianness since we are comparing and moving data for on disk
* structures which means the data is consistent. - If it is
* consistently the wrong endianness it doesn't make any difference.
*/
usn = *usa_pos;
/*
* Position in protected data of first u16 that needs fixing up.
*/
data_pos = (u16*)b + NTFS_BLOCK_SIZE/sizeof(u16) - 1;
/*
* Check for incomplete multi sector transfer(s).
*/
while (usa_count--) {
if (*data_pos != usn) {
/*
* Incomplete multi sector transfer detected! )-:
* Set the magic to "BAAD" and return failure.
* Note that magic_BAAD is already converted to le32.
*/
b->magic = magic_BAAD;
return -EINVAL;
}
data_pos += NTFS_BLOCK_SIZE/sizeof(u16);
}
/* Re-setup the variables. */
usa_count = le16_to_cpu(b->usa_count) - 1;
data_pos = (u16*)b + NTFS_BLOCK_SIZE/sizeof(u16) - 1;
/* Fixup all sectors. */
while (usa_count--) {
/*
* Increment position in usa and restore original data from
* the usa into the data buffer.
*/
*data_pos = *(++usa_pos);
/* Increment position in data as well. */
data_pos += NTFS_BLOCK_SIZE/sizeof(u16);
}
return 0;
}
/**
* pre_write_mst_fixup - apply multi sector transfer protection
* @b: pointer to the data to protect
* @size: size in bytes of @b
*
* Perform the necessary pre write multi sector transfer fixup on the data
* pointed to by @b of size @size.
*
* Return 0 if fixup applied (success) or -EINVAL if no fixup was performed
* (assumed not needed). This is in contrast to post_read_mst_fixup() above.
*
* NOTE: We consider the absence / invalidity of an update sequence array to
* mean that the structure is not subject to protection and hence doesn't need
* to be fixed up. This means that you have to create a valid update sequence
* array header in the ntfs record before calling this function, otherwise it
* will fail (the header needs to contain the position of the update sequence
* array together with the number of elements in the array). You also need to
* initialise the update sequence number before calling this function
* otherwise a random word will be used (whatever was in the record at that
* position at that time).
*/
int pre_write_mst_fixup(NTFS_RECORD *b, const u32 size)
{
le16 *usa_pos, *data_pos;
u16 usa_ofs, usa_count, usn;
le16 le_usn;
/* Sanity check + only fixup if it makes sense. */
if (!b || ntfs_is_baad_record(b->magic) ||
ntfs_is_hole_record(b->magic))
return -EINVAL;
/* Setup the variables. */
usa_ofs = le16_to_cpu(b->usa_ofs);
/* Decrement usa_count to get number of fixups. */
usa_count = le16_to_cpu(b->usa_count) - 1;
/* Size and alignment checks. */
if ( size & (NTFS_BLOCK_SIZE - 1) ||
usa_ofs & 1 ||
usa_ofs + (usa_count * 2) > size ||
(size >> NTFS_BLOCK_SIZE_BITS) != usa_count)
return -EINVAL;
/* Position of usn in update sequence array. */
usa_pos = (le16*)((u8*)b + usa_ofs);
/*
* Cyclically increment the update sequence number
* (skipping 0 and -1, i.e. 0xffff).
*/
usn = le16_to_cpup(usa_pos) + 1;
if (usn == 0xffff || !usn)
usn = 1;
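/*
 * E.g. an old usn of 0xfffe advances to 0xffff and one of 0xffff wraps to
 * 0; both values are skipped and the sequence restarts at 1.
 */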
le_usn = cpu_to_le16(usn);
*usa_pos = le_usn;
/* Position in data of first u16 that needs fixing up. */
data_pos = (le16*)b + NTFS_BLOCK_SIZE/sizeof(le16) - 1;
/* Fixup all sectors. */
while (usa_count--) {
/*
* Increment the position in the usa and save the
* original data from the data buffer into the usa.
*/
*(++usa_pos) = *data_pos;
/* Apply fixup to data. */
*data_pos = le_usn;
/* Increment position in data as well. */
data_pos += NTFS_BLOCK_SIZE/sizeof(le16);
}
return 0;
}
/**
* post_write_mst_fixup - fast deprotect multi sector transfer protected data
* @b: pointer to the data to deprotect
*
* Perform the necessary post write multi sector transfer fixup, not checking
* for any errors, because we assume we have just used pre_write_mst_fixup(),
* thus the data will be fine or we would never have gotten here.
*/
void post_write_mst_fixup(NTFS_RECORD *b)
{
le16 *usa_pos, *data_pos;
u16 usa_ofs = le16_to_cpu(b->usa_ofs);
u16 usa_count = le16_to_cpu(b->usa_count) - 1;
/* Position of usn in update sequence array. */
usa_pos = (le16*)b + usa_ofs/sizeof(le16);
/* Position in protected data of first u16 that needs fixing up. */
data_pos = (le16*)b + NTFS_BLOCK_SIZE/sizeof(le16) - 1;
/* Fixup all sectors. */
while (usa_count--) {
/*
* Increment position in usa and restore original data from
* the usa into the data buffer.
*/
*data_pos = *(++usa_pos);
/* Increment position in data as well. */
data_pos += NTFS_BLOCK_SIZE/sizeof(le16);
}
}

View File

@ -1,392 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* namei.c - NTFS kernel directory inode operations. Part of the Linux-NTFS
* project.
*
* Copyright (c) 2001-2006 Anton Altaparmakov
*/
#include <linux/dcache.h>
#include <linux/exportfs.h>
#include <linux/security.h>
#include <linux/slab.h>
#include "attrib.h"
#include "debug.h"
#include "dir.h"
#include "mft.h"
#include "ntfs.h"
/**
* ntfs_lookup - find the inode represented by a dentry in a directory inode
* @dir_ino: directory inode in which to look for the inode
* @dent: dentry representing the inode to look for
* @flags: lookup flags
*
* In short, ntfs_lookup() looks for the inode represented by the dentry @dent
* in the directory inode @dir_ino and if found attaches the inode to the
* dentry @dent.
*
* In more detail, the dentry @dent specifies which inode to look for by
* supplying the name of the inode in @dent->d_name.name. ntfs_lookup()
* converts the name to Unicode and walks the contents of the directory inode
* @dir_ino looking for the converted Unicode name. If the name is found in the
* directory, the corresponding inode is loaded by calling ntfs_iget() on its
* inode number and the inode is associated with the dentry @dent via a call to
* d_splice_alias().
*
* If the name is not found in the directory, a NULL inode is inserted into the
* dentry @dent via a call to d_add(). The dentry is then termed a negative
* dentry.
*
* Only if an actual error occurs, do we return an error via ERR_PTR().
*
* In order to handle the case insensitivity issues of NTFS with regards to the
* dcache and the dcache requiring only one dentry per directory, we deal with
* dentry aliases that only differ in case in ->ntfs_lookup() while maintaining
* a case sensitive dcache. This means that we get the full benefit of dcache
* speed when the file/directory is looked up with the same case as returned by
* ->ntfs_readdir() but that a lookup for any other case (or for the short file
* name) will not find anything in dcache and will enter ->ntfs_lookup()
* instead, where we search the directory for a fully matching file name
* (including case) and if that is not found, we search for a file name that
* matches with different case and if that has non-POSIX semantics we return
* that. We actually do only one search (case sensitive) and keep tabs on
* whether we have found a case insensitive match in the process.
*
* To simplify matters for us, we do not treat the short vs long filenames as
* two hard links but instead if the lookup matches a short filename, we
* return the dentry for the corresponding long filename instead.
*
* There are three cases we need to distinguish here:
*
* 1) @dent perfectly matches (i.e. including case) a directory entry with a
* file name in the WIN32 or POSIX namespaces. In this case
* ntfs_lookup_inode_by_name() will return with name set to NULL and we
* just d_splice_alias() @dent.
* 2) @dent matches (not including case) a directory entry with a file name in
* the WIN32 namespace. In this case ntfs_lookup_inode_by_name() will return
* with name set to point to a kmalloc()ed ntfs_name structure containing
* the properly cased little endian Unicode name. We convert the name to the
* current NLS code page, search if a dentry with this name already exists
* and if so return that instead of @dent. At this point things are
* complicated by the possibility of 'disconnected' dentries due to NFS
* which we deal with appropriately (see the code comments). The VFS will
* then destroy the old @dent and use the one we returned. If a dentry is
* not found, we allocate a new one, d_splice_alias() it, and return it as
* above.
* 3) @dent matches either perfectly or not (i.e. we don't care about case) a
* directory entry with a file name in the DOS namespace. In this case
* ntfs_lookup_inode_by_name() will return with name set to point to a
* kmalloc()ed ntfs_name structure containing the mft reference (cpu endian)
* of the inode. We use the mft reference to read the inode and to find the
* file name in the WIN32 namespace corresponding to the matched short file
* name. We then convert the name to the current NLS code page, and proceed
* searching for a dentry with this name, etc, as in case 2), above.
*
* Locking: Caller must hold i_mutex on the directory.
*/
static struct dentry *ntfs_lookup(struct inode *dir_ino, struct dentry *dent,
unsigned int flags)
{
ntfs_volume *vol = NTFS_SB(dir_ino->i_sb);
struct inode *dent_inode;
ntfschar *uname;
ntfs_name *name = NULL;
MFT_REF mref;
unsigned long dent_ino;
int uname_len;
ntfs_debug("Looking up %pd in directory inode 0x%lx.",
dent, dir_ino->i_ino);
/* Convert the name of the dentry to Unicode. */
uname_len = ntfs_nlstoucs(vol, dent->d_name.name, dent->d_name.len,
&uname);
if (uname_len < 0) {
if (uname_len != -ENAMETOOLONG)
ntfs_error(vol->sb, "Failed to convert name to "
"Unicode.");
return ERR_PTR(uname_len);
}
mref = ntfs_lookup_inode_by_name(NTFS_I(dir_ino), uname, uname_len,
&name);
kmem_cache_free(ntfs_name_cache, uname);
if (!IS_ERR_MREF(mref)) {
dent_ino = MREF(mref);
ntfs_debug("Found inode 0x%lx. Calling ntfs_iget.", dent_ino);
dent_inode = ntfs_iget(vol->sb, dent_ino);
if (!IS_ERR(dent_inode)) {
/* Consistency check. */
if (is_bad_inode(dent_inode) || MSEQNO(mref) ==
NTFS_I(dent_inode)->seq_no ||
dent_ino == FILE_MFT) {
/* Perfect WIN32/POSIX match. -- Case 1. */
if (!name) {
ntfs_debug("Done. (Case 1.)");
return d_splice_alias(dent_inode, dent);
}
/*
* We are too indented. Handle imperfect
* matches and short file names further below.
*/
goto handle_name;
}
ntfs_error(vol->sb, "Found stale reference to inode "
"0x%lx (reference sequence number = "
"0x%x, inode sequence number = 0x%x), "
"returning -EIO. Run chkdsk.",
dent_ino, MSEQNO(mref),
NTFS_I(dent_inode)->seq_no);
iput(dent_inode);
dent_inode = ERR_PTR(-EIO);
} else
ntfs_error(vol->sb, "ntfs_iget(0x%lx) failed with "
"error code %li.", dent_ino,
PTR_ERR(dent_inode));
kfree(name);
/* Return the error code. */
return ERR_CAST(dent_inode);
}
/* It is guaranteed that @name is no longer allocated at this point. */
if (MREF_ERR(mref) == -ENOENT) {
ntfs_debug("Entry was not found, adding negative dentry.");
/* The dcache will handle negative entries. */
d_add(dent, NULL);
ntfs_debug("Done.");
return NULL;
}
ntfs_error(vol->sb, "ntfs_lookup_ino_by_name() failed with error "
"code %i.", -MREF_ERR(mref));
return ERR_PTR(MREF_ERR(mref));
// TODO: Consider moving this lot to a separate function! (AIA)
handle_name:
{
MFT_RECORD *m;
ntfs_attr_search_ctx *ctx;
ntfs_inode *ni = NTFS_I(dent_inode);
int err;
struct qstr nls_name;
nls_name.name = NULL;
if (name->type != FILE_NAME_DOS) { /* Case 2. */
ntfs_debug("Case 2.");
nls_name.len = (unsigned)ntfs_ucstonls(vol,
(ntfschar*)&name->name, name->len,
(unsigned char**)&nls_name.name, 0);
kfree(name);
} else /* if (name->type == FILE_NAME_DOS) */ { /* Case 3. */
FILE_NAME_ATTR *fn;
ntfs_debug("Case 3.");
kfree(name);
/* Find the WIN32 name corresponding to the matched DOS name. */
ni = NTFS_I(dent_inode);
m = map_mft_record(ni);
if (IS_ERR(m)) {
err = PTR_ERR(m);
m = NULL;
ctx = NULL;
goto err_out;
}
ctx = ntfs_attr_get_search_ctx(ni, m);
if (unlikely(!ctx)) {
err = -ENOMEM;
goto err_out;
}
do {
ATTR_RECORD *a;
u32 val_len;
err = ntfs_attr_lookup(AT_FILE_NAME, NULL, 0, 0, 0,
NULL, 0, ctx);
if (unlikely(err)) {
ntfs_error(vol->sb, "Inode corrupt: No WIN32 "
"namespace counterpart to DOS "
"file name. Run chkdsk.");
if (err == -ENOENT)
err = -EIO;
goto err_out;
}
/* Consistency checks. */
a = ctx->attr;
if (a->non_resident || a->flags)
goto eio_err_out;
val_len = le32_to_cpu(a->data.resident.value_length);
if (le16_to_cpu(a->data.resident.value_offset) +
val_len > le32_to_cpu(a->length))
goto eio_err_out;
fn = (FILE_NAME_ATTR*)((u8*)ctx->attr + le16_to_cpu(
ctx->attr->data.resident.value_offset));
if ((u32)(fn->file_name_length * sizeof(ntfschar) +
sizeof(FILE_NAME_ATTR)) > val_len)
goto eio_err_out;
} while (fn->file_name_type != FILE_NAME_WIN32);
/* Convert the found WIN32 name to current NLS code page. */
nls_name.len = (unsigned)ntfs_ucstonls(vol,
(ntfschar*)&fn->file_name, fn->file_name_length,
(unsigned char**)&nls_name.name, 0);
ntfs_attr_put_search_ctx(ctx);
unmap_mft_record(ni);
}
m = NULL;
ctx = NULL;
/* Check if a conversion error occurred. */
if ((signed)nls_name.len < 0) {
err = (signed)nls_name.len;
goto err_out;
}
nls_name.hash = full_name_hash(dent, nls_name.name, nls_name.len);
dent = d_add_ci(dent, dent_inode, &nls_name);
kfree(nls_name.name);
return dent;
eio_err_out:
ntfs_error(vol->sb, "Illegal file name attribute. Run chkdsk.");
err = -EIO;
err_out:
if (ctx)
ntfs_attr_put_search_ctx(ctx);
if (m)
unmap_mft_record(ni);
iput(dent_inode);
ntfs_error(vol->sb, "Failed, returning error code %i.", err);
return ERR_PTR(err);
}
}
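/*
 * Illustrative note, not part of the original source; the file names below are
 * hypothetical.  Suppose a directory contains a file whose WIN32 name is
 * "LongFileName.txt" and whose DOS name is "LONGFI~1.TXT".  A lookup of
 * "LongFileName.txt" that matches exactly is Case 1 above and goes straight to
 * d_splice_alias().  A case-insensitive match such as "longfilename.txt" is
 * Case 2: the name that was actually found is converted back to the NLS
 * character set and the dentry is added with d_add_ci().  A lookup of the DOS
 * name "LONGFI~1.TXT" is Case 3: the mft record is searched for the
 * FILE_NAME_WIN32 counterpart of the matched DOS name and that name is handed
 * to d_add_ci() instead, so the dcache only ever caches the WIN32 name.
 */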
/*
* Inode operations for directories.
*/
const struct inode_operations ntfs_dir_inode_ops = {
.lookup = ntfs_lookup, /* VFS: Lookup directory. */
};
/**
* ntfs_get_parent - find the dentry of the parent of a given directory dentry
* @child_dent: dentry of the directory whose parent directory to find
*
* Find the dentry for the parent directory of the directory specified by the
* dentry @child_dent. This function is called from
* fs/exportfs/expfs.c::find_exported_dentry() which in turn is called from the
* default ->decode_fh() which is export_decode_fh() in the same file.
*
* The code is based on the ext3 ->get_parent() implementation found in
* fs/ext3/namei.c::ext3_get_parent().
*
* Note: ntfs_get_parent() is called with @d_inode(child_dent)->i_mutex down.
*
* Return the dentry of the parent directory on success or the error code on
* error (IS_ERR() is true).
*/
static struct dentry *ntfs_get_parent(struct dentry *child_dent)
{
struct inode *vi = d_inode(child_dent);
ntfs_inode *ni = NTFS_I(vi);
MFT_RECORD *mrec;
ntfs_attr_search_ctx *ctx;
ATTR_RECORD *attr;
FILE_NAME_ATTR *fn;
unsigned long parent_ino;
int err;
ntfs_debug("Entering for inode 0x%lx.", vi->i_ino);
/* Get the mft record of the inode belonging to the child dentry. */
mrec = map_mft_record(ni);
if (IS_ERR(mrec))
return ERR_CAST(mrec);
/* Find the first file name attribute in the mft record. */
ctx = ntfs_attr_get_search_ctx(ni, mrec);
if (unlikely(!ctx)) {
unmap_mft_record(ni);
return ERR_PTR(-ENOMEM);
}
try_next:
err = ntfs_attr_lookup(AT_FILE_NAME, NULL, 0, CASE_SENSITIVE, 0, NULL,
0, ctx);
if (unlikely(err)) {
ntfs_attr_put_search_ctx(ctx);
unmap_mft_record(ni);
if (err == -ENOENT)
ntfs_error(vi->i_sb, "Inode 0x%lx does not have a "
"file name attribute. Run chkdsk.",
vi->i_ino);
return ERR_PTR(err);
}
attr = ctx->attr;
if (unlikely(attr->non_resident))
goto try_next;
fn = (FILE_NAME_ATTR *)((u8 *)attr +
le16_to_cpu(attr->data.resident.value_offset));
if (unlikely((u8 *)fn + le32_to_cpu(attr->data.resident.value_length) >
(u8*)attr + le32_to_cpu(attr->length)))
goto try_next;
/* Get the inode number of the parent directory. */
parent_ino = MREF_LE(fn->parent_directory);
/* Release the search context and the mft record of the child. */
ntfs_attr_put_search_ctx(ctx);
unmap_mft_record(ni);
return d_obtain_alias(ntfs_iget(vi->i_sb, parent_ino));
}
static struct inode *ntfs_nfs_get_inode(struct super_block *sb,
u64 ino, u32 generation)
{
struct inode *inode;
inode = ntfs_iget(sb, ino);
if (!IS_ERR(inode)) {
if (is_bad_inode(inode) || inode->i_generation != generation) {
iput(inode);
inode = ERR_PTR(-ESTALE);
}
}
return inode;
}
static struct dentry *ntfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
int fh_len, int fh_type)
{
return generic_fh_to_dentry(sb, fid, fh_len, fh_type,
ntfs_nfs_get_inode);
}
static struct dentry *ntfs_fh_to_parent(struct super_block *sb, struct fid *fid,
int fh_len, int fh_type)
{
return generic_fh_to_parent(sb, fid, fh_len, fh_type,
ntfs_nfs_get_inode);
}
/*
* Export operations allowing NFS exporting of mounted NTFS partitions.
*
 * We use the default ->encode_fh() for now.  Note that it only stores 32 bits
 * of the inode number, which is an unsigned long and hence usually 64 bits on
 * 64-bit architectures, so it would all fail horribly on huge volumes.  I
 * guess we need to define our own encode and decode fh functions that store
 * 64-bit inode numbers at some point but for now we will ignore the
 * problem...
*
* We also use the default ->get_name() helper (used by ->decode_fh() via
* fs/exportfs/expfs.c::find_exported_dentry()) as that is completely fs
* independent.
*
* The default ->get_parent() just returns -EACCES so we have to provide our
* own and the default ->get_dentry() is incompatible with NTFS due to not
* allowing the inode number 0 which is used in NTFS for the system file $MFT
* and due to using iget() whereas NTFS needs ntfs_iget().
*/
const struct export_operations ntfs_export_ops = {
.encode_fh = generic_encode_ino32_fh,
.get_parent = ntfs_get_parent, /* Find the parent of a given
directory. */
.fh_to_dentry = ntfs_fh_to_dentry,
.fh_to_parent = ntfs_fh_to_parent,
};


@ -1,150 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* ntfs.h - Defines for NTFS Linux kernel driver.
*
* Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc.
* Copyright (C) 2002 Richard Russon
*/
#ifndef _LINUX_NTFS_H
#define _LINUX_NTFS_H
#include <linux/stddef.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/compiler.h>
#include <linux/fs.h>
#include <linux/nls.h>
#include <linux/smp.h>
#include <linux/pagemap.h>
#include "types.h"
#include "volume.h"
#include "layout.h"
typedef enum {
NTFS_BLOCK_SIZE = 512,
NTFS_BLOCK_SIZE_BITS = 9,
NTFS_SB_MAGIC = 0x5346544e, /* 'NTFS' */
NTFS_MAX_NAME_LEN = 255,
NTFS_MAX_ATTR_NAME_LEN = 255,
NTFS_MAX_CLUSTER_SIZE = 64 * 1024, /* 64kiB */
NTFS_MAX_PAGES_PER_CLUSTER = NTFS_MAX_CLUSTER_SIZE / PAGE_SIZE,
} NTFS_CONSTANTS;
/* Global variables. */
/* Slab caches (from super.c). */
extern struct kmem_cache *ntfs_name_cache;
extern struct kmem_cache *ntfs_inode_cache;
extern struct kmem_cache *ntfs_big_inode_cache;
extern struct kmem_cache *ntfs_attr_ctx_cache;
extern struct kmem_cache *ntfs_index_ctx_cache;
/* The various operations structs defined throughout the driver files. */
extern const struct address_space_operations ntfs_normal_aops;
extern const struct address_space_operations ntfs_compressed_aops;
extern const struct address_space_operations ntfs_mst_aops;
extern const struct file_operations ntfs_file_ops;
extern const struct inode_operations ntfs_file_inode_ops;
extern const struct file_operations ntfs_dir_ops;
extern const struct inode_operations ntfs_dir_inode_ops;
extern const struct file_operations ntfs_empty_file_ops;
extern const struct inode_operations ntfs_empty_inode_ops;
extern const struct export_operations ntfs_export_ops;
/**
* NTFS_SB - return the ntfs volume given a vfs super block
* @sb: VFS super block
*
* NTFS_SB() returns the ntfs volume associated with the VFS super block @sb.
*/
static inline ntfs_volume *NTFS_SB(struct super_block *sb)
{
return sb->s_fs_info;
}
/* Declarations of functions and global variables. */
/* From fs/ntfs/compress.c */
extern int ntfs_read_compressed_block(struct page *page);
extern int allocate_compression_buffers(void);
extern void free_compression_buffers(void);
/* From fs/ntfs/super.c */
#define default_upcase_len 0x10000
extern struct mutex ntfs_lock;
typedef struct {
int val;
char *str;
} option_t;
extern const option_t on_errors_arr[];
/* From fs/ntfs/mst.c */
extern int post_read_mst_fixup(NTFS_RECORD *b, const u32 size);
extern int pre_write_mst_fixup(NTFS_RECORD *b, const u32 size);
extern void post_write_mst_fixup(NTFS_RECORD *b);
/* From fs/ntfs/unistr.c */
extern bool ntfs_are_names_equal(const ntfschar *s1, size_t s1_len,
const ntfschar *s2, size_t s2_len,
const IGNORE_CASE_BOOL ic,
const ntfschar *upcase, const u32 upcase_size);
extern int ntfs_collate_names(const ntfschar *name1, const u32 name1_len,
const ntfschar *name2, const u32 name2_len,
const int err_val, const IGNORE_CASE_BOOL ic,
const ntfschar *upcase, const u32 upcase_len);
extern int ntfs_ucsncmp(const ntfschar *s1, const ntfschar *s2, size_t n);
extern int ntfs_ucsncasecmp(const ntfschar *s1, const ntfschar *s2, size_t n,
const ntfschar *upcase, const u32 upcase_size);
extern void ntfs_upcase_name(ntfschar *name, u32 name_len,
const ntfschar *upcase, const u32 upcase_len);
extern void ntfs_file_upcase_value(FILE_NAME_ATTR *file_name_attr,
const ntfschar *upcase, const u32 upcase_len);
extern int ntfs_file_compare_values(FILE_NAME_ATTR *file_name_attr1,
FILE_NAME_ATTR *file_name_attr2,
const int err_val, const IGNORE_CASE_BOOL ic,
const ntfschar *upcase, const u32 upcase_len);
extern int ntfs_nlstoucs(const ntfs_volume *vol, const char *ins,
const int ins_len, ntfschar **outs);
extern int ntfs_ucstonls(const ntfs_volume *vol, const ntfschar *ins,
const int ins_len, unsigned char **outs, int outs_len);
/* From fs/ntfs/upcase.c */
extern ntfschar *generate_default_upcase(void);
static inline int ntfs_ffs(int x)
{
int r = 1;
if (!x)
return 0;
if (!(x & 0xffff)) {
x >>= 16;
r += 16;
}
if (!(x & 0xff)) {
x >>= 8;
r += 8;
}
if (!(x & 0xf)) {
x >>= 4;
r += 4;
}
if (!(x & 3)) {
x >>= 2;
r += 2;
}
if (!(x & 1)) {
x >>= 1;
r += 1;
}
return r;
}
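/*
 * Illustrative example, not part of the original header: ntfs_ffs() mirrors
 * the semantics of the generic ffs() helper, returning the 1-based position
 * of the least significant set bit, e.g.:
 *
 *	ntfs_ffs(0)    == 0
 *	ntfs_ffs(1)    == 1
 *	ntfs_ffs(0x10) == 5
 *	ntfs_ffs(0x18) == 4
 *
 * Each branch above strips 16, 8, 4, 2 and finally 1 trailing zero bits while
 * accumulating the bit position in r.
 */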
#endif /* _LINUX_NTFS_H */


@ -1,103 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* quota.c - NTFS kernel quota ($Quota) handling. Part of the Linux-NTFS
* project.
*
* Copyright (c) 2004 Anton Altaparmakov
*/
#ifdef NTFS_RW
#include "index.h"
#include "quota.h"
#include "debug.h"
#include "ntfs.h"
/**
* ntfs_mark_quotas_out_of_date - mark the quotas out of date on an ntfs volume
* @vol: ntfs volume on which to mark the quotas out of date
*
* Mark the quotas out of date on the ntfs volume @vol and return 'true' on
* success and 'false' on error.
*/
bool ntfs_mark_quotas_out_of_date(ntfs_volume *vol)
{
ntfs_index_context *ictx;
QUOTA_CONTROL_ENTRY *qce;
const le32 qid = QUOTA_DEFAULTS_ID;
int err;
ntfs_debug("Entering.");
if (NVolQuotaOutOfDate(vol))
goto done;
if (!vol->quota_ino || !vol->quota_q_ino) {
ntfs_error(vol->sb, "Quota inodes are not open.");
return false;
}
inode_lock(vol->quota_q_ino);
ictx = ntfs_index_ctx_get(NTFS_I(vol->quota_q_ino));
if (!ictx) {
ntfs_error(vol->sb, "Failed to get index context.");
goto err_out;
}
err = ntfs_index_lookup(&qid, sizeof(qid), ictx);
if (err) {
if (err == -ENOENT)
ntfs_error(vol->sb, "Quota defaults entry is not "
"present.");
else
ntfs_error(vol->sb, "Lookup of quota defaults entry "
"failed.");
goto err_out;
}
if (ictx->data_len < offsetof(QUOTA_CONTROL_ENTRY, sid)) {
ntfs_error(vol->sb, "Quota defaults entry size is invalid. "
"Run chkdsk.");
goto err_out;
}
qce = (QUOTA_CONTROL_ENTRY*)ictx->data;
if (le32_to_cpu(qce->version) != QUOTA_VERSION) {
ntfs_error(vol->sb, "Quota defaults entry version 0x%x is not "
"supported.", le32_to_cpu(qce->version));
goto err_out;
}
ntfs_debug("Quota defaults flags = 0x%x.", le32_to_cpu(qce->flags));
/* If quotas are already marked out of date, no need to do anything. */
if (qce->flags & QUOTA_FLAG_OUT_OF_DATE)
goto set_done;
/*
* If quota tracking is neither requested, nor enabled and there are no
* pending deletes, no need to mark the quotas out of date.
*/
if (!(qce->flags & (QUOTA_FLAG_TRACKING_ENABLED |
QUOTA_FLAG_TRACKING_REQUESTED |
QUOTA_FLAG_PENDING_DELETES)))
goto set_done;
/*
* Set the QUOTA_FLAG_OUT_OF_DATE bit thus marking quotas out of date.
* This is verified on WinXP to be sufficient to cause windows to
* rescan the volume on boot and update all quota entries.
*/
qce->flags |= QUOTA_FLAG_OUT_OF_DATE;
/* Ensure the modified flags are written to disk. */
ntfs_index_entry_flush_dcache_page(ictx);
ntfs_index_entry_mark_dirty(ictx);
set_done:
ntfs_index_ctx_put(ictx);
inode_unlock(vol->quota_q_ino);
/*
* We set the flag so we do not try to mark the quotas out of date
* again on remount.
*/
NVolSetQuotaOutOfDate(vol);
done:
ntfs_debug("Done.");
return true;
err_out:
if (ictx)
ntfs_index_ctx_put(ictx);
inode_unlock(vol->quota_q_ino);
return false;
}
#endif /* NTFS_RW */


@ -1,21 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* quota.h - Defines for NTFS kernel quota ($Quota) handling. Part of the
* Linux-NTFS project.
*
* Copyright (c) 2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_QUOTA_H
#define _LINUX_NTFS_QUOTA_H
#ifdef NTFS_RW
#include "types.h"
#include "volume.h"
extern bool ntfs_mark_quotas_out_of_date(ntfs_volume *vol);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_QUOTA_H */

File diff suppressed because it is too large


@ -1,88 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* runlist.h - Defines for runlist handling in NTFS Linux kernel driver.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2005 Anton Altaparmakov
* Copyright (c) 2002 Richard Russon
*/
#ifndef _LINUX_NTFS_RUNLIST_H
#define _LINUX_NTFS_RUNLIST_H
#include "types.h"
#include "layout.h"
#include "volume.h"
/**
* runlist_element - in memory vcn to lcn mapping array element
* @vcn: starting vcn of the current array element
* @lcn: starting lcn of the current array element
* @length: length in clusters of the current array element
*
* The last vcn (in fact the last vcn + 1) is reached when length == 0.
*
 * When lcn == -1 this means that the @length vcns starting at @vcn are not
* physically allocated (i.e. this is a hole / data is sparse).
*/
typedef struct { /* In memory vcn to lcn mapping structure element. */
VCN vcn; /* vcn = Starting virtual cluster number. */
LCN lcn; /* lcn = Starting logical cluster number. */
s64 length; /* Run length in clusters. */
} runlist_element;
/**
* runlist - in memory vcn to lcn mapping array including a read/write lock
* @rl: pointer to an array of runlist elements
 * @lock:	read/write semaphore for serializing access to @rl
*
*/
typedef struct {
runlist_element *rl;
struct rw_semaphore lock;
} runlist;
static inline void ntfs_init_runlist(runlist *rl)
{
rl->rl = NULL;
init_rwsem(&rl->lock);
}
typedef enum {
LCN_HOLE = -1, /* Keep this as highest value or die! */
LCN_RL_NOT_MAPPED = -2,
LCN_ENOENT = -3,
LCN_ENOMEM = -4,
LCN_EIO = -5,
} LCN_SPECIAL_VALUES;
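/*
 * Illustrative example, not part of the original header: a 30 cluster file
 * whose first 10 clusters are allocated at LCN 100 and whose remaining 20
 * clusters are sparse would be described by the runlist
 *
 *	vcn = 0,  lcn = 100,        length = 10
 *	vcn = 10, lcn = LCN_HOLE,   length = 20
 *	vcn = 30, lcn = LCN_ENOENT, length = 0	(terminator, once fully mapped)
 *
 * A lookup via ntfs_rl_vcn_to_lcn() then resolves vcn 3 to lcn 103 and vcn 15
 * to LCN_HOLE, while vcn 30 and beyond yield LCN_ENOENT.
 */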
extern runlist_element *ntfs_runlists_merge(runlist_element *drl,
runlist_element *srl);
extern runlist_element *ntfs_mapping_pairs_decompress(const ntfs_volume *vol,
const ATTR_RECORD *attr, runlist_element *old_rl);
extern LCN ntfs_rl_vcn_to_lcn(const runlist_element *rl, const VCN vcn);
#ifdef NTFS_RW
extern runlist_element *ntfs_rl_find_vcn_nolock(runlist_element *rl,
const VCN vcn);
extern int ntfs_get_size_for_mapping_pairs(const ntfs_volume *vol,
const runlist_element *rl, const VCN first_vcn,
const VCN last_vcn);
extern int ntfs_mapping_pairs_build(const ntfs_volume *vol, s8 *dst,
const int dst_len, const runlist_element *rl,
const VCN first_vcn, const VCN last_vcn, VCN *const stop_vcn);
extern int ntfs_rl_truncate_nolock(const ntfs_volume *vol,
runlist *const runlist, const s64 new_length);
int ntfs_rl_punch_nolock(const ntfs_volume *vol, runlist *const runlist,
const VCN start, const s64 length);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_RUNLIST_H */

File diff suppressed because it is too large


@ -1,58 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* sysctl.c - Code for sysctl handling in NTFS Linux kernel driver. Part of
* the Linux-NTFS project. Adapted from the old NTFS driver,
* Copyright (C) 1997 Martin von Löwis, Régis Duchesne
*
* Copyright (c) 2002-2005 Anton Altaparmakov
*/
#ifdef DEBUG
#include <linux/module.h>
#ifdef CONFIG_SYSCTL
#include <linux/proc_fs.h>
#include <linux/sysctl.h>
#include "sysctl.h"
#include "debug.h"
/* Definition of the ntfs sysctl. */
static struct ctl_table ntfs_sysctls[] = {
{
.procname = "ntfs-debug",
.data = &debug_msgs, /* Data pointer and size. */
.maxlen = sizeof(debug_msgs),
.mode = 0644, /* Mode, proc handler. */
.proc_handler = proc_dointvec
},
};
/* Storage for the sysctls header. */
static struct ctl_table_header *sysctls_root_table;
/**
* ntfs_sysctl - add or remove the debug sysctl
* @add: add (1) or remove (0) the sysctl
*
* Add or remove the debug sysctl. Return 0 on success or -errno on error.
*/
int ntfs_sysctl(int add)
{
if (add) {
BUG_ON(sysctls_root_table);
sysctls_root_table = register_sysctl("fs", ntfs_sysctls);
if (!sysctls_root_table)
return -ENOMEM;
} else {
BUG_ON(!sysctls_root_table);
unregister_sysctl_table(sysctls_root_table);
sysctls_root_table = NULL;
}
return 0;
}
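/*
 * Usage note, not part of the original source: when the driver is built with
 * DEBUG and CONFIG_SYSCTL, ntfs_sysctl(1) registers the table above under
 * "fs", so debug messages can be toggled at runtime via
 *
 *	echo 1 > /proc/sys/fs/ntfs-debug	(enable ntfs_debug() output)
 *	echo 0 > /proc/sys/fs/ntfs-debug	(disable it again)
 *
 * The value is simply the debug_msgs integer handled by proc_dointvec above.
 */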
#endif /* CONFIG_SYSCTL */
#endif /* DEBUG */


@ -1,27 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* sysctl.h - Defines for sysctl handling in NTFS Linux kernel driver. Part of
* the Linux-NTFS project. Adapted from the old NTFS driver,
* Copyright (C) 1997 Martin von Löwis, Régis Duchesne
*
* Copyright (c) 2002-2004 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_SYSCTL_H
#define _LINUX_NTFS_SYSCTL_H
#if defined(DEBUG) && defined(CONFIG_SYSCTL)
extern int ntfs_sysctl(int add);
#else
/* Just return success. */
static inline int ntfs_sysctl(int add)
{
return 0;
}
#endif /* DEBUG && CONFIG_SYSCTL */
#endif /* _LINUX_NTFS_SYSCTL_H */


@ -1,89 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* time.h - NTFS time conversion functions. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2005 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_TIME_H
#define _LINUX_NTFS_TIME_H
#include <linux/time.h>		/* For struct timespec64. */
#include <asm/div64.h> /* For do_div(). */
#include "endian.h"
#define NTFS_TIME_OFFSET ((s64)(369 * 365 + 89) * 24 * 3600 * 10000000)
/**
* utc2ntfs - convert Linux UTC time to NTFS time
* @ts: Linux UTC time to convert to NTFS time
*
* Convert the Linux UTC time @ts to its corresponding NTFS time and return
* that in little endian format.
*
* Linux stores time in a struct timespec64 consisting of a time64_t tv_sec
* and a long tv_nsec where tv_sec is the number of 1-second intervals since
* 1st January 1970, 00:00:00 UTC and tv_nsec is the number of 1-nano-second
* intervals since the value of tv_sec.
*
* NTFS uses Microsoft's standard time format which is stored in a s64 and is
* measured as the number of 100-nano-second intervals since 1st January 1601,
* 00:00:00 UTC.
*/
static inline sle64 utc2ntfs(const struct timespec64 ts)
{
/*
* Convert the seconds to 100ns intervals, add the nano-seconds
* converted to 100ns intervals, and then add the NTFS time offset.
*/
return cpu_to_sle64((s64)ts.tv_sec * 10000000 + ts.tv_nsec / 100 +
NTFS_TIME_OFFSET);
}
/**
* get_current_ntfs_time - get the current time in little endian NTFS format
*
* Get the current time from the Linux kernel, convert it to its corresponding
* NTFS time and return that in little endian format.
*/
static inline sle64 get_current_ntfs_time(void)
{
struct timespec64 ts;
ktime_get_coarse_real_ts64(&ts);
return utc2ntfs(ts);
}
/**
* ntfs2utc - convert NTFS time to Linux time
* @time: NTFS time (little endian) to convert to Linux UTC
*
* Convert the little endian NTFS time @time to its corresponding Linux UTC
* time and return that in cpu format.
*
* Linux stores time in a struct timespec64 consisting of a time64_t tv_sec
* and a long tv_nsec where tv_sec is the number of 1-second intervals since
* 1st January 1970, 00:00:00 UTC and tv_nsec is the number of 1-nano-second
* intervals since the value of tv_sec.
*
* NTFS uses Microsoft's standard time format which is stored in a s64 and is
* measured as the number of 100 nano-second intervals since 1st January 1601,
* 00:00:00 UTC.
*/
static inline struct timespec64 ntfs2utc(const sle64 time)
{
struct timespec64 ts;
/* Subtract the NTFS time offset. */
u64 t = (u64)(sle64_to_cpu(time) - NTFS_TIME_OFFSET);
/*
* Convert the time to 1-second intervals and the remainder to
* 1-nano-second intervals.
*/
ts.tv_nsec = do_div(t, 10000000) * 100;
ts.tv_sec = t;
return ts;
}
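/*
 * Worked example, not part of the original header: the 369 years from 1601 to
 * 1970 contain 89 leap days, so
 *
 *	NTFS_TIME_OFFSET = (369 * 365 + 89) * 24 * 3600 * 10000000
 *	                 = 134774 days * 86400 s/day * 10^7 (100ns units)
 *	                 = 116444736000000000
 *
 * Hence utc2ntfs() of the Unix epoch { .tv_sec = 0, .tv_nsec = 0 } yields
 * cpu_to_sle64(116444736000000000), and ntfs2utc() of that value maps back to
 * 1st January 1970, 00:00:00 UTC.
 */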
#endif /* _LINUX_NTFS_TIME_H */


@ -1,55 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* types.h - Defines for NTFS Linux kernel driver specific types.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2005 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_TYPES_H
#define _LINUX_NTFS_TYPES_H
#include <linux/types.h>
typedef __le16 le16;
typedef __le32 le32;
typedef __le64 le64;
typedef __u16 __bitwise sle16;
typedef __u32 __bitwise sle32;
typedef __u64 __bitwise sle64;
/* 2-byte Unicode character type. */
typedef le16 ntfschar;
#define UCHAR_T_SIZE_BITS 1
/*
* Clusters are signed 64-bit values on NTFS volumes. We define two types, LCN
* and VCN, to allow for type checking and better code readability.
*/
typedef s64 VCN;
typedef sle64 leVCN;
typedef s64 LCN;
typedef sle64 leLCN;
/*
* The NTFS journal $LogFile uses log sequence numbers which are signed 64-bit
* values. We define our own type LSN, to allow for type checking and better
* code readability.
*/
typedef s64 LSN;
typedef sle64 leLSN;
/*
* The NTFS transaction log $UsnJrnl uses usn which are signed 64-bit values.
* We define our own type USN, to allow for type checking and better code
* readability.
*/
typedef s64 USN;
typedef sle64 leUSN;
typedef enum {
CASE_SENSITIVE = 0,
IGNORE_CASE = 1,
} IGNORE_CASE_BOOL;
#endif /* _LINUX_NTFS_TYPES_H */


@ -1,384 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* unistr.c - NTFS Unicode string handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2006 Anton Altaparmakov
*/
#include <linux/slab.h>
#include "types.h"
#include "debug.h"
#include "ntfs.h"
/*
* IMPORTANT
* =========
*
* All these routines assume that the Unicode characters are in little endian
* encoding inside the strings!!!
*/
/*
* This is used by the name collation functions to quickly determine what
* characters are (in)valid.
*/
static const u8 legal_ansi_char_array[0x40] = {
0x00, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10,
0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10,
0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10,
0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10,
0x17, 0x07, 0x18, 0x17, 0x17, 0x17, 0x17, 0x17,
0x17, 0x17, 0x18, 0x16, 0x16, 0x17, 0x07, 0x00,
0x17, 0x17, 0x17, 0x17, 0x17, 0x17, 0x17, 0x17,
0x17, 0x17, 0x04, 0x16, 0x18, 0x16, 0x18, 0x18,
};
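/*
 * Illustrative note, not part of the original source: bit 3 (value 8) in the
 * table above marks the characters that ntfs_collate_names() rejects.  For
 * example the entry for '"' (0x22) is 0x18 and 0x18 & 8 != 0, so a name
 * containing '"' makes ntfs_collate_names() return its @err_val, whereas the
 * entry for ':' (0x3a) is 0x04, so ':' is not rejected by the collation code.
 */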
/**
* ntfs_are_names_equal - compare two Unicode names for equality
* @s1: name to compare to @s2
* @s1_len: length in Unicode characters of @s1
* @s2: name to compare to @s1
* @s2_len: length in Unicode characters of @s2
* @ic: ignore case bool
* @upcase: upcase table (only if @ic == IGNORE_CASE)
* @upcase_size: length in Unicode characters of @upcase (if present)
*
* Compare the names @s1 and @s2 and return 'true' (1) if the names are
* identical, or 'false' (0) if they are not identical. If @ic is IGNORE_CASE,
 * the @upcase table is used to perform a case insensitive comparison.
*/
bool ntfs_are_names_equal(const ntfschar *s1, size_t s1_len,
const ntfschar *s2, size_t s2_len, const IGNORE_CASE_BOOL ic,
const ntfschar *upcase, const u32 upcase_size)
{
if (s1_len != s2_len)
return false;
if (ic == CASE_SENSITIVE)
return !ntfs_ucsncmp(s1, s2, s1_len);
return !ntfs_ucsncasecmp(s1, s2, s1_len, upcase, upcase_size);
}
/**
* ntfs_collate_names - collate two Unicode names
* @name1: first Unicode name to compare
* @name2: second Unicode name to compare
* @err_val: if @name1 contains an invalid character return this value
* @ic: either CASE_SENSITIVE or IGNORE_CASE
* @upcase: upcase table (ignored if @ic is CASE_SENSITIVE)
* @upcase_len: upcase table size (ignored if @ic is CASE_SENSITIVE)
*
* ntfs_collate_names collates two Unicode names and returns:
*
* -1 if the first name collates before the second one,
* 0 if the names match,
* 1 if the second name collates before the first one, or
* @err_val if an invalid character is found in @name1 during the comparison.
*
* The following characters are considered invalid: '"', '*', '<', '>' and '?'.
*/
int ntfs_collate_names(const ntfschar *name1, const u32 name1_len,
const ntfschar *name2, const u32 name2_len,
const int err_val, const IGNORE_CASE_BOOL ic,
const ntfschar *upcase, const u32 upcase_len)
{
u32 cnt, min_len;
u16 c1, c2;
min_len = name1_len;
if (name1_len > name2_len)
min_len = name2_len;
for (cnt = 0; cnt < min_len; ++cnt) {
c1 = le16_to_cpu(*name1++);
c2 = le16_to_cpu(*name2++);
if (ic) {
if (c1 < upcase_len)
c1 = le16_to_cpu(upcase[c1]);
if (c2 < upcase_len)
c2 = le16_to_cpu(upcase[c2]);
}
if (c1 < 64 && legal_ansi_char_array[c1] & 8)
return err_val;
if (c1 < c2)
return -1;
if (c1 > c2)
return 1;
}
if (name1_len < name2_len)
return -1;
if (name1_len == name2_len)
return 0;
/* name1_len > name2_len */
c1 = le16_to_cpu(*name1);
if (c1 < 64 && legal_ansi_char_array[c1] & 8)
return err_val;
return 1;
}
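/*
 * Illustrative example, not part of the original source: with @ic ==
 * IGNORE_CASE and the default upcase table, collating the single character
 * names "a" and "B" upcases 'a' (0x61) to 'A' (0x41), compares it against
 * 'B' (0x42) and returns -1, i.e. "a" collates before "B".  A @name1 whose
 * first character is '*' makes the function return @err_val instead, because
 * the legal_ansi_char_array entry for '*' has bit 3 set.
 */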
/**
* ntfs_ucsncmp - compare two little endian Unicode strings
* @s1: first string
* @s2: second string
* @n: maximum unicode characters to compare
*
 * Compare the first @n characters of the Unicode strings @s1 and @s2.
 * The strings are in little endian format and the appropriate le16_to_cpu()
 * conversion is performed on non-little endian machines.
*
* The function returns an integer less than, equal to, or greater than zero
* if @s1 (or the first @n Unicode characters thereof) is found, respectively,
* to be less than, to match, or be greater than @s2.
*/
int ntfs_ucsncmp(const ntfschar *s1, const ntfschar *s2, size_t n)
{
u16 c1, c2;
size_t i;
for (i = 0; i < n; ++i) {
c1 = le16_to_cpu(s1[i]);
c2 = le16_to_cpu(s2[i]);
if (c1 < c2)
return -1;
if (c1 > c2)
return 1;
if (!c1)
break;
}
return 0;
}
/**
* ntfs_ucsncasecmp - compare two little endian Unicode strings, ignoring case
* @s1: first string
* @s2: second string
* @n: maximum unicode characters to compare
* @upcase: upcase table
* @upcase_size: upcase table size in Unicode characters
*
 * Compare the first @n characters of the Unicode strings @s1 and @s2,
 * ignoring case.  The strings are in little endian format and the appropriate
 * le16_to_cpu() conversion is performed on non-little endian machines.
*
* Each character is uppercased using the @upcase table before the comparison.
*
* The function returns an integer less than, equal to, or greater than zero
* if @s1 (or the first @n Unicode characters thereof) is found, respectively,
* to be less than, to match, or be greater than @s2.
*/
int ntfs_ucsncasecmp(const ntfschar *s1, const ntfschar *s2, size_t n,
const ntfschar *upcase, const u32 upcase_size)
{
size_t i;
u16 c1, c2;
for (i = 0; i < n; ++i) {
if ((c1 = le16_to_cpu(s1[i])) < upcase_size)
c1 = le16_to_cpu(upcase[c1]);
if ((c2 = le16_to_cpu(s2[i])) < upcase_size)
c2 = le16_to_cpu(upcase[c2]);
if (c1 < c2)
return -1;
if (c1 > c2)
return 1;
if (!c1)
break;
}
return 0;
}
void ntfs_upcase_name(ntfschar *name, u32 name_len, const ntfschar *upcase,
const u32 upcase_len)
{
u32 i;
u16 u;
for (i = 0; i < name_len; i++)
if ((u = le16_to_cpu(name[i])) < upcase_len)
name[i] = upcase[u];
}
void ntfs_file_upcase_value(FILE_NAME_ATTR *file_name_attr,
const ntfschar *upcase, const u32 upcase_len)
{
ntfs_upcase_name((ntfschar*)&file_name_attr->file_name,
file_name_attr->file_name_length, upcase, upcase_len);
}
int ntfs_file_compare_values(FILE_NAME_ATTR *file_name_attr1,
FILE_NAME_ATTR *file_name_attr2,
const int err_val, const IGNORE_CASE_BOOL ic,
const ntfschar *upcase, const u32 upcase_len)
{
return ntfs_collate_names((ntfschar*)&file_name_attr1->file_name,
file_name_attr1->file_name_length,
(ntfschar*)&file_name_attr2->file_name,
file_name_attr2->file_name_length,
err_val, ic, upcase, upcase_len);
}
/**
* ntfs_nlstoucs - convert NLS string to little endian Unicode string
* @vol: ntfs volume which we are working with
* @ins: input NLS string buffer
* @ins_len: length of input string in bytes
* @outs: on return contains the allocated output Unicode string buffer
*
* Convert the input string @ins, which is in whatever format the loaded NLS
* map dictates, into a little endian, 2-byte Unicode string.
*
* This function allocates the string and the caller is responsible for
* calling kmem_cache_free(ntfs_name_cache, *@outs); when finished with it.
*
* On success the function returns the number of Unicode characters written to
* the output string *@outs (>= 0), not counting the terminating Unicode NULL
* character. *@outs is set to the allocated output string buffer.
*
* On error, a negative number corresponding to the error code is returned. In
 * that case the output string is not allocated and the contents of *@outs
 * are then undefined.
*
* This might look a bit odd due to fast path optimization...
*/
int ntfs_nlstoucs(const ntfs_volume *vol, const char *ins,
const int ins_len, ntfschar **outs)
{
struct nls_table *nls = vol->nls_map;
ntfschar *ucs;
wchar_t wc;
int i, o, wc_len;
/* We do not trust outside sources. */
if (likely(ins)) {
ucs = kmem_cache_alloc(ntfs_name_cache, GFP_NOFS);
if (likely(ucs)) {
for (i = o = 0; i < ins_len; i += wc_len) {
wc_len = nls->char2uni(ins + i, ins_len - i,
&wc);
if (likely(wc_len >= 0 &&
o < NTFS_MAX_NAME_LEN)) {
if (likely(wc)) {
ucs[o++] = cpu_to_le16(wc);
continue;
} /* else if (!wc) */
break;
} /* else if (wc_len < 0 ||
o >= NTFS_MAX_NAME_LEN) */
goto name_err;
}
ucs[o] = 0;
*outs = ucs;
return o;
} /* else if (!ucs) */
ntfs_error(vol->sb, "Failed to allocate buffer for converted "
"name from ntfs_name_cache.");
return -ENOMEM;
} /* else if (!ins) */
ntfs_error(vol->sb, "Received NULL pointer.");
return -EINVAL;
name_err:
kmem_cache_free(ntfs_name_cache, ucs);
if (wc_len < 0) {
ntfs_error(vol->sb, "Name using character set %s contains "
"characters that cannot be converted to "
"Unicode.", nls->charset);
i = -EILSEQ;
} else /* if (o >= NTFS_MAX_NAME_LEN) */ {
		ntfs_error(vol->sb, "Name is too long (maximum length for a "
				"name on NTFS is %d Unicode characters).",
				NTFS_MAX_NAME_LEN);
i = -ENAMETOOLONG;
}
return i;
}
/**
* ntfs_ucstonls - convert little endian Unicode string to NLS string
* @vol: ntfs volume which we are working with
* @ins: input Unicode string buffer
* @ins_len: length of input string in Unicode characters
* @outs: on return contains the (allocated) output NLS string buffer
* @outs_len: length of output string buffer in bytes
*
* Convert the input little endian, 2-byte Unicode string @ins, of length
* @ins_len into the string format dictated by the loaded NLS.
*
* If *@outs is NULL, this function allocates the string and the caller is
* responsible for calling kfree(*@outs); when finished with it. In this case
* @outs_len is ignored and can be 0.
*
* On success the function returns the number of bytes written to the output
* string *@outs (>= 0), not counting the terminating NULL byte. If the output
* string buffer was allocated, *@outs is set to it.
*
* On error, a negative number corresponding to the error code is returned. In
* that case the output string is not allocated. The contents of *@outs are
* then undefined.
*
* This might look a bit odd due to fast path optimization...
*/
int ntfs_ucstonls(const ntfs_volume *vol, const ntfschar *ins,
const int ins_len, unsigned char **outs, int outs_len)
{
struct nls_table *nls = vol->nls_map;
unsigned char *ns;
int i, o, ns_len, wc;
/* We don't trust outside sources. */
if (ins) {
ns = *outs;
ns_len = outs_len;
if (ns && !ns_len) {
wc = -ENAMETOOLONG;
goto conversion_err;
}
if (!ns) {
ns_len = ins_len * NLS_MAX_CHARSET_SIZE;
ns = kmalloc(ns_len + 1, GFP_NOFS);
if (!ns)
goto mem_err_out;
}
for (i = o = 0; i < ins_len; i++) {
retry: wc = nls->uni2char(le16_to_cpu(ins[i]), ns + o,
ns_len - o);
if (wc > 0) {
o += wc;
continue;
} else if (!wc)
break;
else if (wc == -ENAMETOOLONG && ns != *outs) {
unsigned char *tc;
/* Grow in multiples of 64 bytes. */
tc = kmalloc((ns_len + 64) &
~63, GFP_NOFS);
if (tc) {
memcpy(tc, ns, ns_len);
ns_len = ((ns_len + 64) & ~63) - 1;
kfree(ns);
ns = tc;
goto retry;
} /* No memory so goto conversion_error; */
} /* wc < 0, real error. */
goto conversion_err;
}
ns[o] = 0;
*outs = ns;
return o;
} /* else (!ins) */
ntfs_error(vol->sb, "Received NULL pointer.");
return -EINVAL;
conversion_err:
ntfs_error(vol->sb, "Unicode name contains characters that cannot be "
"converted to character set %s. You might want to "
"try to use the mount option nls=utf8.", nls->charset);
if (ns != *outs)
kfree(ns);
if (wc != -ENAMETOOLONG)
wc = -EILSEQ;
return wc;
mem_err_out:
ntfs_error(vol->sb, "Failed to allocate name!");
return -ENOMEM;
}


@ -1,73 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* upcase.c - Generate the full NTFS Unicode upcase table in little endian.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2001 Richard Russon <ntfs@flatcap.org>
* Copyright (c) 2001-2006 Anton Altaparmakov
*/
#include "malloc.h"
#include "ntfs.h"
ntfschar *generate_default_upcase(void)
{
static const int uc_run_table[][3] = { /* Start, End, Add */
{0x0061, 0x007B, -32}, {0x0451, 0x045D, -80}, {0x1F70, 0x1F72, 74},
{0x00E0, 0x00F7, -32}, {0x045E, 0x0460, -80}, {0x1F72, 0x1F76, 86},
{0x00F8, 0x00FF, -32}, {0x0561, 0x0587, -48}, {0x1F76, 0x1F78, 100},
{0x0256, 0x0258, -205}, {0x1F00, 0x1F08, 8}, {0x1F78, 0x1F7A, 128},
{0x028A, 0x028C, -217}, {0x1F10, 0x1F16, 8}, {0x1F7A, 0x1F7C, 112},
{0x03AC, 0x03AD, -38}, {0x1F20, 0x1F28, 8}, {0x1F7C, 0x1F7E, 126},
{0x03AD, 0x03B0, -37}, {0x1F30, 0x1F38, 8}, {0x1FB0, 0x1FB2, 8},
{0x03B1, 0x03C2, -32}, {0x1F40, 0x1F46, 8}, {0x1FD0, 0x1FD2, 8},
{0x03C2, 0x03C3, -31}, {0x1F51, 0x1F52, 8}, {0x1FE0, 0x1FE2, 8},
{0x03C3, 0x03CC, -32}, {0x1F53, 0x1F54, 8}, {0x1FE5, 0x1FE6, 7},
{0x03CC, 0x03CD, -64}, {0x1F55, 0x1F56, 8}, {0x2170, 0x2180, -16},
{0x03CD, 0x03CF, -63}, {0x1F57, 0x1F58, 8}, {0x24D0, 0x24EA, -26},
{0x0430, 0x0450, -32}, {0x1F60, 0x1F68, 8}, {0xFF41, 0xFF5B, -32},
{0}
};
static const int uc_dup_table[][2] = { /* Start, End */
{0x0100, 0x012F}, {0x01A0, 0x01A6}, {0x03E2, 0x03EF}, {0x04CB, 0x04CC},
{0x0132, 0x0137}, {0x01B3, 0x01B7}, {0x0460, 0x0481}, {0x04D0, 0x04EB},
{0x0139, 0x0149}, {0x01CD, 0x01DD}, {0x0490, 0x04BF}, {0x04EE, 0x04F5},
{0x014A, 0x0178}, {0x01DE, 0x01EF}, {0x04BF, 0x04BF}, {0x04F8, 0x04F9},
{0x0179, 0x017E}, {0x01F4, 0x01F5}, {0x04C1, 0x04C4}, {0x1E00, 0x1E95},
{0x018B, 0x018B}, {0x01FA, 0x0218}, {0x04C7, 0x04C8}, {0x1EA0, 0x1EF9},
{0}
};
static const int uc_word_table[][2] = { /* Offset, Value */
{0x00FF, 0x0178}, {0x01AD, 0x01AC}, {0x01F3, 0x01F1}, {0x0269, 0x0196},
{0x0183, 0x0182}, {0x01B0, 0x01AF}, {0x0253, 0x0181}, {0x026F, 0x019C},
{0x0185, 0x0184}, {0x01B9, 0x01B8}, {0x0254, 0x0186}, {0x0272, 0x019D},
{0x0188, 0x0187}, {0x01BD, 0x01BC}, {0x0259, 0x018F}, {0x0275, 0x019F},
{0x018C, 0x018B}, {0x01C6, 0x01C4}, {0x025B, 0x0190}, {0x0283, 0x01A9},
{0x0192, 0x0191}, {0x01C9, 0x01C7}, {0x0260, 0x0193}, {0x0288, 0x01AE},
{0x0199, 0x0198}, {0x01CC, 0x01CA}, {0x0263, 0x0194}, {0x0292, 0x01B7},
{0x01A8, 0x01A7}, {0x01DD, 0x018E}, {0x0268, 0x0197},
{0}
};
int i, r;
ntfschar *uc;
uc = ntfs_malloc_nofs(default_upcase_len * sizeof(ntfschar));
if (!uc)
return uc;
memset(uc, 0, default_upcase_len * sizeof(ntfschar));
/* Generate the little endian Unicode upcase table used by ntfs. */
for (i = 0; i < default_upcase_len; i++)
uc[i] = cpu_to_le16(i);
for (r = 0; uc_run_table[r][0]; r++)
for (i = uc_run_table[r][0]; i < uc_run_table[r][1]; i++)
le16_add_cpu(&uc[i], uc_run_table[r][2]);
for (r = 0; uc_dup_table[r][0]; r++)
for (i = uc_dup_table[r][0]; i < uc_dup_table[r][1]; i += 2)
le16_add_cpu(&uc[i + 1], -1);
for (r = 0; uc_word_table[r][0]; r++)
uc[uc_word_table[r][0]] = cpu_to_le16(uc_word_table[r][1]);
return uc;
}
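/*
 * Illustrative note, not part of the original source: the first uc_run_table
 * entry {0x0061, 0x007B, -32} adds -32 to the codepoints 0x61-0x7A, mapping
 * ASCII 'a'-'z' onto 'A'-'Z'.  The uc_dup_table ranges cover alternating
 * lower/upper pairs such as U+0100-U+012F (Latin Extended-A), where each odd
 * (lowercase) codepoint is decremented by one to reach its uppercase partner,
 * and uc_word_table patches the remaining one-off exceptions.
 */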


@ -1,70 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* usnjrnl.h - NTFS kernel transaction log ($UsnJrnl) handling. Part of the
* Linux-NTFS project.
*
* Copyright (c) 2005 Anton Altaparmakov
*/
#ifdef NTFS_RW
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include "aops.h"
#include "debug.h"
#include "endian.h"
#include "time.h"
#include "types.h"
#include "usnjrnl.h"
#include "volume.h"
/**
* ntfs_stamp_usnjrnl - stamp the transaction log ($UsnJrnl) on an ntfs volume
* @vol: ntfs volume on which to stamp the transaction log
*
* Stamp the transaction log ($UsnJrnl) on the ntfs volume @vol and return
* 'true' on success and 'false' on error.
*
* This function assumes that the transaction log has already been loaded and
* consistency checked by a call to fs/ntfs/super.c::load_and_init_usnjrnl().
*/
bool ntfs_stamp_usnjrnl(ntfs_volume *vol)
{
ntfs_debug("Entering.");
if (likely(!NVolUsnJrnlStamped(vol))) {
sle64 stamp;
struct page *page;
USN_HEADER *uh;
page = ntfs_map_page(vol->usnjrnl_max_ino->i_mapping, 0);
if (IS_ERR(page)) {
ntfs_error(vol->sb, "Failed to read from "
"$UsnJrnl/$DATA/$Max attribute.");
return false;
}
uh = (USN_HEADER*)page_address(page);
stamp = get_current_ntfs_time();
ntfs_debug("Stamping transaction log ($UsnJrnl): old "
"journal_id 0x%llx, old lowest_valid_usn "
"0x%llx, new journal_id 0x%llx, new "
"lowest_valid_usn 0x%llx.",
(long long)sle64_to_cpu(uh->journal_id),
(long long)sle64_to_cpu(uh->lowest_valid_usn),
(long long)sle64_to_cpu(stamp),
i_size_read(vol->usnjrnl_j_ino));
uh->lowest_valid_usn =
cpu_to_sle64(i_size_read(vol->usnjrnl_j_ino));
uh->journal_id = stamp;
flush_dcache_page(page);
set_page_dirty(page);
ntfs_unmap_page(page);
/* Set the flag so we do not have to do it again on remount. */
NVolSetUsnJrnlStamped(vol);
}
ntfs_debug("Done.");
return true;
}
#endif /* NTFS_RW */


@ -1,191 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* usnjrnl.h - Defines for NTFS kernel transaction log ($UsnJrnl) handling.
* Part of the Linux-NTFS project.
*
* Copyright (c) 2005 Anton Altaparmakov
*/
#ifndef _LINUX_NTFS_USNJRNL_H
#define _LINUX_NTFS_USNJRNL_H
#ifdef NTFS_RW
#include "types.h"
#include "endian.h"
#include "layout.h"
#include "volume.h"
/*
* Transaction log ($UsnJrnl) organization:
*
* The transaction log records whenever a file is modified in any way. So for
* example it will record that file "blah" was written to at a particular time
 * but not what was written.  It will record that a file was deleted or
* created, that a file was truncated, etc. See below for all the reason
* codes used.
*
* The transaction log is in the $Extend directory which is in the root
* directory of each volume. If it is not present it means transaction
* logging is disabled. If it is present it means transaction logging is
* either enabled or in the process of being disabled in which case we can
* ignore it as it will go away as soon as Windows gets its hands on it.
*
* To determine whether the transaction logging is enabled or in the process
 * of being disabled, one needs to check the volume flags in the
* $VOLUME_INFORMATION attribute in the $Volume system file (which is present
* in the root directory and has a fixed mft record number, see layout.h).
* If the flag VOLUME_DELETE_USN_UNDERWAY is set it means the transaction log
* is in the process of being disabled and if this flag is clear it means the
* transaction log is enabled.
*
* The transaction log consists of two parts; the $DATA/$Max attribute as well
* as the $DATA/$J attribute. $Max is a header describing the transaction
* log whilst $J is the transaction log data itself as a sequence of variable
* sized USN_RECORDs (see below for all the structures).
*
* We do not care about transaction logging at this point in time but we still
* need to let windows know that the transaction log is out of date. To do
* this we need to stamp the transaction log. This involves setting the
* lowest_valid_usn field in the $DATA/$Max attribute to the usn to be used
* for the next added USN_RECORD to the $DATA/$J attribute as well as
* generating a new journal_id in $DATA/$Max.
*
* The journal_id is as of the current version (2.0) of the transaction log
* simply the 64-bit timestamp of when the journal was either created or last
* stamped.
*
* To determine the next usn there are two ways. The first is to parse
* $DATA/$J and to find the last USN_RECORD in it and to add its record_length
* to its usn (which is the byte offset in the $DATA/$J attribute). The
* second is simply to take the data size of the attribute. Since the usns
* are simply byte offsets into $DATA/$J, this is exactly the next usn. For
* obvious reasons we use the second method as it is much simpler and faster.
*
* As an aside, note that to actually disable the transaction log, one would
* need to set the VOLUME_DELETE_USN_UNDERWAY flag (see above), then go
* through all the mft records on the volume and set the usn field in their
* $STANDARD_INFORMATION attribute to zero. Once that is done, one would need
 * to delete the transaction log file, i.e. \$Extend\$UsnJrnl, and finally,
* one would need to clear the VOLUME_DELETE_USN_UNDERWAY flag.
*
* Note that if a volume is unmounted whilst the transaction log is being
* disabled, the process will continue the next time the volume is mounted.
* This is why we can safely mount read-write when we see a transaction log
* in the process of being deleted.
*/
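/*
 * Illustrative note, not part of the original header: the "take the data size
 * of the attribute" approach described above is exactly what
 * ntfs_stamp_usnjrnl() in usnjrnl.c does, i.e. roughly
 *
 *	uh->lowest_valid_usn = cpu_to_sle64(i_size_read(vol->usnjrnl_j_ino));
 *	uh->journal_id = get_current_ntfs_time();
 *
 * where uh is the mapped USN_HEADER of $UsnJrnl/$DATA/$Max.
 */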
/* Some $UsnJrnl related constants. */
#define UsnJrnlMajorVer 2
#define UsnJrnlMinorVer 0
/*
* $DATA/$Max attribute. This is (always?) resident and has a fixed size of
* 32 bytes. It contains the header describing the transaction log.
*/
typedef struct {
/*Ofs*/
/* 0*/sle64 maximum_size; /* The maximum on-disk size of the $DATA/$J
attribute. */
/* 8*/sle64 allocation_delta; /* Number of bytes by which to increase the
size of the $DATA/$J attribute. */
/*0x10*/sle64 journal_id; /* Current id of the transaction log. */
/*0x18*/leUSN lowest_valid_usn; /* Lowest valid usn in $DATA/$J for the
current journal_id. */
/* sizeof() = 32 (0x20) bytes */
} __attribute__ ((__packed__)) USN_HEADER;
/*
* Reason flags (32-bit). Cumulative flags describing the change(s) to the
* file since it was last opened. I think the names speak for themselves but
* if you disagree check out the descriptions in the Linux NTFS project NTFS
* documentation: http://www.linux-ntfs.org/
*/
enum {
USN_REASON_DATA_OVERWRITE = cpu_to_le32(0x00000001),
USN_REASON_DATA_EXTEND = cpu_to_le32(0x00000002),
USN_REASON_DATA_TRUNCATION = cpu_to_le32(0x00000004),
USN_REASON_NAMED_DATA_OVERWRITE = cpu_to_le32(0x00000010),
USN_REASON_NAMED_DATA_EXTEND = cpu_to_le32(0x00000020),
USN_REASON_NAMED_DATA_TRUNCATION= cpu_to_le32(0x00000040),
USN_REASON_FILE_CREATE = cpu_to_le32(0x00000100),
USN_REASON_FILE_DELETE = cpu_to_le32(0x00000200),
USN_REASON_EA_CHANGE = cpu_to_le32(0x00000400),
USN_REASON_SECURITY_CHANGE = cpu_to_le32(0x00000800),
USN_REASON_RENAME_OLD_NAME = cpu_to_le32(0x00001000),
USN_REASON_RENAME_NEW_NAME = cpu_to_le32(0x00002000),
USN_REASON_INDEXABLE_CHANGE = cpu_to_le32(0x00004000),
USN_REASON_BASIC_INFO_CHANGE = cpu_to_le32(0x00008000),
USN_REASON_HARD_LINK_CHANGE = cpu_to_le32(0x00010000),
USN_REASON_COMPRESSION_CHANGE = cpu_to_le32(0x00020000),
USN_REASON_ENCRYPTION_CHANGE = cpu_to_le32(0x00040000),
USN_REASON_OBJECT_ID_CHANGE = cpu_to_le32(0x00080000),
USN_REASON_REPARSE_POINT_CHANGE = cpu_to_le32(0x00100000),
USN_REASON_STREAM_CHANGE = cpu_to_le32(0x00200000),
USN_REASON_CLOSE = cpu_to_le32(0x80000000),
};
typedef le32 USN_REASON_FLAGS;
/*
* Source info flags (32-bit). Information about the source of the change(s)
* to the file. For detailed descriptions of what these mean, see the Linux
* NTFS project NTFS documentation:
* http://www.linux-ntfs.org/
*/
enum {
USN_SOURCE_DATA_MANAGEMENT = cpu_to_le32(0x00000001),
USN_SOURCE_AUXILIARY_DATA = cpu_to_le32(0x00000002),
USN_SOURCE_REPLICATION_MANAGEMENT = cpu_to_le32(0x00000004),
};
typedef le32 USN_SOURCE_INFO_FLAGS;
/*
* $DATA/$J attribute. This is always non-resident, is marked as sparse, and
 * is of variable size.  It consists of a sequence of variable size
 * USN_RECORDs.  The minimum allocated_size is allocation_delta as
* specified in $DATA/$Max. When the maximum_size specified in $DATA/$Max is
* exceeded by more than allocation_delta bytes, allocation_delta bytes are
* allocated and appended to the $DATA/$J attribute and an equal number of
 * bytes at the beginning of the attribute are freed and made sparse.  Note
 * that making the data sparse only happens at volume checkpoints and hence
 * the actual $DATA/$J size can temporarily exceed maximum_size +
 * allocation_delta.
*/
typedef struct {
/*Ofs*/
/* 0*/le32 length; /* Byte size of this record (8-byte
aligned). */
/* 4*/le16 major_ver; /* Major version of the transaction log used
for this record. */
/* 6*/le16 minor_ver; /* Minor version of the transaction log used
for this record. */
/* 8*/leMFT_REF mft_reference;/* The mft reference of the file (or
directory) described by this record. */
/*0x10*/leMFT_REF parent_directory;/* The mft reference of the parent
directory of the file described by this
record. */
/*0x18*/leUSN usn; /* The usn of this record. Equals the offset
within the $DATA/$J attribute. */
/*0x20*/sle64 time; /* Time when this record was created. */
/*0x28*/USN_REASON_FLAGS reason;/* Reason flags (see above). */
/*0x2c*/USN_SOURCE_INFO_FLAGS source_info;/* Source info flags (see above). */
/*0x30*/le32 security_id; /* File security_id copied from
$STANDARD_INFORMATION. */
/*0x34*/FILE_ATTR_FLAGS file_attributes; /* File attributes copied from
$STANDARD_INFORMATION or $FILE_NAME (not
sure which). */
/*0x38*/le16 file_name_size; /* Size of the file name in bytes. */
/*0x3a*/le16 file_name_offset; /* Offset to the file name in bytes from the
start of this record. */
/*0x3c*/ntfschar file_name[0]; /* Use when creating only. When reading use
file_name_offset to determine the location
of the name. */
/* sizeof() = 60 (0x3c) bytes */
} __attribute__ ((__packed__)) USN_RECORD;
extern bool ntfs_stamp_usnjrnl(ntfs_volume *vol);
#endif /* NTFS_RW */
#endif /* _LINUX_NTFS_USNJRNL_H */


@ -1,164 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* volume.h - Defines for volume structures in NTFS Linux kernel driver. Part
* of the Linux-NTFS project.
*
* Copyright (c) 2001-2006 Anton Altaparmakov
* Copyright (c) 2002 Richard Russon
*/
#ifndef _LINUX_NTFS_VOLUME_H
#define _LINUX_NTFS_VOLUME_H
#include <linux/rwsem.h>
#include <linux/uidgid.h>
#include "types.h"
#include "layout.h"
/*
* The NTFS in memory super block structure.
*/
typedef struct {
/*
* FIXME: Reorder to have commonly used together element within the
* same cache line, aiming at a cache line size of 32 bytes. Aim for
* 64 bytes for less commonly used together elements. Put most commonly
* used elements to front of structure. Obviously do this only when the
* structure has stabilized... (AIA)
*/
/* Device specifics. */
struct super_block *sb; /* Pointer back to the super_block. */
LCN nr_blocks; /* Number of sb->s_blocksize bytes
sized blocks on the device. */
/* Configuration provided by user at mount time. */
unsigned long flags; /* Miscellaneous flags, see below. */
kuid_t uid; /* uid that files will be mounted as. */
kgid_t gid; /* gid that files will be mounted as. */
umode_t fmask; /* The mask for file permissions. */
umode_t dmask; /* The mask for directory
permissions. */
u8 mft_zone_multiplier; /* Initial mft zone multiplier. */
u8 on_errors; /* What to do on filesystem errors. */
/* NTFS bootsector provided information. */
u16 sector_size; /* in bytes */
u8 sector_size_bits; /* log2(sector_size) */
u32 cluster_size; /* in bytes */
u32 cluster_size_mask; /* cluster_size - 1 */
u8 cluster_size_bits; /* log2(cluster_size) */
u32 mft_record_size; /* in bytes */
u32 mft_record_size_mask; /* mft_record_size - 1 */
u8 mft_record_size_bits; /* log2(mft_record_size) */
u32 index_record_size; /* in bytes */
u32 index_record_size_mask; /* index_record_size - 1 */
u8 index_record_size_bits; /* log2(index_record_size) */
LCN nr_clusters; /* Volume size in clusters == number of
bits in lcn bitmap. */
LCN mft_lcn; /* Cluster location of mft data. */
LCN mftmirr_lcn; /* Cluster location of copy of mft. */
u64 serial_no; /* The volume serial number. */
/* Mount specific NTFS information. */
u32 upcase_len; /* Number of entries in upcase[]. */
ntfschar *upcase; /* The upcase table. */
s32 attrdef_size; /* Size of the attribute definition
table in bytes. */
ATTR_DEF *attrdef; /* Table of attribute definitions.
Obtained from FILE_AttrDef. */
#ifdef NTFS_RW
/* Variables used by the cluster and mft allocators. */
s64 mft_data_pos; /* Mft record number at which to
allocate the next mft record. */
LCN mft_zone_start; /* First cluster of the mft zone. */
LCN mft_zone_end; /* First cluster beyond the mft zone. */
LCN mft_zone_pos; /* Current position in the mft zone. */
LCN data1_zone_pos; /* Current position in the first data
zone. */
LCN data2_zone_pos; /* Current position in the second data
zone. */
#endif /* NTFS_RW */
struct inode *mft_ino; /* The VFS inode of $MFT. */
struct inode *mftbmp_ino; /* Attribute inode for $MFT/$BITMAP. */
struct rw_semaphore mftbmp_lock; /* Lock for serializing accesses to the
mft record bitmap ($MFT/$BITMAP). */
#ifdef NTFS_RW
struct inode *mftmirr_ino; /* The VFS inode of $MFTMirr. */
int mftmirr_size; /* Size of mft mirror in mft records. */
struct inode *logfile_ino; /* The VFS inode of $LogFile. */
#endif /* NTFS_RW */
struct inode *lcnbmp_ino; /* The VFS inode of $Bitmap. */
struct rw_semaphore lcnbmp_lock; /* Lock for serializing accesses to the
cluster bitmap ($Bitmap/$DATA). */
struct inode *vol_ino; /* The VFS inode of $Volume. */
VOLUME_FLAGS vol_flags; /* Volume flags. */
u8 major_ver; /* Ntfs major version of volume. */
u8 minor_ver; /* Ntfs minor version of volume. */
struct inode *root_ino; /* The VFS inode of the root
directory. */
struct inode *secure_ino; /* The VFS inode of $Secure (NTFS3.0+
only, otherwise NULL). */
struct inode *extend_ino; /* The VFS inode of $Extend (NTFS3.0+
only, otherwise NULL). */
#ifdef NTFS_RW
/* $Quota stuff is NTFS3.0+ specific. Unused/NULL otherwise. */
struct inode *quota_ino; /* The VFS inode of $Quota. */
struct inode *quota_q_ino; /* Attribute inode for $Quota/$Q. */
/* $UsnJrnl stuff is NTFS3.0+ specific. Unused/NULL otherwise. */
struct inode *usnjrnl_ino; /* The VFS inode of $UsnJrnl. */
struct inode *usnjrnl_max_ino; /* Attribute inode for $UsnJrnl/$Max. */
struct inode *usnjrnl_j_ino; /* Attribute inode for $UsnJrnl/$J. */
#endif /* NTFS_RW */
struct nls_table *nls_map;
} ntfs_volume;
/*
* Defined bits for the flags field in the ntfs_volume structure.
*/
typedef enum {
NV_Errors, /* 1: Volume has errors, prevent remount rw. */
NV_ShowSystemFiles, /* 1: Return system files in ntfs_readdir(). */
NV_CaseSensitive, /* 1: Treat file names as case sensitive and
create filenames in the POSIX namespace.
Otherwise be case insensitive but still
create file names in POSIX namespace. */
NV_LogFileEmpty, /* 1: $LogFile journal is empty. */
NV_QuotaOutOfDate, /* 1: $Quota is out of date. */
NV_UsnJrnlStamped, /* 1: $UsnJrnl has been stamped. */
NV_SparseEnabled, /* 1: May create sparse files. */
} ntfs_volume_flags;
/*
* Macro tricks to expand the NVolFoo(), NVolSetFoo(), and NVolClearFoo()
* functions.
*/
#define DEFINE_NVOL_BIT_OPS(flag) \
static inline int NVol##flag(ntfs_volume *vol) \
{ \
return test_bit(NV_##flag, &(vol)->flags); \
} \
static inline void NVolSet##flag(ntfs_volume *vol) \
{ \
set_bit(NV_##flag, &(vol)->flags); \
} \
static inline void NVolClear##flag(ntfs_volume *vol) \
{ \
clear_bit(NV_##flag, &(vol)->flags); \
}
/* Emit the ntfs volume bitops functions. */
DEFINE_NVOL_BIT_OPS(Errors)
DEFINE_NVOL_BIT_OPS(ShowSystemFiles)
DEFINE_NVOL_BIT_OPS(CaseSensitive)
DEFINE_NVOL_BIT_OPS(LogFileEmpty)
DEFINE_NVOL_BIT_OPS(QuotaOutOfDate)
DEFINE_NVOL_BIT_OPS(UsnJrnlStamped)
DEFINE_NVOL_BIT_OPS(SparseEnabled)
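/*
 * Illustrative expansion, not part of the original header: for example
 * DEFINE_NVOL_BIT_OPS(QuotaOutOfDate) expands to
 *
 *	static inline int NVolQuotaOutOfDate(ntfs_volume *vol)
 *	{
 *		return test_bit(NV_QuotaOutOfDate, &(vol)->flags);
 *	}
 *
 * plus the matching NVolSetQuotaOutOfDate() and NVolClearQuotaOutOfDate()
 * helpers, which are the accessors used by ntfs_mark_quotas_out_of_date() in
 * quota.c.
 */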
#endif /* _LINUX_NTFS_VOLUME_H */