License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information fall under the kernel's default license, which is GPL version 2.
Update the files that contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
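For example, in a C header the identifier replaces the entire notice with a single first line:

/* SPDX-License-Identifier: GPL-2.0 */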
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- the file had no licensing information in it,
- the file was a */uapi/* one with no licensing information in it,
- the file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and cases where references
to a license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files were included if they already had some variant of a license header
in them (even if <5 lines of source).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non-*/uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0" (example tag lines for
the uapi cases follow this list). The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, it was given the Linux-syscall-note if any
GPL-family license was found in the file, or if it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
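For illustration, the uapi tag lines produced by these rules look like this (the WITH form, and one of the dual-licensed OR forms from the table above):

/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */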
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; these have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where a license was detected (500+ files) to
ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output, to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
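For reference, the two comment types are the block form for headers and the line form for .c sources (the file placement shown here is illustrative):

/* SPDX-License-Identifier: GPL-2.0 */    first line of a header (.h)
// SPDX-License-Identifier: GPL-2.0       first line of a C source (.c)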
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _NFS_FS_SB
#define _NFS_FS_SB

#include <linux/list.h>
#include <linux/backing-dev.h>
#include <linux/idr.h>
#include <linux/wait.h>
#include <linux/nfs_xdr.h>
#include <linux/sunrpc/xprt.h>

#include <linux/atomic.h>
#include <linux/refcount.h>

struct nfs4_session;
struct nfs_iostats;
struct nlm_host;
struct nfs4_sequence_args;
struct nfs4_sequence_res;
struct nfs_server;
struct nfs4_minor_version_ops;
struct nfs41_server_scope;
struct nfs41_impl_id;

/*
 * The nfs_client identifies our client state to the server.
 */
struct nfs_client {
	refcount_t		cl_count;
	atomic_t		cl_mds_count;
	int			cl_cons_state;	/* current construction state (-ve: init error) */
#define NFS_CS_READY		0		/* ready to be used */
#define NFS_CS_INITING		1		/* busy initialising */
#define NFS_CS_SESSION_INITING	2		/* busy initialising session */
	unsigned long		cl_res_state;	/* NFS resources state */
#define NFS_CS_CALLBACK		1		/* - callback started */
#define NFS_CS_IDMAP		2		/* - idmap started */
#define NFS_CS_RENEWD		3		/* - renewd started */
#define NFS_CS_STOP_RENEW	4		/* no more state to renew */
#define NFS_CS_CHECK_LEASE_TIME	5		/* need to check lease time */
	unsigned long		cl_flags;	/* behavior switches */
#define NFS_CS_NORESVPORT	0		/* - use ephemeral src port */
#define NFS_CS_DISCRTRY		1		/* - disconnect on RPC retry */
#define NFS_CS_MIGRATION	2		/* - transparent state migr */
#define NFS_CS_INFINITE_SLOTS	3		/* - don't limit TCP slots */
#define NFS_CS_NO_RETRANS_TIMEOUT	4	/* - Disable retransmit timeouts */
#define NFS_CS_TSM_POSSIBLE	5		/* - Maybe state migration */
#define NFS_CS_NOPING		6		/* - don't ping on connect */
#define NFS_CS_DS		7		/* - Server is a DS */
#define NFS_CS_REUSEPORT	8		/* - reuse src port on reconnect */
	struct sockaddr_storage	cl_addr;	/* server identifier */
	size_t			cl_addrlen;
	char *			cl_hostname;	/* hostname of server */
	char *			cl_acceptor;	/* GSSAPI acceptor name */
	struct list_head	cl_share_link;	/* link in global client list */
	struct list_head	cl_superblocks;	/* List of nfs_server structs */

	struct rpc_clnt *	cl_rpcclient;
	const struct nfs_rpc_ops *rpc_ops;	/* NFS protocol vector */
	int			cl_proto;	/* Network transport protocol */
	struct nfs_subversion *	cl_nfs_mod;	/* pointer to nfs version module */

	u32			cl_minorversion;/* NFSv4 minorversion */
	unsigned int		cl_nconnect;	/* Number of connections */
	unsigned int		cl_max_connect;	/* max number of xprts allowed */
	const char *		cl_principal;	/* used for machine cred */

#if IS_ENABLED(CONFIG_NFS_V4)
	struct list_head	cl_ds_clients;	/* auth flavor data servers */
	u64			cl_clientid;	/* constant */
	nfs4_verifier		cl_confirm;	/* Clientid verifier */
	unsigned long		cl_state;

	spinlock_t		cl_lock;

	unsigned long		cl_lease_time;
	unsigned long		cl_last_renewal;
	struct delayed_work	cl_renewd;

	struct rpc_wait_queue	cl_rpcwaitq;

	/* idmapper */
	struct idmap *		cl_idmap;

	/* Client owner identifier */
	const char *		cl_owner_id;

	u32			cl_cb_ident;	/* v4.0 callback identifier */
	const struct nfs4_minor_version_ops *cl_mvops;
	unsigned long		cl_mig_gen;

	/* NFSv4.0 transport blocking */
	struct nfs4_slot_table	*cl_slot_tbl;

	/* The sequence id to use for the next CREATE_SESSION */
	u32			cl_seqid;
	/* The flags used for obtaining the clientid during EXCHANGE_ID */
	u32			cl_exchange_flags;
	struct nfs4_session	*cl_session;	/* shared session */
	bool			cl_preserve_clid;
	struct nfs41_server_owner *cl_serverowner;
	struct nfs41_server_scope *cl_serverscope;
	struct nfs41_impl_id	*cl_implid;
	/* nfs 4.1+ state protection modes: */
	unsigned long		cl_sp4_flags;
#define NFS_SP4_MACH_CRED_MINIMAL	1	/* Minimal sp4_mach_cred - state ops
						 * must use machine cred */
#define NFS_SP4_MACH_CRED_CLEANUP	2	/* CLOSE and LOCKU */
#define NFS_SP4_MACH_CRED_SECINFO	3	/* SECINFO and SECINFO_NO_NAME */
#define NFS_SP4_MACH_CRED_STATEID	4	/* TEST_STATEID and FREE_STATEID */
#define NFS_SP4_MACH_CRED_WRITE		5	/* WRITE */
#define NFS_SP4_MACH_CRED_COMMIT	6	/* COMMIT */
#define NFS_SP4_MACH_CRED_PNFS_CLEANUP	7	/* LAYOUTRETURN */
#if IS_ENABLED(CONFIG_NFS_V4_1)
	wait_queue_head_t	cl_lock_waitq;
#endif /* CONFIG_NFS_V4_1 */
#endif /* CONFIG_NFS_V4 */

	/* Our own IP address, as a null-terminated string.
	 * This is used to generate the mv0 callback address.
	 */
	char			cl_ipaddr[48];
	struct net		*cl_net;
	struct list_head	pending_cb_stateids;
};
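As a usage sketch (a hypothetical helper, not a function from the kernel tree), the construction-state convention can be read as follows: a negative cl_cons_state carries the -errno from a failed initialisation, NFS_CS_READY marks a usable client, and any other value means initialisation is still in progress.

#include <linux/errno.h>
#include <linux/nfs_fs_sb.h>

/* Hypothetical helper illustrating the cl_cons_state convention above. */
static int example_client_usable(const struct nfs_client *clp)
{
	if (clp->cl_cons_state < 0)
		return clp->cl_cons_state;	/* propagate the init error */
	if (clp->cl_cons_state != NFS_CS_READY)
		return -EAGAIN;			/* NFS_CS_INITING etc. */
	return 0;				/* safe to use */
}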

/*
 * NFS client parameters stored in the superblock.
 */
struct nfs_server {
	struct nfs_client *	nfs_client;	/* shared client and NFS4 state */
	struct list_head	client_link;	/* List of other nfs_server structs
						 * that share the same client
						 */
	struct list_head	master_link;	/* link in master servers list */
	struct rpc_clnt *	client;		/* RPC client handle */
	struct rpc_clnt *	client_acl;	/* ACL RPC client handle */
	struct nlm_host		*nlm_host;	/* NLM client handle */
	struct nfs_iostats __percpu *io_stats;	/* I/O statistics */
	atomic_long_t		writeback;	/* number of writeback pages */
	unsigned int		write_congested;/* flag set when writeback gets too high */
	unsigned int		flags;		/* various flags */

/* The following are for internal use only. Also see uapi/linux/nfs_mount.h */
#define NFS_MOUNT_LOOKUP_CACHE_NONEG	0x10000
#define NFS_MOUNT_LOOKUP_CACHE_NONE	0x20000
#define NFS_MOUNT_NORESVPORT		0x40000
#define NFS_MOUNT_LEGACY_INTERFACE	0x80000
#define NFS_MOUNT_LOCAL_FLOCK		0x100000
#define NFS_MOUNT_LOCAL_FCNTL		0x200000
#define NFS_MOUNT_SOFTERR		0x400000
#define NFS_MOUNT_SOFTREVAL		0x800000
#define NFS_MOUNT_WRITE_EAGER		0x01000000
#define NFS_MOUNT_WRITE_WAIT		0x02000000
#define NFS_MOUNT_TRUNK_DISCOVERY	0x04000000

	unsigned int		fattr_valid;	/* Valid attributes */
	unsigned int		caps;		/* server capabilities */
	unsigned int		rsize;		/* read size */
	unsigned int		rpages;		/* read size (in pages) */
	unsigned int		wsize;		/* write size */
	unsigned int		wpages;		/* write size (in pages) */
	unsigned int		wtmult;		/* server disk block size */
	unsigned int		dtsize;		/* readdir size */
	unsigned short		port;		/* "port=" setting */
	unsigned int		bsize;		/* server block size */
#ifdef CONFIG_NFS_V4_2
	unsigned int		gxasize;	/* getxattr size */
	unsigned int		sxasize;	/* setxattr size */
	unsigned int		lxasize;	/* listxattr size */
#endif
	unsigned int		acregmin;	/* attr cache timeouts */
	unsigned int		acregmax;
	unsigned int		acdirmin;
	unsigned int		acdirmax;
	unsigned int		namelen;
	unsigned int		options;	/* extra options enabled by mount */
	unsigned int		clone_blksize;	/* granularity of a CLONE operation */
#define NFS_OPTION_FSCACHE	0x00000001	/* - local caching enabled */
#define NFS_OPTION_MIGRATION	0x00000002	/* - NFSv4 migration enabled */
	enum nfs4_change_attr_type
				change_attr_type;/* Description of change attribute */

	struct nfs_fsid		fsid;
	__u64			maxfilesize;	/* maximum file size */
	struct timespec64	time_delta;	/* smallest time granularity */
	unsigned long		mount_time;	/* when this fs was mounted */
	struct super_block	*super;		/* VFS super block */
	dev_t			s_dev;		/* superblock dev numbers */
	struct nfs_auth_info	auth_info;	/* parsed auth flavors */

#ifdef CONFIG_NFS_FSCACHE
	struct fscache_volume	*fscache;	/* superblock cookie */
	char			*fscache_uniq;	/* Uniquifier (or NULL) */
#endif

	u32			pnfs_blksize;	/* layout_blksize attr */
#if IS_ENABLED(CONFIG_NFS_V4)
	u32			attr_bitmask[3];/* V4 bitmask representing the set
						   of attributes supported on this
						   filesystem */
	u32			attr_bitmask_nl[3];
						/* V4 bitmask representing the
						   set of attributes supported
						   on this filesystem excluding
						   the label support bit. */
	u32			exclcreat_bitmask[3];
						/* V4 bitmask representing the
						   set of attributes supported
						   on this filesystem for the
						   exclusive create. */
	u32			cache_consistency_bitmask[3];
						/* V4 bitmask representing the subset
						   of change attribute, size, ctime
						   and mtime attributes supported by
						   the server */
	u32			acl_bitmask;	/* V4 bitmask representing the ACEs
						   that are supported on this
						   filesystem */
	u32			fh_expire_type;	/* V4 bitmask representing file
						   handle volatility type for
						   this filesystem */
	struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout driver */
	struct rpc_wait_queue	roc_rpcwaitq;
	void			*pnfs_ld_data;	/* per mount point data */

	/* the following fields are protected by nfs_client->cl_lock */
	struct rb_root		state_owners;
#endif
	struct ida		openowner_id;
	struct ida		lockowner_id;
	struct list_head	state_owners_lru;
	struct list_head	layouts;
	struct list_head	delegations;
	struct list_head	ss_copies;

	unsigned long		mig_gen;
	unsigned long		mig_status;
#define NFS_MIG_IN_TRANSITION		(1)
#define NFS_MIG_FAILED			(2)
#define NFS_MIG_TSM_POSSIBLE		(3)
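A short usage sketch for the nfs_server fields above (hypothetical helpers; that mig_status is manipulated with the atomic bitops is an assumption, while the rest only restates the field comments):

#include <linux/bitops.h>
#include <linux/nfs_fs_sb.h>
#include <linux/types.h>

/* NFS_MOUNT_* and NFS_OPTION_* are single-bit masks OR'd into ->flags
 * and ->options respectively, so behaviour checks are plain mask tests. */
static bool example_softreval_enabled(const struct nfs_server *server)
{
	return (server->flags & NFS_MOUNT_SOFTREVAL) != 0;
}

static bool example_fscache_enabled(const struct nfs_server *server)
{
	return (server->options & NFS_OPTION_FSCACHE) != 0;
}

/* attr_bitmask[] packs NFSv4 attribute numbers 0-95, 32 bits per word. */
static bool example_v4_attr_supported(const struct nfs_server *server,
				      unsigned int attr)
{
	if (attr >= 3 * 32)
		return false;
	return (server->attr_bitmask[attr / 32] & (1U << (attr % 32))) != 0;
}

/* The NFS_MIG_* constants are bit numbers in mig_status (assumed bitops). */
static bool example_migration_in_transition(const struct nfs_server *server)
{
	return test_bit(NFS_MIG_IN_TRANSITION, &server->mig_status);
}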
NFS: Share NFS superblocks per-protocol per-server per-FSID
The attached patch makes NFS share superblocks between mounts from the same
server and FSID over the same protocol.
It does this by creating each superblock with a false root and returning the
real root dentry in the vfsmount presented by get_sb(). The root dentry set
starts off as an anonymous dentry if we don't already have the dentry for its
inode, otherwise it simply returns the dentry we already have.
We may thus end up with several trees of dentries in the superblock, and if at
some later point one of anonymous tree roots is discovered by normal filesystem
activity to be located in another tree within the superblock, the anonymous
root is named and materialises attached to the second tree at the appropriate
point.
Why do it this way? Why not pass an extra argument to the mount() syscall to
indicate the subpath and then pathwalk from the server root to the desired
directory? You can't guarantee this will work for two reasons:
(1) The root and intervening nodes may not be accessible to the client.
With NFS2 and NFS3, for instance, mountd is called on the server to get
the filehandle for the tip of a path. mountd won't give us handles for
anything we don't have permission to access, and so we can't set up NFS
inodes for such nodes, and so can't easily set up dentries (we'd have to
have ghost inodes or something).
With this patch we don't actually create dentries until we get handles
from the server that we can use to set up their inodes, and we don't
actually bind them into the tree until we know for sure where they go.
(2) Inaccessible symbolic links.
If we're asked to mount two exports from the server, eg:
mount warthog:/warthog/aaa/xxx /mmm
mount warthog:/warthog/bbb/yyy /nnn
We may not be able to access anything nearer the root than xxx and yyy,
but we may find out later that /mmm/www/yyy, say, is actually the same
directory as the one mounted on /nnn. What we might then find out, for
example, is that /warthog/bbb was actually a symbolic link to
/warthog/aaa/xxx/www, but we can't actually determine that by talking to
the server until /warthog is made available by NFS.
This would lead to having constructed an erroneous dentry tree which we
can't easily fix. We can end up with a dentry marked as a directory when
it should actually be a symlink, or we could end up with an apparently
hardlinked directory.
With this patch we need not make assumptions about the type of a dentry
for which we can't retrieve information, nor need we assume we know its
place in the grand scheme of things until we actually see that place.
This patch reduces the possibility of aliasing in the inode and page caches for
inodes that may be accessed by more than one NFS export. It also reduces the
number of superblocks required for NFS where there are many NFS exports being
used from a server (home directory server + autofs for example).
This in turn makes it simpler to do local caching of network filesystems, as it
can then be guaranteed that there won't be links from multiple inodes in
separate superblocks to the same cache file.
Obviously, cache aliasing between different levels of NFS protocol could still
be a problem, but at least that gives us another key to use when indexing the
cache.
This patch makes the following changes:
(1) The server record construction/destruction has been abstracted out into
its own set of functions to make things easier to get right. These have
been moved into fs/nfs/client.c.
All the code in fs/nfs/client.c has to do with the management of
connections to servers, and doesn't touch superblocks in any way; the
remaining code in fs/nfs/super.c has to do with VFS superblock management.
(2) The sequence of events undertaken by NFS mount is now reordered:
(a) A volume representation (struct nfs_server) is allocated.
(b) A server representation (struct nfs_client) is acquired. This may be
allocated or shared, and is keyed on server address, port and NFS
version.
(c) If allocated, the client representation is initialised. The state
member variable of nfs_client is used to prevent a race during
initialisation from two mounts.
(d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find
the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we
are given the root FH in advance.
(e) The volume FSID is probed for on the root FH.
(f) The volume representation is initialised from the FSINFO record
retrieved on the root FH.
(g) sget() is called to acquire a superblock. This may be allocated or
shared, keyed on client pointer and FSID.
(h) If allocated, the superblock is initialised.
(i) If the superblock is shared, then the new nfs_server record is
discarded.
(j) The root dentry for this mount is looked up from the root FH.
(k) The root dentry for this mount is assigned to the vfsmount.
(3) nfs_readdir_lookup() creates dentries for each of the entries readdir()
returns; this function now attaches disconnected trees from alternate
roots that happen to be discovered attached to a directory being read (in
the same way nfs_lookup() is made to do for lookup ops).
The new d_materialise_unique() function is now used to do this, thus
permitting the whole thing to be done under one set of locks, and thus
avoiding any race between mount and lookup operations on the same
directory.
(4) The client management code uses a new debug facility: NFSDBG_CLIENT which
is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.
(5) Clone mounts are now called xdev mounts.
(6) Use the dentry passed to the statfs() op as the handle for retrieving fs
statistics rather than the root dentry of the superblock (which is now a
dummy).
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2006-08-23 00:06:13 +00:00
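Step (2)(g) above, acquiring a superblock keyed on the client pointer and FSID, maps onto the VFS sget() interface. A simplified sketch under that assumption; sget() and NFS_SB() are real interfaces, while the key structure and callback bodies are illustrative:

	struct nfs_sb_key {			/* illustrative key type */
		struct nfs_client	*client;
		struct nfs_fsid		fsid;
	};

	/* sget() test callback: does an existing superblock match? */
	static int nfs_test_super(struct super_block *sb, void *data)
	{
		struct nfs_sb_key *key = data;
		struct nfs_server *server = NFS_SB(sb);

		return server->nfs_client == key->client &&
		       server->fsid.major == key->fsid.major &&
		       server->fsid.minor == key->fsid.minor;
	}

	/* sget() set callback: initialise a freshly allocated superblock. */
	static int nfs_set_super(struct super_block *sb, void *data)
	{
		sb->s_fs_info = data;	/* illustrative only */
		return 0;
	}

	static struct super_block *nfs_get_shared_sb(struct file_system_type *fs_type,
						     struct nfs_sb_key *key)
	{
		/* returns an existing matching sb, or a new one */
		return sget(fs_type, nfs_test_super, nfs_set_super, 0, key);
	}

If sget() hands back an already populated superblock, the caller discards the nfs_server it allocated, exactly as step (2)(i) describes.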
	void (*destroy)(struct nfs_server *);
2007-11-08 09:05:04 +00:00
	atomic_t		active; /* Keep track of any activity to this server */
2008-03-14 18:10:30 +00:00
	/* mountd-related mount options */
	struct sockaddr_storage	mountd_address;
	size_t			mountd_addrlen;
	u32			mountd_version;
	unsigned short		mountd_port;
	unsigned short		mountd_protocol;
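Together, these mountd_* fields describe how to reach the server's mountd. A sketch of feeding them to the generic RPC client; rpc_create() and struct rpc_create_args are real SUNRPC interfaces, but the helper is hypothetical and the MNT program wiring is elided:

	static struct rpc_clnt *nfs_mountd_client(struct nfs_server *server,
						  const char *hostname)
	{
		struct rpc_create_args args = {
			.protocol	= server->mountd_protocol,
			.address	= (struct sockaddr *)&server->mountd_address,
			.addrsize	= server->mountd_addrlen,
			.servername	= hostname,
			.version	= server->mountd_version,
			/* .program (the MNT program) omitted for brevity */
		};

		return rpc_create(&args);	/* ERR_PTR() on failure */
	}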
2017-04-11 16:50:10 +00:00
	struct rpc_wait_queue	uoc_rpcwaitq;
2018-03-05 17:03:00 +00:00
	/* XDR related information */
	unsigned int		read_hdrsize;
2019-04-24 21:46:43 +00:00
	/* User namespace info */
	const struct cred	*cred;
2021-02-19 22:22:33 +00:00
	bool			has_sec_mnt_opts;
2005-04-16 22:20:36 +00:00
};

/* Server capabilities */
#define NFS_CAP_READDIRPLUS		(1U << 0)
#define NFS_CAP_HARDLINKS		(1U << 1)
#define NFS_CAP_SYMLINKS		(1U << 2)
#define NFS_CAP_ACLS			(1U << 3)
#define NFS_CAP_ATOMIC_OPEN		(1U << 4)
2016-10-04 19:26:41 +00:00
#define NFS_CAP_LGOPEN			(1U << 5)
2021-12-17 20:36:54 +00:00
#define NFS_CAP_CASE_INSENSITIVE	(1U << 6)
#define NFS_CAP_CASE_PRESERVING		(1U << 7)
2010-04-11 20:48:44 +00:00
#define NFS_CAP_POSIX_LOCK		(1U << 14)
2011-02-22 23:44:32 +00:00
#define NFS_CAP_UIDGID_NOMAP		(1U << 15)
2013-03-17 19:31:15 +00:00
#define NFS_CAP_STATEID_NFSV41		(1U << 16)
2013-03-15 20:44:28 +00:00
#define NFS_CAP_ATOMIC_OPEN_V1		(1U << 17)
2013-05-22 16:50:39 +00:00
#define NFS_CAP_SECURITY_LABEL		(1U << 18)
2014-09-26 17:58:48 +00:00
#define NFS_CAP_SEEK			(1U << 19)
2014-11-25 18:18:15 +00:00
#define NFS_CAP_ALLOCATE		(1U << 20)
2014-11-25 18:18:16 +00:00
#define NFS_CAP_DEALLOCATE		(1U << 21)
2015-06-27 15:45:46 +00:00
#define NFS_CAP_LAYOUTSTATS		(1U << 22)
2015-09-25 18:24:35 +00:00
#define NFS_CAP_CLONE			(1U << 23)
2013-05-21 20:53:03 +00:00
#define NFS_CAP_COPY			(1U << 24)
2018-07-09 19:13:29 +00:00
#define NFS_CAP_OFFLOAD_CANCEL		(1U << 25)
2019-02-08 15:31:05 +00:00
#define NFS_CAP_LAYOUTERROR		(1U << 26)
2019-06-04 20:14:30 +00:00
#define NFS_CAP_COPY_NOTIFY		(1U << 27)
2020-06-23 22:38:55 +00:00
#define NFS_CAP_XATTR			(1U << 28)
2014-05-28 17:41:22 +00:00
#define NFS_CAP_READ_PLUS		(1U << 29)
2021-12-09 19:53:30 +00:00
#define NFS_CAP_FS_LOCATIONS		(1U << 30)
2022-05-25 16:12:59 +00:00
#define NFS_CAP_MOVEABLE		(1U << 31)
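Each flag above occupies one bit of a per-server 32-bit capability mask, so feature tests and downgrades are plain bitwise operations. A minimal sketch, assuming the mask lives in a 'caps' field (not shown in this excerpt):

	static bool nfs_server_can_readdirplus(const struct nfs_server *server)
	{
		return server->caps & NFS_CAP_READDIRPLUS;
	}

	/* When the server turns out not to support a feature, the
	 * client simply drops the bit and stops trying. */
	static void nfs_disable_readdirplus(struct nfs_server *server)
	{
		server->caps &= ~NFS_CAP_READDIRPLUS;
	}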
2005-04-16 22:20:36 +00:00
#endif