// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (c) 2008-2009 Patrick McHardy <kaber@trash.net>
 *
 * Development of this code funded by Astaro AG (http://www.astaro.com/)
 */

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/netlink.h>
#include <linux/netfilter.h>
#include <linux/netfilter/nf_tables.h>
#include <net/netfilter/nf_tables_core.h>

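/*
 * Per-set private data.  Datapath lookups walk the tree locklessly and
 * validate against the @count seqcount, falling back to @lock only when
 * they race with an update; @gc_work drives asynchronous garbage
 * collection of expired elements.
 */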
struct nft_rbtree {
	struct rb_root		root;
	rwlock_t		lock;
	seqcount_rwlock_t	count;
	struct delayed_work	gc_work;
};

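/*
 * A single set element: an rb-tree node keyed by the data held in the
 * extension area @ext.  For interval sets, each interval boundary is a
 * separate element and the end boundary carries the
 * NFT_SET_ELEM_INTERVAL_END flag in its extension.
 */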
struct nft_rbtree_elem {
	struct rb_node		node;
	struct nft_set_ext	ext;
};

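/*
 * An element marks the end of an interval if its extension carries the
 * NFT_SET_ELEM_INTERVAL_END flag; every other element is treated as an
 * interval start.
 */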
static bool nft_rbtree_interval_end(const struct nft_rbtree_elem *rbe)
{
	return nft_set_ext_exists(&rbe->ext, NFT_SET_EXT_FLAGS) &&
	       (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END);
}

static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe)
{
	return !nft_rbtree_interval_end(rbe);
}

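/* Compare two elements by key, memcmp() over the set's key length. */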
static int nft_rbtree_cmp(const struct nft_set *set,
			  const struct nft_rbtree_elem *e1,
			  const struct nft_rbtree_elem *e2)
{
	return memcmp(nft_set_ext_key(&e1->ext), nft_set_ext_key(&e2->ext),
		      set->klen);
}

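/* Elements that have timed out or have been marked dead no longer match. */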
static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe)
{
	return nft_set_elem_expired(&rbe->ext) ||
	       nft_set_elem_is_dead(&rbe->ext);
}

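/*
 * Lockless lookup, validated against the @seq seqcount snapshot; a
 * detected writer race aborts with false so the caller can retry under
 * the read lock.
 *
 * During the descent, each visited element whose key is below @key is
 * remembered as the candidate start of the interval that may contain
 * @key; an interval-end element never replaces an interval-start element
 * with the same key, so adjacent intervals resolve to the correct start.
 * On an exact key match, elements not active in the current generation
 * are skipped, expired elements end the lookup, and a match on an
 * interval-end element means the key sits on an exclusive boundary:
 * anonymous sets can report a non-match right away, named sets must keep
 * searching for an adjacent interval starting at the same key.
 */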
static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
				const u32 *key, const struct nft_set_ext **ext,
				unsigned int seq)
{
	struct nft_rbtree *priv = nft_set_priv(set);
	const struct nft_rbtree_elem *rbe, *interval = NULL;
	u8 genmask = nft_genmask_cur(net);
	const struct rb_node *parent;
	int d;

	parent = rcu_dereference_raw(priv->root.rb_node);
	while (parent != NULL) {
		if (read_seqcount_retry(&priv->count, seq))
			return false;

		rbe = rb_entry(parent, struct nft_rbtree_elem, node);

		d = memcmp(nft_set_ext_key(&rbe->ext), key, set->klen);
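		/*
		 * Elements below the key are potential interval starts; do
		 * not let an interval-end element shadow an interval-start
		 * element carrying the same key.
		 */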
		if (d < 0) {
			parent = rcu_dereference_raw(parent->rb_left);
			if (interval &&
			    !nft_rbtree_cmp(set, rbe, interval) &&
			    nft_rbtree_interval_end(rbe) &&
			    nft_rbtree_interval_start(interval))
				continue;
			interval = rbe;
		} else if (d > 0)
			parent = rcu_dereference_raw(parent->rb_right);
		else {
			if (!nft_set_elem_active(&rbe->ext, genmask)) {
				parent = rcu_dereference_raw(parent->rb_left);
				continue;
			}

			if (nft_rbtree_elem_expired(rbe))
				return false;

			if (nft_rbtree_interval_end(rbe)) {
				if (nft_set_is_anonymous(set))
					return false;
				parent = rcu_dereference_raw(parent->rb_left);
				interval = NULL;
				continue;
			}

			*ext = &rbe->ext;
			return true;
		}
	}

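	/*
	 * No exact match: for interval sets, fall back to the closest
	 * element below the key, provided it is an active, unexpired
	 * interval start.
	 */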
	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
	    nft_set_elem_active(&interval->ext, genmask) &&
	    !nft_rbtree_elem_expired(interval) &&
	    nft_rbtree_interval_start(interval)) {
		*ext = &interval->ext;
		return true;
	}

	return false;
}

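/*
 * Datapath lookup: first walk the tree without taking the read lock,
 * relying on RCU and the seqcount to detect concurrent updates.  Only if
 * the lockless walk failed because it raced with a writer is it repeated
 * under the read lock.
 */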
INDIRECT_CALLABLE_SCOPE
bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
		       const u32 *key, const struct nft_set_ext **ext)
{
	struct nft_rbtree *priv = nft_set_priv(set);
	unsigned int seq = read_seqcount_begin(&priv->count);
	bool ret;

	ret = __nft_rbtree_lookup(net, set, key, ext, seq);
	if (ret || !read_seqcount_retry(&priv->count, seq))
		return ret;

	read_lock_bh(&priv->lock);
	seq = read_seqcount_begin(&priv->count);
	ret = __nft_rbtree_lookup(net, set, key, ext, seq);
	read_unlock_bh(&priv->lock);

	return ret;
}

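/*
 * Lookup variant used when the element itself is needed, e.g. to get set
 * elements via netlink: it honours the NFT_SET_ELEM_INTERVAL_END bit in
 * @flags to select either the start or the end element of the matching
 * interval, and checks element activity against the caller-supplied
 * generation mask.
 */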
static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
|
|
|
|
const u32 *key, struct nft_rbtree_elem **elem,
|
|
|
|
unsigned int seq, unsigned int flags, u8 genmask)
|
|
|
|
{
|
|
|
|
struct nft_rbtree_elem *rbe, *interval = NULL;
|
|
|
|
struct nft_rbtree *priv = nft_set_priv(set);
|
|
|
|
const struct rb_node *parent;
|
|
|
|
const void *this;
|
|
|
|
int d;
|
|
|
|
|
|
|
|
parent = rcu_dereference_raw(priv->root.rb_node);
|
|
|
|
while (parent != NULL) {
|
|
|
|
if (read_seqcount_retry(&priv->count, seq))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
rbe = rb_entry(parent, struct nft_rbtree_elem, node);
|
|
|
|
|
|
|
|
this = nft_set_ext_key(&rbe->ext);
|
|
|
|
d = memcmp(this, key, set->klen);
|
|
|
|
if (d < 0) {
|
|
|
|
parent = rcu_dereference_raw(parent->rb_left);
|
2018-10-01 11:27:32 +00:00
|
|
|
if (!(flags & NFT_SET_ELEM_INTERVAL_END))
|
|
|
|
interval = rbe;
|
2017-10-09 17:52:28 +00:00
|
|
|
} else if (d > 0) {
|
|
|
|
parent = rcu_dereference_raw(parent->rb_right);
|
2018-10-01 11:27:32 +00:00
|
|
|
if (flags & NFT_SET_ELEM_INTERVAL_END)
|
|
|
|
interval = rbe;
|
2017-10-09 17:52:28 +00:00
|
|
|
} else {
|
netfilter: nft_set_rbtree: bogus lookup/get on consecutive elements in named sets
The existing rbtree implementation might store consecutive elements where the closing element and the opening element overlap, e.g.
[ a, a+1) [ a+1, a+2)
This patch removes the optimization for non-anonymous sets in the exact-matching case, which assumed the search could stop as soon as the closing element was found. Instead, invalidate the candidate interval and keep looking further in the tree.
The lookup/get operation might return false even though there is a matching element in the rbtree. Moreover, the get operation returns true as if a+2 were in the tree. This happens with named sets after several set updates.
The existing lookup optimization (which only works for anonymous sets) might not reach the opening [ a+1,... element if the closing ...,a+1) is found first when walking over the rbtree. Hence, walking the full tree is needed in that case.
This patch fixes the lookup and get operations.
Fixes: e701001e7cbe ("netfilter: nft_rbtree: allow adjacent intervals with dynamic updates")
Fixes: ba0e4d9917b4 ("netfilter: nf_tables: get set elements via netlink")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
                        if (!nft_set_elem_active(&rbe->ext, genmask)) {
                                parent = rcu_dereference_raw(parent->rb_left);
                                continue;
                        }

                        if (nft_set_elem_expired(&rbe->ext))
                                return false;

                        if (!nft_set_ext_exists(&rbe->ext, NFT_SET_EXT_FLAGS) ||
                            (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END) ==
                            (flags & NFT_SET_ELEM_INTERVAL_END)) {
                                *elem = rbe;
                                return true;
                        }
                        if (nft_rbtree_interval_end(rbe))
                                interval = NULL;

                        parent = rcu_dereference_raw(parent->rb_left);
                }
        }

        if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
            nft_set_elem_active(&interval->ext, genmask) &&
            !nft_set_elem_expired(&interval->ext) &&
            ((!nft_rbtree_interval_end(interval) &&
              !(flags & NFT_SET_ELEM_INTERVAL_END)) ||
             (nft_rbtree_interval_end(interval) &&
              (flags & NFT_SET_ELEM_INTERVAL_END)))) {
                *elem = interval;
                return true;
        }

        return false;
}
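In __nft_rbtree_get() the tree holds interval boundaries rather than plain keys: start elements and NFT_SET_ELEM_INTERVAL_END elements, and a key that matches no node exactly still belongs to the set when the closest smaller boundary opens an interval. The sketch below is not the kernel algorithm and ignores generations, expiry and equal-key ordering; it only shows that membership rule on a sorted array, with names made up for the example.

#include <stdbool.h>
#include <stddef.h>

/* One interval boundary: a key plus a flag saying whether it closes a range. */
struct boundary {
        unsigned int key;
        bool         interval_end;
};

/* 'b' is sorted by key. A lookup key is a member iff the closest boundary
 * with key <= lookup key exists and opens an interval.
 */
static bool interval_member(const struct boundary *b, size_t n, unsigned int key)
{
        const struct boundary *closest = NULL;
        size_t i;

        for (i = 0; i < n && b[i].key <= key; i++)
                closest = &b[i];

        return closest && !closest->interval_end;
}

The 'interval' pointer carried through the descent above plays roughly the role of 'closest' here, which is why the final check compares its INTERVAL_END flag against the flags requested by the caller.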
static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
                            const struct nft_set_elem *elem, unsigned int flags)
{
        struct nft_rbtree *priv = nft_set_priv(set);
        unsigned int seq = read_seqcount_begin(&priv->count);
        struct nft_rbtree_elem *rbe = ERR_PTR(-ENOENT);
        const u32 *key = (const u32 *)&elem->key.val;
        u8 genmask = nft_genmask_cur(net);
        bool ret;

        ret = __nft_rbtree_get(net, set, key, &rbe, seq, flags, genmask);
        if (ret || !read_seqcount_retry(&priv->count, seq))
                return rbe;

        read_lock_bh(&priv->lock);
        seq = read_seqcount_begin(&priv->count);
        ret = __nft_rbtree_get(net, set, key, &rbe, seq, flags, genmask);
        if (!ret)
                rbe = ERR_PTR(-ENOENT);
        read_unlock_bh(&priv->lock);

        return rbe;
}
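nft_rbtree_get() reports failure by returning ERR_PTR(-ENOENT) instead of NULL, so a single pointer return carries either a valid element or an encoded errno. As a rough userspace imitation of that convention (this is not the kernel's err.h, just the same pointer-encoding trick with invented names):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Encode a negative errno value in a pointer, as the kernel's ERR_PTR() does. */
static inline void *demo_err_ptr(long err)
{
        return (void *)err;
}

static inline long demo_ptr_err(const void *ptr)
{
        return (long)(intptr_t)ptr;
}

/* Treat the top page of the address space as the error range. */
static inline int demo_is_err(const void *ptr)
{
        return (uintptr_t)ptr >= (uintptr_t)-4095;
}

int main(void)
{
        void *elem = demo_err_ptr(-ENOENT);     /* what a failed get would hand back */

        if (demo_is_err(elem))
                printf("lookup failed: errno %ld\n", -demo_ptr_err(elem));

        return 0;
}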
static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set,
                                 struct nft_rbtree *priv,
                                 struct nft_rbtree_elem *rbe)
{
        struct nft_set_elem elem = {
                .priv = rbe,
        };

        nft_setelem_data_deactivate(net, set, &elem);
        rb_erase(&rbe->node, &priv->root);
}
netfilter: nf_tables: nft_set_rbtree: fix spurious insertion failure
[ Upstream commit 087388278e0f301f4c61ddffb1911d3a180f84b8 ]
nft_rbtree_gc_elem() walks back and removes the end interval element that comes before the expired element.
There is a small chance that we've cached this element as 'rbe_ge'. If this happens, we hold and test a pointer that has been queued for freeing.
It also causes spurious insertion failures:
$ cat test-testcases-sets-0044interval_overlap_0.1/testout.log
Error: Could not process rule: File exists
add element t s { 0 - 2 }
^^^^^^
Failed to insert 0 - 2 given:
table ip t {
set s {
type inet_service
flags interval,timeout
timeout 2s
gc-interval 2s
}
}
The set (rbtree) is empty. The 'failure' doesn't happen on the next attempt.
The reason is that when we try to insert, the tree may hold an expired element that collides with the range we're adding. While we do evict/erase this element, we can trip over this check:
if (rbe_ge && nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
return -ENOTEMPTY;
rbe_ge was erased by the synchronous gc, so we should not have done this check. The next attempt won't find it, so the retry results in a successful insertion.
Restart in-kernel to avoid such spurious errors.
Such restarts are rare, unless userspace intentionally adds very large numbers of elements with very short timeouts while setting a huge gc interval. Even in this case, this cannot loop forever; on each retry an existing element has been removed. As the caller is holding the transaction mutex, it's impossible for a second entity to add more expiring elements to the tree.
After this it also becomes feasible to remove the async gc worker and perform all garbage collection from the commit path.
Fixes: c9e6978e2725 ("netfilter: nft_set_rbtree: Switch to node list walk for overlap detection")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
static const struct nft_rbtree_elem *
nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
                   struct nft_rbtree_elem *rbe, u8 genmask)
netfilter: nft_set_rbtree: Switch to node list walk for overlap detection
[ Upstream commit c9e6978e2725a7d4b6cd23b2facd3f11422c0643 ]
...instead of a tree descent, which became overly complicated in an attempt to cover cases where expired or inactive elements would affect comparisons with the new element being inserted. Further, it turned out that it's probably impossible to cover all those cases, as inactive nodes might entirely hide subtrees consisting of a complete interval plus a node that makes the current insertion not overlap.
To speed up the overlap check, descend the tree to find a greater element that is closer to the key value to insert. Then walk down the node list for overlap detection. Starting the overlap check from rb_first() unconditionally is slow; it takes 10 times longer due to the full linear traversal of the list.
Moreover, perform garbage collection of expired elements when walking down the node list to avoid bogus overlap reports.
For the insertion operation itself, this essentially reverts to the implementation before commit 7c84d41416d8 ("netfilter: nft_set_rbtree: Detect partial overlaps on insertion"), except that cases of complete overlap are already handled in the overlap detection phase itself, which slightly simplifies the loop to find the insertion point.
Based on an initial patch from Stefano Brivio, including text from the original patch description.
Fixes: 7c84d41416d8 ("netfilter: nft_set_rbtree: Detect partial overlaps on insertion")
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
{
        struct nft_set *set = (struct nft_set *)__set;
        struct rb_node *prev = rb_prev(&rbe->node);
        struct net *net = read_pnet(&set->net);
        struct nft_rbtree_elem *rbe_prev;
        struct nft_trans_gc *gc;

        gc = nft_trans_gc_alloc(set, 0, GFP_ATOMIC);
        if (!gc)
                return ERR_PTR(-ENOMEM);

        /* search for end interval coming before this element.
         * end intervals don't carry a timeout extension, they
         * are coupled with the interval start element.
         */
        while (prev) {
                rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
                if (nft_rbtree_interval_end(rbe_prev) &&
                    nft_set_elem_active(&rbe_prev->ext, genmask))
                        break;

                prev = rb_prev(prev);
        }

        rbe_prev = NULL;
        if (prev) {
                rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
                nft_rbtree_gc_remove(net, set, priv, rbe_prev);

                /* There is always room in this trans gc for this element,
                 * memory allocation never actually happens, hence, the warning
                 * splat in such case. No need to set NFT_SET_ELEM_DEAD_BIT,
                 * this is synchronous gc which never fails.
                 */
                gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
                if (WARN_ON_ONCE(!gc))
                        return ERR_PTR(-ENOMEM);

                nft_trans_gc_elem_add(gc, rbe_prev);
        }

        nft_rbtree_gc_remove(net, set, priv, rbe);
        gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
        if (WARN_ON_ONCE(!gc))
                return ERR_PTR(-ENOMEM);

        nft_trans_gc_elem_add(gc, rbe);

        nft_trans_gc_queue_sync_done(gc);
        return rbe_prev;
}
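nft_rbtree_gc_elem() above is the synchronous GC shape used on this path: when the element carrying the timeout has expired, walk back to the nearest active end boundary in front of it, queue both nodes into a GC batch, and erase them from the tree; the caller then learns which end element disappeared so it can restart if it had cached that pointer. The sketch below only mirrors that "collect the pair, then dispose of the batch" flow on a doubly linked list; every type and function name in it is invented, and freeing of the collected nodes is left out.

#include <stdbool.h>
#include <stddef.h>

struct gc_node {
        struct gc_node *prev, *next;
        bool            interval_end;
        bool            active;
};

struct gc_batch {
        struct gc_node *victims[2];
        int             count;
};

/* Unlink one node and remember it for later disposal, loosely mirroring
 * nft_rbtree_gc_remove() followed by nft_trans_gc_elem_add().
 */
static void gc_collect(struct gc_batch *gc, struct gc_node **head,
                       struct gc_node *n)
{
        if (n->prev)
                n->prev->next = n->next;
        else
                *head = n->next;
        if (n->next)
                n->next->prev = n->prev;

        gc->victims[gc->count++] = n;
}

/* An expired node is reaped together with the closest active end boundary
 * that precedes it, if any; return that end node, much as the function
 * above returns rbe_prev to its caller.
 */
static struct gc_node *gc_expired(struct gc_batch *gc, struct gc_node **head,
                                  struct gc_node *expired)
{
        struct gc_node *p = expired->prev;

        while (p && !(p->interval_end && p->active))
                p = p->prev;

        if (p)
                gc_collect(gc, head, p);
        gc_collect(gc, head, expired);

        return p;
}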
static bool nft_rbtree_update_first(const struct nft_set *set,
                                    struct nft_rbtree_elem *rbe,
                                    struct rb_node *first)
{
        struct nft_rbtree_elem *first_elem;

        first_elem = rb_entry(first, struct nft_rbtree_elem, node);
        /* this element is closest to where the new element is to be inserted:
         * update the first element for the node list path.
         */
        if (nft_rbtree_cmp(set, rbe, first_elem) < 0)
                return true;

        return false;
}
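nft_rbtree_update_first() exists for the insertion path that follows: rather than deciding about overlaps while descending the tree, the code first descends only to find the element closest to (and not smaller than) the new one, then walks the ordered node list from that point to classify neighbours and detect collisions. A rough stand-alone illustration of that two-phase idea, on a sorted array of boundary keys rather than an rbtree and with invented names, might look like this; a full check also has to consider whether the closest smaller boundary opens an interval, which is omitted here.

#include <stdbool.h>
#include <stddef.h>

/* Phase 1: find the first entry >= key; this is where the ordered walk starts. */
static size_t first_not_smaller(const unsigned int *keys, size_t n,
                                unsigned int key)
{
        size_t lo = 0, hi = n;

        while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;

                if (keys[mid] < key)
                        lo = mid + 1;
                else
                        hi = mid;
        }
        return lo;
}

/* Phase 2: inspect the neighbours in order. Here only one question is asked:
 * does any existing boundary fall inside the half-open range [start, end)?
 */
static bool range_collides(const unsigned int *keys, size_t n,
                           unsigned int start, unsigned int end)
{
        size_t i = first_not_smaller(keys, n, start);

        return i < n && keys[i] < end;
}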
static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
                               struct nft_rbtree_elem *new,
                               struct nft_set_ext **ext)
{
        struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
        struct rb_node *node, *next, *parent, **p, *first = NULL;
        struct nft_rbtree *priv = nft_set_priv(set);
        u8 cur_genmask = nft_genmask_cur(net);
        u8 genmask = nft_genmask_next(net);
        int d;

        /* Descend the tree to search for an existing element greater than the
         * key value to insert that is greater than the new element. This is the
         * first element to walk the ordered elements to find possible overlap.
         */
netfilter: nft_set_rbtree: Detect partial overlaps on insertion
...and return -ENOTEMPTY to the front-end in this case, instead of proceeding. Currently, nft takes care of checking for these cases and not sending them to the kernel, but if we drop the set_overlap() call in nft we can end up in situations like:
# nft add table t
# nft add set t s '{ type inet_service ; flags interval ; }'
# nft add element t s '{ 1 - 5 }'
# nft add element t s '{ 6 - 10 }'
# nft add element t s '{ 4 - 7 }'
# nft list set t s
table ip t {
set s {
type inet_service
flags interval
elements = { 1-3, 4-5, 6-7 }
}
}
This change has the primary purpose of making the behaviour consistent with nft_set_pipapo, but it also serves to avoid inconsistent behaviour if userspace sends overlapping elements for any reason.
v2: When we meet the same key data in the tree, as a start element while inserting an end element, or as an end element while inserting a start element, actually check that the existing element is active before resetting the overlap flag (Pablo Neira Ayuso)
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
        parent = NULL;
        p = &priv->root.rb_node;
        while (*p != NULL) {
                parent = *p;
                rbe = rb_entry(parent, struct nft_rbtree_elem, node);
                d = nft_rbtree_cmp(set, rbe, new);

netfilter: nft_set_rbtree: Detect partial overlaps on insertion
...and return -ENOTEMPTY to the front-end in this case, instead of
proceeding. Currently, nft takes care of checking for these cases
and not sending them to the kernel, but if we drop the set_overlap()
call in nft we can end up in situations like:
# nft add table t
# nft add set t s '{ type inet_service ; flags interval ; }'
# nft add element t s '{ 1 - 5 }'
# nft add element t s '{ 6 - 10 }'
# nft add element t s '{ 4 - 7 }'
# nft list set t s
table ip t {
        set s {
                type inet_service
                flags interval
                elements = { 1-3, 4-5, 6-7 }
        }
}
This change has the primary purpose of making the behaviour
consistent with nft_set_pipapo, but it also serves to avoid
inconsistent behaviour if userspace sends overlapping elements for
any reason.
v2: When we meet the same key data in the tree, as a start element while
inserting an end element, or as an end element while inserting a start
element, actually check that the existing element is active before
resetting the overlap flag (Pablo Neira Ayuso)
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-03-22 02:22:01 +00:00
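As a hedged summary of the overlap rules this scheme enforces, the sketch below is a condensed, userspace-style restatement of the checks that appear further down in __nft_rbtree_insert(); classify_overlap and struct toy_iv_elem are illustrative stand-ins for the rbe_le/rbe_ge bookkeeping, not kernel APIs. A new element whose key and endpoint type match an existing active element is a full overlap (-EEXIST, which the caller may clear when NLM_F_EXCL is not set), while two consecutive start elements (outside anonymous sets) or two consecutive end elements indicate a partial overlap (-ENOTEMPTY).

#include <errno.h>
#include <stdbool.h>

struct toy_iv_elem {
        bool end;               /* mirrors nft_rbtree_interval_end() */
        bool same_key_as_new;   /* key data equal to the element being added */
};

/* illustrative only: condensed decision table for the overlap checks */
static int classify_overlap(const struct toy_iv_elem *new,
                            const struct toy_iv_elem *le,
                            const struct toy_iv_elem *ge,
                            bool anonymous_set)
{
        /* same key, same endpoint type: full overlap */
        if (ge && ge->same_key_as_new && ge->end == new->end)
                return -EEXIST;
        if (le && le->same_key_as_new && le->end == new->end)
                return -EEXIST;

        /* consecutive starts (unless the set is anonymous and constant)
         * or consecutive ends: partial overlap
         */
        if (!anonymous_set && le && !le->end && !new->end)
                return -ENOTEMPTY;
        if (le && le->end && new->end)
                return -ENOTEMPTY;
        if (ge && ge->end && new->end)
                return -ENOTEMPTY;

        return 0;       /* no overlap: proceed with the insertion */
}

int main(void)
{
        struct toy_iv_elem new = { .end = false, .same_key_as_new = false };
        struct toy_iv_elem le  = { .end = false, .same_key_as_new = false };

        /* a new start next to an existing start: partial overlap */
        return classify_overlap(&new, &le, NULL, false) == -ENOTEMPTY ? 0 : 1;
}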
                if (d < 0) {
                        p = &parent->rb_left;
                } else if (d > 0) {
                        if (!first ||
                            nft_rbtree_update_first(set, rbe, first))
                                first = &rbe->node;

                        p = &parent->rb_right;
                } else {
                        if (nft_rbtree_interval_end(rbe))
                                p = &parent->rb_left;
                        else
                                p = &parent->rb_right;
                }
        }

        if (!first)
                first = rb_first(&priv->root);

        /* Detect overlap by going through the list of valid tree nodes.
         * Values stored in the tree are in reversed order, starting from
         * highest to lowest value.
         */
        for (node = first; node != NULL; node = next) {
                next = rb_next(node);

                rbe = rb_entry(node, struct nft_rbtree_elem, node);

                if (!nft_set_elem_active(&rbe->ext, genmask))
                        continue;

                /* perform garbage collection to avoid bogus overlap reports
                 * but skip new elements in this transaction.
                 */
                if (nft_set_elem_expired(&rbe->ext) &&
                    nft_set_elem_active(&rbe->ext, cur_genmask)) {
netfilter: nf_tables: nft_set_rbtree: fix spurious insertion failure
[ Upstream commit 087388278e0f301f4c61ddffb1911d3a180f84b8 ]
nft_rbtree_gc_elem() walks back and removes the end interval element that
comes before the expired element.
There is a small chance that we've cached this element as 'rbe_ge'.
If this happens, we hold and test a pointer that has been queued for
freeing.
It also causes spurious insertion failures:
$ cat test-testcases-sets-0044interval_overlap_0.1/testout.log
Error: Could not process rule: File exists
add element t s { 0 - 2 }
^^^^^^
Failed to insert 0 - 2 given:
table ip t {
        set s {
                type inet_service
                flags interval,timeout
                timeout 2s
                gc-interval 2s
        }
}
The set (rbtree) is empty. The 'failure' doesn't happen on the next attempt.
The reason is that when we try to insert, the tree may hold an expired
element that collides with the range we're adding.
While we do evict/erase this element, we can trip over this check:
if (rbe_ge && nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
        return -ENOTEMPTY;
rbe_ge was erased by the synchronous gc, so we should not have done this
check. The next attempt won't find it, so a retry results in successful
insertion.
Restart in-kernel to avoid such spurious errors.
Such restarts are rare, unless userspace intentionally adds very large
numbers of elements with very short timeouts while setting a huge
gc interval.
Even in this case, this cannot loop forever: on each retry an existing
element has been removed.
As the caller is holding the transaction mutex, it is impossible
for a second entity to add more expiring elements to the tree.
After this it also becomes feasible to remove the async gc worker
and perform all garbage collection from the commit path.
Fixes: c9e6978e2725 ("netfilter: nft_set_rbtree: Switch to node list walk for overlap detection")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-28 13:12:44 +00:00
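The retry contract described above can be pictured with the small, self-contained sketch below; try_insert is a hypothetical stub, not a kernel function. The insertion attempt reports -EAGAIN when the overlap scan had to evict a node it had already cached as rbe_le or rbe_ge, and the caller simply restarts the whole attempt, much like nft_rbtree_insert() does further below with its do/while loop.

#include <errno.h>
#include <stdio.h>

/* hypothetical stub: pretend the first two scans trip over a node that
 * synchronous gc just evicted, which the real code reports as -EAGAIN
 */
static int try_insert(void)
{
        static int evicted_cached_node = 2;

        if (evicted_cached_node-- > 0)
                return -EAGAIN;
        return 0;
}

int main(void)
{
        int err;

        do {
                err = try_insert();     /* restart the whole attempt on -EAGAIN */
        } while (err == -EAGAIN);

        printf("insert result: %d\n", err);
        return 0;
}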
                        const struct nft_rbtree_elem *removed_end;

                        removed_end = nft_rbtree_gc_elem(set, priv, rbe, genmask);
                        if (IS_ERR(removed_end))
                                return PTR_ERR(removed_end);

                        if (removed_end == rbe_le || removed_end == rbe_ge)
                                return -EAGAIN;

                        continue;
                }

                d = nft_rbtree_cmp(set, rbe, new);
                if (d == 0) {
                        /* Matching end element: no need to look for an
                         * overlapping greater or equal element.
                         */
                        if (nft_rbtree_interval_end(rbe)) {
                                rbe_le = rbe;
                                break;
                        }

                        /* first element that is greater or equal to key value. */
                        if (!rbe_ge) {
                                rbe_ge = rbe;
                                continue;
                        }

                        /* this is a closer more or equal element, update it. */
                        if (nft_rbtree_cmp(set, rbe_ge, new) != 0) {
                                rbe_ge = rbe;
                                continue;
                        }

                        /* element is equal to key value, make sure flags are
                         * the same, an existing more or equal start element
                         * must not be replaced by more or equal end element.
                         */
                        if ((nft_rbtree_interval_start(new) &&
                             nft_rbtree_interval_start(rbe_ge)) ||
                            (nft_rbtree_interval_end(new) &&
                             nft_rbtree_interval_end(rbe_ge))) {
                                rbe_ge = rbe;
                                continue;
                        }
                } else if (d > 0) {
                        /* annotate element greater than the new element. */
                        rbe_ge = rbe;
                        continue;
                } else if (d < 0) {
                        /* annotate element less than the new element. */
                        rbe_le = rbe;
                        break;
                }
        }

netfilter: nft_set_rbtree: Detect partial overlap with start endpoint match
Getting creative with nft and omitting the interval_overlap()
check from the set_overlap() function, without omitting
set_overlap() altogether, led to the observation of a partial
overlap that wasn't detected, and would actually result in
replacement of the end element of an existing interval.
This is due to the fact that we'll return -EEXIST on a matching,
pre-existing start element, instead of -ENOTEMPTY, and the error
is cleared by the API if NLM_F_EXCL is not given. At this point, we
can insert a matching start, and duplicate the end element as long
as we don't end up in other intervals.
For instance, inserting interval 0 - 2 with an existing 0 - 3
interval would result in a single 0 - 2 interval, and a dangling
'3' end element. This is because nft will proceed after inserting
the '0' start element as no error is reported, and no further
conflicting intervals are detected on insertion of the end element.
This needs a different approach, as it's a local condition that can
be detected by looking for duplicate ends coming from the left and
right, separately. Track those and directly report -ENOTEMPTY on
duplicated end elements for a matching start.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-08-19 21:59:15 +00:00
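A hedged restatement of the local condition described above follows; check_start_match and its boolean parameters are illustrative names, not the kernel helpers. When the new start element matches an existing start, the insertion must also refuse outright, with -ENOTEMPTY, if accepting it would duplicate an end element on either side; only a clean full overlap may be reported as -EEXIST, which the caller can then clear when NLM_F_EXCL is not given.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* illustrative sketch of the decision described above, not the kernel code */
static int check_start_match(bool start_matches_existing,
                             bool duplicate_end_from_left,
                             bool duplicate_end_from_right)
{
        if (!start_matches_existing)
                return 0;               /* nothing to decide here */

        if (duplicate_end_from_left || duplicate_end_from_right)
                return -ENOTEMPTY;      /* partial overlap, refuse outright */

        return -EEXIST;                 /* full overlap, may be cleared by the
                                         * caller when NLM_F_EXCL is not given */
}

int main(void)
{
        /* matching start with a duplicate end on one side: report -ENOTEMPTY */
        printf("%d\n", check_start_match(true, false, true));
        return 0;
}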
        /* - new start element matching existing start element: full overlap
         *   reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
         */
        if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
            nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
                *ext = &rbe_ge->ext;
                return -EEXIST;
        }

        /* - new end element matching existing end element: full overlap
         *   reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
         */
        if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) &&
            nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) {
                *ext = &rbe_le->ext;
                return -EEXIST;
        }

        /* - new start element with existing closest, less or equal key value
         *   being a start element: partial overlap, reported as -ENOTEMPTY.
         *   Anonymous sets allow for two consecutive start elements since they
         *   are constant, skip them to avoid bogus overlap reports.
         */
        if (!nft_set_is_anonymous(set) && rbe_le &&
            nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new))
                return -ENOTEMPTY;

        /* - new end element with existing closest, less or equal key value
         *   being an end element: partial overlap, reported as -ENOTEMPTY.
         */
        if (rbe_le &&
            nft_rbtree_interval_end(rbe_le) && nft_rbtree_interval_end(new))
                return -ENOTEMPTY;

        /* - new end element with existing closest, greater or equal key value
         *   being an end element: partial overlap, reported as -ENOTEMPTY.
         */
        if (rbe_ge &&
            nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
                return -ENOTEMPTY;

        /* Accepted element: pick insertion point depending on key value */
        parent = NULL;
        p = &priv->root.rb_node;
        while (*p != NULL) {
                parent = *p;
                rbe = rb_entry(parent, struct nft_rbtree_elem, node);
                d = nft_rbtree_cmp(set, rbe, new);

                if (d < 0)
                        p = &parent->rb_left;
                else if (d > 0)
                        p = &parent->rb_right;
                else if (nft_rbtree_interval_end(rbe))
                        p = &parent->rb_left;
                else
                        p = &parent->rb_right;
        }

        rb_link_node_rcu(&new->node, parent, p);
        rb_insert_color(&new->node, &priv->root);
        return 0;
}

static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
                             const struct nft_set_elem *elem,
                             struct nft_set_ext **ext)
{
        struct nft_rbtree *priv = nft_set_priv(set);
        struct nft_rbtree_elem *rbe = elem->priv;
        int err;

        do {
                if (fatal_signal_pending(current))
                        return -EINTR;

                cond_resched();

                write_lock_bh(&priv->lock);
                write_seqcount_begin(&priv->count);
                err = __nft_rbtree_insert(net, set, rbe, ext);
                write_seqcount_end(&priv->count);
                write_unlock_bh(&priv->lock);
        } while (err == -EAGAIN);

        return err;
}

static void nft_rbtree_remove(const struct net *net,
                              const struct nft_set *set,
const struct nft_set_elem *elem)
|
|
|
|
{
|
|
|
|
struct nft_rbtree *priv = nft_set_priv(set);
|
2015-03-25 14:08:50 +00:00
|
|
|
struct nft_rbtree_elem *rbe = elem->priv;
|
netfilter: nf_tables: add netlink set API
This patch adds the new netlink API for maintaining nf_tables sets
independently of the ruleset. The API supports the following operations:
- creation of sets
- deletion of sets
- querying of specific sets
- dumping of all sets
- addition of set elements
- removal of set elements
- dumping of all set elements
Sets are identified by name, each table defines an individual namespace.
The name of a set may be allocated automatically, this is mostly useful
in combination with the NFT_SET_ANONYMOUS flag, which destroys a set
automatically once the last reference has been released.
Sets can be marked constant, meaning they're not allowed to change while
linked to a rule. This allows to perform lockless operation for set
types that would otherwise require locking.
Additionally, if the implementation supports it, sets can (as before) be
used as maps, associating a data value with each key (or range), by
specifying the NFT_SET_MAP flag and can be used for interval queries by
specifying the NFT_SET_INTERVAL flag.
Set elements are added and removed incrementally. All element operations
support batching, reducing netlink message and set lookup overhead.
The old "set" and "hash" expressions are replaced by a generic "lookup"
expression, which binds to the specified set. Userspace is not aware
of the actual set implementation used by the kernel anymore, all
configuration options are generic.
Currently the implementation selection logic is largely missing and the
kernel will simply use the first registered implementation supporting the
requested operation. Eventually, the plan is to have userspace supply a
description of the data characteristics and select the implementation
based on expected performance and memory use.
This patch includes the new 'lookup' expression to look up for element
matching in the set.
This patch includes kernel-doc descriptions for this set API and it
also includes the following fixes.
From Patrick McHardy:
* netfilter: nf_tables: fix set element data type in dumps
* netfilter: nf_tables: fix indentation of struct nft_set_elem comments
* netfilter: nf_tables: fix oops in nft_validate_data_load()
* netfilter: nf_tables: fix oops while listing sets of built-in tables
* netfilter: nf_tables: destroy anonymous sets immediately if binding fails
* netfilter: nf_tables: propagate context to set iter callback
* netfilter: nf_tables: add loop detection
From Pablo Neira Ayuso:
* netfilter: nf_tables: allow to dump all existing sets
* netfilter: nf_tables: fix wrong type for flags variable in newelem
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2013-10-11 10:06:22 +00:00
|
|
|
|
2017-03-12 11:38:47 +00:00
|
|
|
write_lock_bh(&priv->lock);
|
2017-07-28 08:34:42 +00:00
|
|
|
write_seqcount_begin(&priv->count);
|
netfilter: nf_tables: add netlink set API
This patch adds the new netlink API for maintaining nf_tables sets
independently of the ruleset. The API supports the following operations:
- creation of sets
- deletion of sets
- querying of specific sets
- dumping of all sets
- addition of set elements
- removal of set elements
- dumping of all set elements
Sets are identified by name, each table defines an individual namespace.
The name of a set may be allocated automatically, this is mostly useful
in combination with the NFT_SET_ANONYMOUS flag, which destroys a set
automatically once the last reference has been released.
Sets can be marked constant, meaning they're not allowed to change while
linked to a rule. This allows to perform lockless operation for set
types that would otherwise require locking.
Additionally, if the implementation supports it, sets can (as before) be
used as maps, associating a data value with each key (or range), by
specifying the NFT_SET_MAP flag and can be used for interval queries by
specifying the NFT_SET_INTERVAL flag.
Set elements are added and removed incrementally. All element operations
support batching, reducing netlink message and set lookup overhead.
The old "set" and "hash" expressions are replaced by a generic "lookup"
expression, which binds to the specified set. Userspace is not aware
of the actual set implementation used by the kernel anymore, all
configuration options are generic.
Currently the implementation selection logic is largely missing and the
kernel will simply use the first registered implementation supporting the
requested operation. Eventually, the plan is to have userspace supply a
description of the data characteristics and select the implementation
based on expected performance and memory use.
This patch includes the new 'lookup' expression to look up for element
matching in the set.
This patch includes kernel-doc descriptions for this set API and it
also includes the following fixes.
From Patrick McHardy:
* netfilter: nf_tables: fix set element data type in dumps
* netfilter: nf_tables: fix indentation of struct nft_set_elem comments
* netfilter: nf_tables: fix oops in nft_validate_data_load()
* netfilter: nf_tables: fix oops while listing sets of built-in tables
* netfilter: nf_tables: destroy anonymous sets immediately if binding fails
* netfilter: nf_tables: propagate context to set iter callback
* netfilter: nf_tables: add loop detection
From Pablo Neira Ayuso:
* netfilter: nf_tables: allow to dump all existing sets
* netfilter: nf_tables: fix wrong type for flags variable in newelem
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2013-10-11 10:06:22 +00:00
|
|
|
rb_erase(&rbe->node, &priv->root);
|
2017-07-28 08:34:42 +00:00
|
|
|
write_seqcount_end(&priv->count);
|
2017-03-12 11:38:47 +00:00
|
|
|
write_unlock_bh(&priv->lock);
|
netfilter: nf_tables: add netlink set API
This patch adds the new netlink API for maintaining nf_tables sets
independently of the ruleset. The API supports the following operations:
- creation of sets
- deletion of sets
- querying of specific sets
- dumping of all sets
- addition of set elements
- removal of set elements
- dumping of all set elements
Sets are identified by name, each table defines an individual namespace.
The name of a set may be allocated automatically, this is mostly useful
in combination with the NFT_SET_ANONYMOUS flag, which destroys a set
automatically once the last reference has been released.
Sets can be marked constant, meaning they're not allowed to change while
linked to a rule. This allows to perform lockless operation for set
types that would otherwise require locking.
Additionally, if the implementation supports it, sets can (as before) be
used as maps, associating a data value with each key (or range), by
specifying the NFT_SET_MAP flag and can be used for interval queries by
specifying the NFT_SET_INTERVAL flag.
Set elements are added and removed incrementally. All element operations
support batching, reducing netlink message and set lookup overhead.
The old "set" and "hash" expressions are replaced by a generic "lookup"
expression, which binds to the specified set. Userspace is not aware
of the actual set implementation used by the kernel anymore, all
configuration options are generic.
Currently the implementation selection logic is largely missing and the
kernel will simply use the first registered implementation supporting the
requested operation. Eventually, the plan is to have userspace supply a
description of the data characteristics and select the implementation
based on expected performance and memory use.
This patch includes the new 'lookup' expression to look up for element
matching in the set.
This patch includes kernel-doc descriptions for this set API and it
also includes the following fixes.
From Patrick McHardy:
* netfilter: nf_tables: fix set element data type in dumps
* netfilter: nf_tables: fix indentation of struct nft_set_elem comments
* netfilter: nf_tables: fix oops in nft_validate_data_load()
* netfilter: nf_tables: fix oops while listing sets of built-in tables
* netfilter: nf_tables: destroy anonymous sets immediately if binding fails
* netfilter: nf_tables: propagate context to set iter callback
* netfilter: nf_tables: add loop detection
From Pablo Neira Ayuso:
* netfilter: nf_tables: allow to dump all existing sets
* netfilter: nf_tables: fix wrong type for flags variable in newelem
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2013-10-11 10:06:22 +00:00
|
|
|
}
|
|
|
|
|
2016-07-08 12:41:49 +00:00
|
|
|
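/* Commit path: flip the element's generation bit so it becomes active in the
 * next generation.
 */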
static void nft_rbtree_activate(const struct net *net,
				const struct nft_set *set,
				const struct nft_set_elem *elem)
{
	struct nft_rbtree_elem *rbe = elem->priv;

	nft_set_elem_change_active(net, set, &rbe->ext);
}

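/* Per-element flush callback: deactivate the element; the rbtree backend has
 * no reason to refuse, so it always reports success.
 */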
static bool nft_rbtree_flush(const struct net *net,
			     const struct nft_set *set, void *priv)
{
	struct nft_rbtree_elem *rbe = priv;

	nft_set_elem_change_active(net, set, &rbe->ext);

	return true;
}

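/* Find the element matching @elem in the next generation and deactivate it.
 * On equal keys, mismatched start/end interval markers and inactive entries
 * make the search descend further, while an expired entry ends it.
 */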
static void *nft_rbtree_deactivate(const struct net *net,
				   const struct nft_set *set,
				   const struct nft_set_elem *elem)
{
	const struct nft_rbtree *priv = nft_set_priv(set);
	const struct rb_node *parent = priv->root.rb_node;
	struct nft_rbtree_elem *rbe, *this = elem->priv;
	u8 genmask = nft_genmask_next(net);
	int d;

	while (parent != NULL) {
		rbe = rb_entry(parent, struct nft_rbtree_elem, node);

		d = memcmp(nft_set_ext_key(&rbe->ext), &elem->key.val,
			   set->klen);
		if (d < 0)
			parent = parent->rb_left;
		else if (d > 0)
			parent = parent->rb_right;
		else {
			if (nft_rbtree_interval_end(rbe) &&
			    nft_rbtree_interval_start(this)) {
				parent = parent->rb_left;
				continue;
			} else if (nft_rbtree_interval_start(rbe) &&
				   nft_rbtree_interval_end(this)) {
				parent = parent->rb_right;
				continue;
			} else if (nft_set_elem_expired(&rbe->ext)) {
				break;
			} else if (!nft_set_elem_active(&rbe->ext, genmask)) {
				parent = parent->rb_left;
				continue;
			}
			nft_rbtree_flush(net, set, rbe);
			return rbe;
		}
	}
	return NULL;
}

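/* Iterate over all elements active in the iterator's generation, under the
 * read lock, honouring iter->skip and stopping on a callback error.
 */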
static void nft_rbtree_walk(const struct nft_ctx *ctx,
			    struct nft_set *set,
			    struct nft_set_iter *iter)
{
	struct nft_rbtree *priv = nft_set_priv(set);
	struct nft_rbtree_elem *rbe;
	struct nft_set_elem elem;
	struct rb_node *node;

	read_lock_bh(&priv->lock);
	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
		rbe = rb_entry(node, struct nft_rbtree_elem, node);

		if (iter->count < iter->skip)
			goto cont;
		if (!nft_set_elem_active(&rbe->ext, iter->genmask))
			goto cont;

		elem.priv = rbe;

		iter->err = iter->fn(ctx, set, iter, &elem);
		if (iter->err < 0) {
			read_unlock_bh(&priv->lock);
			return;
		}
cont:
		iter->count++;
	}
	read_unlock_bh(&priv->lock);
}

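/* Asynchronous garbage collection worker. Expired start elements are marked
 * dead together with the end element seen just before them (end nodes sort
 * first in the tree), then queued to the transaction GC infrastructure. If
 * the ruleset GC sequence changes meanwhile, collection is retried on the
 * next run. The work re-arms itself at the set's GC interval.
 */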
static void nft_rbtree_gc(struct work_struct *work)
{
	struct nft_rbtree_elem *rbe, *rbe_end = NULL;
	struct nftables_pernet *nft_net;
	struct nft_rbtree *priv;
	struct nft_trans_gc *gc;
	struct rb_node *node;
	struct nft_set *set;
	unsigned int gc_seq;
	struct net *net;

	priv = container_of(work, struct nft_rbtree, gc_work.work);
	set = nft_set_container_of(priv);
	net = read_pnet(&set->net);
	nft_net = nft_pernet(net);
	gc_seq = READ_ONCE(nft_net->gc_seq);

	if (nft_set_gc_is_pending(set))
		goto done;

	gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL);
	if (!gc)
		goto done;

	read_lock_bh(&priv->lock);
	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {

		/* Ruleset has been updated, try later. */
		if (READ_ONCE(nft_net->gc_seq) != gc_seq) {
			nft_trans_gc_destroy(gc);
			gc = NULL;
			goto try_later;
		}

		rbe = rb_entry(node, struct nft_rbtree_elem, node);

		if (nft_set_elem_is_dead(&rbe->ext))
			goto dead_elem;

		/* elements are reversed in the rbtree for historical reasons,
		 * from highest to lowest value, that is why end element is
		 * always visited before the start element.
		 */
		if (nft_rbtree_interval_end(rbe)) {
			rbe_end = rbe;
			continue;
		}
		if (!nft_set_elem_expired(&rbe->ext))
			continue;

		nft_set_elem_dead(&rbe->ext);

		if (!rbe_end)
			continue;

		nft_set_elem_dead(&rbe_end->ext);

		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
		if (!gc)
			goto try_later;

		nft_trans_gc_elem_add(gc, rbe_end);
		rbe_end = NULL;
dead_elem:
		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
		if (!gc)
			goto try_later;

		nft_trans_gc_elem_add(gc, rbe);
	}

	gc = nft_trans_gc_catchall_async(gc, gc_seq);

try_later:
	read_unlock_bh(&priv->lock);

	if (gc)
		nft_trans_gc_queue_async_done(gc);
done:
	queue_delayed_work(system_power_efficient_wq, &priv->gc_work,
			   nft_set_gc_interval(set));
}

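/* Size of the backend-private area the core allocates for each rbtree set. */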
static u64 nft_rbtree_privsize(const struct nlattr * const nla[],
|
|
|
|
const struct nft_set_desc *desc)
|
{
	return sizeof(struct nft_rbtree);
}

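/* set up the empty tree, its locks and the periodic garbage collector */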
static int nft_rbtree_init(const struct nft_set *set,
			   const struct nft_set_desc *desc,
			   const struct nlattr * const nla[])
{
	struct nft_rbtree *priv = nft_set_priv(set);

	rwlock_init(&priv->lock);
	seqcount_rwlock_init(&priv->count, &priv->lock);
	priv->root = RB_ROOT;

	INIT_DEFERRABLE_WORK(&priv->gc_work, nft_rbtree_gc);
	/* periodic GC is only needed when elements can time out */
	if (set->flags & NFT_SET_TIMEOUT)
		queue_delayed_work(system_power_efficient_wq, &priv->gc_work,
				   nft_set_gc_interval(set));

	return 0;
}

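/* stop the garbage collector and release every remaining element */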
static void nft_rbtree_destroy(const struct nft_ctx *ctx,
			       const struct nft_set *set)
{
	struct nft_rbtree *priv = nft_set_priv(set);
	struct nft_rbtree_elem *rbe;
	struct rb_node *node;

	cancel_delayed_work_sync(&priv->gc_work);
	/* async GC frees elements via call_rcu(); make sure all pending RCU
	 * callbacks have completed before releasing what is left in the tree.
	 */
	rcu_barrier();
	while ((node = priv->root.rb_node) != NULL) {
		rb_erase(node, &priv->root);
		rbe = rb_entry(node, struct nft_rbtree_elem, node);
		nf_tables_set_elem_destroy(ctx, set, rbe);
	}
}

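/* report lookup/memory characteristics so the core can pick a backend */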
static bool nft_rbtree_estimate(const struct nft_set_desc *desc, u32 features,
				struct nft_set_estimate *est)
{
	/* rbtree cannot match on concatenations of multiple ranged fields */
	if (desc->field_count > 1)
		return false;

	if (desc->size)
		est->size = sizeof(struct nft_rbtree) +
			    desc->size * sizeof(struct nft_rbtree_elem);
	else
		est->size = ~0;

	est->lookup = NFT_SET_CLASS_O_LOG_N;
	est->space  = NFT_SET_CLASS_O_N;

	return true;
}

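/* rbtree-backed set type: interval matching with O(log n) lookups */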
const struct nft_set_type nft_set_rbtree_type = {
	.features	= NFT_SET_INTERVAL | NFT_SET_MAP | NFT_SET_OBJECT | NFT_SET_TIMEOUT,
	.ops		= {
		.privsize	= nft_rbtree_privsize,
		.elemsize	= offsetof(struct nft_rbtree_elem, ext),
		.estimate	= nft_rbtree_estimate,
		.init		= nft_rbtree_init,
		.destroy	= nft_rbtree_destroy,
		.insert		= nft_rbtree_insert,
		.remove		= nft_rbtree_remove,
		.deactivate	= nft_rbtree_deactivate,
		.flush		= nft_rbtree_flush,
		.activate	= nft_rbtree_activate,
		.lookup		= nft_rbtree_lookup,
		.walk		= nft_rbtree_walk,
		.get		= nft_rbtree_get,
	},
};