linux-stable/include/net/netfilter
Jakub Kicinski 95d1815f09 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next
Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

1) Incorrect error check in nft_expr_inner_parse(), from Dan Carpenter.

2) Add DATA_SENT state to SCTP connection tracking helper, from
   Sriram Yagnaraman.

3) Consolidate nf_confirm for ipv4 and ipv6, from Florian Westphal.

4) Add bitmask support for ipset, from Vishwanath Pai.

5) Handle icmpv6 redirects as RELATED, from Florian Westphal.

6) Add WARN_ON_ONCE() to impossible case in flowtable datapath,
   from Li Qiong.

7) A large batch of IPVS updates to replace the timer-based estimators
   with kthreads, so that estimation scales with the number of CPUs and
   the workload (millions of estimators).

Julian Anastasov says:

	This patchset implements stats estimation in kthread context.
It replaces the code that runs on a single CPU in timer context every 2
seconds and causes latency splats, as shown in reports [1], [2], [3].
The solution targets setups with thousands of IPVS services,
destinations and multi-CPU boxes.

	Spread the estimation over multiple (configured) CPUs and multiple
time slots (timer ticks) by using multiple chains organized under RCU
rules. When stats are not needed, it is recommended to set
run_estimation=0, which was already possible before this change.
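
As a rough illustration of that layout, here is a minimal kernel-style
sketch with made-up names (est_entry, est_kt_data, est_process_tick are
hypothetical, not the real ip_vs_est_* structures): estimators hang off
per-tick chains inside a per-kthread context, and each wakeup only walks
the chains of the current tick under the RCU read lock.

    /* Hypothetical sketch only, not the real IPVS code. */
    #include <linux/rculist.h>
    #include <linux/types.h>

    #define EST_NTICKS	50		/* time slots per kthread */

    struct est_entry {
        struct hlist_node list;		/* linked into one tick chain */
        u64 pkts;			/* updated by the packet path */
        u64 last_pkts, rate;
    };

    struct est_kt_data {		/* one instance per kthread */
        struct hlist_head chains[EST_NTICKS];
        int tick;			/* slot processed on next wakeup */
    };

    /* Runs in the kthread every 2s/EST_NTICKS; each estimator is
     * therefore visited once per 2 seconds.
     */
    static void est_process_tick(struct est_kt_data *kd)
    {
        struct est_entry *e;

        rcu_read_lock();
        hlist_for_each_entry_rcu(e, &kd->chains[kd->tick], list) {
            u64 pkts = READ_ONCE(e->pkts);

            e->rate = pkts - e->last_pkts;	/* stand-in for the real math */
            e->last_pkts = pkts;
        }
        rcu_read_unlock();
        kd->tick = (kd->tick + 1) % EST_NTICKS;
    }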

RCU Locking:

- As stats are now RCU-locked, tot_stats, svc and dest, which
hold estimator structures, are now always freed from an RCU
callback. This ensures an RCU grace period elapses after the
ip_vs_stop_estimator() call before the memory is released.
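
The deferred free boils down to the usual call_rcu() pattern; a hedged
sketch with hypothetical names (my_stats_holder, my_stats_release stand
in for the real holders tot_stats, svc and dest mentioned above):

    /* Hypothetical sketch of the deferred-free pattern. */
    #include <linux/rculist.h>
    #include <linux/slab.h>

    struct my_stats_holder {
        struct hlist_node est_list;	/* linked into an estimator chain */
        struct rcu_head rcu_head;
    };

    static void my_stats_free_rcu(struct rcu_head *head)
    {
        kfree(container_of(head, struct my_stats_holder, rcu_head));
    }

    static void my_stats_release(struct my_stats_holder *s)
    {
        /* roughly what ip_vs_stop_estimator() does: unlink from the chain */
        hlist_del_rcu(&s->est_list);
        /* free only after a grace period, so kthread readers stay safe */
        call_rcu(&s->rcu_head, my_stats_free_rcu);
    }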

Kthread data:

- every kthread works over its own data structure and all
such structures are attached to an array. For now, the number
of kthreads is limited based on the number of CPUs.

- even when a kthread structure exists, its task may not be
running, e.g. before the first service is added, while the
sysctl var is set to an empty cpulist, or when run_estimation
is set to 0 to disable the estimation.

- the allocated kthread context may grow from 1 to 50
structures for timer ticks, which saves memory for setups
with a small number of estimators

- a task and its structure may be released if all
estimators are unlinked from its chains, leaving the
slot in the array empty

- every kthread data structure allows a limited number
of estimators. Kthread 0 is also used to initially
calculate the max number of estimators to allow in every
chain, considering a sub-100 microsecond cond_resched
rate. This number can range from 1 to hundreds.

- kthread 0 has the additional job of optimizing the
adding of estimators: they are first added to a temporary
list (est_temp_list) and kthread 0 later
distributes them to the other kthreads. The optimization
is based on the fact that a newly added estimator
should be estimated after 2 seconds, so there is
time to offload the chain insertion from the controlling
process to kthread 0.

- to add new estimators we use the last added kthread
context (est_add_ktid). The new estimators are linked into
the chains just before the one currently being estimated,
based on add_row. This ensures their estimation will start
after 2 seconds. If estimators are added in bursts, a common
case when all services and dests are configured initially,
they may be spread over more chains, reducing the initial
delay below 2 seconds. A sketch of this add path follows
after this list.
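
Continuing the hypothetical structures from the first sketch (with
est_entry given an extra "struct list_head temp" member), the two-stage
add path could look roughly like this: the control path only queues the
estimator, and kthread 0 links it one row before the tick currently
being estimated, so its first estimation happens about 2 seconds later.
The names and the slot arithmetic are assumptions for illustration, not
the real IPVS code.

    /* Hypothetical sketch of the two-stage add. */
    #include <linux/list.h>
    #include <linux/mutex.h>

    static LIST_HEAD(est_temp_list);	/* filled by the control path */
    static DEFINE_MUTEX(est_mutex);

    /* Control path: cheap, no chain manipulation here. */
    static void est_add(struct est_entry *e)
    {
        mutex_lock(&est_mutex);
        list_add_tail(&e->temp, &est_temp_list);
        mutex_unlock(&est_mutex);
    }

    /* Kthread 0: distribute queued estimators into the tick chains. */
    static void est_drain_temp_list(struct est_kt_data *kd)
    {
        int add_row = (kd->tick + EST_NTICKS - 1) % EST_NTICKS;
        struct est_entry *e, *next;

        mutex_lock(&est_mutex);
        list_for_each_entry_safe(e, next, &est_temp_list, temp) {
            list_del(&e->temp);
            hlist_add_head_rcu(&e->list, &kd->chains[add_row]);
        }
        mutex_unlock(&est_mutex);
    }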

Many thanks to Jiri Wiesner for his valuable comments
and for spending a lot of time reviewing and testing
the changes on different platforms with 48-256 CPUs and
1-8 NUMA nodes under different cpufreq governors.

The new IPVS estimators do not use workqueue infrastructure
because:

- The estimation can take a long time when using many IPVS rules (e.g.
  millions of estimator structures), especially when the box has many
  CPUs, due to the for_each_possible_cpu usage that expects packets
  from any CPU (see the sketch after this list). With the est_nice
  sysctl we have more control over how to prioritize the estimation
  kthreads compared to other processes/kthreads that have latency
  requirements (such as servers). As a benefit, we can see these
  kthreads in top and decide whether we need further control to limit
  their CPU usage (max number of structures to estimate per kthread).

- with kthreads we run code that is read-mostly, with no write/lock
  operations needed to process the estimators in 2-second intervals.

- work items are one-shot: as estimators are processed every
  2 seconds, they would need to be re-added every time. This again
  loads the timers (add_timer) if we use delayed work, as there are
  no kthreads to do the timing.
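
This is the per-cpu fold referred to in the first point above. A hedged
sketch follows: the u64_stats_* helpers and for_each_possible_cpu are
the real kernel APIs, but my_cpu_stats and my_sum_packets are made up,
and only illustrate why every estimation scales with the number of
possible CPUs.

    /* Hypothetical per-cpu counter fold, the costly part on many-CPU boxes. */
    #include <linux/percpu.h>
    #include <linux/u64_stats_sync.h>

    struct my_cpu_stats {
        u64_stats_t		packets;
        struct u64_stats_sync	syncp;
    };

    static u64 my_sum_packets(struct my_cpu_stats __percpu *stats)
    {
        u64 sum = 0;
        int cpu;

        /* one pass over every possible CPU, for every single estimator */
        for_each_possible_cpu(cpu) {
            struct my_cpu_stats *s = per_cpu_ptr(stats, cpu);
            unsigned int start;
            u64 pkts;

            do {
                start = u64_stats_fetch_begin(&s->syncp);
                pkts = u64_stats_read(&s->packets);
            } while (u64_stats_fetch_retry(&s->syncp, start));
            sum += pkts;
        }
        return sum;
    }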

[1] Report from Yunhong Jiang:
    https://lore.kernel.org/netdev/D25792C1-1B89-45DE-9F10-EC350DC04ADC@gmail.com/
[2] https://marc.info/?l=linux-virtual-server&m=159679809118027&w=2
[3] Report from Dust:
    https://archive.linuxvirtualserver.org/html/lvs-devel/2020-12/msg00000.html

* git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next:
  ipvs: run_estimation should control the kthread tasks
  ipvs: add est_cpulist and est_nice sysctl vars
  ipvs: use kthreads for stats estimation
  ipvs: use u64_stats_t for the per-cpu counters
  ipvs: use common functions for stats allocation
  ipvs: add rcu protection to stats
  netfilter: flowtable: add a 'default' case to flowtable datapath
  netfilter: conntrack: set icmpv6 redirects as RELATED
  netfilter: ipset: Add support for new bitmask parameter
  netfilter: conntrack: merge ipv4+ipv6 confirm functions
  netfilter: conntrack: add sctp DATA_SENT state
  netfilter: nft_inner: fix IS_ERR() vs NULL check
====================

Link: https://lore.kernel.org/r/20221211101204.1751-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-12 14:45:36 -08:00
ipv4 netfilter: disable defrag once its no longer needed 2021-04-26 03:20:07 +02:00
ipv6 netfilter: conntrack: fix boot failure with nf_conntrack.enable_hooks=1 2021-09-28 13:04:55 +02:00
br_netfilter.h netfilter: remove CONFIG_NETFILTER checks from headers. 2019-09-13 12:47:36 +02:00
nf_conntrack.h netfilter: remove nf_conntrack_helper sysctl and modparam toggles 2022-08-31 12:12:32 +02:00
nf_conntrack_acct.h netfilter: conntrack: remove extension register api 2022-02-04 06:30:28 +01:00
nf_conntrack_act_ct.h net/sched: act_ct: Fill offloading tuple iifidx 2022-01-04 12:12:55 +00:00
nf_conntrack_bpf.h net: netfilter: move bpf_ct_set_nat_info kfunc in nf_nat_bpf.c 2022-10-03 09:17:32 -07:00
nf_conntrack_bridge.h netfilter: remove CONFIG_NETFILTER checks from headers. 2019-09-13 12:47:36 +02:00
nf_conntrack_core.h netfilter: conntrack: merge ipv4+ipv6 confirm functions 2022-11-30 18:55:30 +01:00
nf_conntrack_count.h netfilter: nf_conncount: reduce unnecessary GC 2022-05-16 13:05:40 +02:00
nf_conntrack_ecache.h netfilter: prefer extension check to pointer check 2022-05-13 18:56:28 +02:00
nf_conntrack_expect.h netfilter: fix coding-style errors. 2019-09-13 11:39:38 +02:00
nf_conntrack_extend.h netfilter: extensions: introduce extension genid count 2022-05-13 18:52:16 +02:00
nf_conntrack_helper.h net: move add ct helper function to nf_conntrack_helper for ovs and tc 2022-11-08 12:15:19 +01:00
nf_conntrack_l4proto.h netfilter: conntrack: pass hook state to log functions 2021-06-18 14:47:43 +02:00
nf_conntrack_labels.h netfilter: extensions: introduce extension genid count 2022-05-13 18:52:16 +02:00
nf_conntrack_seqadj.h netfilter: conntrack: remove extension register api 2022-02-04 06:30:28 +01:00
nf_conntrack_synproxy.h netfilter: conntrack: wrap two inline functions in config checks. 2019-09-13 12:47:10 +02:00
nf_conntrack_timeout.h netfilter: nf_conntrack: add missing __rcu annotations 2022-07-11 16:25:15 +02:00
nf_conntrack_timestamp.h netfilter: conntrack: remove extension register api 2022-02-04 06:30:28 +01:00
nf_conntrack_tuple.h netfilter: remove CONFIG_NETFILTER checks from headers. 2019-09-13 12:47:36 +02:00
nf_conntrack_zones.h netfilter: conntrack: remove CONFIG_NF_CONNTRACK checks from nf_conntrack_zones.h. 2019-09-13 12:47:41 +02:00
nf_dup_netdev.h netfilter: nft_{fwd,dup}_netdev: add offload support 2019-09-10 22:44:29 +02:00
nf_flow_table.h netfilter: flowtable: fix stuck flows on cleanup due to pending work 2022-08-24 07:43:21 +02:00
nf_hooks_lwtunnel.h netfilter: add netfilter hooks to SRv6 data plane 2021-08-30 01:51:36 +02:00
nf_log.h netfilter: nf_log_common: merge with nf_log_syslog 2021-03-31 22:34:10 +02:00
nf_nat.h net: move the nat function to nf_nat_ovs for ovs and tc 2022-12-12 10:14:03 +00:00
nf_nat_helper.h netfilter: nat: move repetitive nat port reserve loop to a helper 2022-09-07 16:46:04 +02:00
nf_nat_masquerade.h netfilter: update include directives. 2019-09-13 12:33:06 +02:00
nf_nat_redirect.h netfilter: add missing includes to a number of header-files. 2019-08-13 12:14:39 +02:00
nf_queue.h treewide: use get_random_u32() when possible 2022-10-11 17:42:58 -06:00
nf_reject.h netfilter: conntrack: skip verification of zero UDP checksum 2022-05-13 18:56:28 +02:00
nf_socket.h
nf_synproxy.h netfilter: remove CONFIG_NETFILTER checks from headers. 2019-09-13 12:47:36 +02:00
nf_tables.h netfilter: nf_tables: Introduce NFT_MSG_GETRULE_RESET 2022-11-15 10:53:17 +01:00
nf_tables_core.h netfilter: nft_inner: add percpu inner context 2022-10-25 13:48:42 +02:00
nf_tables_ipv4.h netfilter: nf_tables: reduce nft_pktinfo by 8 bytes 2022-10-25 13:44:14 +02:00
nf_tables_ipv6.h netfilter: nf_tables: reduce nft_pktinfo by 8 bytes 2022-10-25 13:44:14 +02:00
nf_tables_offload.h netfilter: nf_tables: bail out early if hardware offload is not supported 2022-06-06 19:19:15 +02:00
nf_tproxy.h
nft_fib.h netfilter: nf_tables: Extend nft_expr_ops::dump callback parameters 2022-11-15 10:46:34 +01:00
nft_meta.h netfilter: nf_tables: Extend nft_expr_ops::dump callback parameters 2022-11-15 10:46:34 +01:00
nft_reject.h netfilter: nf_tables: Extend nft_expr_ops::dump callback parameters 2022-11-15 10:46:34 +01:00
xt_rateest.h net: sched: Merge Qdisc::bstats and Qdisc::cpu_bstats data types 2021-10-18 12:54:41 +01:00