Networking
==========

Refer to :ref:`netdev-FAQ` for a guide on netdev development process specifics.

Contents:

.. toctree::
   :maxdepth: 2
   af_xdp
   bareudp
   batman-adv
   can
   can_ucan_protocol
   device_drivers/index
   dsa/index
   devlink/index
   caif/index
   ethtool-netlink
   ieee802154
   j1939
   kapi
   msg_zerocopy
   failover
   net_dim
   net_failover
   page_pool
   phy
   sfp-phylink
   alias
   bridge
   snmp_counter
   checksum-offloads
   segmentation-offloads
   scaling
   tls
   tls-offload
   tls-handshake
   nfc
   6lowpan
   6pack
   arcnet-hardware
   arcnet
   atm
   ax25
   bonding
   cdc_mbim
   dccp
   dctcp
   dns_resolver
   driver
   eql
   fib_trie
   filter
   generic-hdlc
   generic_netlink
   netlink_spec/index
   gen_stats
   gtp
   ila
   ioam6-sysctl
   ip_dynaddr
   ipsec
   ip-sysctl
   ipv6
   ipvlan
   ipvs-sysctl
   kcm
   l2tp
   lapb-module
   mac80211-injection
   mctp
   mpls-sysctl
   mptcp-sysctl
   multiqueue
   multi-pf-netdev
   napi
   net_cachelines/index
   netconsole
   netdev-features
   netdevices
   netfilter-sysctl
   netif-msg
   nexthop-group-resilient
   nf_conntrack-sysctl
   nf_flowtable
   openvswitch
   operstates
   packet_mmap
   phonet
   pktgen
   plip
   ppp_generic
   proc_net_tcp
   radiotap-headers
   rds
   regulatory
   representors
   rxrpc
   sctp
   secid
   seg6-sysctl
   skbuff
   smc-sysctl
   statistics
   strparser
   switchdev
   sysfs-tagging
   tc-actions-env-rules
   tc-queue-filters
   tcp_ao
   tcp-thin
   team
   timestamping
   tipc
   tproxy
   tuntap
   udplite
   vrf
   vxlan
   x25
   x25-iface
   xfrm_device
   xfrm_proc
   xfrm_sync
   xfrm_sysctl
   xdp-rx-metadata
   xsk-tx-metadata
.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`