linux-stable/drivers/net/Kconfig

# SPDX-License-Identifier: GPL-2.0-only
#
# Network device configuration
#
menuconfig NETDEVICES
default y if UML
depends on NET
bool "Network device support"
help
You can say N here if you don't intend to connect your Linux box to
any other computer at all.
You'll have to say Y if your computer contains a network card that
you want to use under Linux. If you are going to run SLIP or PPP over
a telephone line or null modem cable you need to say Y here. Connecting
two machines with parallel ports using PLIP needs this, as well as
AX.25/KISS for sending Internet traffic over amateur radio links.
See also "The Linux Network Administrator's Guide" by Olaf Kirch and
Terry Dawson. Available at <http://www.tldp.org/guides.html>.
If unsure, say Y.
# All the following symbols are dependent on NETDEVICES - do not repeat
# that for each of the symbols.
if NETDEVICES
config MII
tristate
config NET_CORE
default y
bool "Network core driver support"
help
You can say N here if you do not intend to use any of the
networking core drivers (e.g. VLAN, bridging, bonding, etc.)
if NET_CORE
config BONDING
tristate "Bonding driver support"
depends on INET
depends on IPV6 || IPV6=n
help
Say 'Y' or 'M' if you wish to be able to 'bond' multiple Ethernet
Channels together. This is called 'Etherchannel' by Cisco,
'Trunking' by Sun, 802.3ad by the IEEE, and 'Bonding' in Linux.
The driver supports multiple bonding modes to allow for both high
performance and high availability operation.
Refer to <file:Documentation/networking/bonding.rst> for more
information.
To compile this driver as a module, choose M here: the module
will be called bonding.
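# For illustration only, an active-backup bond is typically created with
# iproute2 roughly as follows (interface names are placeholders):
#   ip link add bond0 type bond mode active-backup
#   ip link set eth0 down && ip link set eth0 master bond0
#   ip link set eth1 down && ip link set eth1 master bond0
#   ip link set bond0 up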
config DUMMY
tristate "Dummy net driver support"
help
This is essentially a bit-bucket device (i.e. traffic you send to
this device is consigned into oblivion) with a configurable IP
address. It is most commonly used in order to make your currently
inactive SLIP address seem like a real address for local programs.
If you use SLIP or PPP, you might want to say Y here. It won't
enlarge your kernel. What a deal. Read about it in the Network
Administrator's Guide, available from
<http://www.tldp.org/docs.html#guide>.
To compile this driver as a module, choose M here: the module
will be called dummy.
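# As a quick sketch (name and address are placeholders), a dummy interface
# can be created with iproute2:
#   ip link add dummy0 type dummy
#   ip addr add 192.0.2.1/32 dev dummy0
#   ip link set dummy0 up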
config WIREGUARD
tristate "WireGuard secure network tunnel"
depends on NET && INET
depends on IPV6 || !IPV6
select NET_UDP_TUNNEL
select DST_CACHE
select CRYPTO
select CRYPTO_LIB_CURVE25519
select CRYPTO_LIB_CHACHA20POLY1305
select CRYPTO_LIB_BLAKE2S
select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT
select CRYPTO_POLY1305_X86_64 if X86 && 64BIT
select CRYPTO_BLAKE2S_X86 if X86 && 64BIT
select CRYPTO_CURVE25519_X86 if X86 && 64BIT
select ARM_CRYPTO if ARM
select ARM64_CRYPTO if ARM64
select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
select CRYPTO_POLY1305_ARM if ARM
select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
help
WireGuard is a secure, fast, and easy to use replacement for IPSec
that uses modern cryptography and clever networking tricks. It's
designed to be fairly general purpose and abstract enough to fit most
use cases, while at the same time remaining extremely simple to
configure. See www.wireguard.com for more info.
It's safe to say Y or M here, as the driver is very lightweight and
is only in use when an administrator chooses to add an interface.
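# A minimal usage sketch with iproute2 and the wg(8) tool (interface name,
# key file, peer key, and addresses are placeholders):
#   ip link add dev wg0 type wireguard
#   wg set wg0 private-key /etc/wireguard/private.key
#   wg set wg0 peer <peer-public-key> endpoint 203.0.113.5:51820 allowed-ips 10.0.0.2/32
#   ip addr add 10.0.0.1/24 dev wg0
#   ip link set wg0 up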
config WIREGUARD_DEBUG
bool "Debugging checks and verbose messages"
depends on WIREGUARD
help
This will write log messages for handshake and other events
that occur for a WireGuard interface. It will also perform some
extra validation checks and unit tests at various points. This is
only useful for debugging.
Say N here unless you know what you're doing.
config EQUALIZER
tristate "EQL (serial line load balancing) support"
help
If you have two serial connections to some other computer (this
usually requires two modems and two telephone lines) and you use
SLIP (the protocol for sending Internet traffic over telephone
lines) or PPP (a better SLIP) on them, you can make them behave like
one double speed connection using this driver. Naturally, this has
to be supported at the other end as well, either with a similar EQL
Linux driver or with a Livingston Portmaster 2e.
Say Y if you want this and read
<file:Documentation/networking/eql.rst>. You may also want to read
section 6.2 of the NET-3-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
To compile this driver as a module, choose M here: the module
will be called eql. If unsure, say N.
config NET_FC
bool "Fibre Channel driver support"
depends on SCSI && PCI
help
Fibre Channel is a high speed serial protocol mainly used to connect
large storage devices to the computer; it is compatible with and
intended to replace SCSI.
If you intend to use Fibre Channel, you need to have a Fibre channel
adaptor card in your computer; say Y here and to the driver for your
adaptor below. You also should have said Y to "SCSI support" and
"SCSI generic support".
config IFB
tristate "Intermediate Functional Block support"
depends on NET_CLS_ACT
select NET_REDIRECT
help
This is an intermediate driver that allows sharing of resources such
as queueing disciplines between network devices; it is commonly used
with tc mirred redirection to shape or police ingress traffic.
To compile this driver as a module, choose M here: the module
will be called ifb. If you want to use more than one ifb
device at a time, you need to compile this driver as a module.
Instead of 'ifb', the devices will then be called 'ifb0',
'ifb1' etc.
Look at the iproute2 documentation directory for usage etc.
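# A rough sketch of the usual ingress-shaping setup (device names are
# placeholders):
#   ip link add ifb0 type ifb
#   ip link set ifb0 up
#   tc qdisc add dev eth0 handle ffff: ingress
#   tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
#     action mirred egress redirect dev ifb0
#   tc qdisc add dev ifb0 root sfq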
source "drivers/net/team/Kconfig"
config MACVLAN
tristate "MAC-VLAN support"
help
This allows one to create virtual interfaces that map packets to
or from specific MAC addresses to a particular interface.
Macvlan devices can be added using the "ip" command from the
iproute2 package starting with the iproute2-2.6.23 release:
"ip link add link <real dev> [ address MAC ] [ NAME ] type macvlan"
To compile this driver as a module, choose M here: the module
will be called macvlan.
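# For example (real device and name are placeholders), a bridge-mode macvlan
# on top of eth0:
#   ip link add link eth0 name macvlan0 type macvlan mode bridge
#   ip link set macvlan0 up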
config MACVTAP
tristate "MAC-VLAN based tap driver"
depends on MACVLAN
depends on INET
select TAP
help
This adds a specialized tap character device driver that is based
on the MAC-VLAN network interface, called macvtap. A macvtap device
can be added in the same way as a macvlan device, using 'type
macvtap', and then be accessed through the tap user space interface.
To compile this driver as a module, choose M here: the module
will be called macvtap.
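# A rough example (names are placeholders); the resulting character device
# then appears as /dev/tapN, where N is the new interface's ifindex:
#   ip link add link eth0 name macvtap0 type macvtap mode bridge
#   ip link set macvtap0 up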
config IPVLAN_L3S
depends on NETFILTER
depends on IPVLAN
def_bool y
select NET_L3_MASTER_DEV
config IPVLAN
tristate "IP-VLAN support"
depends on INET
depends on IPV6 || !IPV6
help
This allows one to create virtual devices off of a main interface;
packets are delivered to a virtual device based on their destination
L3 (IPv4/IPv6) address. All interfaces (including the main interface)
share L2, making it transparent to the connected L2 switch.
Ipvlan devices can be added using the "ip" command from the
iproute2 package starting with the iproute2-3.19 release:
"ip link add link <main-dev> [ NAME ] type ipvlan"
To compile this driver as a module, choose M here: the module
will be called ipvlan.
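# A concrete sketch (device, name, and namespace are placeholders), creating
# an L3-mode ipvlan and moving it into a network namespace:
#   ip link add link eth0 name ipvl0 type ipvlan mode l3
#   ip link set ipvl0 netns ns1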
config IPVTAP
tristate "IP-VLAN based tap driver"
depends on IPVLAN
depends on INET
select TAP
help
This adds a specialized tap character device driver that is based
on the IP-VLAN network interface, called ipvtap. An ipvtap device
can be added in the same way as an ipvlan device, using 'type
ipvtap', and then be accessed through the tap user space interface.
To compile this driver as a module, choose M here: the module
will be called ipvtap.
config VXLAN
tristate "Virtual eXtensible Local Area Network (VXLAN)"
depends on INET
select NET_UDP_TUNNEL
select GRO_CELLS
help
This allows one to create vxlan virtual interfaces that provide
Layer 2 Networks over Layer 3 Networks. VXLAN is often used
to tunnel virtual network infrastructure in virtualized environments.
For more information see:
http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-02
To compile this driver as a module, choose M here: the module
will be called vxlan.
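# For illustration (VNI, group address, device, and port are placeholders):
#   ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
#   ip link set vxlan0 up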
config GENEVE
tristate "Generic Network Virtualization Encapsulation"
depends on INET
depends on IPV6 || !IPV6
select NET_UDP_TUNNEL
select GRO_CELLS
help
This allows one to create geneve virtual interfaces that provide
Layer 2 Networks over Layer 3 Networks. GENEVE is often used
to tunnel virtual network infrastructure in virtualized environments.
For more information see:
http://tools.ietf.org/html/draft-gross-geneve-02
To compile this driver as a module, choose M here: the module
will be called geneve.
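# For illustration (VNI and remote address are placeholders):
#   ip link add geneve0 type geneve id 100 remote 192.0.2.10
#   ip link set geneve0 up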
config BAREUDP
tristate "Bare UDP Encapsulation"
depends on INET
depends on IPV6 || !IPV6
select NET_UDP_TUNNEL
select GRO_CELLS
help
This adds a bare UDP tunnel module for tunnelling different
kinds of traffic like MPLS, IP, etc. inside a UDP tunnel.
To compile this driver as a module, choose M here: the module
will be called bareudp.
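# A rough MPLS-over-UDP example (device name and port are placeholders):
#   ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc
#   ip link set bareudp0 up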
config GTP
tristate "GPRS Tunneling Protocol datapath (GTP-U)"
depends on INET
select NET_UDP_TUNNEL
help
This allows one to create gtp virtual interfaces that provide
the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
is used to prevent subscribers from accessing mobile carrier core
network infrastructure. This driver requires userspace software that
implements the signaling protocol (GTP-C) to update its PDP context
base, such as OpenGGSN <http://git.osmocom.org/openggsn/>. This
tunneling protocol is implemented according to the GSM TS 09.60 and
3GPP TS 29.060 standards.
To compile this driver as a module, choose M here: the module
will be called gtp.
config MACSEC
tristate "IEEE 802.1AE MAC-level encryption (MACsec)"
select CRYPTO
select CRYPTO_AES
select CRYPTO_GCM
select GRO_CELLS
help
MACsec is an encryption standard for Ethernet.
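# A minimal sketch (names are placeholders); keys and secure associations
# are then configured with "ip macsec":
#   ip link add link eth0 macsec0 type macsec encrypt on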
config NETCONSOLE
tristate "Network console logging support"
help
If you want to log kernel messages over the network, enable this.
See <file:Documentation/networking/netconsole.rst> for details.
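# For illustration, a static target can be set up on the kernel command line
# (ports, addresses, device, and MAC address below are placeholders):
#   netconsole=4444@10.0.0.1/eth0,9353@10.0.0.2/12:34:56:78:9a:bc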
config NETCONSOLE_DYNAMIC
bool "Dynamic reconfiguration of logging targets"
depends on NETCONSOLE && SYSFS && CONFIGFS_FS && \
!(NETCONSOLE=y && CONFIGFS_FS=m)
help
This option enables the ability to dynamically reconfigure target
parameters (interface, IP addresses, port numbers, MAC addresses)
at runtime through a userspace interface exported using configfs.
See <file:Documentation/networking/netconsole.rst> for details.
config NETPOLL
def_bool NETCONSOLE
select SRCU
config NET_POLL_CONTROLLER
def_bool NETPOLL
config NTB_NETDEV
tristate "Virtual Ethernet over NTB Transport"
depends on NTB_TRANSPORT
config RIONET
tristate "RapidIO Ethernet over messaging driver support"
depends on RAPIDIO
config RIONET_TX_SIZE
int "Number of outbound queue entries"
depends on RIONET
default "128"
config RIONET_RX_SIZE
int "Number of inbound queue entries"
depends on RIONET
default "128"
config TUN
tristate "Universal TUN/TAP device driver support"
depends on INET
select CRC32
help
TUN/TAP provides packet reception and transmission for user space
programs. It can be viewed as a simple Point-to-Point or Ethernet
device, which instead of receiving packets from a physical medium
receives them from a user space program, and instead of sending
packets via a physical medium writes them to the user space program.
When a program opens /dev/net/tun, the driver creates and registers a
corresponding net device, tunX or tapX. When the program closes that
device, the driver automatically deletes the tunX or tapX device and
all routes corresponding to it.
Please read <file:Documentation/networking/tuntap.rst> for more
information.
To compile this driver as a module, choose M here: the module
will be called tun.
If you don't know what to use this for, you don't need it.
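# For example, a persistent tap device owned by an unprivileged user can be
# created with iproute2 (device and user names are placeholders):
#   ip tuntap add dev tap0 mode tap user someuser
#   ip link set tap0 up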
config TAP
tristate
help
This option is selected by any driver implementing the tap user space
interface for a virtual interface to re-use core tap functionality.
config TUN_VNET_CROSS_LE
bool "Support for cross-endian vnet headers on little-endian kernels"
default n
help
This option allows TUN/TAP and MACVTAP device drivers in a
little-endian kernel to parse vnet headers that come from a
big-endian legacy virtio device.
Userspace programs can control the feature using the TUNSETVNETBE
and TUNGETVNETBE ioctls.
Unless you have a little-endian system hosting a big-endian virtual
machine with a legacy virtio NIC, you should say N.
config VETH
tristate "Virtual ethernet pair device"
help
This device is a local ethernet tunnel. Devices are created in pairs.
Packets transmitted on one device of the pair are received on the
other, and vice versa.
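# A typical example (names and namespace are placeholders), wiring a network
# namespace to the host:
#   ip link add veth0 type veth peer name veth1
#   ip link set veth1 netns ns1
#   ip link set veth0 up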
config VIRTIO_NET
tristate "Virtio network driver"
depends on VIRTIO
select NET_FAILOVER
help
This is the virtual network driver for virtio. It can be used with
QEMU based VMMs (like KVM or Xen). Say Y or M.
config NLMON
tristate "Virtual netlink monitoring device"
help
This option enables a monitoring net device for netlink skbs. The
purpose of this is to analyze netlink messages with packet sockets.
Thus applications like tcpdump will be able to see local netlink
messages if they tap into the netlink device, record pcaps for further
diagnostics, etc. This is mostly intended for developers or support
to debug netlink issues. If unsure, say N.
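# For illustration (device and file names are placeholders):
#   ip link add nlmon0 type nlmon
#   ip link set nlmon0 up
#   tcpdump -i nlmon0 -w netlink.pcap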
config NET_VRF
tristate "Virtual Routing and Forwarding (Lite)"
depends on IP_MULTIPLE_TABLES
depends on NET_L3_MASTER_DEV
depends on IPV6 || IPV6=n
depends on IPV6_MULTIPLE_TABLES || IPV6=n
help
This option enables support for mapping interfaces into VRFs by
providing VRF devices.
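# A brief sketch (names and table number are placeholders):
#   ip link add vrf-blue type vrf table 10
#   ip link set vrf-blue up
#   ip link set eth0 master vrf-blue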
config VSOCKMON
tristate "Virtual vsock monitoring device"
depends on VHOST_VSOCK
help
This option enables a monitoring net device for vsock sockets. It is
mostly intended for developers or support to debug vsock issues. If
unsure, say N.
endif # NET_CORE
config SUNGEM_PHY
tristate
source "drivers/net/arcnet/Kconfig"
source "drivers/atm/Kconfig"
source "drivers/net/caif/Kconfig"
source "drivers/net/dsa/Kconfig"
source "drivers/net/ethernet/Kconfig"
source "drivers/net/fddi/Kconfig"
source "drivers/net/hippi/Kconfig"
source "drivers/net/ipa/Kconfig"
config NET_SB1000
tristate "General Instruments Surfboard 1000"
depends on PNP
help
This is a driver for the General Instrument (also known as
NextLevel) SURFboard 1000 internal
cable modem. This is an ISA card which is used by a number of cable
TV companies to provide cable modem access. It's a one-way
downstream-only cable modem, meaning that your upstream net link is
provided by your regular phone modem.
At present this driver only compiles as a module, so say M here if
you have this card. The module will be called sb1000. Then read
<file:Documentation/networking/device_drivers/cable/sb1000.rst> for
information on how to use this module, as it needs special ppp
scripts for establishing a connection. Further documentation
and the necessary scripts can be found at:
<http://www.jacksonville.net/~fventuri/>
<http://home.adelphia.net/~siglercm/sb1000.html>
<http://linuxpower.cx/~cable/>
If you don't have this card, of course say N.
source "drivers/net/phy/Kconfig"
source "drivers/net/plip/Kconfig"
source "drivers/net/ppp/Kconfig"
source "drivers/net/slip/Kconfig"
source "drivers/s390/net/Kconfig"
source "drivers/net/usb/Kconfig"
source "drivers/net/wireless/Kconfig"
source "drivers/net/wimax/Kconfig"
source "drivers/net/wan/Kconfig"
source "drivers/net/ieee802154/Kconfig"
config XEN_NETDEV_FRONTEND
tristate "Xen network device frontend driver"
depends on XEN
select XEN_XENBUS_FRONTEND
select PAGE_POOL
default y
help
This driver provides support for Xen paravirtual network
devices exported by a Xen network driver domain (often
domain 0).
The corresponding Linux backend driver is enabled by the
CONFIG_XEN_NETDEV_BACKEND option.
If you are compiling a kernel for use as a Xen guest, you
should say Y here. To compile this driver as a module, choose
M here: the module will be called xen-netfront.
config XEN_NETDEV_BACKEND
tristate "Xen backend network device"
depends on XEN_BACKEND
help
This driver allows the kernel to act as a Xen network driver
domain which exports paravirtual network devices to other
Xen domains. These devices can be accessed by any operating
system that implements a compatible front end.
The corresponding Linux frontend driver is enabled by the
CONFIG_XEN_NETDEV_FRONTEND configuration option.
The backend driver presents a standard network device
endpoint for each paravirtual network device to the driver
domain network stack. These can then be bridged, routed,
etc. in order to provide full network connectivity.
If you are compiling a kernel to run in a Xen network driver
domain (often this is domain 0) you should say Y here. To
compile this driver as a module, choose M here: the module
will be called xen-netback.
config VMXNET3
tristate "VMware VMXNET3 ethernet driver"
depends on PCI && INET
depends on !(PAGE_SIZE_64KB || ARM64_64K_PAGES || \
IA64_PAGE_SIZE_64KB || MICROBLAZE_64K_PAGES || \
PARISC_PAGE_SIZE_64KB || PPC_64K_PAGES)
help
This driver supports VMware's vmxnet3 virtual ethernet NIC.
To compile this driver as a module, choose M here: the
module will be called vmxnet3.
config FUJITSU_ES
tristate "FUJITSU Extended Socket Network Device driver"
depends on ACPI
help
This driver provides support for Extended Socket network device
on Extended Partitioning of FUJITSU PRIMEQUEST 2000 E2 series.
config USB4_NET
tristate "Networking over USB4 and Thunderbolt cables"
depends on USB4 && INET
help
Select this if you want to create a network between two computers
over USB4 or Thunderbolt cables. The driver supports the Apple
ThunderboltIP protocol and allows communication with any host
supporting the same protocol, including Windows and macOS.
To compile this driver as a module, choose M here. The module will be
called thunderbolt-net.
source "drivers/net/hyperv/Kconfig"
config NETDEVSIM
tristate "Simulated networking device"
depends on DEBUG_FS
depends on INET
depends on IPV6 || IPV6=n
select NET_DEVLINK
help
This driver is a developer testing tool and software model that can
be used to test various control path networking APIs, especially
HW-offload related.
To compile this driver as a module, choose M here: the module
will be called netdevsim.
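# A simulated device can be instantiated at runtime through sysfs, e.g.
# (device id and port count below are arbitrary):
#   echo "10 1" > /sys/bus/netdevsim/new_device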
config NET_FAILOVER
tristate "Failover driver"
select FAILOVER
help
This provides an automated failover mechanism via APIs to create
and destroy a failover master netdev, and it manages the primary and
standby slave netdevs that get registered via the generic failover
infrastructure. This can be used by paravirtual drivers to enable
an alternate low latency datapath. It also enables live migration of
a VM with a directly attached VF by failing over to the paravirtual
datapath when the VF is unplugged.
endif # NETDEVICES